EPSRC Reference: |
EP/F028598/1 |
Title: |
VALUE: Vision, Action, and Language Unified by Embodiment |
Principal Investigator: |
Harley, Professor T |
Other Investigators: |
|
Researcher Co-Investigators: |
|
Project Partners: |
|
Department: |
Psychology |
Organisation: |
University of Dundee |
Scheme: |
Standard Research |
Starts: |
01 November 2008 |
Ends: |
31 March 2012 |
Value (£): |
297,092 |
EPSRC Research Topic Classifications: |
Cognitive Science Appl. in ICT |
Robotics & Autonomy |
|
EPSRC Industrial Sector Classifications: |
No relevance to Underpinning Sectors |
|
|
Related Grants: |
|
Panel History: |
Panel Date | Panel Name | Outcome |
06 Dec 2007 | ICT Prioritisation Panel (Technology) | Announced |
|
Summary on Grant Application Form |
The primary aim of this project is to develop a simulation of the processes involved in solving the following problem: how to select, based on an agent's knowledge and representations of the world, one object from several, grasp it, and use it in an appropriate manner. This mundane activity in fact requires the simultaneous solution of several deep problems at various levels. The agent's visual system must represent potential target objects; the target must be selected on the basis of task instructions or the agent's knowledge of the functions of the represented objects; and the hand (in this case) must be moved to the target and shaped so as to grip it in a manner appropriate for its use. We propose to develop a robotic simulation model inspired by recent theories of embodied cognition, in which the vision, action, and semantic systems are linked together, in a dynamic and mutually interactive manner, within a connectionist architecture. Human experimental work will constrain the temporal and dynamic properties of the system in an effort to develop a psychologically plausible model of embodied selection for action. Because many of the cognitive mechanisms underlying the integration of action and vision in tasks such as object assembly are not fully understood, new empirical studies in this project will also improve our understanding of these embodied cognitive dynamics. New experiments and the embodied cognitive model will likewise be used to further our understanding of the integration of language and cognition, e.g. by providing further predictions about, and insights into, the dynamics of language and action knowledge in object representation. This is an interdisciplinary project which draws on expertise and methodologies from cognitive psychology, motor control, and computational/robotics modelling.
The interdisciplinary nature of the project and the design and experimentation of cognitive agents make the project highly relevant to the Cognitive Systems Foresight programme. The behavioural studies proposed here will be based on eye-tracking methodology, which permits identification of the time course of visuo-attentional processes in action and language processing and will provide converging evidence, from stimulus-response compatibility studies, on object selection. Eye-tracking data will also be used to constrain the behavioural and attentional strategies used by simulated cognitive robots during tasks involving object naming and selection. In the eye-tracking experiments we will show arrays of novel objects and study three levels of action representation. At the encoding level, we manipulate the location and onset time of a visual detection probe in the array to reveal how observers attend to objects and prepare their actions (Fischer et al., in press). At the representational/linguistic level, we present auditory object names and register the observer's eye movements towards the named objects (the visual world paradigm, e.g. Altmann & Kamide, 2004). Linguistic manipulations, such as using phonological competitors ('candle'-'candy'), reveal the time course of the interplay between covert and overt attention and the relative strength of top-down (linguistic) versus bottom-up (visual) control over action prediction. Finally, at the execution level, we instruct participants to pick up the named object and record their overt manual responses (e.g., Chambers et al., 2002, 2004). Orthogonally to these three levels of embodiment, we gradually associate each novel object with a particular name and manual response, and we design object arrays with congruent and incongruent response requirements. This learning approach enables us to track embodied concept acquisition and its implications for action control separately at the encoding, linguistic/representational, and execution levels.
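The kind of interaction the summary describes — visual salience and a linguistic cue jointly driving competition among candidate objects, with the winner determining a grasp — can be illustrated with a minimal interactive-activation sketch. This is purely hypothetical and is not the project's actual connectionist model; all names, inputs, and parameter values (the inhibition weight, update rate, and grasp mappings) are illustrative assumptions.

```python
# Hypothetical sketch of selection-for-action: visual and linguistic
# inputs feed competing object nodes; mutual inhibition settles the
# competition, and the winning object selects a grasp. Not the VALUE
# project's model; all values below are illustrative assumptions.

def select_object(visual_input, linguistic_input, steps=50, rate=0.2):
    """Settle object activations under combined visual + linguistic input."""
    act = {obj: 0.0 for obj in visual_input}
    for _ in range(steps):
        total = sum(act.values())
        for obj in act:
            bottom_up = visual_input[obj]              # visual salience
            top_down = linguistic_input.get(obj, 0.0)  # match to spoken cue
            inhibition = total - act[obj]              # competition from rivals
            net = bottom_up + top_down - 0.5 * inhibition
            act[obj] += rate * (net - act[obj])        # leaky integration
    winner = max(act, key=act.get)
    return winner, act

# Hypothetical grasp repertoire keyed by object identity.
GRASPS = {"candle": "cylindrical grip", "candy": "pinch grip"}

# Equally salient objects; the spoken word 'candle' partially activates
# its phonological competitor 'candy' (cf. the candle-candy manipulation).
winner, act = select_object(
    visual_input={"candle": 0.6, "candy": 0.6},
    linguistic_input={"candle": 0.8, "candy": 0.3},
)
print(winner, GRASPS[winner])  # → candle cylindrical grip
```

With equal visual salience, the linguistic cue decides the competition, mirroring the summary's point that top-down (linguistic) and bottom-up (visual) signals jointly control selection; weakening the linguistic advantage slows or reverses settling, which is the kind of time-course prediction the eye-tracking studies are designed to test.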
|
Key Findings |
This information can now be found on Gateway to Research (GtR) http://gtr.rcuk.ac.uk
|
Potential use in non-academic contexts |
This information can now be found on Gateway to Research (GtR) http://gtr.rcuk.ac.uk
|
Impacts |
Description |
This information can now be found on Gateway to Research (GtR) http://gtr.rcuk.ac.uk |
Summary |
|
Date Materialised |
|
|
Sectors submitted by the Researcher |
This information can now be found on Gateway to Research (GtR) http://gtr.rcuk.ac.uk
|
Project URL: |
|
Further Information: |
|
Organisation Website: |
http://www.dundee.ac.uk |