IEEE Transactions on Autonomous Mental Development

Homepage with submission instructions: http://cis.ieee.org/ieee-transactions-on-autonomous-mental-development.html
Submission website: http://mc.manuscriptcentral.com/tamd-ieee
Website for Table of Contents, Abstracts & Authors' emails:  http://research.microsoft.com/~zhang/IEEE-TAMD/

Table of Contents

Volume: 4   Issue: 3

Date of Publication: September 2012

Link to IEEE: http://ieeexplore.ieee.org/xpl/tocresult.jsp?isnumber=6298011&punumber=4563672 

(Previous issue: Vol. 4, No. 2, June 2012)

Editorial: Impact Factor and Outstanding Paper Awards
Zhang, Z.
Page(s): 189
Digital Object Identifier: 10.1109/TAMD.2012.2211475  

Abstract: The journal's Impact Factor is 2.310. The IEEE TAMD Outstanding Paper Award has been approved by the IEEE Computational Intelligence Society but is pending IEEE's final approval.

Full Text from IEEE: PDF (84KB); Contact the author by email for a copy.

Guest Editorial: Biologically Inspired Human–Robot Interactions—Developing More Natural Ways to Communicate with our Machines
Harris, C.; Krichmar, L.; Siegelmann, T.; Wagatsuma, H.
Page(s): 190-191
Digital Object Identifier: 10.1109/TAMD.2012.2216703

Abstract: As robots become more common in our daily activities, improving human–robot interactions (HRI) and human–computer interfaces (HCI) is becoming increasingly important. Despite considerable progress in this relatively new field, very few researchers have paid sufficient attention to how the brain, cognition, and underlying biological mechanisms are crucial for the success of such interactions.

Full Text from IEEE: PDF (96KB); Contact the author by email for a copy.

Long Summer Days: Grounded Learning of Words for the Uneven Cycles of Real World Events
Heath, S.; Schulz, R.; Ball, D.; Wiles, J.
Page(s): 192 - 203
Digital Object Identifier: 10.1109/TAMD.2012.2207455  

Abstract: Time and space are fundamental to human language and embodied cognition. In our early work we investigated how Lingodroids, robots with the ability to build their own maps, could evolve their own geopersonal spatial language. In subsequent studies we extended the framework developed for learning spatial concepts and words to learning temporal intervals. This paper considers a new aspect of time, the naming of concepts like morning, afternoon, dawn, and dusk, which are events that are part of day-night cycles, but are not defined by specific time points on a clock. Grounding of such terms refers to events and features of the diurnal cycle, such as light levels. We studied event-based time in which robots experienced day-night cycles that varied with the seasons throughout a year. Then we used meet-at tasks to demonstrate that the words learned were grounded, where the times to meet were morning and afternoon, rather than specific clock times. The studies show how words and concepts for a novel aspect of cyclic time can be grounded through experience with events rather than by times as measured by clocks or calendars.

Full Text from IEEE: PDF (1372KB); Contact the author by email for a copy.
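
The abstract above describes grounding temporal words such as "morning" in events of the day-night cycle rather than in clock times. The Python sketch below is a hypothetical, much-simplified illustration of that idea, not the Lingodroids implementation; the toy light model, the invented word "pumo", and all function and class names are assumptions made for this example.

# Hypothetical sketch (not the authors' code): grounding a temporal word in
# diurnal light-level events rather than clock time, in the spirit of the
# "meet-at" tasks described in the abstract above.

def light_level(t, day_length=24.0, daylight_hours=14.0):
    """Toy diurnal light model: 1.0 during daylight, else 0.0.
    In the paper daylight varies with the seasons; fixed here for brevity."""
    hour = t % day_length
    sunrise = (day_length - daylight_hours) / 2.0
    sunset = sunrise + daylight_hours
    return 1.0 if sunrise <= hour < sunset else 0.0

def detect_events(times):
    """Label light transitions as events (dawn, dusk)."""
    events = []
    prev = light_level(times[0])
    for t in times[1:]:
        cur = light_level(t)
        if cur > prev:
            events.append(("dawn", t))
        elif cur < prev:
            events.append(("dusk", t))
        prev = cur
    return events

class TemporalLexicon:
    """Associates invented words with event types through shared experience."""
    def __init__(self):
        self.word_to_event = {}

    def learn(self, word, event_type):
        self.word_to_event[word] = event_type

    def next_meeting(self, word, events, after_t):
        """Meet-at task: resolve a word to the next matching event, not a clock time."""
        event_type = self.word_to_event[word]
        for etype, t in events:
            if etype == event_type and t > after_t:
                return t
        return None

times = [h * 0.5 for h in range(0, 24 * 2 * 3)]   # three simulated days
events = detect_events(times)
lexicon = TemporalLexicon()
lexicon.learn("pumo", "dawn")                      # an invented word grounded in dawn events
print(lexicon.next_meeting("pumo", events, after_t=12.0))

The point mirrored here is that a meeting time is resolved to the next occurrence of a grounded event, so it shifts with the light cycle rather than staying fixed on a clock reading.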

Learning Through Imitation: A Biological Approach to Robotics
Chersi, F.
Page(s): 204 - 214
Digital Object Identifier: 10.1109/TAMD.2012.2200250  

Abstract: Humans are very efficient in learning new skills through imitation and social interaction with other individuals. Recent experimental findings on the functioning of the mirror neuron system in humans and animals, and on the coding of intentions, have led to the development of more realistic and powerful models of action understanding and imitation. This paper describes the implementation on a humanoid robot of a spiking neuron model of the mirror system. The proposed architecture is validated in an imitation task where the robot has to observe and understand manipulative action sequences executed by a human demonstrator and reproduce them on demand utilizing its own motor repertoire. To instruct the robot on what to observe and learn, and when to imitate, the demonstrator utilizes a simple form of sign language. Two basic principles underlie the functioning of the system: 1) imitation is primarily directed toward reproducing the goals of observed actions rather than the exact hand trajectories; and 2) the capacity to understand the motor intentions of another individual is based on the resonance of the same neural populations that are active during action execution. Experimental findings show that the use of even a very simple form of gesture-based communication allows the development of robotic architectures that are efficient, simple, and user-friendly.

Full Text from IEEE: PDF (1338KB); Contact the author by email for a copy.
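
The abstract above emphasizes that imitation targets the goals of observed actions rather than exact hand trajectories. The Python sketch below is a hypothetical illustration of that goal-level principle, not the paper's spiking-neuron mirror-system model; the motor repertoire entries, gestures, and class names are assumptions made for this example.

# Hypothetical sketch (not the paper's spiking-neuron implementation):
# goal-level imitation -- the robot stores the goals of observed actions and
# reproduces them with its own motor repertoire instead of copying the
# demonstrator's hand trajectories.

# The robot's own motor repertoire: goal -> motor primitive (names are illustrative).
MOTOR_REPERTOIRE = {
    ("grasp", "cup"): "close_gripper_around_cup",
    ("place", "cup"): "lower_arm_and_release",
    ("push", "box"):  "extend_arm_forward",
}

class GoalImitator:
    def __init__(self):
        self.learned_sequence = []   # list of (action, object) goals
        self.recording = False

    def sign(self, gesture):
        """Simple sign-language-style commands from the demonstrator."""
        if gesture == "watch":
            self.learned_sequence, self.recording = [], True
        elif gesture == "stop":
            self.recording = False
        elif gesture == "do_it":
            return self.imitate()

    def observe(self, action, obj):
        """Store the goal of the observed action, not its trajectory."""
        if self.recording:
            self.learned_sequence.append((action, obj))

    def imitate(self):
        """Reproduce each learned goal with the robot's own motor primitive."""
        return [MOTOR_REPERTOIRE[goal] for goal in self.learned_sequence
                if goal in MOTOR_REPERTOIRE]

robot = GoalImitator()
robot.sign("watch")
robot.observe("grasp", "cup")
robot.observe("place", "cup")
robot.sign("stop")
print(robot.sign("do_it"))   # ['close_gripper_around_cup', 'lower_arm_and_release']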

Context-Based Bayesian Intent Recognition
Kelley, R.; Tavakkoli, A.; King, C.; Ambardekar, A.; Nicolescu, M.; Nicolescu, M.
Page(s): 215 - 225
Digital Object Identifier: 10.1109/TAMD.2012.2211871

Abstract: One of the foundations of social interaction among humans is the ability to correctly identify interactions and infer the intentions of others. To build robots that reliably function in the human social world, we must develop models that robots can use to mimic the intent recognition skills found in humans. We propose a framework that uses contextual information in the form of object affordances and object state to improve the performance of an underlying intent recognition system. This system represents objects and their affordances using a directed graph that is automatically extracted from a large corpus of natural language text. We validate our approach on a physical robot that classifies intentions in a number of scenarios.

Full Text from IEEE: PDF (1075KB); Contact the author by email for a copy.
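
The abstract above combines contextual information from object affordances with an underlying intent recognizer. As a hypothetical illustration of that combination, and not the authors' system, the Python sketch below folds an affordance-derived prior into an observation likelihood via Bayes' rule; the affordance counts and likelihood values are invented for the example.

# Hypothetical sketch: a contextual prior from object affordances combined
# with an observation likelihood through Bayes' rule. Values are illustrative.

# Affordance information flattened to counts: how often each object co-occurs
# with each intention in some text corpus (made-up numbers).
AFFORDANCE_COUNTS = {
    "cup":  {"drink": 40, "pour": 25, "throw": 5},
    "ball": {"throw": 50, "drink": 1, "pour": 2},
}

def context_prior(obj):
    """P(intent | object) from affordance counts."""
    counts = AFFORDANCE_COUNTS[obj]
    total = sum(counts.values())
    return {intent: c / total for intent, c in counts.items()}

def posterior(obj, likelihoods):
    """P(intent | object, observations) proportional to
    P(observations | intent) * P(intent | object).
    `likelihoods` would come from the underlying recognizer."""
    prior = context_prior(obj)
    unnorm = {i: likelihoods.get(i, 1e-6) * p for i, p in prior.items()}
    z = sum(unnorm.values())
    return {i: v / z for i, v in unnorm.items()}

# The observed motion is ambiguous between "drink" and "throw", but the object
# in hand is a cup, so context tips the posterior towards "drink".
print(posterior("cup", {"drink": 0.4, "throw": 0.45, "pour": 0.15}))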

Reciprocity and Retaliation in Social Games With Adaptive Agents
Asher, D.E.; Zaldivar, A.; Barton, B.; Brewer, A.A.; Krichmar, J.L.
Page(s): 226 - 238
Digital Object Identifier: 10.1109/TAMD.2012.2202658

Abstract: Game theory has been useful for understanding risk-taking and cooperative behavior. However, in studies of the neural basis of decision-making during games of conflict, subjects typically play against opponents with predetermined strategies. The present study introduces a neurobiologically plausible model of action selection and neuromodulation, which adapts to its opponent's strategy and environmental conditions. The model is based on the assumption that dopaminergic and serotonergic systems track expected rewards and costs, respectively. The model controlled both simulated and robotic agents playing Hawk-Dove and Chicken games against subjects. When playing against an aggressive version of the model, there was a significant shift in the subjects' strategy from Win-Stay-Lose-Shift to Tit-For-Tat. Subjects became retaliatory when confronted with agents that tended towards risky behavior. These results highlight the important interactions between subjects and agents utilizing adaptive behavior. Moreover, they reveal neuromodulatory mechanisms that give rise to cooperative and competitive behaviors.

Full Text from IEEE: PDF (1789KB); Contact the author by email for a copy.
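
The abstract above links dopaminergic and serotonergic systems to expected rewards and expected costs, respectively. The Python sketch below is a hypothetical, highly simplified rendering of that idea for a Hawk-Dove round, not the paper's neural model; the learning rate, payoffs, and softmax temperature are illustrative assumptions.

# Hypothetical sketch: action values combine a dopamine-like expected-reward
# trace and a serotonin-like expected-cost trace, updated with delta rules,
# and actions are chosen by softmax. Parameter values are illustrative.

import math, random

class NeuromodulatedAgent:
    def __init__(self, actions=("hawk", "dove"), lr=0.1, temperature=0.5):
        self.reward = {a: 0.0 for a in actions}   # dopamine-like expectations
        self.cost = {a: 0.0 for a in actions}     # serotonin-like expectations
        self.lr, self.temperature = lr, temperature

    def choose(self):
        """Softmax over net value (expected reward minus expected cost)."""
        values = {a: self.reward[a] - self.cost[a] for a in self.reward}
        exps = {a: math.exp(v / self.temperature) for a, v in values.items()}
        z = sum(exps.values())
        r, acc = random.random(), 0.0
        for a, e in exps.items():
            acc += e / z
            if r <= acc:
                return a
        return a

    def update(self, action, reward, cost):
        """Delta-rule updates of the two expectation traces."""
        self.reward[action] += self.lr * (reward - self.reward[action])
        self.cost[action] += self.lr * (cost - self.cost[action])

# One illustrative round against an opponent who played "hawk":
agent = NeuromodulatedAgent()
a = agent.choose()
payoff, injury = (0.0, 1.0) if a == "hawk" else (0.0, 0.0)   # toy payoffs
agent.update(a, payoff, injury)
print(a, agent.reward[a], agent.cost[a])

Repeating such rounds raises the cost trace for risky actions against an aggressive opponent, shifting the agent towards safer or retaliatory play, which is the qualitative effect the abstract describes.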

Towards a Platform-Independent Cooperative Human Robot Interaction System: III. An Architecture for Learning and Executing Actions and Shared Plans
Lallee, S.; Pattacini, U.; Lemaignan, S.; Lenz, A.; Melhuish, C.; Natale, L.; Skachek, S.; Hamann, K.; Steinwender, J.; Sisbot, E.A.; Metta, G.; Guitton, J.; Alami, R.; Warnier, M.; Pipe, T.; Warneken, F.; Dominey, P.F.
Page(s): 239 - 253
Digital Object Identifier: 10.1109/TAMD.2012.2199754

Abstract: Robots should be capable of interacting in a cooperative and adaptive manner with their human counterparts in open-ended tasks that can change in real time. An important aspect of the robot behavior will be the ability to acquire new knowledge of the cooperative tasks by observing and interacting with humans. The current research addresses this challenge. We present results from a cooperative human-robot interaction system that has been specifically developed for portability between different humanoid platforms, by means of abstraction layers at the perceptual and motor interfaces. In the perceptual domain, the resulting system is demonstrated to learn to recognize objects and to recognize actions as sequences of perceptual primitives, and to transfer this learning and recognition between different robotic platforms. For execution, composite actions and plans are shown to be learnt on one robot and executed successfully on a different one. Most importantly, the system provides the ability to link actions into shared plans that form the basis of human-robot cooperation, applying principles from human cognitive development to the domain of robot cognitive systems.

Full Text from IEEE: PDF (2059KB); Contact the author by email for a copy.
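
The abstract above describes portability through abstraction layers at the perceptual and motor interfaces, with shared plans learned on one robot and executed on another. The Python sketch below is a hypothetical illustration of that idea, not the authors' architecture; the platform names, primitives, and plan steps are assumptions made for the example.

# Hypothetical sketch: a shared plan learned as a sequence of abstract action
# primitives and executed on different platforms through a motor abstraction
# layer. Class and method names are illustrative.

class MotorInterface:
    """Abstraction layer: each platform maps the same primitives to its own motors."""
    def execute(self, primitive, argument):
        raise NotImplementedError

class ICubInterface(MotorInterface):
    def execute(self, primitive, argument):
        return f"iCub executes {primitive}({argument})"

class BertInterface(MotorInterface):
    def execute(self, primitive, argument):
        return f"BERT executes {primitive}({argument})"

class SharedPlan:
    """A plan is a sequence of (agent, primitive, argument) steps learned from interaction."""
    def __init__(self, name):
        self.name, self.steps = name, []

    def add_step(self, agent, primitive, argument):
        self.steps.append((agent, primitive, argument))

    def run(self, robot, robot_name="robot"):
        """Execute the robot's share of the plan; human steps are awaited, not executed."""
        log = []
        for agent, primitive, argument in self.steps:
            if agent == robot_name:
                log.append(robot.execute(primitive, argument))
            else:
                log.append(f"wait for {agent} to {primitive}({argument})")
        return log

# A cooperative plan learned once, then executed on two different platforms:
plan = SharedPlan("clean the table")
plan.add_step("human", "lift", "box")
plan.add_step("robot", "grasp", "toy")
plan.add_step("robot", "put", "toy in box")
print(plan.run(ICubInterface()))
print(plan.run(BertInterface()))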