Homepage with submission
Submission website: http://mc.manuscriptcentral.com/tamd-ieee
Website for Table of Contents, Abstracts & Authors' emails: http://research.microsoft.com/~zhang/IEEE-TAMD/
Date of Publication: September 2012
Link to IEEE: http://ieeexplore.ieee.org/xpl/tocresult.jsp?isnumber=6298011&punumber=4563672
(Previous issue: Vol. 4, No. 2, June 2012)
Abstract: Impact Factor 2.310. IEEE TAMD Outstanding Paper Award approved by the IEEE Computational Intelligence Society, but pending IEEE’s final approval.
Full Text from IEEE: PDF (84KB); Contact the author by email for a copy.
Abstract: As robots become more common in our daily activities, improving human–robot interactions (HRI) and human–computer interfaces (HCI) is becoming increasingly important. Despite considerable progress in this relatively new field, very few researchers have paid sufficient attention to how the brain, cognition, and underlying biological mechanisms are crucial for the success of such interactions.
Full Text from IEEE: PDF (96KB); Contact the author by email for a copy.
Abstract: Time and space are fundamental to human language and embodied cognition. In our early work we investigated how Lingodroids, robots with the ability to build their own maps, could evolve their own geopersonal spatial language. In subsequent studies we extended the framework developed for learning spatial concepts and words to learning temporal intervals. This paper considers a new aspect of time, the naming of concepts like morning, afternoon, dawn, and dusk, which are events that are part of day-night cycles, but are not defined by specific time points on a clock. Grounding of such terms refers to events and features of the diurnal cycle, such as light levels. We studied event-based time in which robots experienced day-night cycles that varied with the seasons throughout a year. Then we used meet-at tasks to demonstrate that the words learned were grounded, where the times to meet were morning and afternoon, rather than specific clock times. The studies show how words and concepts for a novel aspect of cyclic time can be grounded through experience with events rather than by times as measured by clocks or calendars.
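The grounding idea described above can be sketched in a few lines: day-part words are tied to events in a simulated diurnal cycle (light level and whether it is rising or falling) rather than to clock times, so the same word picks out different clock hours in different seasons. The light model and thresholds below are illustrative assumptions, not the Lingodroids implementation.

```python
import math

def light_level(day, hour, year_len=365):
    # Hypothetical seasonal model: day length varies sinusoidally over the year.
    daylight = 12 + 4 * math.sin(2 * math.pi * day / year_len)
    sunrise = 12 - daylight / 2
    sunset = 12 + daylight / 2
    if sunrise <= hour <= sunset:
        return math.sin(math.pi * (hour - sunrise) / daylight)
    return 0.0

def day_part(day, hour):
    # Ground words in events of the diurnal cycle, not in clock times:
    # "dawn"/"dusk" are low-light transitions, "morning"/"afternoon" are
    # distinguished by whether the light is still rising or already falling.
    now = light_level(day, hour)
    later = light_level(day, hour + 0.5)
    if now == 0.0:
        return "night"
    if now < 0.2:
        return "dawn" if later > now else "dusk"
    return "morning" if later >= now else "afternoon"
```

Because the grounding is event-based, a meet-at-"morning" agreement succeeds year-round even though the corresponding clock hours drift with the seasons (e.g., 5:30 is "morning" in midsummer but still "night" at the equinox in this toy model).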
Full Text from IEEE: PDF (1372KB); Contact the author by email for a copy.
Abstract: Humans are very efficient in learning new skills through imitation and social interaction with other individuals. Recent experimental findings on the functioning of the mirror neuron system in humans and animals and on the coding of intentions have led to the development of more realistic and powerful models of action understanding and imitation. This paper describes the implementation on a humanoid robot of a spiking neuron model of the mirror system. The proposed architecture is validated in an imitation task where the robot has to observe and understand manipulative action sequences executed by a human demonstrator and reproduce them on demand utilizing its own motor repertoire. To instruct the robot what to observe and to learn, and when to imitate, the demonstrator utilizes a simple form of sign language. Two basic principles underlie the functioning of the system: 1) imitation is primarily directed toward reproducing the goals of observed actions rather than the exact hand trajectories; and 2) the capacity to understand the motor intentions of another individual is based on the resonance of the same neural populations that are active during action execution. Experimental findings show that the use of even a very simple form of gesture-based communication makes it possible to develop robotic architectures that are efficient, simple, and user-friendly.
Full Text from IEEE: PDF (1338KB); Contact the author by email for a copy.
Abstract: One of the foundations of social interaction among humans is the ability to correctly identify interactions and infer the intentions of others. To build robots that reliably function in the human social world, we must develop models that robots can use to mimic the intent recognition skills found in humans. We propose a framework that uses contextual information in the form of object affordances and object state to improve the performance of an underlying intent recognition system. This system represents objects and their affordances using a directed graph that is automatically extracted from a large corpus of natural language text. We validate our approach on a physical robot that classifies intentions in a number of scenarios.
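The contextual mechanism described above can be sketched simply: an affordance graph maps each object to the actions it affords, and the scores of an underlying intent classifier are boosted for intents compatible with the observed object. The graph entries and the boost factor below are illustrative assumptions; in the paper the graph is extracted automatically from a large text corpus.

```python
# Toy affordance graph: object -> afforded actions (hypothetical entries).
AFFORDANCES = {
    "cup": {"drink", "pour", "hand_over"},
    "knife": {"cut", "hand_over"},
    "ball": {"throw", "roll"},
}

def rerank(intent_scores, observed_object, boost=2.0):
    """Boost intents compatible with the observed object's affordances,
    then renormalize to a probability distribution."""
    afforded = AFFORDANCES.get(observed_object, set())
    boosted = {intent: score * (boost if intent in afforded else 1.0)
               for intent, score in intent_scores.items()}
    total = sum(boosted.values())
    return {intent: score / total for intent, score in boosted.items()}
```

For example, if the base classifier slightly prefers "throw" but the person is holding a cup, the affordance context shifts the top hypothesis to "drink".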
Full Text from IEEE: PDF (1075KB); Contact the author by email for a copy.
Abstract: Game theory has been useful for understanding risk-taking and cooperative behavior. However, in studies of the neural basis of decision-making during games of conflict, subjects typically play against opponents with predetermined strategies. The present study introduces a neurobiologically plausible model of action selection and neuromodulation, which adapts to its opponent's strategy and environmental conditions. The model is based on the assumption that dopaminergic and serotonergic systems track expected rewards and costs, respectively. The model controlled both simulated and robotic agents playing Hawk-Dove and Chicken games against subjects. When playing against an aggressive version of the model, there was a significant shift in the subjects' strategy from Win-Stay-Lose-Shift to Tit-For-Tat. Subjects became retaliatory when confronted with agents that tended towards risky behavior. These results highlight the important interactions between subjects and agents utilizing adaptive behavior. Moreover, they reveal neuromodulatory mechanisms that give rise to cooperative and competitive behaviors.
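The core assumption above, that dopaminergic and serotonergic systems track expected rewards and costs respectively, can be sketched as a toy adaptive agent: two running traces are updated from game outcomes, and their balance sets the probability of escalating. The payoff values, learning rule, and sigmoid readout are illustrative assumptions, not the paper's neural model.

```python
import math
import random

class NeuromodAgent:
    """Toy action-selection sketch (not the paper's model): a dopamine-like
    trace tracks the expected reward of escalating (playing Hawk) and a
    serotonin-like trace tracks its expected cost; their balance sets the
    probability of playing Hawk on the next round."""

    def __init__(self, lr=0.2):
        self.lr = lr
        self.reward = 0.0  # dopamine-like expected-reward trace
        self.cost = 0.0    # serotonin-like expected-cost trace

    def act(self):
        drive = self.reward - self.cost
        p_hawk = 1.0 / (1.0 + math.exp(-drive))
        return "hawk" if random.random() < p_hawk else "dove"

    def learn(self, opp_move):
        # Illustrative Hawk-Dove outcomes for escalating: win the resource
        # (V = 4) against a Dove; share it but pay a fight cost against a Hawk.
        r, c = (4.0, 0.0) if opp_move == "dove" else (2.0, 3.0)
        self.reward += self.lr * (r - self.reward)
        self.cost += self.lr * (c - self.cost)
```

Against a persistently aggressive opponent the cost trace comes to dominate and the agent turns dovish; against a yielding opponent the reward trace dominates and it escalates, mirroring the adaptive shifts the study observes.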
Full Text from IEEE: PDF (1789KB); Contact the author by email for a copy.
Abstract: Robots should be capable of interacting in a cooperative and adaptive manner with their human counterparts in open-ended tasks that can change in real-time. An important aspect of the robot behavior will be the ability to acquire new knowledge of the cooperative tasks by observing and interacting with humans. The current research addresses this challenge. We present results from a cooperative human-robot interaction system that has been specifically developed for portability between different humanoid platforms, through abstraction layers at the perceptual and motor interfaces. In the perceptual domain, the resulting system is demonstrated to learn to recognize objects and to recognize actions as sequences of perceptual primitives, and to transfer this learning, and recognition, between different robotic platforms. For execution, composite actions and plans are shown to be learnt on one robot and executed successfully on a different one. Most importantly, the system provides the ability to link actions into shared plans that form the basis of human-robot cooperation, applying principles from human cognitive development to the domain of robot cognitive systems.
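The portability idea above can be sketched minimally: a shared plan is stored as a platform-independent sequence of action primitives, and each robot executes it through its own motor abstraction layer. The class and primitive names below are illustrative, not the paper's API.

```python
# Hypothetical sketch: one abstract plan, two platform-specific motor layers.
SHARED_PLAN = [("grasp", "ball"), ("carry", "box"), ("release", "ball")]

class PlatformAMotorLayer:
    """Motor abstraction layer for one humanoid platform (illustrative)."""
    def execute(self, primitive, obj):
        return f"A:{primitive}:{obj}"

class PlatformBMotorLayer:
    """The same primitives mapped onto a different platform's motors."""
    def execute(self, primitive, obj):
        return f"B:{primitive}:{obj}"

def run_plan(motor_layer, plan):
    # The plan itself never changes: only the motor layer is swapped
    # when moving the learned behavior to another robot.
    return [motor_layer.execute(primitive, obj) for primitive, obj in plan]
```

A plan learned on one platform can thus be replayed on another simply by binding it to that platform's motor layer, which is the abstraction-layer principle the abstract describes.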
Full Text from IEEE: PDF (2059KB); Contact the author by email for a copy.