IEEE Transactions on Autonomous Mental Development

Homepage with submission instructions: http://cis.ieee.org/ieee-transactions-on-autonomous-mental-development.html
Submission website: http://mc.manuscriptcentral.com/tamd-ieee

Table of Contents

Volume: 3 Issue: 1 Date: March 2011

Link: http://ieeexplore.ieee.org/xpl/tocresult.jsp?isnumber=5729985

(Previous issue: Vol. 2, No. 4, December 2010)


Editorial: Healthy and Prosperous Development
Zhang, Z.
Page(s): 1-2
Digital Object Identifier: 10.1109/TAMD.2011.2117490

Full Text from IEEE: PDF (256 KB); from the author: PDF; Contact the author by email


The Impact of Participants' Beliefs on Motor Interference and Motor Coordination in Human-Humanoid Interactions
Shen, Q.; Kose-Bagci, H.; Saunders, J.; Dautenhahn, K.
Page(s): 6-16
Digital Object Identifier: 10.1109/TAMD.2010.2089790

Abstract: This study compared the motor interference and motor coordination responses of human participants interacting with three different types of visual stimuli: a humanoid robot, a pendulum, and a virtual moving dot. The responses indicated that participants' beliefs about the engagement of the robot affected the elicitation of motor interference effects. Together with research supporting the importance of other elements of robot appearance and behavior, such as bottom-up effects and biological motion profiles, we hypothesize that it may be the overall perception of a robot as a "social entity" (here, "overall perception" means the human observer's overall perception of the robot in terms of appearance, motion, and the observer's beliefs), rather than any individual appearance or motion feature, that is critical to eliciting the interference effect in human-humanoid interaction. Moreover, the motor coordination responses indicated that participants tended to synchronize with agents with better overall perception, which is generally in line with the above hypothesis. Based on the results of this experimental study, the authors suggest that a humanoid robot with a good overall perception as a "social entity" may facilitate "engaging" interactions with a human.

Full Text from IEEE: PDF (1073 KB); Contact the author by email
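
As an illustrative aside: motor interference is commonly quantified in this literature as movement variance orthogonal to the instructed movement direction. The sketch below shows that metric on synthetic trajectories; the array shapes and the specific variance measure are assumptions for illustration, not the authors' analysis pipeline.

```python
import numpy as np

def interference_index(trajectory, movement_axis):
    """Variance of hand positions orthogonal to the instructed
    movement axis -- a common proxy for motor interference."""
    movement_axis = movement_axis / np.linalg.norm(movement_axis)
    along = trajectory @ movement_axis            # component on the axis
    orthogonal = trajectory - np.outer(along, movement_axis)
    return float(np.var(orthogonal, axis=0).sum())

# Hypothetical data: a wavier trajectory yields a larger index.
t = np.linspace(0.0, 1.0, 200)
straight = np.c_[t, np.zeros_like(t), np.zeros_like(t)]
wavy = np.c_[t, 0.02 * np.sin(12 * np.pi * t), np.zeros_like(t)]
axis = np.array([1.0, 0.0, 0.0])
print(interference_index(straight, axis) < interference_index(wavy, axis))  # True
```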

Integration of Speech and Action in Humanoid Robots: iCub Simulation Experiments
Tikhanoff, V.; Cangelosi, A.; Metta, G.
Page(s): 17-29
Digital Object Identifier: 10.1109/TAMD.2010.2100390

Abstract: Building intelligent systems with human-level competence is the ultimate grand challenge for science and technology in general, and for cognitive developmental robotics in particular. This paper proposes a new approach to the design of cognitive skills in a robot able to interact with, and communicate about, the surrounding physical world and to manipulate objects in an adaptive manner. The work is based on robotic simulation experiments showing that a humanoid robot (the iCub platform) is able to acquire behavioral, cognitive, and linguistic skills through individual and social learning. The robot learns to handle and manipulate objects autonomously, to understand basic instructions, and to adapt its abilities to changes in internal and environmental conditions.

Full Text from IEEE: PDF (1731 KB); Contact the author by email
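
As a heavily simplified illustration of the instruction learning the paper describes (the vocabulary, action names, and single softmax layer are all toy assumptions, not the iCub experiments' actual models):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy vocabulary and motor primitives (illustrative names only).
words = ["touch", "push", "lift"]
actions = ["reach", "shove", "grasp_up"]

# One-hot word input -> action label via a single softmax layer.
X = np.eye(len(words))
y = np.arange(len(actions))           # word i names action i
W = rng.normal(scale=0.1, size=(len(words), len(actions)))

for _ in range(500):                  # plain gradient descent
    logits = X @ W
    p = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
    W -= X.T @ (p - np.eye(len(actions))[y]) / len(y)

heard = np.eye(len(words))[words.index("push")]
print(actions[int(np.argmax(heard @ W))])   # -> "shove"
```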

Using the Rhythm of Nonverbal Human-Robot Interaction as a Signal for Learning
Andry, P.; Blanchard, A.; Gaussier, P.
Page(s): 30-42
Digital Object Identifier: 10.1109/TAMD.2010.2097260

Abstract: Human-robot interaction is a key issue in building robots for everyone. The difficulty people have in understanding how robots work and how they must be controlled will be one of the main limits to widespread robotics. In this paper, we study a new way of interacting with robots that requires neither understanding how robots work nor giving them explicit instructions. This work is based on psychological data showing that synchronization and rhythm are very important features of pleasant interaction. We propose a biologically inspired architecture that uses rhythm detection to build an internal reward for learning. After showing the results of keyboard interactions, we present and discuss the results of real human-robot interactions (with Aibo and Nao). We show that our minimalist control architecture allows the discovery and learning of arbitrary sensorimotor association games with expert users. With nonexpert users, we show that the rhythm information alone is not sufficient for learning all the associations, due to the different strategies used by the humans. Nevertheless, this last experiment shows that rhythm still allows the discovery of subsets of associations, making it a promising signal for tomorrow's social applications.

Full Text from IEEE: PDF (1238 KB); Contact the author by email
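
The core idea, turning interaction rhythm into an internal reward, can be sketched in a few lines: regular inter-event intervals yield a reward near one, a broken rhythm yields much less. The coefficient-of-variation form below is an illustrative assumption, not the authors' formula.

```python
import numpy as np

def rhythm_reward(event_times, window=6):
    """Internal reward in [0, 1]: high when recent inter-event
    intervals are regular, low when the rhythm breaks down."""
    if len(event_times) < 3:
        return 0.0
    intervals = np.diff(event_times[-window:])
    cv = np.std(intervals) / (np.mean(intervals) + 1e-9)
    return float(np.exp(-3.0 * cv))

steady = [0.0, 0.5, 1.0, 1.5, 2.0, 2.5]
broken = [0.0, 0.5, 0.7, 1.9, 2.0, 3.4]
print(rhythm_reward(steady))   # close to 1.0
print(rhythm_reward(broken))   # much lower
```

In the paper's architecture, such an internal reward drives the learning of sensorimotor associations rather than being used on its own.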

Implicit Sensorimotor Mapping of the Peripersonal Space by Gazing and Reaching
Chinellato, E.; Antonelli, M.; Grzyb, B.J.; del Pobil, A.P.
Page(s): 43-53
Digital Object Identifier: 10.1109/TAMD.2011.2106781

Abstract: Primates often perform coordinated eye and arm movements, contextually fixating and reaching towards nearby objects. This combination of looking and reaching to the same target is used by infants to establish an implicit visuomotor representation of the peripersonal space, useful for both oculomotor and arm motor control. In this work, taking inspiration from such behavior and from primate visuomotor mechanisms, a shared sensorimotor map of the environment, built on a radial basis function framework, is configured and trained by the coordinated control of eye and arm movements. Computational results confirm that the approach is especially suitable for the problem at hand and for implementation on a real humanoid robot. Through exploratory gazing and reaching actions, either free or goal-based, the artificial agent learns to perform direct and inverse transformations between stereo vision, oculomotor, and joint-space representations. The integrated sensorimotor map, which allows the peripersonal space to be represented contextually through different vision and motor parameters, is never made explicit; rather, it emerges through the interaction of the agent with the environment.

Full Text from IEEE: PDF (1092 KB); Contact the author by email
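
The kind of radial basis function mapping the abstract describes can be sketched as follows: "motor babbling" generates paired joint and position samples, and an RBF regression learns the inverse map from position back to joints. The two-joint planar arm and the Gaussian least-squares fit are illustrative assumptions, not the paper's actual eye-arm configuration.

```python
import numpy as np

rng = np.random.default_rng(1)

def forward(q):
    """Toy two-joint planar arm: joint angles -> hand position."""
    return np.c_[np.cos(q[:, 0]) + np.cos(q[:, 0] + q[:, 1]),
                 np.sin(q[:, 0]) + np.sin(q[:, 0] + q[:, 1])]

def rbf_features(x, centers, width=0.5):
    d2 = ((x[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * width ** 2))

# 'Motor babbling': random joint configurations and where they land.
q = rng.uniform(0.0, np.pi / 2, size=(400, 2))
x = forward(q)

# Inverse map (target position -> joint angles) as RBF regression.
centers = rng.uniform(x.min(0), x.max(0), size=(40, 2))
W, *_ = np.linalg.lstsq(rbf_features(x, centers), q, rcond=None)

# Reach a new target: predict joints, check where the hand lands.
target = forward(np.array([[0.7, 0.9]]))
q_pred = rbf_features(target, centers) @ W
print(np.linalg.norm(forward(q_pred) - target))  # small reaching error
```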

Towards an Understanding of Hierarchical Architectures
Goerick, C.
Page(s): 54-63
Digital Object Identifier: 10.1109/TAMD.2010.2089982

Abstract: Cognitive systems research aims to understand how cognitive abilities can be created in artificial systems. One key issue is the architecture of the system: it organizes the interplay between the different system elements and thus determines the principal limits on the performance of the system. In this contribution, we focus on important properties of hierarchical cognitive systems. To this end, we first present a framework for modeling hierarchical systems. Based on this framework, we formulate and discuss some crucial issues that should be treated explicitly in the design of a system. On this basis, we analyze and compare several well-established cognitive architectures with respect to their internal structure.

Full Text from IEEE: PDF (793 KB); Contact the author by email
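
A generic caricature of such a hierarchy (not Goerick's formal framework) is a stack of levels, each transforming bottom-up input under top-down modulation from above:

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Level:
    """One level of a hierarchy: transforms bottom-up input,
    modulated by a top-down signal from the level above."""
    process: Callable[[float, float], float]
    top_down: float = 0.0

def run_hierarchy(levels: List[Level], sensor: float) -> float:
    signal = sensor
    for lvl in levels:                      # bottom-up pass
        signal = lvl.process(signal, lvl.top_down)
    return signal

# Lower level scales the signal; upper level thresholds it with a bias.
levels = [Level(lambda x, td: 2.0 * x + td),
          Level(lambda x, td: float(x > 1.0) + td, top_down=0.1)]
print(run_hierarchy(levels, sensor=0.6))    # 1.1
```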

Cognitive Development in Partner Robots for Information Support to Elderly People
Yorita, A.; Kubota, N.
Page(s): 64-73
Digital Object Identifier: 10.1109/TAMD.2011.2105868

Abstract: This paper discusses an utterance system based on the associative memory of partner robots, developed through interaction with people. Gesture-based human interaction is quite important to natural communication, and the meaning of gestures can be understood through intentional interactions with a human. We therefore propose a method for associative learning based on intentional interaction and conversation that can realize such natural communication. A steady-state genetic algorithm (SSGA) is applied to detect human faces and objects via image processing. Spiking neural networks are applied to memorize the spatio-temporal patterns of human hand motions and the various relationships among the perceptual information that is conveyed. The experimental results show that the proposed method can refine the relationships among this varied perceptual information and thereby support natural communication with a human. We also present methods for assisting memory and assessing a human's state.

Full Text from IEEE: PDF (996 KB); Contact the author by email
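
For reference, the defining feature of a steady-state GA is that each iteration produces a single offspring that replaces the worst individual, rather than rebuilding the whole population generationally. A minimal sketch on a toy objective (the paper applies SSGA to face and object detection, not to this function):

```python
import numpy as np

rng = np.random.default_rng(2)

def ssga(fitness, dim, pop_size=30, iters=2000, mut=0.1):
    """Steady-state GA: each iteration breeds one offspring that
    replaces the current worst individual if it is fitter."""
    pop = rng.uniform(-1.0, 1.0, size=(pop_size, dim))
    fit = np.array([fitness(p) for p in pop])
    for _ in range(iters):
        a, b = rng.choice(pop_size, size=2, replace=False)
        child = np.where(rng.random(dim) < 0.5, pop[a], pop[b])  # uniform crossover
        child = child + rng.normal(scale=mut, size=dim)          # Gaussian mutation
        f = fitness(child)
        worst = int(np.argmin(fit))
        if f > fit[worst]:
            pop[worst], fit[worst] = child, f
    return pop[int(np.argmax(fit))]

# Toy objective with its peak at (0.3, -0.2).
best = ssga(lambda p: -np.sum((p - np.array([0.3, -0.2])) ** 2), dim=2)
print(best)   # close to [0.3, -0.2]
```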

Dynamic Neural Fields as Building Blocks of a Cortex-Inspired Architecture for Robotic Scene Representation
Zibner, S.K.U.; Faubel, C.; Iossifidis, I.; Schöner, G.
Page(s): 74-91
Digital Object Identifier: 10.1109/TAMD.2011.2109714

Abstract: Based on the concepts of dynamic field theory (DFT), we present an architecture that autonomously generates scene representations by controlling gaze and attention, creating visual objects in the foreground, tracking objects, reading them into working memory, and taking into account their visibility. At the core of this architecture are three-dimensional dynamic neural fields (DNFs) that link feature to spatial information. These three-dimensional fields couple into lower dimensional fields, which provide the links to the sensory surface and to the motor systems. We discuss how DNFs can be used as building blocks for cognitive architectures, characterize the critical bifurcations in DNFs, as well as the possible coupling structures among DNFs. In a series of robotic experiments, we demonstrate how the DNF architecture provides the core functionalities of a scene representation.

Full Text from IEEE: PDF (1260 KB); Contact the author by email
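
A one-dimensional Amari-style dynamic neural field conveys the core mechanism: local excitation and broader inhibition let a localized input drive a self-stabilized activity peak. The paper's fields are three-dimensional and coupled; the parameters below are illustrative.

```python
import numpy as np

# 1-D Amari-style dynamic neural field:
#   tau * du/dt = -u + h + s(x) + integral w(x - x') f(u(x')) dx'
n, dx, tau, h = 200, 0.1, 10.0, -2.0
x = np.arange(n) * dx

def kernel(d, a_exc=2.0, s_exc=0.5, a_inh=1.0, s_inh=1.5):
    """Mexican-hat interaction: local excitation, broader inhibition."""
    return (a_exc * np.exp(-d**2 / (2 * s_exc**2))
            - a_inh * np.exp(-d**2 / (2 * s_inh**2)))

w = kernel(x - x[n // 2])
u = np.full(n, h)                          # field at resting level
s = 3.0 * np.exp(-(x - 8.0)**2 / 0.5)      # localized input at x = 8

for _ in range(500):                       # Euler integration
    f = 1.0 / (1.0 + np.exp(-4.0 * u))     # sigmoid output nonlinearity
    interaction = np.convolve(f, w, mode="same") * dx
    u += (-u + h + s + interaction) / tau

print(x[np.argmax(u)])                     # self-stabilized peak near 8.0
```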

Visual Attention for Robotic Cognition: A Survey
Begum, M.; Karray, F.
Page(s): 92-105
Digital Object Identifier: 10.1109/TAMD.2010.2096505

Abstract: The goal of cognitive robotics research is to design robots with human-like cognition (albeit of reduced complexity) in perception, reasoning, action planning, and decision making. This venture has produced robots with a redundant number of sensors and actuators in order to perceive the world and act upon it in a human-like fashion. A major challenge in dealing with these robots is managing the enormous amount of information continuously arriving through their multiple sensors. Primates master this information-management skill through their custom-built attention mechanism. Mimicking the attention behavior of primates has therefore gained tremendous popularity in robotics research in recent years (Bar-Cohen, Biologically Inspired Intelligent Robots, 2003; B. Webb, Biorobotics, 2003). The difficulty of redundant information management, however, is most severe in the case of robotic visual perception: even a moderate-size image of a natural scene generally contains enough visual information to overload the online decision-making process of an autonomous robot. Modeling a primate-like visual attention mechanism for robots is therefore becoming more popular among robotics researchers. A visual attention model enables a robot to selectively (and autonomously) choose a "behaviorally relevant" segment of visual information for further processing while relatively excluding the rest. This paper sheds light on the ongoing journey of robotics research toward a visual attention model that will serve as a component of cognition in modern-day robots.

Full Text from IEEE: PDF (686 KB); Contact the author by email
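
A center-surround saliency computation of the kind the surveyed models build on can be sketched as follows (schematically following Itti and Koch; the box-blur stand-in for Gaussian pyramids is a simplification):

```python
import numpy as np

def blur(img, k):
    """Separable box blur of half-width k (a cheap stand-in for the
    Gaussian pyramids used in the classic Itti-Koch model)."""
    kern = np.ones(2 * k + 1) / (2 * k + 1)
    out = np.apply_along_axis(lambda r: np.convolve(r, kern, "same"), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, kern, "same"), 0, out)

def saliency(intensity):
    """Center-surround contrast: fine scale minus coarse scale."""
    return np.abs(blur(intensity, 2) - blur(intensity, 8))

# A small bright blob on a dark background pops out as most salient.
img = np.zeros((64, 64))
img[30:34, 40:44] = 1.0
s = saliency(img)
print(np.unravel_index(int(np.argmax(s)), s.shape))  # near (31, 41)
```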