IEEE Transactions on Autonomous Mental Development 

Homepage with submission instructions: http://cis.ieee.org/ieee-transactions-on-autonomous-mental-development.html
Submission website: http://mc.manuscriptcentral.com/tamd-ieee

Table of Contents

Volume: 2 Issue: 3 Date: September 2010

Link: http://ieeexplore.ieee.org/xpl/tocresult.jsp?isnumber=5568771

(Previous issue: Vol. 2, No. 2, June 2010)

Spatio-Temporal Multimodal Developmental Learning
Zhang, Y.; Weng, J.;
Page(s): 149-166
Digital Object Identifier 10.1109/TAMD.2010.2051437

Abstract: It is elusive how the skull-enclosed brain enables spatio-temporal multimodal developmental learning. By multimodal, we mean that the system has at least two sensory modalities, e.g., visual and auditory in our experiments. By spatio-temporal, we mean that the behavior of the system depends not only on the spatial pattern in the current sensory inputs, but also on those of the recent past. Traditional machine learning requires humans to train every module on hand-transcribed data, to handcraft the symbols passed among modules, and to hand-link the modules internally. Such a system is limited by a static set of symbols and static module performance. A key characteristic of developmental learning is that the "brain" is "skull-closed" after birth, that is, not directly manipulable by the system designer, so that the system can continue to learn incrementally without the need for reprogramming. In this paper, we propose an architecture for multimodal developmental learning: parallel modality pathways, all situated between the sensory end and the motor end. Motor signals are not only used as output behaviors, but are also fed back as part of the input to all related pathways. For example, the proposed developmental learning does not use silence as cut points for speech processing, or static points of motion as key frames for visual processing.

Full Text from IEEE: PDF (2634 KB); Contact the author by email
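
Illustration: the key architectural claim, that motor signals feed back as input to every modality pathway, can be sketched in a few lines. This is a minimal toy in NumPy with random placeholder weights, not the authors' implementation; all names and dimensions are assumptions.

# Sketch of the paper's architectural idea: each modality pathway maps
# (sensory input, previous motor signal) -> features, and the motor layer
# integrates all pathways. All names and dimensions are illustrative.
import numpy as np

rng = np.random.default_rng(0)

DIM_VISION, DIM_AUDIO, DIM_MOTOR, DIM_FEAT = 64, 32, 8, 16

# Hypothetical fixed random mappings standing in for learned pathways.
W_vision = rng.standard_normal((DIM_FEAT, DIM_VISION + DIM_MOTOR))
W_audio  = rng.standard_normal((DIM_FEAT, DIM_AUDIO + DIM_MOTOR))
W_motor  = rng.standard_normal((DIM_MOTOR, 2 * DIM_FEAT))

def step(vision, audio, prev_motor):
    """One spatio-temporal step: the motor context joins every pathway's
    input, so behavior depends on the recent past, not just the current
    sensory frame."""
    v_feat = np.tanh(W_vision @ np.concatenate([vision, prev_motor]))
    a_feat = np.tanh(W_audio  @ np.concatenate([audio,  prev_motor]))
    motor  = np.tanh(W_motor  @ np.concatenate([v_feat, a_feat]))
    return motor

motor = np.zeros(DIM_MOTOR)
for _ in range(5):  # process a short sensory stream without cut points
    motor = step(rng.standard_normal(DIM_VISION),
                 rng.standard_normal(DIM_AUDIO), motor)

Because the motor signal re-enters every pathway, the stream can be processed continuously; no silence or motionless frame is needed to segment it.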

Integration of Action and Language Knowledge: A Roadmap for Developmental Robotics
Cangelosi, A.; Metta, G.; Sagerer, G.; Nolfi, S.; Nehaniv, C.; Fischer, K.; Tani, J.; Belpaeme, T.; Sandini, G.; Nori, F.; Fadiga, L.; Wrede, B.; Rohlfing, K.; Tuci, E.; Dautenhahn, K.; Saunders, J.; Zeschel, A.;
Page(s): 167-195
Digital Object Identifier 10.1109/TAMD.2010.2053034

Abstract: This position paper proposes that the study of embodied cognitive agents, such as humanoid robots, can advance our understanding of the cognitive development of complex sensorimotor, linguistic, and social learning skills. This in turn will benefit the design of cognitive robots capable of learning to handle and manipulate objects and tools autonomously, to cooperate and communicate with other robots and humans, and to adapt their abilities to changing internal, environmental, and social conditions. Four key areas of research challenges are discussed, specifically for the issues related to the understanding of: 1) how agents learn and represent compositional actions; 2) how agents learn and represent compositional lexica; 3) the dynamics of social interaction and learning; and 4) how compositional action and language representations are integrated to bootstrap the cognitive system. The review of specific issues and progress in these areas is then translated into a practical roadmap based on a series of milestones. These milestones provide a possible set of cognitive robotics goals and test scenarios, thus acting as a research roadmap for future work on cognitive developmental robotics.

Full Text from IEEE: PDF (428 KB); Contact the author by email

Top-Down Gaze Movement Control in Target Search Using Population Cell Coding of Visual Context
Miao, J.; Qing, L.; Zou, B.; Duan, L.; Gao, W.;
Page(s): 196-215
Digital Object Identifier 10.1109/TAMD.2010.2053365

Abstract: Visual context plays an important role in humans' top-down gaze movement control for target searching. Exploring the mental development mechanism in terms of incremental visual context encoding by population cells is therefore an interesting issue. This paper presents a biologically inspired computational model that uses visual contextual cues for top-down eye-movement control when searching for targets in images. We propose a population cell coding mechanism for visual context encoding and decoding. The model is implemented in a neural network system, in which a developmental learning mechanism is simulated by dynamically generating new coding neurons to incrementally encode visual context during training. The encoded context is decoded with population neurons in a top-down mode, which allows the model to direct gaze motion to the centers of the targets. The model was designed to pursue a low encoding quantity and high target-locating accuracy. Its performance has been evaluated in a set of experiments searching for different facial objects in a human face image set. Theoretical analysis and experimental results show that the proposed visual context encoding algorithm, which requires no weight updating, is fast, efficient, and stable, and that population-cell coding generally performs better than single-cell coding and k-nearest-neighbor (k-NN)-based coding.

Full Text from IEEE: PDF (2440 KB); Contact the author by email
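
Illustration: one way to read the abstract's mechanism, coding neurons generated incrementally without weight updating and decoded as a population, is the toy sketch below. The coverage radius, Gaussian tuning curves, and toy targets are illustrative assumptions, not details from the paper.

# Illustrative incremental population coding: neurons store (context, target)
# pairs; decoding averages the targets of all responding neurons.
import numpy as np

class PopulationCoder:
    def __init__(self, radius=0.5):
        self.radius = radius            # assumed coverage radius per neuron
        self.contexts, self.targets = [], []

    def train(self, context, target):
        """Generate a new coding neuron only when no existing neuron covers
        the current context; existing weights are never updated."""
        if not self.contexts or min(
                np.linalg.norm(context - c) for c in self.contexts) > self.radius:
            self.contexts.append(context)
            self.targets.append(target)

    def decode(self, context):
        """Population decoding: response-weighted average of neuron targets."""
        d = np.array([np.linalg.norm(context - c) for c in self.contexts])
        w = np.exp(-(d / self.radius) ** 2)   # Gaussian-like tuning curves
        return (w[:, None] * np.array(self.targets)).sum(0) / w.sum()

rng = np.random.default_rng(1)
coder = PopulationCoder()
for _ in range(200):                    # contexts paired with gaze targets
    ctx = rng.uniform(0, 1, 8)
    coder.train(ctx, ctx[:2])           # toy target: first two coordinates
print(len(coder.contexts), coder.decode(rng.uniform(0, 1, 8)))

Training is a single coverage test per sample, which is why such no-weight-update encoding is fast and stable; accuracy then rests on the population average rather than on any single cell.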

Goal Babbling Permits Direct Learning of Inverse Kinematics
Rolf, M.; Steil, J.J.; Gienger, M.;
Page(s): 216-229
Digital Object Identifier 10.1109/TAMD.2010.2062511

Abstract: We present an approach to learn the inverse kinematics of redundant systems without prior or expert knowledge. The method allows for iterative bootstrapping and refinement of the inverse kinematics estimate. The essential novelty lies in a path-based sampling approach: we generate training data along paths that result from executing the currently learned estimate along a desired path towards a goal. The information structure thereby induced enables efficient detection and resolution of inconsistent samples solely from directly observable data. We derive and illustrate the exploration and learning process with a low-dimensional kinematic example that provides direct insight into the bootstrapping process. We further show that the method scales to high-dimensional problems, such as the Honda humanoid robot or hyperredundant planar arms with up to 50 degrees of freedom.

Full Text from IEEE: PDF (1543 KB); Contact the author by email
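
Illustration: the path-based sampling loop can be sketched on a planar 2-DOF arm. A simple global linear model stands in for the paper's learner, and the home posture, noise level, and path discretization are illustrative assumptions; the point is only the bootstrapping cycle of executing the current estimate along goal-directed paths and refitting it on the observed outcomes.

# Sketch of goal babbling: execute the current inverse estimate along paths
# toward goals, observe where the arm actually ends up, and refit the
# estimate on those (hand position -> joint angles) samples.
import numpy as np

def forward(q):                       # planar 2-link arm, unit link lengths
    return np.array([np.cos(q[0]) + np.cos(q[0] + q[1]),
                     np.sin(q[0]) + np.sin(q[0] + q[1])])

rng = np.random.default_rng(2)
home = np.array([0.5, 0.5])                        # known home posture (assumed)
A = np.column_stack([np.zeros((2, 2)), home])      # initial estimate: always go home
X, Q = [], []                                      # accumulated training samples

for epoch in range(30):
    goal = rng.uniform(-1.5, 1.5, 2)
    for t in np.linspace(0, 1, 20):                # path from home toward the goal
        x_des = (1 - t) * forward(home) + t * goal
        q = A @ np.append(x_des, 1.0) + rng.normal(0, 0.05, 2)  # exploratory noise
        X.append(np.append(forward(q), 1.0))       # directly observable outcome
        Q.append(q)
    # Refit the linear inverse estimate q = A @ [x, 1] on all samples so far.
    A = np.linalg.lstsq(np.array(X), np.array(Q), rcond=None)[0].T

goal = np.array([1.0, 0.8])
print(forward(A @ np.append(goal, 1.0)))           # should move toward the goal

A global linear map cannot represent the full inverse kinematics, so this sketch only illustrates the sampling structure; the paper's learner and its consistency mechanism are what make the estimate accurate on redundant systems.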

Formal Theory of Creativity, Fun, and Intrinsic Motivation (1990–2010)
Schmidhuber, J.;
Page(s): 230-247
Digital Object Identifier 10.1109/TAMD.2010.2056368

Abstract: The simple but general formal theory of fun, intrinsic motivation, and creativity (1990–2010) is based on the concept of maximizing intrinsic reward for the active creation or discovery of novel, surprising patterns allowing for improved prediction or data compression. It generalizes the traditional field of active learning, and is related to old but less formal ideas in aesthetics theory and developmental psychology. It has been argued that the theory explains many essential aspects of intelligence including autonomous development, science, art, music, and humor. This overview first describes theoretically optimal (but not necessarily practical) ways of implementing the basic computational principles on exploratory, intrinsically motivated agents or robots, encouraging them to provoke event sequences exhibiting previously unknown, but learnable algorithmic regularities. Emphasis is put on the importance of limited computational resources for online prediction and compression. Discrete and continuous time formulations are given. Previous practical but nonoptimal implementations (1991, 1995, and 1997–2002) are reviewed, as well as several recent variants by others (2005–2010). A simplified typology addresses current confusion concerning the precise nature of intrinsic motivation.

Full Text from IEEE: PDF (349 KB); Contact the author by email
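
Illustration: the theory's central quantity, intrinsic reward as improvement of the agent's predictor or compressor, reduces to a few lines. The three observation "channels", learning rates, and epsilon-greedy chooser below are illustrative assumptions, not the paper's formal agents; they only show why learnable novelty attracts attention while mastered patterns and pure noise do not.

# Minimal decision-making instance of the theory's core idea: intrinsic
# reward is learning progress (the drop in prediction error on fresh data),
# so the agent seeks novel-but-learnable patterns and is bored both by what
# it has mastered and by incompressible noise.
import numpy as np

rng = np.random.default_rng(3)

def observe(ch):
    # 0: already-predicted constant, 1: novel constant, 2: pure noise
    return [1.0, 0.7, rng.uniform()][ch]

pred = np.array([1.0, 0.0, 0.5])       # channel 0 starts out mastered
last_err = np.full(3, np.nan)
avg_progress = np.full(3, 0.1)         # optimistic: everything looks promising
counts = np.zeros(3, dtype=int)

for t in range(150):
    # epsilon-greedy choice of the channel promising the most progress
    ch = (int(np.argmax(avg_progress)) if rng.uniform() > 0.1
          else int(rng.integers(3)))
    x = observe(ch)
    err = (x - pred[ch]) ** 2          # test the predictor on the new datum
    pred[ch] += 0.3 * (x - pred[ch])   # then improve it (one compression step)
    if np.isfinite(last_err[ch]):
        r = last_err[ch] - err         # intrinsic reward = learning progress
        avg_progress[ch] += 0.2 * (r - avg_progress[ch])
    last_err[ch] = err
    counts[ch] += 1

print(counts)  # the novel-but-learnable channel draws the most attention

Note that the reward is the change in error, not the error itself: raw error would lure the agent to unpredictable noise, whereas progress rewards only regularities it can actually learn.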

Top-Down Connections in Self-Organizing Hebbian Networks: Topographic Class Grouping
Luciw, M.; Weng, J.;
Page(s): 248-261
Digital Object Identifier 10.1109/TAMD.2010.2072150

Abstract: We investigate the effects of top-down input connections from a later layer to an earlier layer in a biologically inspired network. The incremental learning method combines optimal Hebbian learning for stable feature extraction, competitive lateral inhibition for sparse coding, and neighborhood-based self-organization for topographic map generation. The computational studies reported indicate that top-down connections encourage features that reduce uncertainty at the lower layer with respect to the features in the higher layer, enable relevant information to be uncovered at the lower layer so that irrelevant information can preferentially be discarded [a necessary property for autonomous mental development (AMD)], and cause topographic class grouping. Class groups have been observed in cortex, e.g., in the fusiform face area and parahippocampal place area. This paper presents the first computational account, as far as we know, that explains these three phenomena with a single biologically inspired network. Visual recognition experiments show that top-down-enabled networks reduce error rates for limited network sizes, show class grouping, and can refine lower layer representations after new conceptual information is learned. These findings may shed light on how the brain self-organizes cortical areas, and may contribute to a computational understanding of how autonomous agents might build and maintain an organized internal representation over a lifetime of experiences.

Full Text from IEEE: PDF (1397 KB); Contact the author by email
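
Illustration: the abstract's ingredients can be sketched in a toy network where a top-down class signal biases which neurons win. The amnesic learning rate and top-K inhibition follow the general style of such networks, but every size, rate, and the Gaussian toy data here are assumptions, not the paper's specification, and the neighborhood-based self-organization is omitted for brevity.

# Sketch of three mechanisms from the abstract: Hebbian-style updates,
# top-K lateral inhibition (sparse coding), and a top-down class signal
# added to the bottom-up drive.
import numpy as np

rng = np.random.default_rng(4)
N_IN, N_CLASS, N_NEURONS, K = 20, 3, 30, 3

W_bu = rng.uniform(size=(N_NEURONS, N_IN))     # bottom-up weights
W_td = rng.uniform(size=(N_NEURONS, N_CLASS))  # top-down weights
ages = np.ones(N_NEURONS)

def step(x, z, alpha=0.5):
    """Responses mix bottom-up input x with top-down class signal z;
    only the top-K responders fire, and only they learn."""
    resp = W_bu @ x + alpha * (W_td @ z)
    winners = np.argsort(resp)[-K:]            # lateral inhibition: top-K fire
    for i in winners:
        lr = 1.0 / ages[i]                     # amnesic-average learning rate
        W_bu[i] += lr * (x - W_bu[i])          # move winner toward its input
        W_td[i] += lr * (z - W_td[i])          # ...and toward its class signal
        ages[i] += 1
    return winners

for _ in range(600):
    c = int(rng.integers(N_CLASS))
    x = rng.normal(loc=float(c), scale=0.3, size=N_IN)  # class-dependent input
    z = np.eye(N_CLASS)[c]                              # top-down class signal
    step(x, z)

Because z biases which neurons win, inputs of the same class keep recruiting overlapping winner sets, which is the toy analogue of the topographic class grouping the paper reports.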