IEEE Transactions on Autonomous Mental Development 

IEEE TAMD

Homepage with submission instructions: http://cis.ieee.org/ieee-transactions-on-autonomous-mental-development.html
Submission website: http://mc.manuscriptcentral.com/tamd-ieee
All published issues: index.html

Table of Contents

Volume: 1  Issue: 1   Date: May 2009

Link: http://ieeexplore.ieee.org/servlet/opac?punumber=4563672

Autonomous Mental Development: A New Interdisciplinary Transactions for Natural and Artificial Intelligence
Zhang, Z.
Page(s): 1-11
Digital Object Identifier 10.1109/TAMD.2009.2021201

Full Text from IEEE: PDF (5454 KB); from Zhang's website: PDF; Contact the author by email.

Cognitive Developmental Robotics: A Survey
Asada, M.; Hosoda, K.; Kuniyoshi, Y.; Ishiguro, H.; Inui, T.; Yoshikawa, Y.; Ogino, M.; Yoshida, C.
Page(s): 12-34
Digital Object Identifier 10.1109/TAMD.2009.2021702

Abstract: Cognitive developmental robotics (CDR) aims to provide new understanding of how humans' higher cognitive functions develop, by means of a synthetic approach that developmentally constructs cognitive functions. The core idea of CDR is “physical embodiment,” which enables information structuring through interactions with the environment, including other agents. The idea is grounded in a hypothesized model of the development of human cognitive functions, from body representation to social behavior. Following this model, studies of CDR and related work are introduced, and the model and future issues are discussed.

Full Text from IEEE: PDF (2529 KB); Contact the author by email.

Modeling Unsupervised Perceptual Category Learning
Lake, B. M.; Vallabha, G. K.; McClelland, J. L.
Page(s): 35-43
Digital Object Identifier 10.1109/TAMD.2009.2021703

Abstract: During the learning of speech sounds and other perceptual categories, category labels are not provided, the number of categories is unknown, and the stimuli are encountered sequentially. These constraints pose a challenge for models, but they have recently been addressed in the online mixture estimation model of unsupervised vowel category learning (see Vallabha in the reference section). The model treats categories as Gaussian distributions, estimating both the number and the parameters of the categories. While the model has been shown to successfully learn vowel categories, it has not been evaluated as a model of the learning process. Here, we evaluate it as such, showing that it accounts for several results: acquired distinctiveness between categories and acquired similarity within categories, a faster increase in discrimination for more acoustically dissimilar vowels, and gradual unsupervised learning of category structure in simple visual stimuli.
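
The sequential, label-free setting the abstract describes can be illustrated with a toy sketch. This is not the authors' online mixture estimation model; the thresholds, learning rate, and one-dimensional stimuli are illustrative assumptions. One-dimensional stimuli arrive one at a time, each is assigned to the best-fitting Gaussian category, and a new category is spawned when no existing one fits:

```python
import math

class OnlineGaussianCategories:
    """Toy sketch of unsupervised online category learning: stimuli
    arrive sequentially; categories are Gaussians whose number and
    parameters are both estimated on the fly (illustrative only)."""

    def __init__(self, spawn_threshold=2.5, lr=0.05):
        self.cats = []                          # each: dict(mean, var)
        self.spawn_threshold = spawn_threshold  # z-score beyond which a new category is created
        self.lr = lr                            # learning rate for incremental updates

    def observe(self, x):
        if not self.cats:
            self.cats.append({"mean": x, "var": 1.0})
            return 0
        # find the category with the smallest normalized distance
        best = min(range(len(self.cats)),
                   key=lambda i: abs(x - self.cats[i]["mean"]) / math.sqrt(self.cats[i]["var"]))
        c = self.cats[best]
        z = abs(x - c["mean"]) / math.sqrt(c["var"])
        if z > self.spawn_threshold:
            # no existing category fits: spawn a new one at the stimulus
            self.cats.append({"mean": x, "var": 1.0})
            return len(self.cats) - 1
        # incremental mean/variance update of the winning category
        c["mean"] += self.lr * (x - c["mean"])
        c["var"] += self.lr * ((x - c["mean"]) ** 2 - c["var"])
        return best
```

Fed two well-separated clusters of samples, the learner settles on two categories without ever being told how many exist.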

Full Text from IEEE: PDF (815 KB); Contact the author by email.

Computational Analysis of Motionese Toward Scaffolding Robot Action Learning
Nagai, Y.; Rohlfing, K. J.
Page(s): 44-54
Digital Object Identifier 10.1109/TAMD.2009.2021090

Abstract: A difficulty in robot action learning is that robots do not know where to attend when observing an action demonstration. Inspired by human parent-infant interaction, we suggest that parental action demonstration to infants, called motionese, can scaffold robot learning as well as infants'. Since infants' knowledge about the context is limited, as robots' is, parents are presumed to guide their attention by emphasizing the important aspects of the action. Our analysis employing a bottom-up attention model revealed that motionese has the effects of highlighting the initial and final states of the action, indicating significant state changes within it, and underlining the properties of the objects used in the action. These effects were produced by the suppression and addition of parents' body movements and by their frequent social signals to infants. Our findings are discussed with a view toward designing robots that can take advantage of parental teaching.
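
One motionese effect reported above, highlighting the action's initial and final states through suppressed movement, could be picked up by a very simple detector. The following is a hypothetical sketch, not the authors' bottom-up attention model; the motion-energy signal and threshold are assumptions. It marks low-motion pauses in a demonstration's motion-energy trace:

```python
def detect_pauses(motion_energy, threshold=0.2):
    """Return (start, end) index pairs of low-motion segments in a
    motion-energy signal. In motionese, pauses before and after the
    manipulation can highlight the action's initial and final states."""
    pauses, start = [], None
    for i, e in enumerate(motion_energy):
        if e < threshold and start is None:
            start = i                       # a pause begins
        elif e >= threshold and start is not None:
            pauses.append((start, i - 1))   # the pause just ended
            start = None
    if start is not None:                   # signal ended inside a pause
        pauses.append((start, len(motion_energy) - 1))
    return pauses
```

A trace that is quiet, then active, then quiet again yields two pauses, bracketing the demonstrated action.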

Full Text from IEEE: PDF (989 KB); Contact the author by email.

Attention via Synchrony: Making Use of Multimodal Cues in Social Learning
Rolf, M.; Hanheide, M.; Rohlfing, K. J.
Page(s): 55-67
Digital Object Identifier 10.1109/TAMD.2009.2021091

Abstract: Infants learning about their environment are confronted with many stimuli of different modalities. A crucial problem is therefore how to discover which stimuli are related, for instance, in learning words. In making these multimodal “bindings,” infants depend on social interaction with a caregiver to guide their attention towards relevant stimuli. The caregiver might, for example, visually highlight an object by shaking it while vocalizing the object's name. Such cues are known to help structure the continuous stream of stimuli. To detect and exploit them, we propose a model of bottom-up attention driven by multimodal signal-level synchrony. Informed by recent adult-infant interaction studies, we focus on the guidance of visual attention by audio-visual synchrony. We demonstrate that our model is receptive to parental cues during child-directed tutoring. The findings discussed in this paper are consistent with recent results from developmental psychology but are, for the first time, obtained with an objective, computational model. The presence of “multimodal motherese” is verified directly on the audio-visual signal. Lastly, we hypothesize how our computational model can facilitate tutoring interaction, and discuss its application in interactive learning scenarios, enabling social robots to benefit from adult-like tutoring.
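
A minimal sketch of signal-level audio-visual synchrony, under stated assumptions (this is not the authors' model; the envelope signals, lag window, and score are illustrative): correlate an audio-energy envelope with a visual-motion envelope over small temporal lags. Shaking an object while naming it would produce correlated envelopes and hence a high score:

```python
def synchrony(audio_energy, visual_motion, max_lag=2):
    """Audio-visual synchrony score: maximum normalized cross-correlation
    between the two envelopes over small lags (toy sketch)."""
    def norm(xs):
        # zero-mean, unit-norm version of the signal
        m = sum(xs) / len(xs)
        v = sum((x - m) ** 2 for x in xs) ** 0.5 or 1.0
        return [(x - m) / v for x in xs]

    a, b = norm(audio_energy), norm(visual_motion)
    best = 0.0
    for lag in range(-max_lag, max_lag + 1):
        pairs = [(a[i], b[i + lag]) for i in range(len(a))
                 if 0 <= i + lag < len(b)]
        corr = sum(x * y for x, y in pairs)
        best = max(best, corr)
    return best  # 1.0 for perfectly synchronous equal-length signals
```

Identical envelopes score 1.0, while a flat (uninformative) envelope scores 0.0 against anything.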

Full Text from IEEE: PDF (1593 KB); Contact the author by email.

Dually Optimal Neuronal Layers: Lobe Component Analysis
Weng, J.; Luciw, M.
Page(s): 68-85
Digital Object Identifier 10.1109/TAMD.2009.2021698

Abstract: Development imposes great challenges. Internal “cortical” representations must be autonomously generated from interactive experiences. The eventual quality of these developed representations is of course important; additionally, learning must be as fast as possible, so as to quickly derive better representations from limited experiences. Agents that achieve both will have competitive advantages. We present a cortex-inspired theory called lobe component analysis (LCA) guided by these dual criteria. A lobe component represents a high concentration of probability density in the neuronal input space. Through mathematical analysis, we explain how lobe components can achieve a dual, spatiotemporal (“best” and “fastest”) optimality: their plasticity can be temporally scheduled to take the history of observations into account in the best possible way, in contrast to gradient-based adaptive learning algorithms, which use only the last observation. Because they are based on two cell-centered mechanisms, Hebbian learning and lateral inhibition, lobe components develop in-place, meaning every networked neuron is individually responsible for learning its signal-processing characteristics within its connected network environment; there is no need for a separate learning network. We argue that in-place learning algorithms will be crucial for real-world, large-scale developmental applications due to their simplicity, low computational complexity, and generality. Our experimental results show that, thanks to its dual optimality, the LCA algorithm learns drastically faster than other Hebbian-based updating methods and independent component analysis algorithms, without using any second- or higher-order statistics. We also introduce the new principle of fast learning from stable representation.
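
The in-place learning idea can be sketched in a deliberately simplified form. Here lateral inhibition is reduced to picking a single winner, and the count-scheduled "amnesic" averaging is an illustrative simplification, not the paper's exact LCA update:

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def lca_update(neurons, counts, x, amnesic=2.0):
    """One in-place, winner-take-all Hebbian step (toy sketch of the
    LCA idea): the best-matching neuron pulls its weight vector toward
    the input, with a learning rate scheduled by its own update count
    so that the history of observations is retained."""
    # winner-take-all stands in for lateral inhibition here
    j = max(range(len(neurons)), key=lambda i: dot(neurons[i], x))
    counts[j] += 1
    n = counts[j]
    # amnesic mean: early updates copy the input, later ones blend it in
    w1 = (n - 1 - amnesic) / n if n > amnesic + 1 else 0.0
    w2 = 1.0 - w1
    neurons[j] = [w1 * wj + w2 * xi for wj, xi in zip(neurons[j], x)]
    return j
```

Each neuron learns only from its own wins and its own count; no separate learning network or global statistics are needed, which is the "in-place" property the abstract emphasizes.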

Full Text from IEEE: PDF (1294 KB); Contact the author by email.

Contingency Perception and Agency Measure in Visuo-Motor Spiking Neural Networks
Pitti, A.; Mori, H.; Kouzuma, S.; Kuniyoshi, Y.
Page(s): 86-97
Digital Object Identifier 10.1109/TAMD.2009.2021506

Abstract: Agency is the sense that I am the cause or author of a movement. Babies develop this feeling early by perceiving the contingency between afferent (sensory) and efferent (motor) information. A comparator model, hypothesized to be associated with many brain regions, is thought to monitor and simulate the concordance between self-produced actions and their consequences. In this paper, we propose that the biological mechanism of spike-timing-dependent plasticity, which synchronizes neural dynamics almost everywhere in the central nervous system, constitutes the perfect algorithm to detect contingency in sensorimotor networks. The coherence or dissonance in the sensorimotor information flow then imparts the agency level. On a head-neck-eyes robot, we replicate three developmental experiments illustrating how particular perceptual experiences can modulate the overall level of agency inside the system: 1) adding a delay between proprioceptive and visual feedback information, 2) facing a mirror, and 3) facing a person. We show that the system learns to discriminate animated objects (its self-image and other persons) from other types of stimuli. This suggests a basic stage of representing the self in relation to others, arising from low-level sensorimotor processes. We then discuss the relevance of our findings for developmental robots, in light of neurobiological evidence and observations from developmental psychology.
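
A toy sketch of STDP-based contingency detection, under illustrative assumptions (the spike pairing, parameters, and scalar "agency" proxy are not the paper's spiking network): a standard STDP kernel potentiates a motor-to-sensory synapse when motor spikes consistently precede sensory spikes at short delays, and depresses it otherwise:

```python
import math

def stdp_dw(dt, a_plus=0.1, a_minus=0.12, tau=20.0):
    """STDP kernel: if the motor (pre) spike precedes the sensory (post)
    spike (dt = t_post - t_pre > 0), potentiate; otherwise depress."""
    if dt > 0:
        return a_plus * math.exp(-dt / tau)
    return -a_minus * math.exp(dt / tau)

def contingency_weight(motor_times, sensory_times, w=0.5):
    """Accumulate STDP updates over paired motor/sensory spikes, clipped
    to [0, 1]. A consistently short motor-to-sensory delay drives w up;
    here w serves as a crude proxy for the agency level."""
    for t_pre, t_post in zip(motor_times, sensory_times):
        w = min(1.0, max(0.0, w + stdp_dw(t_post - t_pre)))
    return w
```

With sensory feedback reliably following each motor command by a few milliseconds the weight saturates high; if the pairing is broken (feedback arriving before the command), it collapses to zero, loosely mirroring the delayed-feedback manipulation in the abstract.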

Full Text from IEEE: PDF (1239 KB); Contact the author by email.