Homepage with submission instructions:
Submission website: http://mc.manuscriptcentral.com/tamd-ieee
Volume: 1 Issue: 1 Date: May 2009
Full Text from IEEE: PDF (5454 KB); from Zhang's website: PDF; Contact the author by email.
Abstract: Cognitive developmental robotics (CDR) aims to provide a new understanding of how humans' higher cognitive functions develop, by means of a synthetic approach that developmentally constructs cognitive functions. The core idea of CDR is “physical embodiment,” which enables information structuring through interactions with the environment, including other agents. The idea is grounded in a hypothesized developmental model of human cognitive functions, from body representation to social behavior. Following this model, studies in CDR and related work are introduced, and the model and future issues are discussed.
Full Text from IEEE: PDF (2529 KB); Contact the author by email.
Abstract: During the learning of speech sounds and other perceptual categories, category labels are not provided, the number of categories is unknown, and the stimuli are encountered sequentially. These constraints pose a challenge for models, but they have recently been addressed in the online mixture estimation model of unsupervised vowel category learning (see Vallabha …).
Full Text from IEEE: PDF (815 KB); Contact the author by email.
Abstract: A difficulty in robot action learning is that robots do not know where to attend when observing an action demonstration. Inspired by human parent–infant interaction, we suggest that parental action demonstration to infants, called …
Full Text from IEEE: PDF (989 KB); Contact the author by email.
Abstract: Infants learning about their environment are confronted with many stimuli of different modalities. A crucial problem, therefore, is how to discover which stimuli are related, for instance, when learning words. In making these multimodal “bindings,” infants depend on social interaction with a caregiver to guide their attention towards relevant stimuli. The caregiver might, for example, visually highlight an object by shaking it while vocalizing the object's name. Such cues are known to help structure the continuous stream of stimuli. To detect and exploit them, we propose a model of bottom-up attention driven by multimodal signal-level synchrony. We focus on the guidance of visual attention by audio–visual synchrony, informed by recent adult–infant interaction studies. We demonstrate that our model is receptive to parental cues during child-directed tutoring. The findings discussed in this paper are consistent with recent results from developmental psychology but are, for the first time, obtained with an objective, computational model. The presence of “multimodal motherese” is verified directly on the audio–visual signal. Lastly, we hypothesize how our computational model facilitates tutoring interaction and discuss its application in interactive learning scenarios, enabling social robots to benefit from adult-like tutoring.
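As a rough illustration of signal-level audio-visual synchrony, the sketch below scores each frame by the windowed Pearson correlation between an audio energy envelope and a visual motion signal sampled at the same frame rate. The function, window size, and signal names are our assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def av_synchrony(audio_energy, motion_energy, win=25):
    """Sliding-window Pearson correlation between an audio energy
    envelope and a visual motion signal (same frame rate). A minimal
    stand-in for bottom-up attention by signal-level synchrony."""
    n = len(audio_energy)
    scores = np.zeros(n)
    for t in range(win, n):
        a = audio_energy[t - win:t] - audio_energy[t - win:t].mean()
        v = motion_energy[t - win:t] - motion_energy[t - win:t].mean()
        denom = np.sqrt((a * a).sum() * (v * v).sum())
        scores[t] = (a * v).sum() / denom if denom > 0 else 0.0
    return scores
```

A region where a caregiver shakes an object while naming it would yield correlated audio and motion envelopes, and hence a high synchrony score, relative to unrelated background motion.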
Full Text from IEEE: PDF (1593 KB); Contact the author by email.
Abstract: Development imposes great challenges. Internal “cortical” representations must be autonomously generated from interactive experiences. The eventual quality of these developed representations is of course important. Additionally, learning must be as fast as possible, so that better representations are quickly derived from limited experiences. Those who achieve both will have competitive advantages. We present a cortex-inspired theory called lobe component analysis (LCA), guided by these dual criteria. A lobe component represents a high concentration of probability density in the neuronal input space. Through mathematical analysis, we explain how lobe components can achieve dual spatiotemporal (“best” and “fastest”) optimality, describing how the plasticity of lobe components can be temporally scheduled to take into account the …
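To give a feel for temporally scheduled plasticity, here is a schematic winner-take-all Hebbian update in which the best-matching unit moves toward its input with a per-neuron, age-scheduled ("amnesic") learning rate. This is a sketch in the spirit of LCA only; the constant mu, the winner rule, and the normalization are our illustrative assumptions, not the published algorithm.

```python
import numpy as np

def lca_step(x, W, counts, mu=2.0):
    """One schematic lobe-component update. W holds one unit-length
    weight vector per row; counts holds each neuron's update age,
    which schedules its learning rate over time."""
    responses = W @ x
    j = int(np.argmax(responses))        # winner-take-all
    counts[j] += 1
    n = counts[j]
    # age-scheduled retention: young neurons learn fast, old ones slowly
    w1 = (n - 1 - mu) / n if n > mu + 1 else 0.0
    w2 = 1.0 - w1
    W[j] = w1 * W[j] + w2 * responses[j] * x   # Hebbian term: response * input
    W[j] /= np.linalg.norm(W[j]) + 1e-12       # keep unit length
    return j
```

Fed alternating inputs along two directions, the two units specialize: each comes to represent one concentration of the input distribution.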
Full Text from IEEE: PDF (1294 KB); Contact the author by email.
Abstract: Agency is the sense that I am the cause or author of a movement. Babies develop this feeling early by perceiving the contingency between afferent (sensory) and efferent (motor) information. A comparator model, hypothesized to involve many brain regions, monitors and simulates the concordance between self-produced actions and their consequences. In this paper, we propose that the biological mechanism of spike timing-dependent plasticity, which synchronizes neural dynamics almost everywhere in the central nervous system, constitutes the perfect algorithm for detecting contingency in sensorimotor networks. The coherence or dissonance in the sensorimotor information flow then determines the level of agency. In a head–neck–eyes robot, we replicate three developmental experiments illustrating how particular perceptual experiences can modulate the overall level of agency inside the system: 1) adding a delay between proprioceptive and visual feedback information, 2) facing a mirror, and 3) facing a person. We show that the system learns to discriminate animated objects (the self-image and other persons) from other types of stimuli. This suggests a basic stage of representing the self in relation to others arising from low-level sensorimotor processes. We then discuss the relevance of our findings to neurobiological evidence and developmental psychology observations for developmental robots.
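As a toy illustration of how spike-timing rules can score sensorimotor contingency, the sketch below accumulates an STDP-style score over pairs of motor and sensory event times: sensory events that closely follow motor events potentiate, while reversed or distant pairings depress. The amplitudes and time constant are our illustrative assumptions, not the paper's values.

```python
import numpy as np

def stdp_contingency(motor_times, sensory_times,
                     a_plus=0.1, a_minus=0.12, tau=20.0):
    """STDP-style contingency score over motor/sensory event times (ms):
    causal pairings (sensory shortly after motor) add, anti-causal
    pairings subtract, both with exponentially decaying weight."""
    score = 0.0
    for tm in motor_times:
        for ts in sensory_times:
            dt = ts - tm                              # sensory minus motor
            if dt >= 0:
                score += a_plus * np.exp(-dt / tau)   # causal: potentiate
            else:
                score -= a_minus * np.exp(dt / tau)   # anti-causal: depress
    return score
```

Under this score, promptly contingent feedback (e.g., self-produced movement seen without delay) rates higher than the same feedback shifted by a long delay, mirroring the delay manipulation in experiment 1).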
Full Text from IEEE: PDF (1239 KB); Contact the author by email.