IEEE Transactions on Autonomous Mental Development


Table of Contents

Volume: 3 Issue: 2 Date: June 2011

(Previous issue: Vol. 3, No. 1, March 2011)

Grounding Language in Action
Rohlfing, K.J.; Tani, J.
Page(s): 109-112

Abstract: The topic of this Special Issue is that action and language are interwoven. Driven by traditional approaches that prevail in our education, we might be surprised by the connection between language and action, since we are inclined to view language as a symbolic system that connects entities in the world with the corresponding conceptions that a perceiver has in mind. Action, on the other hand, was considered to be an event in the world that has to be perceived first. Only then could it be labeled by a perceiver, so that a particular conception of it could be represented in the mind. Research on how cognition develops, however, has contributed findings suggesting that what we know about action, language, and interaction emerges in parallel, with each domain influencing the others. This parallel development seems to provide a ground for further mental growth. For a system to develop, it requires different processes to interact not only with each other, but also with the physical world. This coupling is also a valuable source of intelligence, as it provides knowledge that shapes the system's performance without actually being part of the system.

Full Text from IEEE: PDF (442 KB); Contact the author by email

Language Does Something: Body Action and Language in Maternal Input to Three-Month-Olds
Nomikou, I.; Rohlfing, K.J.
Page(s): 113-128
Digital Object Identifier 10.1109/TAMD.2011.2140113

Abstract: We conducted a naturalistic study in which German mothers interacted with their three-month-old infants during diaper changing as an everyday activity. Following the idea that "acoustic packaging" educates infants' attention, we explored whether the verbal input to the infants in natural interactions simultaneously contains action information. Applying a microanalysis method, we first analyzed the data qualitatively by identifying classes of body movements and vocal activities (which we called vocal types). We used these categories to observe the multimodal interaction practices of mothers and to describe the interaction ecology of this everyday activity. Second, we quantitatively analyzed the co-occurrence of language (in the form of different vocal activities) and action (in the form of body movements). We found that during early interaction with infants, German mothers vocalize in a tight temporal relationship with action over a considerable part of the overall interaction time, thereby making the vocal signal both perceivable and tangible to the infants.

Full Text from IEEE: PDF (3082 KB); Contact the author by email

Temporal, Environmental, and Social Constraints of Word-Referent Learning in Young Infants: A Neurorobotic Model of Multimodal Habituation
Veale, R.; Schermerhorn, P.; Scheutz, M.
Page(s): 129-145
Digital Object Identifier 10.1109/TAMD.2010.2100043

Abstract: Infants are able to adaptively associate auditory stimuli with visual stimuli even in their first year of life, as demonstrated by multimodal habituation studies. Unlike language acquisition at later developmental stages, this adaptive learning in young infants is temporary and still very much stimulus-driven. Hence, temporal aspects of environmental and social factors figure crucially in the formation of prelexical multimodal associations. Study of these associations can offer important clues regarding how semantics are bootstrapped in real-world embodied infants. In this paper, we present a neuroanatomically based embodied computational model of multimodal habituation to explore the temporal and social constraints on the learning observed in very young infants. In particular, the model is able to explain empirical results showing that auditory word stimuli must be presented synchronously with visual stimulus movement for the two to be associated.

Full Text from IEEE: PDF (1024 KB); Contact the author by email

Emergence of Protosentences in Artificial Communicating Systems
Uno, R.; Marocco, D.; Nolfi, S.; Ikegami, T.
Page(s): 146-153

Abstract: This paper investigates the relationship between embodied interaction and symbolic communication. We report on an experiment in which simulated autonomous robotic agents, whose control systems were evolved through an artificial evolutionary process, use abstract communication signals to coordinate their behavior in a context-independent way. This use of signals exhibits some fundamental aspects of sentences in natural languages, which we discuss using the concept of joint attention in relation to the grammatical structure of sentences.

Full Text from IEEE: PDF (697 KB); Contact the author by email

Acoustic Packaging: Maternal Speech and Action Synchrony
Meyer, M.; Hard, B.; Brand, R.J.; McGarvey, M.; Baldwin, D.A.
Page(s): 154-162
Digital Object Identifier 10.1109/TAMD.2010.2103941

Abstract: The current study addressed the degree to which maternal speech and action are synchronous in interactions with infants. English-speaking mothers demonstrated the function of two toys (stacking rings and nesting cups) to younger infants (6-9.5 months) and older infants (9.5-13 months). Action and speech units were identified, and speech units were coded as either ongoing action descriptions or nonaction descriptions (examples of nonaction descriptions include attention-getting utterances such as "Look!" or statements of action completion such as "Yay, we did it!"). Descriptions of ongoing actions were found to be more synchronous with the actions themselves than other types of utterances, suggesting that: 1) mothers align speech and action to provide synchronous "acoustic packaging" during action demonstrations; and 2) mothers selectively pair utterances directly related to actions with the action units themselves, rather than simply aligning speech in general with actions. Our results complement past studies of acoustic packaging in two ways. First, we provide a quantitative temporal measure of the degree to which speech and action onsets and offsets are aligned. Second, we offer a semantically based analysis of the phenomenon, which we argue may be meaningful to infants, who are known to process global semantic messages in infant-directed speech. In support of this possibility, we determined that adults were capable of classifying low-pass filtered action- and nonaction-describing utterances at rates above chance.

Full Text from IEEE: PDF (842 KB); Contact the author by email

Are We There Yet? Grounding Temporal Concepts in Shared Journeys
Schulz, R.; Wyeth, G.; Wiles, J.
Page(s): 163-175

Abstract: An understanding of time and temporal concepts is critical for interacting with the world and with other agents in the world. What does a robot need to know to refer to the temporal aspects of events? Could a robot gain a grounded understanding of "a long journey" or "soon"? Cognitive maps constructed by individual agents from their own journey experiences have been used for grounding spatial concepts in robot languages. In this paper, we test whether a similar methodology can be applied to learning temporal concepts and an associated lexicon, answering the question of "how long" it took to complete a journey. Using evolutionary language games for specific and generic journeys, successful communication was established for concepts based on representations of time, distance, and amount of change. The studies demonstrate that a lexicon for journey duration can be grounded using a variety of concepts. Spatial and temporal terms are not identical, but the studies show that both can be learned using similar language evolution methods, and that time, distance, and change can serve as proxies for each other under noisy conditions. Effective concepts and names for duration provide a first step towards a grounded lexicon for temporal interval logic.

Full Text from IEEE: PDF (1377 KB); Contact the author by email

An Experiment on Behavior Generalization and the Emergence of Linguistic Compositionality in Evolving Robots
Tuci, E.; Ferrauto, T.; Zeschel, A.; Massera, G.; Nolfi, S.
Page(s): 176-189

Abstract: Populations of simulated agents controlled by dynamical neural networks are trained by artificial evolution to access linguistic instructions and to execute them by indicating, touching, or moving specific target objects. During training, the agents experience only a subset of all object/action pairs. During postevaluation, some of the successful agents proved able to access and execute linguistic instructions not experienced during training. This is due to the development of a semantic space, grounded in the sensorimotor capabilities of the agents and organized systematically so as to facilitate linguistic compositionality and behavioral generalization. Compositionality seems to be underpinned by the agents' capability to access and execute the instructions by temporally decomposing their linguistic and behavioral aspects into their constituent parts (i.e., finding the target object and executing the required action). The comparison between two experimental conditions, in one of which the agents are required to ignore rather than to indicate objects, shows that the composition of the behavioral set significantly influences the development of compositional semantic structures.

Full Text from IEEE: PDF (1694 KB); Contact the author by email