IEEE Transactions on Autonomous Mental Development

Homepage with submission instructions: http://cis.ieee.org/ieee-transactions-on-autonomous-mental-development.html
Submission website: http://mc.manuscriptcentral.com/tamd-ieee
Website for Table of Contents, Abstracts & Authors' emails: http://research.microsoft.com/~zhang/IEEE-TAMD/

Table of Contents

Volume: 4   Issue: 2

Date of Publication: June 2012

Link to IEEE: http://ieeexplore.ieee.org/xpl/tocresult.jsp?isnumber=6214672

(Previous issue: Vol. 4, No. 1, March 2012)

The "Interaction Engine": A Common Pragmatic Competence Across Linguistic and Nonlinguistic Interactions
Pezzulo, G.
Page(s): 105-123
Digital Object Identifier: 10.1109/TAMD.2011.2166261

Abstract: Recent research in cognitive psychology, neuroscience, and robotics has widely explored the tight relations between the language and action systems in primates. However, the link between the pragmatics of linguistic and nonlinguistic interactions has so far received less attention. In this paper, we argue that cognitive agents exploit the same cognitive processes and neural substrate (a general pragmatic competence) across linguistic and nonlinguistic interactive contexts. Elaborating on Levinson's idea of an "interaction engine" that permits conveying and recognizing communicative intentions in both linguistic and nonlinguistic interactions, we offer a computationally guided analysis of pragmatic competence, suggesting that the core abilities required for successful linguistic interactions could derive from more primitive architectures for action control, nonlinguistic interactions, and joint actions. Furthermore, we make the case for a novel, embodied approach to human-robot interaction and communication, in which the ability to carry on face-to-face communication develops in coordination with the pragmatic competence required for joint action.

Full Text from IEEE: PDF (882KB); Contact the author by email for a copy.

Interactive Learning in Continuous Multimodal Space: A Bayesian Approach to Action-Based Soft Partitioning and Learning
Firouzi, H.; Ahmadabadi, M.N.; Araabi, B.N.; Amizadeh, S.; Mirian, M.S.; Siegwart, R.
Page(s): 124-138
Digital Object Identifier: 10.1109/TAMD.2011.2170213

Abstract: A probabilistic framework for interactive learning in continuous and multimodal perceptual spaces is proposed. In this framework, the agent learns the task along with an adaptive partitioning of its multimodal perceptual space. The learning process is formulated in a Bayesian reinforcement learning setting to facilitate the adaptive partitioning. The partitioning is done gradually and softly using Gaussian distributions, whose parameters are adapted based on the agent's estimate of its actions' expected values. The probabilistic nature of the method yields experience generalization in addition to robustness against uncertainty and noise. To benefit from the diversity of experience generalization across different perceptual subspaces, learning is performed in multiple perceptual subspaces, including the original space, in parallel. In every learning step, the policies learned in the subspaces are fused to select the final action. This concurrent learning in multiple spaces and the decision fusion result in faster learning; the possibility of adding and/or removing sensors, i.e., gradual expansion or contraction of the perceptual space; and appropriate robustness against probable sensor failure or ambiguity in sensor data. Results of two sets of simulations, in addition to some experiments, are reported to demonstrate the key properties of the framework.

Full Text from IEEE: PDF (3210KB); Contact the author by email for a copy.
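
For readers skimming this abstract, a minimal sketch of the soft-partitioning idea follows. It assumes a one-dimensional perceptual space, fixed Gaussian partition centers, and a tabular value update weighted by partition responsibilities; all names, sizes, and parameters are illustrative stand-ins, not the authors' implementation.

    import numpy as np

    rng = np.random.default_rng(0)

    # Soft partitioning of a 1-D perceptual space with Gaussian "regions".
    means = np.linspace(0.0, 1.0, 5)        # partition centers (fixed here)
    sigmas = np.full(5, 0.15)               # partition widths
    n_actions = 2
    Q = np.zeros((5, n_actions))            # expected value per region/action

    def responsibilities(x):
        """Soft membership of observation x in each Gaussian region."""
        w = np.exp(-0.5 * ((x - means) / sigmas) ** 2)
        return w / w.sum()

    def select_action(x, eps=0.1):
        """Fuse region-wise values, then act epsilon-greedily."""
        if rng.random() < eps:
            return int(rng.integers(n_actions))
        fused = responsibilities(x) @ Q      # responsibility-weighted values
        return int(np.argmax(fused))

    def update(x, a, reward, alpha=0.2):
        """Credit every region in proportion to its responsibility,
        so one experience generalizes across neighboring regions."""
        r = responsibilities(x)
        Q[:, a] += alpha * r * (reward - Q[:, a])

    # Toy task: action 1 is rewarded on the right half of the space.
    for _ in range(2000):
        x = rng.random()
        a = select_action(x)
        reward = 1.0 if (a == 1) == (x > 0.5) else 0.0
        update(x, a, reward)

    print(np.round(Q, 2))  # right-hand regions should prefer action 1

The same soft-responsibility weighting is what lets each experience update several overlapping regions at once, which is the source of the generalization the abstract mentions.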

Tool-Body Assimilation of Humanoid Robot Using a Neurodynamical System
Nishide, S.; Tani, J.; Takahashi, T.; Okuno, H.G.; Ogata, T.
Page(s): 139-149
Digital Object Identifier: 10.1109/TAMD.2011.2177660

Abstract: Research in brain science has uncovered the human capability to use tools as if they were part of one's own body (known as tool-body assimilation) through trial and experience. This paper presents a method that applies a robot's active sensing experience to create a tool-body assimilation model. The model is composed of a feature extraction module, a dynamics learning module, and a tool-body assimilation module. A self-organizing map (SOM) is used as the feature extraction module to extract object features from raw images. A multiple timescale recurrent neural network (MTRNN) is used as the dynamics learning module. Parametric bias (PB) nodes are attached to the weights of the MTRNN as a second-order network to modulate the behavior of the MTRNN based on the properties of the tool. The generalization capability of neural networks provides the model with the ability to deal with unknown tools. Experiments were conducted with the humanoid robot HRP-2 using no tool and I-shaped, T-shaped, and L-shaped tools. The distribution of PB values shows that the model has learned that the robot's dynamic properties change when it holds a tool. Motion generation experiments show that the tool-body assimilation model can generalize to unknown tools to generate goal-oriented motions.

Full Text from IEEE: PDF (1429KB); Contact the author by email for a copy.
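
As a rough illustration of the parametric bias (PB) mechanism mentioned above, the sketch below shows how a recurrent network with fixed weights can "recognize" a tool by optimizing only a small PB vector against prediction error. A randomly initialized toy net stands in for a trained MTRNN, and the finite-difference update is a simplification; none of this is the paper's actual training procedure.

    import numpy as np

    rng = np.random.default_rng(1)

    # A tiny fixed recurrent net: the next observation depends on the current
    # observation, the hidden state, and a 2-D parametric bias (PB) vector.
    n_obs, n_hid, n_pb = 3, 8, 2
    W_in = rng.normal(0, 0.5, (n_hid, n_obs))
    W_rec = rng.normal(0, 0.5, (n_hid, n_hid))
    W_pb = rng.normal(0, 0.5, (n_hid, n_pb))
    W_out = rng.normal(0, 0.5, (n_obs, n_hid))

    def rollout(pb, obs_seq):
        """Predict each next observation given pb; return prediction error."""
        h = np.zeros(n_hid)
        loss = 0.0
        for t in range(len(obs_seq) - 1):
            h = np.tanh(W_in @ obs_seq[t] + W_rec @ h + W_pb @ pb)
            pred = W_out @ h
            loss += np.sum((pred - obs_seq[t + 1]) ** 2)
        return loss

    def recognize_pb(obs_seq, steps=300, lr=0.01, eps=1e-4):
        """Infer the PB that best explains a sequence: all weights stay
        fixed, only the PB vector is optimized (finite differences here)."""
        pb = np.zeros(n_pb)
        for _ in range(steps):
            base = rollout(pb, obs_seq)
            grad = np.zeros(n_pb)
            for i in range(n_pb):
                d = np.zeros(n_pb)
                d[i] = eps
                grad[i] = (rollout(pb + d, obs_seq) - base) / eps
            pb -= lr * grad
        return pb

    # Two "tools" = two synthetic observation sequences; they typically
    # settle on different PB values, loosely mirroring the paper's
    # tool-dependent PB clusters.
    seq_a = [np.sin(0.3 * t + np.arange(n_obs)) for t in range(20)]
    seq_b = [np.cos(0.6 * t + np.arange(n_obs)) for t in range(20)]
    print(np.round(recognize_pb(seq_a), 2), np.round(recognize_pb(seq_b), 2))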

Are Robots Appropriate for Troublesome and Communicative Tasks in a City Environment?
Hayashi, K.; Shiomi, M.; Kanda, T.; Hagita, N.
Page(s): 150-160
Digital Object Identifier: 10.1109/TAMD.2011.2178846

Abstract: We studied people's acceptance of robots that perform tasks in a city. Three different beings (a human, a human wearing a mascot costume, and a robot) performed tasks in three different scenarios: endless guidance, responding to irrational complaints, and removing an accidentally discarded key from the trash. All of these tasks involved interacting with visitors in troublesome situations: dull, stressful, and dirty. For this paper, 30 participants watched nine videos (three tasks performed by three beings) and evaluated each being's appropriateness for the task and its human-likeness. The results indicate that people prefer that a robot rather than a human perform these troublesome tasks, even though they require much interaction with people. In addition, comparisons with the costumed human suggest that people's belief about whether a being deserves human rights, rather than its human-like appearance, behavior, or cognitive capability, is one explanation for their judgments about appropriateness.

Full Text from IEEE: PDF (1456KB); Contact the author by email for a copy.

Brain-Like Emergent Spatial Processing
Weng, J.; Luciw, M.
Page(s): 161-185
Digital Object Identifier: 10.1109/TAMD.2011.2174636

Abstract: This is a theoretical, modeling, and algorithmic paper about the spatial aspect of brain-like information processing, modeled by the developmental network (DN) model. The new brain architecture allows the external environment (including teachers) to interact with the sensory ends and the motor ends of the skull-closed brain through development. It does not allow the human programmer to hand-pick extra-body concepts or to handcraft the concept boundaries inside the brain. Mathematically, the brain's spatial processing performs a real-time mapping from its sensory and effector ends back onto those same ends, through network updates, where the contents of all areas emerge from experience. Using its limited resources, the brain does increasingly better through experience. A new principle is that the effector ends serve as hubs for concept learning and abstraction. The effector ends also serve as input, and the sensory ends also serve as output. As DN embodiments, the Where-What Networks (WWNs) present three major functional novelties: new concept abstraction, concepts as emergent goals, and goal-directed perception. The WWN series appears to be the first family of general-purpose emergent systems for detecting and recognizing multiple objects in complex backgrounds. Among others, the most significant new mechanism is general-purpose top-down attention.

Full Text from IEEE: PDF (2355KB); Contact the author by email for a copy.
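
To make the DN idea above concrete, here is a toy sketch of one developmental-network update cycle: a hidden area Y matches the joint input from the sensory area X and the motor area Z, a top-1 competition picks a winner, and the winner's weights move toward the current input while Z responds through the winner's link. The area sizes, top-1 competition, cosine matching, and teacher-gated learning are simplifications for illustration, not the paper's exact mechanisms.

    import numpy as np

    rng = np.random.default_rng(2)

    # Toy developmental network (DN): sensory area X, hidden Y, motor Z.
    n_x, n_y, n_z = 16, 10, 3
    Vx = rng.random((n_y, n_x))   # Y's bottom-up weights from X
    Vz = rng.random((n_y, n_z))   # Y's top-down weights from Z
    Wz = np.zeros((n_z, n_y))     # Z's weights from Y
    ages = np.ones(n_y)           # per-neuron update counts

    def unit(v):
        n = np.linalg.norm(v)
        return v / n if n > 0 else v

    def dn_update(x, z_teach=None):
        """One DN cycle: Y neurons compete on the joint (X, Z) input,
        the winner fires, and Z responds through the winner's link."""
        z = z_teach if z_teach is not None else np.zeros(n_z)
        match = np.array([unit(Vx[j]) @ unit(x) + unit(Vz[j]) @ unit(z)
                          for j in range(n_y)])
        win = int(np.argmax(match))      # top-1 competition
        if z_teach is not None:          # learn only when a teacher acts
            lr = 1.0 / ages[win]         # amnesic-average-style rate
            Vx[win] = (1 - lr) * Vx[win] + lr * unit(x)
            Vz[win] = (1 - lr) * Vz[win] + lr * unit(z)
            Wz[:, win] = (1 - lr) * Wz[:, win] + lr * z_teach
            ages[win] += 1
        y = np.zeros(n_y)
        y[win] = 1.0
        return Wz @ y                    # emergent motor (Z) response

    # Teach three X patterns to fire three Z "concepts", then test with
    # no motor input: the learned concept should emerge at the Z end.
    patterns = [rng.random(n_x) for _ in range(3)]
    for _ in range(20):
        for k, x in enumerate(patterns):
            dn_update(x, z_teach=np.eye(n_z)[k])
    for k, x in enumerate(patterns):
        print(k, int(np.argmax(dn_update(x))))  # expect 0 0, 1 1, 2 2

Note how the motor area Z acts both as teachable output and, during learning, as part of Y's input, a simplified echo of the abstract's point that the effector ends serve as hubs for concept learning.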