IEEE Transactions on Autonomous Mental Development

Homepage with submission instructions: http://cis.ieee.org/ieee-transactions-on-autonomous-mental-development.html
Submission website: http://mc.manuscriptcentral.com/tamd-ieee
Website for Table of Contents, Abstracts & Authors' emails:  http://research.microsoft.com/~zhang/IEEE-TAMD/

Table of Contents

Volume: 3 Issue: 3 Date: September 2011

Link to IEEE: http://ieeexplore.ieee.org/xpl/tocresult.jsp?isnumber=5873552

(Previous issue: Vol. 3, No. 2, June 2011)

Editorial: TAMD Update
Zhang, Z.
Page(s): 193
Digital Object Identifier 10.1109/TAMD.2011.2166169

Full Text from IEEE: PDF (81KB); Available from here.

Noise and the Emergence of Rules in Category Learning: A Connectionist Model
Cowell, R.A.; French, R.M.
Page(s): 194-206
Digital Object Identifier 10.1109/TAMD.2010.2099225

Abstract: We present a neural network model of category learning that addresses the question of how rules for category membership are acquired. The architecture of the model comprises a set of statistical learning synapses and a set of rule-learning synapses, whose weights, crucially, emerge from the statistical network. The network is implemented with a neurobiologically plausible Hebbian learning mechanism. The statistical weights form category representations on the basis of perceptual similarity, whereas the rule weights gradually extract rules from the information contained in the statistical weights. These rules are weightings of individual features; weights are stronger for features that convey more information about category membership. The most significant contribution of this model is that it relies on a novel mechanism involving feeding noise through the system to generate these rules. We demonstrate that the model predicts a cognitive advantage in classifying perceptually ambiguous stimuli over a system that relies only on perceptual similarity. In addition, we simulate reaction times from an experiment by Thibaut et al. (Proc. 20th Annu. Conf. Cogn. Sci. Soc., pp. 1055-1060, 1998) in which both perceptual (i.e., statistical) and rule-based information are available for the classification of perceptual stimuli.
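
To make the two-pathway mechanism in the abstract more concrete, here is a minimal Python sketch, not the authors' model: a Hebbian "statistical" weight matrix associates features with categories, noisy probes of those weights estimate how diagnostic each feature is, and the resulting per-feature "rule" weights bias classification. All sizes, learning rates, and the noise-probing step are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(0)
    n_features, n_categories = 8, 2
    W_stat = np.zeros((n_categories, n_features))   # statistical (similarity) weights
    W_rule = np.zeros(n_features)                    # per-feature rule weights

    def hebbian_update(x, category, lr=0.05):
        """Strengthen connections between active features and the given category."""
        W_stat[category] += lr * x

    def extract_rules(noise_sd=0.1, n_probes=200):
        """Feed noise through the statistical weights; features whose perturbation
        changes the category response most (i.e., carry more category information)
        receive the largest rule weights."""
        global W_rule
        sensitivity = np.zeros(n_features)
        for _ in range(n_probes):
            x = rng.normal(0.5, noise_sd, n_features)
            baseline = W_stat @ x
            for f in range(n_features):
                x_pert = x.copy()
                x_pert[f] += noise_sd
                sensitivity[f] += np.abs(W_stat @ x_pert - baseline).sum()
        W_rule = sensitivity / (sensitivity.max() + 1e-12)

    def classify(x, rule_mix=0.5):
        """Blend similarity-based evidence with rule-weighted evidence."""
        scores = W_stat @ ((1 - rule_mix) * x + rule_mix * W_rule * x)
        return int(np.argmax(scores))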

Full Text from IEEE: PDF (860KB); Contact the author by email

Using Object Affordances to Improve Object Recognition
Castellini, C.; Tommasi, T.; Noceti, N.; Odone, F.; Caputo, B.
Page(s): 207-215
Digital Object Identifier 10.1109/TAMD.2011.2106782

Abstract: The problem of object recognition has not yet been solved in its general form. The most successful approach to it so far relies on object models obtained by training a statistical method on visual features obtained from camera images. The images must come from very large visual datasets in order to cope with problems related to changing illumination, point of view, etc. We hereby propose to also consider, in an object model, a simple model of how a human being would grasp that object (its affordance). This knowledge is represented as a function mapping the visual features of an object to the kinematic features of a hand while grasping it. The function is implemented in practice via regression on a human grasping database. After describing the database (which is publicly available) and the proposed method, we experimentally evaluate it, showing that a standard object classifier working on both sets of features (visual and motor) has a significantly better recognition rate than that of a visual-only classifier.
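
The visuo-motor pipeline described in the abstract can be sketched as follows; this is a hedged illustration with invented data shapes, not the paper's implementation, and the actual visual descriptors, grasp features, regressor, and classifier may differ. A regressor trained on a grasping database predicts motor features from visual ones, and the predicted motor features are appended to the visual features before classification.

    import numpy as np
    from sklearn.linear_model import Ridge
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    n_train, n_visual, n_motor = 200, 50, 20
    X_visual = rng.normal(size=(n_train, n_visual))   # visual descriptors (placeholder)
    X_motor = rng.normal(size=(n_train, n_motor))     # recorded hand kinematics (placeholder)
    y = rng.integers(0, 5, size=n_train)              # object labels (placeholder)

    # 1) Learn the "affordance" mapping: visual features -> grasp kinematics.
    affordance = Ridge(alpha=1.0).fit(X_visual, X_motor)

    # 2) Train a classifier on concatenated visual + predicted motor features.
    X_joint = np.hstack([X_visual, affordance.predict(X_visual)])
    clf = SVC(kernel="rbf").fit(X_joint, y)

    # 3) At test time only an image is available, so the motor features are
    #    predicted from the visual ones before classification.
    x_test = rng.normal(size=(1, n_visual))
    x_test_joint = np.hstack([x_test, affordance.predict(x_test)])
    print(clf.predict(x_test_joint))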

Full Text from IEEE: PDF (925KB); Contact the author by email

Learning Generalizable Control Programs
Hart, S.; Grupen, R.
Page(s): 216-231
Digital Object Identifier 10.1109/TAMD.2010.2103311

Abstract: In this paper, we present a framework for guiding autonomous learning in robot systems. The paradigm we introduce allows a robot to acquire new skills according to an intrinsic motivation function that finds behavioral affordances. Affordances, in the sense of Gibson (Toward an Ecological Psychology, Hillsdale, NJ, 1977), describe the latent possibilities for action in the environment and provide a direct means of organizing functional knowledge in embodied systems. We begin by showing how a robot can assemble closed-loop action primitives from its sensory and motor resources, and then show how these primitives can be sequenced into multi-objective policies. We then show how these policies can be assembled hierarchically to support incremental and cumulative learning. The main contribution of this paper is to demonstrate how the proposed intrinsic motivator for affordance discovery can cause a robot both to acquire such hierarchical policies using reinforcement learning and to generalize these policies to new contexts. As the framework is described, its effectiveness and applicability are demonstrated through a longitudinal learning experiment on a bimanual robot.
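
A heavily simplified sketch of the intrinsic-motivation idea in the abstract follows, under assumptions of our own rather than the authors' framework: tabular Q-learning over placeholder states and primitives, with an internal reward paid only when a primitive controller converges in a context where it has not succeeded before (an "affordance discovery" event).

    from collections import defaultdict

    primitives = ["reach", "grasp", "track"]            # placeholder closed-loop primitives
    Q = defaultdict(float)                               # Q[(state, primitive)]
    discovered = set()                                   # (state, primitive) pairs already rewarded

    def intrinsic_reward(state, primitive, converged):
        """Reward novel controller convergence; repeats and failures earn nothing,
        so the motivator pushes the robot toward unexplored affordances."""
        if converged and (state, primitive) not in discovered:
            discovered.add((state, primitive))
            return 1.0
        return 0.0

    def q_update(state, primitive, reward, next_state, alpha=0.1, gamma=0.9):
        """Standard tabular Q-learning step driven by the intrinsic reward."""
        best_next = max(Q[(next_state, p)] for p in primitives)
        Q[(state, primitive)] += alpha * (reward + gamma * best_next - Q[(state, primitive)])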

Full Text from IEEE: PDF (1126KB); Contact the author by email  

A Biologically Inspired Architecture for an Autonomous and Social Robot
Malfaz, M.; Castro-Gonzalez, A.; Barber, R.; Salichs, M.A.
Page(s): 232-246
Digital Object Identifier 10.1109/TAMD.2011.2112766

Abstract: In recent years, much effort has been devoted to building robots able to live among humans. This has favored the development of personal or social robots, which are expected to behave in a natural way. It implies that these robots should meet certain requirements: for example, to be able to decide their own actions (autonomy), to make deliberative plans (reasoning), or to exhibit emotional behavior in order to facilitate human-robot interaction. In this paper, the authors present a bioinspired control architecture for an autonomous and social robot that aims to provide some of these features. The new architecture builds on a prior hybrid control architecture (AD) that is also biologically inspired. In the latter, however, the task to be accomplished at each moment is determined by a fixed sequence processed by the main sequencer; that is, the main sequencer of the architecture coordinates a previously programmed sequence of skills to be executed. In the new architecture, the main sequencer is replaced by a decision-making system based on drives, motivations, emotions, and self-learning, which selects the appropriate action at every moment according to the robot's state. Consequently, the robot's autonomy is improved, since the decision-making system determines the goal and, in turn, the skills to be executed. A basic version of this new architecture has been implemented on a real robotic platform, and some experiments are shown at the end of the paper.
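
As a hedged illustration of the drive-based decision making described in the abstract (not the authors' AD architecture; the drive names, growth rates, and skill table are placeholders), one decision cycle might look like this:

    # Drives grow over time, external stimuli add to them, the dominant
    # motivation selects a goal, and the goal determines the skill to execute.
    drives = {"energy": 0.2, "social": 0.5, "safety": 0.1}
    growth = {"energy": 0.05, "social": 0.02, "safety": 0.01}
    skill_for = {"energy": "go_recharge", "social": "seek_interaction", "safety": "avoid_obstacles"}

    def step(external_stimuli):
        """One decision cycle: update drives, combine with stimuli, pick a skill."""
        for name in drives:
            drives[name] = min(1.0, drives[name] + growth[name])
        motivations = {name: drives[name] + external_stimuli.get(name, 0.0)
                       for name in drives}
        dominant = max(motivations, key=motivations.get)
        drives[dominant] = 0.0            # acting on the dominant drive satisfies it
        return skill_for[dominant]

    print(step({"social": 0.3}))          # 'social' dominates -> 'seek_interaction'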

Full Text from IEEE: PDF (1165KB); Contact the author by email

Improved Binocular Vergence Control via a Neural Network That Maximizes an Internally Defined Reward
Yiwen Wang; Shi, B.E.
Page(s): 247-256
Digital Object Identifier 10.1109/TAMD.2011.2128318

Abstract: We describe the autonomous development of binocular vergence control in an active robotic vision system through attention-gated reinforcement learning (AGREL). The control policy is implemented by a neural network, which maps the outputs from a population of disparity energy neurons to a set of vergence commands. The network learns to maximize a reward signal that is based on an internal representation of the visual input: the total activation in the population of disparity energy neurons. This system extends previous work using Q-learning by increasing the complexity of the policy in two ways. First, the input state space is continuous, rather than discrete, and is based upon a larger diversity of neurons. Second, we increase the number of possible actions. We evaluate the network's learning and performance on natural images and with real objects in a cluttered environment. The policies learned by the network outperform those learned by Q-learning in two ways: the mean squared errors are smaller and the closed-loop frequency response has larger bandwidth.
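
Here is a minimal sketch of the learning signal described in the abstract, not the authors' AGREL implementation: the state is the response of a bank of disparity-energy units, a softmax policy maps it to discrete vergence commands, and the internal reward is the total population activation after the selected command. The network sizes and the simple REINFORCE-style update are assumptions.

    import numpy as np

    rng = np.random.default_rng(0)
    n_units, n_actions = 32, 7                     # disparity-energy units, vergence commands
    W = rng.normal(scale=0.01, size=(n_actions, n_units))

    def policy(state):
        """Softmax over vergence commands given the disparity population response."""
        logits = W @ state
        p = np.exp(logits - logits.max())
        return p / p.sum()

    def update(state, action, reward, baseline, lr=0.01):
        """Reinforce the chosen command in proportion to reward above baseline."""
        p = policy(state)
        grad = -np.outer(p, state)                 # gradient of log-softmax...
        grad[action] += state                      # ...for the selected action
        W[:] += lr * (reward - baseline) * grad

    # One interaction step with a stand-in environment:
    state = rng.random(n_units)                    # disparity-energy population response
    action = rng.choice(n_actions, p=policy(state))
    next_state = rng.random(n_units)               # stand-in for the post-command response
    reward = next_state.sum()                      # total activation as the internal reward
    update(state, action, reward, baseline=reward * 0.9)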

Full Text from IEEE: PDF (1415KB); Contact the author by email

Emergence of Memory in Reactive Agents Equipped With Environmental Markers
Ji Ryang Chung; Yoonsuck Choe
Page(s): 257-271
Digital Object Identifier 10.1109/TAMD.2011.2132800

Abstract: In the neuronal circuits of natural and artificial agents, memory is usually implemented with recurrent connections, since recurrence allows past agent states to affect present, ongoing behavior. Here, an interesting question arises in the context of evolution: how could reactive agents have evolved into cognitive ones with internalized memory? Our idea is that reactive agents with simple feedforward circuits could have achieved behavior comparable to that of agents with internal memory if they could drop and detect external markers (e.g., pheromones or excretions) in the environment. We tested this idea in two tasks (ball catching and food foraging) in which agents needed memory to be successful. We evolved feedforward neural network controllers equipped with a dropper and a detector, and compared their performance with that of recurrent neural network controllers. The results show that feedforward controllers with external material interaction perform adequately compared to recurrent controllers in both tasks. This means that even memoryless feedforward networks can evolve behavior that solves tasks requiring memory, when material interaction is allowed. These results are expected to help us better understand the possible evolutionary route from reactive to cognitive agents.
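
The dropper/detector idea can be sketched as follows; this is an assumption-laden illustration, not the evolved controllers from the paper. A memoryless feedforward network takes a "marker detected here" bit as an extra input and emits a "drop marker here" bit as an extra output; because markers persist in the environment, past actions can influence future inputs without any recurrent connections. In the paper the weights are found by neuroevolution; here they are random placeholders.

    import numpy as np

    rng = np.random.default_rng(0)
    n_task_sensors = 3                             # task inputs (e.g., target bearing)
    n_hidden, n_motor = 6, 2
    W1 = rng.normal(size=(n_hidden, n_task_sensors + 1))   # +1 marker-detector input
    W2 = rng.normal(size=(n_motor + 1, n_hidden))           # +1 marker-dropper output

    markers = set()                                # environment positions holding a marker

    def act(position, task_sensors):
        """Purely feedforward step: no internal state is kept between calls."""
        detected = 1.0 if position in markers else 0.0
        x = np.append(task_sensors, detected)
        h = np.tanh(W1 @ x)
        out = np.tanh(W2 @ h)
        if out[-1] > 0.5:                          # last output unit drops a marker
            markers.add(position)
        return out[:-1]                            # remaining units drive the motors

    # Example step at grid position 7 with stand-in sensor readings:
    motors = act(7, np.array([0.2, -0.5, 1.0]))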

Full Text from IEEE: PDF (2608KB); Contact the author by email