Homepage with submission information
Submission website: http://mc.manuscriptcentral.com/tamd-ieee
Website for Table of Contents, Abstracts & Authors' emails: http://research.microsoft.com/~zhang/IEEE-TAMD/
Date of Publication: March 2012
Link to IEEE: http://ieeexplore.ieee.org/xpl/tocresult.jsp?isnumber=6167349
(Previous issue: Vol. 3, No. 4, December 2011)
Abstract: The article presents an approach to providing a cognitive robot with a long-term memory of experiences: a memory inspired by the concept of episodic memory in humans, or episodic-like memory in animals. The memory provides the means to store experiences, integrate them into more abstract constructs, and recall such content. The paper presents an analysis of key characteristics of natural episodic memory systems. Based on this analysis, conceptual and technical requirements for an episodic-like memory for cognitive robots are specified. The paper provides a formal design that meets these requirements and discusses its full implementation in a cognitive architecture for mobile robots. It reports results of simulation experiments showing that the approach can run efficiently in robot applications involving several hours of experience.
Full Text from IEEE: PDF (1004KB); Contact the author by email for a copy.
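The store/integrate/recall cycle described in the abstract can be illustrated with a toy sketch. All names here (Episode, EpisodicMemory, the cue-overlap recall rule) are hypothetical illustrations, not the paper's actual design:

```python
from dataclasses import dataclass

@dataclass
class Episode:
    time: float       # when the experience occurred
    features: dict    # observed state, e.g. {"room": "kitchen"}

class EpisodicMemory:
    """Toy episodic-like store: append experiences, recall by cue overlap."""

    def __init__(self):
        self.episodes = []

    def store(self, time, features):
        self.episodes.append(Episode(time, features))

    def recall(self, cue):
        # Rank stored episodes by how many cue key/value pairs they match,
        # dropping episodes that match nothing.
        def score(ep):
            return sum(1 for k, v in cue.items() if ep.features.get(k) == v)
        return sorted((ep for ep in self.episodes if score(ep) > 0),
                      key=score, reverse=True)

mem = EpisodicMemory()
mem.store(0.0, {"room": "kitchen", "object": "cup"})
mem.store(1.0, {"room": "lab", "object": "box"})
hits = mem.recall({"room": "kitchen"})  # recalls only the kitchen episode
```

A real system along the paper's lines would additionally compress old episodes into more abstract constructs; this sketch only covers storage and cued recall.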
Abstract: Imitation is a complex function that requires a body mapping (a mapping from observed body motions to motor commands) capable of discriminating between self-motions and those of others. The developmental mechanism behind this sophisticated capability, and the order in which the required abilities arise, is poorly understood. In this paper, we present a mechanism for the development of imitation through a simulation of infant-caregiver interaction. A model was created to acquire a body mapping, which is necessary for successful mutual imitation in infant-caregiver interaction, while discriminating self-motion from the motion of the other. The ability to predict motions, and the time delay between performing a motion and observing any correlated motion, provide clues that assist the development of the body mapping. The simulation results show that the development of imitation capabilities depends on a predictability preference (a function of how an agent feels regarding its options of "what to imitate," given its ability to predict motions). In addition, the simulated infants in our system develop the components of a healthy body mapping in order, that is, relating self-motion first, followed by an understanding of others' motions. This order of development emerges spontaneously, without the need for any explicit mechanism or any partitioning of the interaction. These results suggest that the predictability preference is an important factor in infant development.
Full Text from IEEE: PDF (1830KB); Contact the author by email for a copy.
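The abstract's clue, the time delay between performing a motion and observing a correlated motion, can be sketched as a simple self/other classifier: self-motion correlates with one's own motor commands at near-zero delay, while a caregiver's echo correlates only after a lag. This is a generic illustration under assumed names, not the paper's model:

```python
def pearson(a, b):
    # Plain Pearson correlation; returns 0.0 for constant signals.
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    den = (sum((x - ma) ** 2 for x in a) *
           sum((y - mb) ** 2 for y in b)) ** 0.5
    return num / den if den else 0.0

def classify_motion(motor, observed, max_delay=3):
    """Find the delay at which an observed motion best correlates with the
    agent's own motor commands; near-zero delay suggests self-motion, a
    longer delay suggests another agent echoing the motion."""
    best = max(range(max_delay + 1),
               key=lambda d: pearson(motor[:len(motor) - d], observed[d:]))
    return "self" if best == 0 else "other"

motor = [0, 1, 2, 3, 2, 1, 0, 1]
mirror = [5, 5, 0, 1, 2, 3, 2, 1]   # same motion, echoed two steps later
```

With these signals, `classify_motion(motor, motor)` labels the motion "self" and `classify_motion(motor, mirror)` labels it "other".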
Abstract: There exists a large conceptual gap between symbolic models and emergent models for the mind. Many emergent models work on low-level sensory data, while many symbolic models deal with high-level abstract (i.e., action) symbols. There has been relatively little study on intermediate representations, mainly because of a lack of knowledge about how representations emerge fully autonomously inside the closed skull, using information from the two exposed ends (the sensory end and the motor end). As reviewed here, this situation is changing. A fundamental challenge for emergent models is abstraction, which symbolic models enjoy through human handcrafting. The term abstract refers to properties dissociated from any particular form. Emergent abstraction seems possible, although the brain appears never to receive a computer symbol (e.g., an ASCII code) or produce such a symbol. This paper reviews major agent models with an emphasis on representation. It suggests two different ways to relate symbolic representations with emergent representations: one is based on their categorical definitions; the other considers that a symbolic representation corresponds to a brain's outside behaviors, observed and handcrafted by outside human observers, whereas an emergent representation is inside the brain.
Full Text from IEEE: PDF (2243KB); Contact the author by email for a copy.
Abstract: This paper introduces a framework that allows a robot to form a single behavior-grounded object categorization after it uses multiple exploratory behaviors to interact with objects and multiple sensory modalities to detect the outcomes that each behavior produces. Our robot observed acoustic and visual outcomes from six different exploratory behaviors performed on 20 objects (containers and noncontainers). Its task was to learn 12 different object categorizations (one for each behavior-modality combination) and then to unify them into a single one. In the end, the object categorization acquired by the robot closely matched the object labels provided by a human. In addition, the robot acquired a visual model of containers and noncontainers based on its unified categorization, which it used to correctly label 29 of the 30 novel objects.
Full Text from IEEE: PDF (2646KB); Contact the author by email for a copy.
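The unification step (12 behavior-modality categorizations merged into one) can be illustrated with a simple consensus scheme: objects that share a category in a majority of the contexts are grouped together. This is a generic sketch, not the paper's actual method; the function name and the majority threshold are assumptions:

```python
import itertools
from collections import Counter

def consensus_categories(labelings, n_objects, threshold=0.5):
    """Merge several per-context categorizations into a single one.
    labelings[k][i] is the category of object i under context k (e.g. one
    behavior-modality pair). Objects that share a category in more than
    `threshold` of the contexts end up in the same consensus group."""
    co = Counter()
    for labels in labelings:
        for i, j in itertools.combinations(range(n_objects), 2):
            if labels[i] == labels[j]:
                co[(i, j)] += 1

    # Union-find over the thresholded co-occurrence graph.
    parent = list(range(n_objects))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for (i, j), count in co.items():
        if count / len(labelings) > threshold:
            parent[find(i)] = find(j)
    return [find(i) for i in range(n_objects)]

# Three contexts categorize four objects; the contexts mostly agree:
groups = consensus_categories([[0, 0, 1, 1], [0, 0, 1, 1], [0, 1, 1, 1]], 4)
```

Here objects 0 and 1 co-occur in two of three contexts and objects 2 and 3 in all three, so the consensus recovers the two underlying groups despite one dissenting context.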
Abstract: How can an agent bootstrap up from a low-level representation to autonomously learn high-level states and actions using only domain-general knowledge? In this paper, we assume that the learning agent has a set of continuous variables describing the environment. There exist methods for learning models of the environment, and there also exist methods for planning. However, for autonomous learning, these methods have been used almost exclusively in discrete environments. We propose attacking the problem of learning high-level states and actions in continuous environments by using a qualitative representation to bridge the gap between continuous and discrete variable representations. In this approach, the agent begins with a broad discretization and initially can only tell if the value of each variable is increasing, decreasing, or remaining steady. The agent then simultaneously learns a qualitative representation (discretization) and a set of predictive models of the environment. These models are converted into plans to perform actions. The agent then uses those learned actions to explore the environment. The method is evaluated using a simulated robot with realistic physics. The robot is sitting at a table that contains a block and other distractor objects that are out of reach. The agent autonomously explores the environment without being given a task. After learning, the agent is given various tasks to determine if it learned the necessary states and actions to complete them. The results show that the agent was able to use this method to autonomously learn to perform the tasks.
Full Text from IEEE: PDF (1044KB); Contact the author by email for a copy.
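The abstract's initial discretization, in which the agent can only tell whether each variable is increasing, decreasing, or steady, can be sketched directly; the function name and the noise threshold `eps` are hypothetical, and the paper's agent goes on to refine this alphabet with learned landmarks:

```python
def qualitative_state(values, eps=1e-3):
    """Map a continuous trajectory onto the coarse qualitative alphabet:
    increasing (+), decreasing (-), steady (0)."""
    symbols = []
    for prev, cur in zip(values, values[1:]):
        delta = cur - prev
        if delta > eps:
            symbols.append("+")
        elif delta < -eps:
            symbols.append("-")
        else:
            symbols.append("0")
    return symbols

# A hand position approaching an object, pausing, then retreating:
trace = qualitative_state([0.0, 0.5, 0.5, 0.2])  # ['+', '0', '-']
```

Predictive models over such symbol sequences are far smaller than models over raw continuous values, which is what makes the bridge between continuous and discrete representations tractable.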
Abstract: The selective attention mechanism is employed by humans and primates to realize a truly intelligent perception system with the cognitive capability of learning and thinking about how to perceive the environment autonomously. The attention mechanism involves top-down and bottom-up pathways, which correspond to goal-directed and automatic perceptual behaviors, respectively. Rather than considering automatic perception, this paper presents an artificial system for goal-directed visual perception using the object-based top-down visual attention mechanism. This cognitive system can guide perception to an object of interest according to the current task, context, and learned knowledge. It consists of three successive stages: preattentive processing, top-down attentional selection, and post-attentive perception. The preattentive processing stage divides the input scene into homogeneous proto-objects, one of which is then selected by top-down attention and finally sent to the post-attentive perception stage for high-level analysis. Experimental results of target detection in cluttered environments are presented to validate the system.
Full Text from IEEE: PDF (2857KB); Contact the author by email for a copy.
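The three-stage pipeline in the abstract can be sketched as a minimal skeleton. All names and the feature-matching relevance score are illustrative assumptions; in the actual system, the preattentive stage segments pixels into proto-objects and the post-attentive stage performs full object analysis:

```python
def preattentive(scene):
    """Preattentive stage: segment the input into homogeneous proto-objects.
    Here each proto-object is already a feature dict; a real system would
    segment the raw image."""
    return scene

def topdown_select(proto_objects, target_features):
    """Top-down stage: select the proto-object most relevant to the current
    task, scored by how many task-specified features it matches."""
    def relevance(obj):
        return sum(1 for k, v in target_features.items() if obj.get(k) == v)
    return max(proto_objects, key=relevance)

def postattentive(obj, target_features):
    """Post-attentive stage: high-level analysis of the attended object
    (placeholder: verify it matches the full target description)."""
    return all(obj.get(k) == v for k, v in target_features.items())

scene = [{"color": "blue", "shape": "box"}, {"color": "red", "shape": "cup"}]
task = {"color": "red", "shape": "cup"}
attended = topdown_select(preattentive(scene), task)
found = postattentive(attended, task)   # True: the red cup was attended
```

Only the single selected proto-object reaches the expensive post-attentive stage, which is the point of the attention mechanism: task knowledge prunes the scene before high-level analysis.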