
Microsoft Research Digital Memories (Memex) 2005 Awards

Microsoft Research announced the seven recipients of the Digital Memories (Memex) 2005 awards, totaling $350,000 in funding, plus an additional seven recipients of hardware and software only. The objective of the Digital Memories (Memex) award is to further research and teaching on the fundamental aspects of Digital Memories (Memex) research, including capture, annotation, links between items, and extensive use of metadata. The Digital Memories (Memex) research kit includes a SenseCam, a sensor-enhanced camera that automatically takes pictures at "good" times, and a software package developed by the Microsoft Research MyLifeBits, VIBE, and Phlat groups.
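The sensor-triggered capture idea can be sketched in a few lines. The sensor names, thresholds, and readings below are purely illustrative assumptions, not the SenseCam's actual trigger logic:

```python
def should_capture(prev, curr, light_delta=80, accel_delta=0.5, temp_delta=1.5):
    """Decide whether a change in sensor readings marks a 'good' moment
    to take a picture. Thresholds are illustrative, not SenseCam firmware values.
    prev/curr are dicts with 'light' (lux), 'accel' (g), 'temp' (deg C)."""
    if abs(curr["light"] - prev["light"]) >= light_delta:
        return True  # sudden lighting change, e.g. moving between rooms
    if abs(curr["accel"] - prev["accel"]) >= accel_delta:
        return True  # wearer started or stopped moving
    if abs(curr["temp"] - prev["temp"]) >= temp_delta:
        return True  # likely moved indoors/outdoors
    return False

# Simulated sensor stream: a walk from a bright hallway into a dim office.
readings = [
    {"light": 400, "accel": 1.0, "temp": 21.0},
    {"light": 395, "accel": 1.1, "temp": 21.0},  # no significant change
    {"light": 120, "accel": 1.1, "temp": 21.2},  # lighting drops -> capture
]
captures = [should_capture(a, b) for a, b in zip(readings, readings[1:])]
# captures -> [False, True]
```

The actual device fuses several sensors in firmware; the point of the sketch is only that "good" times are defined by thresholded changes in the environment rather than a fixed timer.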

Proposals Selected to Receive SenseCams and Software with Funding

Personal Audio Life Logs
Dan Ellis
Columbia University

For under a hundred dollars you can buy a recording MP3 player, weighing just a few ounces, that has the memory and battery life to record everything you hear throughout a 12-hour day. Upload these recordings to your computer every night and you have a complete personal audio life log. It's cheap and easy to collect this data, but it is almost impossible to use the recordings for anything without the development of new tools. This project is concerned with investigating the implications of ubiquitous personal audio recorders, and with developing tools to exploit this kind of data.

Beyond Human Memory: SenseCam Use in Veterinary College and as Assistive Technology
Manuel A. Pérez-Quiñones, Edward Fox
Virginia Tech

We propose a research, development, and education program at Virginia Tech that will study the use of SenseCams and MyLifeBits in two areas. In area one, we will study how students in the Virginia-Maryland Regional College of Veterinary Medicine (VMRCVM) integrate their personal information to assist them in their day-to-day study and practice. In particular, we will focus on the use of SenseCams by pairs of collaborating students in labs, and will explore how data from multiple SenseCams can be integrated. In area two, we will explore how students with motor disabilities can benefit, using a SenseCam to track location throughout the day. The information collected by the SenseCam will be used by a caregiver to help identify locations on campus where handicap access is problematic. The focus of this second area is to provide a record of daily activities for a student with disabilities and to enhance student/caregiver interaction to better provide services.

Content-Based Similarity Search with MyLifeBits
Kai Li
Princeton University

Several systems have recently been built to fulfill Bush's Memex vision, but their search capabilities have been limited to annotation-based search or content-based search for text documents. A key challenge is to build an efficient content-based similarity search engine for massive amounts of feature-rich Memex data such as audio recordings and digital photos. We propose a project that leverages our ongoing work at Princeton to design and implement a content-based similarity search engine for the MyLifeBits system. The research goals include integration with annotation-based search, scalable content-based similarity search, and evaluation of similarity search quality.
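As a rough illustration of what content-based similarity search over feature vectors involves (a sketch, not the proposers' actual engine), the following ranks items by cosine similarity to a query vector. The toy index and its three-dimensional "descriptors" are invented for the example:

```python
import math

def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def similar_items(query, index, k=2):
    """Return the k item ids whose feature vectors are most similar to the
    query. 'index' maps item ids to feature vectors (e.g. color histograms
    extracted from photos)."""
    ranked = sorted(index, key=lambda item: cosine(query, index[item]), reverse=True)
    return ranked[:k]

# Toy index: 3-dimensional features standing in for photo descriptors.
index = {
    "beach.jpg":  [0.9, 0.1, 0.2],
    "forest.jpg": [0.1, 0.9, 0.3],
    "desert.jpg": [0.8, 0.2, 0.1],
}
hits = similar_items([1.0, 0.1, 0.2], index, k=2)
# hits -> ["beach.jpg", "desert.jpg"]
```

The research challenge the proposal names is precisely that this brute-force scan does not scale to massive feature-rich collections, motivating an indexed, scalable engine.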

Integration of Memex and PlaceLab Datasets for Personal Investigations of Health and Living Patterns
Stephen Intille
Massachusetts Institute of Technology

Advances in ubiquitous computing and sensor technologies enable novel, longitudinal health monitoring applications in the home. Many home monitoring technologies have been proposed to detect health crises, support aging-in-place, and improve medical care. Health professionals and potential end users in the lay public, however, sometimes question whether home health monitoring is justified given the cost and potential invasion of privacy. Our recent work suggests that ubiquitous "monitoring" systems may be more readily adopted if they are developed as tools for personalized, longitudinal self-investigation that help end users learn about the conditions and variables that impact their social, cognitive, and physical health. If done well, these same tools can then be used by researchers for ethnographic studies of people and their behaviors in non-laboratory settings. In this work we will use a live-in laboratory (the PlaceLab) to collect a two-month personal information store using multi-modal sensors, including the SenseCam. We will use this dataset to qualitatively explore the potential of sensor-driven diary systems for personal health investigations and ethnographic health behavior research.

Landmark Generation from SenseCam Images
Alan Smeaton
Dublin City University

The problem we address is selecting, from a (large) set of SenseCam images plus other logged data, a representative summary of landmarks or significant events from a daily, weekly, or longer log. The problem arises, and is well acknowledged, because of the large number of images a SenseCam captures. Our approach in this proposal is based on fusing together multiple sources of diverse information, including low-level image similarity, image similarity based on saliency maps, GPS and location information, and biometric readings.
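One minimal way to picture this kind of fusion (a sketch under assumed sources and weights, not the proposers' actual method) is a weighted sum of normalized per-source "eventness" scores for each candidate image:

```python
def fuse_scores(candidates, weights):
    """Rank candidate images by a weighted sum of normalized evidence scores.
    Each candidate maps source name -> score in [0, 1]; weights maps source
    name -> relative importance. All names and values here are illustrative."""
    def fused(scores):
        return sum(weights[src] * scores.get(src, 0.0) for src in weights)
    return sorted(candidates, key=lambda img: fused(candidates[img]), reverse=True)

# Hypothetical per-image evidence from a day's SenseCam log:
# visual    = dissimilarity to neighbouring images (scene change)
# saliency  = change in saliency-map similarity
# gps       = movement between locations
# biometric = wearer arousal around capture time
candidates = {
    "img_0312": {"visual": 0.9, "saliency": 0.7, "gps": 0.8, "biometric": 0.6},
    "img_0875": {"visual": 0.2, "saliency": 0.3, "gps": 0.1, "biometric": 0.2},
    "img_1440": {"visual": 0.6, "saliency": 0.8, "gps": 0.4, "biometric": 0.9},
}
weights = {"visual": 0.4, "saliency": 0.2, "gps": 0.2, "biometric": 0.2}
ranking = fuse_scores(candidates, weights)
# ranking -> ["img_0312", "img_1440", "img_0875"]
```

The top-ranked images would then be presented as the landmark summary; in practice the fusion weights and the normalization of each evidence source are themselves research questions.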

MyHealthBits: Advanced Personal Health Record
Bambang Parmanto
University of Pittsburgh

In this project, we plan to extend the MyLifeBits framework by adding two important features: interaction and information sharing between patients and healthcare providers. The MyLifeBits research toolkit will allow us to develop an advanced personal health record (PHR) that goes beyond the traditional PHR system. The proposed MyHealthBits system will retain the two important characteristics of the traditional PHR (consumer control and longitudinal record) and will add novel features including interaction, record sharing, multimedia, and accommodation of records from wearable devices.

What Did We See? Facilitating the Interaction of Personal and Community Journaling of Natural Spaces
Chris Pal, Sarah Dorner, Jerry Schoen
University of Massachusetts

This project will develop a cross-platform, multi-user, multimedia system for recording personal and community experiences in natural environments. We will develop the technology required to take visual and audio recordings of what was seen and heard during nature walks, add this information to a personal life list of natural phenomena, validate observations (e.g., of species of flora or fauna), and enter them in a community journal. We wish to transform one's observations and idle questions into a preserved element of one's life memory, a data point for the scientific and local community, and an informal learning experience about the local environment. Memex-style software tools will be extended by developing methods for integrated text, audio, and image analysis for cross-referencing and retrieving geospatially and temporally referenced images, annotations, and information concerning objects in natural settings. This project will also deploy the developed technology in the field to store and enhance personal journals and to construct a community journal involving input from scientific and naturalist experts, high-school-age students, and members of the community who share a natural space (i.e., a watershed, in this pilot project).

Proposals Selected to Receive SenseCams and Software

Automatic Vs. Manual Capture of Health-Related Experiences
Brian Smith, Penn State University
Jeana Frost, Boston University

Behavioral factors play a critical role in health management. Capturing and sharing these behaviors may provide benefits during patient-physician consultations. In past research, we have introduced photography into diabetes self-management routines to help patients make their behaviors explicit and work with physicians to see possible correlations between those behaviors and long-term health. The proposed research will compare past approaches of manually collecting images with the automatic experience capture provided by Microsoft's SenseCam. We will integrate Microsoft's Digital Memories system with our visualizations of behavioral photographs and data collected from continuous glucose monitoring systems (CGMS). Qualitative studies of patients using automatic and manual image capture will be conducted to see their effects on patient-physician communication. The goal is to understand how experience capture can be used to improve the diagnosis and treatment of illnesses that are highly influenced by behavioral routines.

Development of a Platform for Continuous and Discrete Recording and Retrieval of Personal Life
Kiyoharu Aizawa
The University of Tokyo

In this research project, we propose integrating MyLifeBits with the Life Log system we have been investigating for the past six years. Thanks to the development and improvement of tiny wearable sensors and computers, there are now quite a few research efforts to capture and retrieve our daily lives, realizing the Memex that Vannevar Bush predicted in 1945. The MyLifeBits project is one of the most successful among them, and we too have been developing a Life Log system. MyLifeBits provides an excellent database framework, but it is mainly concerned with desktop activities using personal computers and a few electronic devices; in other words, only desktop activities, together with some discrete moments of personal life, are recorded and retrieved in the system. Although the SenseCam, a wearable camera, is one component of the MyLifeBits project, it captures only still images triggered by environmental change. A more complete Memex system calls for continuous recording of our whole lives. To that end, we have been developing a Life Log system that captures and analyzes continuous video and audio streams using wearable computers in conjunction with various wearable sensors, so as not to miss any moment of our lives. Our system can thus handle continuous personal experience, including "off-the-desk" activities, regardless of where the users are.

Listen to Dream to Know
Mark Bolas
University of Southern California

New audio analysis techniques will be created, and existing ones applied, to the problem of deriving context and meaning from SenseCam data, with a specific emphasis on acoustic room and environment modeling and identification. Further efforts will collaboratively define a batch-mode processing architecture for SenseCam data, termed the Data Recovery Engine for Augmented Memories. Finally, SenseCams will be used in interactive media coursework or thesis studies by MFA students to test results and find unexpected applications and issues.

Memex Metadata (M2) for Personal Educational Portfolio
Jane Greenberg, John Oberlin, Peter White
University of North Carolina at Chapel Hill

The Memex Metadata (M2) for Personal Educational Portfolios project (hereafter referred to as M2) will build toward a contextual retrieval environment for educational information. We will develop, implement, and evaluate a context-awareness metadata schema. This schema will support integration of the Microsoft Memex research kit and the Context Awareness Framework (CAF) currently being developed by ITS. The CAF links a software agent on a student's computer with ontologies and rule sets specific to the university environment. Through M2 we will integrate the CAF with the Memex software so as to enable automatic annotation of captured personal educational information with contextual metadata. Personal educational information includes information disseminated in class sessions, such as lectures, discussions, whiteboard notes, slides, handouts, Web pages, and emails, as well as personal notes and work products such as quizzes, papers (and revisions), evaluations, and IM conversations with classmates. The goal of our integration work is to annotate captured personal educational information with sufficient contextual metadata to facilitate effective retrieval.

Supporting Alzheimer's Patients Through Memory Augmentation
Anind Dey
Carnegie Mellon University

Alzheimer's disease is an irreversible neurodegenerative disorder that progressively degrades the brain's ability to maintain normal executive, attention, and memory functions. The number of people worldwide affected by Alzheimer's disease is currently estimated at 18 million and is expected to double in the next 20 years. From the time of diagnosis, sufferers live only half as long as non-sufferers of a similar age (Larson, 2004). The disease costs American businesses 61 billion dollars a year (Koppel, 2002) and costs Medicare 91 billion dollars a year, a figure expected to rise 75% by 2010 (Lewin Group, 2004). A treatment that could delay the onset of Alzheimer's by 5 years would reduce the number of sufferers by 50% in 50 years (Brookmeyer, 1998). To assist Alzheimer's patients in recalling and remembering memories, we propose to build and evaluate a context-enabled memory prosthesis system, using the SenseCam as a central component. We will first conduct ethnographies of patients with early-onset Alzheimer's. We will build on previous work that successfully used temporal context to aid patients, and will determine empirically which other contextual cues (e.g., location, weather, identities of nearby people, occasion) the prosthesis should be based on. We will then analyze the use of the prosthesis for aiding memory retrieval.

Using Context to Evaluate Augmentative Communication Technology
Rich Simpson
University of Pittsburgh

Optimizing the fit between augmentative and alternative communication (AAC) technology and the user is critical, but it is difficult to evaluate how well AAC devices work outside of the clinic. To address this need, investigators and manufacturers have integrated data collection features into AAC devices, but existing data logging approaches only record the input events generated by the user and provide no information about the context in which communication occurs. Hence, existing data collection approaches have difficulty determining when a conversation began or ended, who the conversation partners were, or in what environments the conversation took place. More importantly, existing approaches only record the part of the conversation contributed by the user through his or her AAC device, and do not record any oral or gestural contributions from the AAC user or conversation partners. We propose to use the Digital Memories (Memex) hardware and software to create a significantly more complete picture of an AAC user's communication activities. We will use the SenseCam to record when conversations begin and end, who participates in the conversations, and in what environments the conversations take place. This information will be presented for analysis by the clinician in a system based on MyLifeBits. As future versions of the SenseCam are developed, the system can be further augmented to record audio and video from all conversation partners and the AAC user.

Multi-Sensory Analysis, Summarization for Stroke-Patient Rehabilitation in Biofeedback Environments
Hari Sundaram, Todd Ingalls
Arizona State University

This proposal focuses on the development of an analytical framework for long-term analysis and summarization in the context of stroke-patient rehabilitation in an immersive multimodal environment. A long-term quantitative analysis of functional motor improvement in stroke patients would provide a critical tool for rehabilitation. Stroke-patient rehabilitation is a lengthy process carried out over six to nine months, with multiple hours-long sessions per week. Currently, the therapist relies on his or her experience to make judgments during each session on the efficacy of the rehabilitation. However, gradual changes in a patient's abilities are difficult to discern across sessions. An annotation/summarization framework that allows the therapist to evaluate long-term progress, as well as compare patients at similar stages of therapy, will provide richer insight into the rehabilitation process. As part of the proposed work we plan to develop algorithms to model the long-term temporal evolution of key parameters as well as to facilitate annotation and summarization of key events. The models and summarization tools will be developed in close coordination with colleagues in bioengineering and with clinicians, and will be evaluated using data from actual stroke patients who have applied to be part of our research.
