Learn more about the biomedical computing projects selected by Microsoft Research.
Autism spectrum disorder (ASD) is a developmental disorder that afflicts more than 500,000 children in the United States and is characterized by a wide variety of possible symptoms, such as developmental disabilities, extreme withdrawal, lack of social behavior, severe language and attention deficits, and repetitive behaviors. One of the primary impairments is difficulty with communication: between a third and a half of autism sufferers do not have functional verbal communication skills, and it is estimated that as many as 80% of children with autism below the age of 5 do not speak.
The goal of our system is to enable non-verbal autistic children to speak using a mobile device. The project has two components: 1) a software application for the handheld device, used to compose messages and to track and review reports of communication behavior, and 2) a webpage that parents or caregivers can use to format personal images for download onto the handheld device.
Our approach leverages the same factors that make current, cumbersome paper-based solutions successful: it can be taught fairly rapidly, it requires no complex motor movements, it is portable to many settings, and the listener/receiver of the message need not be familiar with the system. The target user group is young or severely autistic children who need a first learning device or who will never move beyond image-based communication.
This project is a multidisciplinary approach to the detection, monitoring, and treatment of disease using a Body Sensor Network (BSN). The BSN, made up of both non-invasive and “in-vivo” sensors, makes it possible to monitor physiological activity occurring inside the body while simultaneously probing the outside environment for harmful chemicals, dangerous radiation levels, and a host of other hostile events. This type of system has a broad range of biomedical and telemedicine applications, from disaster recovery to real-time patient monitoring and treatment.
There are two components to be developed in conjunction with this project: Wireless Embedded medical SysTems (WESTs) and HealthNet.
WESTs will investigate:
HealthNet will investigate:
This project uses bone conduction as a new interface between computers and human users. The development of this technology will have three broad implications: 1) new user interfaces for hands-free operation of wearable or mobile computing devices, 2) highly secure data transmission and user authentication for reliable and protected access to confidential computing resources and 3) new mechanisms for wearable and mobile computing devices to provide diagnostic services.
Communication is established within the skeletal system of the body itself, enabling straightforward communication between implantable devices and devices attached elsewhere on the body surface. The power requirements differ depending on whether the wearer consciously interacts with the system or is unaware of it, as would be the case for constant health monitoring of critically ill patients. Because signal transmission depends on the patient’s anatomy and physiology, it can also serve as a basis for user authentication, and the same technology enables diagnostic readings of those parts of the body.
The objective is to meet two challenges raised by integrating wireless sensor networks (WSNs) into health care: (1) delivery of collected data from patients to monitoring personnel and (2) unobtrusiveness. Cell phones are a natural choice since they do not impair patient mobility.
The most convenient wireless interface between cell phones and sensors is Bluetooth. However, a Bluetooth connection supports only a limited number of nodes in the local network (up to seven active slaves per piconet) and uses a rather complex communications protocol, resulting in higher software overhead and power consumption. Many networked sensor systems therefore use the low-power IEEE 802.15.4 standard instead of Bluetooth. The goal of this project is to create a wireless gateway, called “BlueGate,” that enables communication between the two types of devices.
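The gateway idea can be pictured as a forwarding loop that re-wraps frames from one link layer for the other. The sketch below models this with in-memory queues; the frame formats, field names, and translation function are invented for illustration and are not BlueGate's actual protocol handling:

```python
# Illustrative sketch of a gateway between an 802.15.4-style sensor link and a
# Bluetooth-style phone link. Frame layouts here are hypothetical placeholders;
# real Bluetooth and 802.15.4 frames are far richer.
from collections import deque

def translate_802154_to_bt(frame):
    """Re-wrap a sensor frame as a Bluetooth-style payload (hypothetical format)."""
    return {"channel": "rfcomm", "src": frame["short_addr"], "data": frame["payload"]}

def run_gateway(sensor_queue, phone_queue):
    """Drain pending sensor frames, translating each one for the cell-phone side."""
    while sensor_queue:
        phone_queue.append(translate_802154_to_bt(sensor_queue.popleft()))

# A single heart-rate reading arriving from a sensor node (made-up data).
sensors = deque([{"short_addr": 0x01, "payload": b"hr=72"}])
phone = deque()
run_gateway(sensors, phone)
```

The real engineering challenge, of course, lies below this layer: bridging two radios with different modulation, timing, and addressing schemes, which the sketch deliberately abstracts away.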
Sensors monitoring patients must be small and lightweight, must have long lifetimes without the need to change or charge batteries, and must be low-cost to be available to a large population. Current sensory devices have a large form factor and are unattractive to patients and medical personnel. We propose developing a new, compact body-network sensor device that can be attached to a patient's clothing or worn as a wrist watch. The device would collect vital data and report it to the rest of the WSN. The scope of the work includes low-power hardware design from COTS components as well as power-efficient communications protocols for mobile nodes that may roam in and out of the network's reach.
One objective of this project is to identify activity patterns within the home using a Wireless Sensor Network, motion sensors, and a pattern-mining software application for smart healthcare. The study is based on the recent concept of Circadian Activity Rhythms (CAR), combined with advanced data-mining techniques. The project will extend the CAR model by adding a K-Means clustering pre-processing step. This should make it possible to group the days of the week into similar “behavioral days,” instead of relying on the assumption that the only significant differences occur between weekends and weekdays. An alarm classifier will also be added to limit the number of false detections. The result will be a refinement of patient monitoring and, ultimately, improvements in clinical information and medical diagnoses.
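A minimal sketch of the "behavioral days" pre-processing idea: represent each day as a vector of motion-sensor counts and let K-Means group similar days, rather than hard-coding a weekday/weekend split. The data, parameter values, and function names below are ours for illustration, not the project's implementation:

```python
# Toy K-Means over daily activity profiles (hypothetical hourly motion counts).
def kmeans(days, k, iters=20):
    """Cluster daily activity vectors into k groups of similar 'behavioral days'."""
    # Deterministic, evenly spaced initialization keeps the sketch reproducible.
    centroids = [days[i * len(days) // k] for i in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for d in days:
            # Assign each day to its nearest centroid (squared Euclidean distance).
            nearest = min(range(k),
                          key=lambda c: sum((a - b) ** 2 for a, b in zip(d, centroids[c])))
            clusters[nearest].append(d)
        for i, members in enumerate(clusters):
            if members:  # recompute each centroid as the mean of its members
                centroids[i] = [sum(vals) / len(members) for vals in zip(*members)]
    return centroids

# Two synthetic behavioral-day types: active mornings vs. active evenings.
active_mornings = [[10, 9, 1, 1] for _ in range(5)]
active_evenings = [[1, 2, 9, 10] for _ in range(5)]
centroids = kmeans(active_mornings + active_evenings, k=2)
```

On real deployments the day vectors would come from the sensor network's activity logs, and the choice of k (how many behavioral-day types exist for a given resident) becomes a model-selection question in its own right.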
An additional goal is to adapt the power consumption of motes to the behavior of the resident, as learned through the extended CAR patterns. A distributed power-management scheme will be designed, implemented in the WSN, and tested on the Alarm-Net experimental platform. The result will be a new, power-efficient way to manage energy consumption and extend mote lifetimes by mapping the behavior of the motes to the behavior of the resident in WSNs applied to the medical domain.
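One simple way to map mote behavior to resident behavior is to stretch the sensor sampling interval during hours the learned activity pattern marks as quiet. The following is an illustrative sketch under that assumption; the function, its parameters, and the activity profile are invented, not taken from the Alarm-Net work:

```python
# Hypothetical duty-cycle policy: sample often when the resident is likely
# active, sleep longer when the learned CAR profile says the home is quiet.
def sampling_interval_s(hour, activity_prob, base=10, max_interval=300):
    """Return a sampling interval in seconds for the given hour.

    activity_prob maps hour-of-day -> learned probability of resident activity.
    p = 1 gives the base (fastest) interval; p = 0 gives the max (slowest).
    """
    p = activity_prob[hour]
    # Linear interpolation between the two extremes.
    return base + (max_interval - base) * (1 - p)

# Made-up learned profile: busy at 8:00, nearly idle at 3:00.
profile = {8: 0.9, 3: 0.05}
busy = sampling_interval_s(8, profile)   # short interval when activity is likely
idle = sampling_interval_s(3, profile)   # long sleep when the home is quiet
```

A linear mapping is the simplest choice; a real scheme would also have to bound worst-case detection latency for alarms, which is presumably where the project's alarm classifier interacts with power management.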
Cortically-coupled computing is the idea of harnessing the brain’s information processing capabilities for solving difficult tasks and hard computational problems. There are a number of applications for such a technology – direct brain control of robotic devices (e.g., prosthetics), brain-based cursor control for communication (e.g., for the paralyzed), high-throughput image search for triage purposes, monitoring cognitive load for user interface research, etc.
Much of the past research in cortically-coupled computing has relied on measuring brain activity using electroencephalography (EEG), which involves recording electrical signals from different locations on the scalp. EEG is popular because it is a non-invasive and relatively inexpensive method for recording brain signals. However, EEG signals are also notoriously noisy and their relation to neural activity is still poorly understood.
We propose to investigate the relationship between brain surface recordings (electrocorticography or ECoG) and scalp recordings (EEG). We seek to build on past research and shed new light on the relationship between EEG and ECoG by collecting and analyzing simultaneously recorded EEG and ECoG data.