Our research
Content type
Downloads (454)
Events (444)
Groups (151)
News (2729)
People (741)
Projects (1104)
Publications (12546)
Videos (5665)
Labs
Research areas
Algorithms and theory (82)
Communication and collaboration (102)
Computational linguistics (49)
Computational sciences (80)
Computer systems and networking (279)
Computer vision (76)
Data mining and data management (16)
Economics and computation (19)
Education (35)
Gaming (45)
Graphics and multimedia (132)
Hardware and devices (98)
Health and well-being (30)
Human-computer interaction (296)
Machine learning and intelligence (173)
Mobile computing (14)
Quantum computing (0)
Search, information retrieval, and knowledge management (199)
Security and privacy (88)
Social media (9)
Social sciences (101)
Software development, programming principles, tools, and languages (197)
Speech recognition, synthesis, and dialog systems (11)
Technology for emerging markets (4)
1–25 of 296
We present a new interactive approach to 3D scene understanding. Our system, SemanticPaint, allows users to scan their environment whilst interactively segmenting the scene simply by reaching out and touching any desired object or surface. Our system continuously learns from these segmentations and labels new, unseen parts of the environment. Unlike offline systems, where capture, labeling and batch learning often take hours or even days to perform, our approach is fully online.
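The description above covers the interaction loop rather than the underlying algorithm, so the following is only a minimal Python sketch of the online-labelling idea: learn object labels from user-touched voxels as they arrive and propagate them to unseen voxels. The nearest-class-mean classifier and the colour-plus-normal feature layout are assumptions for illustration, not the SemanticPaint implementation.

# Illustrative sketch (not the SemanticPaint system): an online classifier that
# learns labels from touched voxels and predicts labels for unseen voxels.
import numpy as np
from collections import defaultdict

class OnlineVoxelLabeler:
    """Streaming nearest-class-mean classifier over per-voxel features."""
    def __init__(self):
        self.sums = defaultdict(lambda: None)   # label -> running feature sum
        self.counts = defaultdict(int)          # label -> number of examples

    def add_touch(self, label, features):
        """Incorporate a user-provided label for one touched voxel."""
        f = np.asarray(features, dtype=float)
        self.sums[label] = f if self.sums[label] is None else self.sums[label] + f
        self.counts[label] += 1

    def predict(self, features):
        """Label an unseen voxel by its nearest class mean."""
        f = np.asarray(features, dtype=float)
        best, best_dist = None, np.inf
        for label, total in self.sums.items():
            dist = np.linalg.norm(f - total / self.counts[label])
            if dist < best_dist:
                best, best_dist = label, dist
        return best

labeler = OnlineVoxelLabeler()
labeler.add_touch("table", [0.6, 0.5, 0.4, 0.0, 1.0, 0.0])  # assumed colour + normal features
labeler.add_touch("floor", [0.3, 0.3, 0.3, 0.0, 0.0, 1.0])
print(labeler.predict([0.58, 0.49, 0.41, 0.0, 0.9, 0.1]))   # -> "table"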
Project details
Microsoft Research is conducting a study of how people share both physical and digital things in home settings. We’re interested in finding out what makes it easy (or difficult) for people to share ownership of physical or digital belongings in order to better understand how to support people in sharing digital things in the home.
Project details
Labs: Cambridge
To facilitate ethics reviews for MSR research projects, we are creating a new ethics framework specific to Computer Science research.
Project details
The RoomAlive Toolkit is an open source SDK that enables developers to calibrate a network of multiple Kinect sensors and video projectors. The toolkit also provides a simple projection mapping sample that can be used as a basis to develop new immersive augmented reality experiences similar to those of the IllumiRoom and RoomAlive research projects.
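As a rough illustration of what projection mapping involves (not the RoomAlive Toolkit's own API), the sketch below maps a 3D point in room coordinates to the projector pixel that lights it, given calibrated projector intrinsics K and pose (R, t). The intrinsics values are assumed numbers for illustration.

# Generic projection-mapping sketch: given a projector's calibrated intrinsics K
# and pose (R, t) in room coordinates, find the pixel that illuminates a 3D point.
import numpy as np

def project_point(K, R, t, point_world):
    """Return the (u, v) projector pixel that illuminates a 3D world point."""
    p_cam = R @ np.asarray(point_world, dtype=float) + t   # world -> projector frame
    u, v, w = K @ p_cam                                    # perspective projection
    return u / w, v / w

K = np.array([[1400.0, 0.0, 640.0],      # assumed focal lengths / principal point
              [0.0, 1400.0, 400.0],
              [0.0, 0.0, 1.0]])
R, t = np.eye(3), np.array([0.0, 0.0, 0.0])  # projector placed at the room origin
print(project_point(K, R, t, [0.2, -0.1, 2.5]))  # pixel hit on a wall 2.5 m away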
Project details
Labs: Redmond
Microsoft believes the Surface Hub will be as empowering and as transformative to teams and the shared work environment as the PC was to individuals and the desk. The Surface Hub creates new modalities for creating and brainstorming with its unique large-screen productivity apps and capabilities. We believe it will be a critical component for the modern workplace, home, or other venue where people need to come together to think, ideate, and produce. This RFP is now closed.
Project details
Labs: Redmond
The Eye Gaze keyboard is a project to enable people who are unable to speak or use a physical keyboard to communicate using only their eyes. Our initial prototypes are based around an on-screen QWERTY keyboard very similar to the 'taptip' keyboard built into Windows 8, which has been extended to respond to eye gaze input from a sensor bar such as the Tobii EyeX. Our goal is to improve communication speed by 25% compared to experienced users of off-the-shelf Speech Generating Devices.
This project aims to enable people to converse with their devices. We are trying to teach devices to engage with humans using human language in ways that appear seamless and natural to humans. Our research focuses on statistical methods by which devices can learn from human-human conversational interactions and can situate responses in the verbal context and in physical or virtual environments.
Project details
Labs: Redmond
Project Blush explores the materiality of digital ephemera and people's receptiveness to 'digital jewellery', investigating the materials and aesthetics that may allow wearables to become jewellables. Project Blush is a research project that originates from the Human Experience and Design group (HXD). HXD specialises in designing and fabricating new human experiences with computing. These play on many different kinds of human values, from amplifying efficiency and effectiveness to creating delight.
Project details
Labs: Cambridge
Team Three Rs is a group of Microsoft researchers working on the Global Learning XPRIZE challenge, which aims to create software to help children in the developing world achieve success in learning the "Three Rs" (Reading, Writing, and Arithmetic).
Project details
We envision using Eye Gaze technology to bring independent mobility to people living with disabilities who are unable to use a joystick.
We explore grip and motion sensing to afford new techniques that leverage how users naturally manipulate tablet and stylus devices during pen-and-touch interaction. We can detect whether the user holds the pen in a writing grip or tucked between the fingers. We can distinguish bare-handed inputs, such as drag and pinch gestures, from touch gestures produced by the hand holding the pen, and we can sense which hand grips the tablet and determine the screen's relative orientation to the pen.
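As a hedged sketch of the kind of decision logic this enables (not the actual sensing pipeline), the example below classifies a writing grip versus a tucked grip from two simple features and separates bare-handed touches from touches by the pen-holding hand. The feature names, thresholds, and hand labels are illustrative assumptions.

# Hypothetical grip/touch disambiguation sketch; features and thresholds are assumptions.
from dataclasses import dataclass

@dataclass
class PenState:
    grip_contact_area: float   # normalised capacitive contact area on the barrel
    barrel_pitch_deg: float    # pen pitch relative to the tablet surface

def classify_grip(state: PenState) -> str:
    """Coarse writing-grip vs tucked-grip decision from two simple features."""
    if state.grip_contact_area > 0.35 and 20.0 < state.barrel_pitch_deg < 80.0:
        return "writing grip"
    return "tucked between fingers"

def classify_touch(touch_hand: str, pen_hand: str) -> str:
    """Distinguish bare-handed input from touches by the hand holding the pen."""
    return "bare-handed gesture" if touch_hand != pen_hand else "pen-hand touch"

print(classify_grip(PenState(grip_contact_area=0.5, barrel_pitch_deg=45.0)))
print(classify_touch(touch_hand="left", pen_hand="right"))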
Project details
Labs: Redmond
Mano-a-Mano is a unique spatial augmented reality system that combines dynamic projection mapping, multiple perspective views, and device-less interaction to support face-to-face, or dyadic, interaction with 3D virtual objects. Its main advantage over more traditional AR approaches is that users can interact with 3D virtual objects and with each other without cumbersome devices that obstruct face-to-face interaction.
Project details
Labs: Redmond
We present a new real-time articulated hand tracker which can enable new possibilities for human-computer interaction (HCI). Our system accurately reconstructs complex hand poses across a variety of subjects using only a single depth camera. It also offers a high degree of robustness, continually recovering from tracking failures. However, the most distinctive aspect of our tracker is its flexibility in terms of camera placement and operating range.
Project details
Labs: Redmond
The physical charts are an attempt to make data and data visualisations legible to ordinary people in their daily lives. In response to the increasing sophistication of data visualisations and the seemingly unquestioning quest for novelty, the charts make playful use of long-established and highly familiar representations such as pie charts and bar graphs. Rather than estranging viewers, the objective is to enable them to engage with and comprehend data at a glance.
Project details
Labs: Cambridge
EmotoCouch is a furniture prototype that uses lights, changing patterns and haptic feedback to change its appearance and thereby convey emotion. EmotoCouch is built using the Lab of Things platform.
Project details
Quick interaction between a human teacher and a learning machine presents numerous benefits and challenges when working with web-scale data. The human teacher guides the machine towards accomplishing the task of interest. The system leverages big data to find examples that maximize the training value of its interaction with the teacher.
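One common way to "find examples that maximize the training value" of each interaction is uncertainty sampling, sketched below as a hedged illustration: the system scans a large unlabelled pool and asks the teacher about the item its current model is least sure of. The logistic scorer, feature dimensions, and pool size are assumptions, not the project's actual system.

# Illustrative uncertainty-sampling sketch over a large unlabelled pool.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def most_uncertain(weights, pool):
    """Return the index of the pool item the current model is least sure about."""
    probs = sigmoid(pool @ weights)
    return int(np.argmin(np.abs(probs - 0.5)))

rng = np.random.default_rng(0)
pool = rng.normal(size=(100000, 8))      # stand-in for web-scale candidate examples
weights = rng.normal(size=8)             # current model learned from the teacher so far
idx = most_uncertain(weights, pool)
print(f"ask the teacher to label example {idx}")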
Project details
Labs: New York | Redmond
We are looking for participants to engage in a personalised online shopping experience. You will receive a £40 shopping voucher for your participation and get the opportunity to purchase a book at 90% discount. The experiment involves a session of online shopping during which we will measure your eye movements and bodily responses. The shopping session is followed by an interview and we will ask you to fill out a final questionnaire to give us feedback on the study.
Project details
Labs: Cambridge
We present a machine learning technique for estimating absolute, per-pixel depth using any conventional monocular 2D camera, with minor hardware modifications. Our approach targets close-range human capture and interaction where dense 3D estimation of hands and faces is desired. We use hybrid classification-regression forests to learn how to map from near infrared intensity images to absolute, metric depth in real-time. We demonstrate a variety of human computer interaction scenarios.
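As a hedged sketch of the general idea (not the project's hybrid classification-regression forests or its real infrared data), the example below trains an off-the-shelf regression forest to map small intensity patches to metric depth on synthetic data that mimics the inverse-square falloff of near-infrared intensity with distance.

# Toy regression-forest depth estimation on synthetic "IR patch" features.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
depth = rng.uniform(0.2, 1.0, size=5000)                   # metres (close range)
# Toy physics: IR intensity falls off roughly with 1/depth^2, plus sensor noise.
patches = (1.0 / depth[:, None] ** 2) + rng.normal(0, 0.05, size=(5000, 9))

forest = RandomForestRegressor(n_estimators=20, random_state=0)
forest.fit(patches, depth)

test_depth = 0.5
test_patch = (1.0 / test_depth ** 2) + rng.normal(0, 0.05, size=(1, 9))
print(forest.predict(test_patch))   # should be close to 0.5 m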
Project details
Climatology gives you climate information for anywhere on Earth: temperature, rain and sunniness. Whether you are finding warm, dry places to go on holiday in December, avoiding rain for your wedding, or finding out what the climate is like in Kazakhstan in April, Climatology lets you discover the information you want.
Project details
Labs: Cambridge
Microsoft Research is looking for 20 high school students to participate in a study exploring existing and potentially new uses of social media and communication technologies to stay connected with friends and share experiences.
Project details
Labs: Redmond
Embedding professional services in productivity tools. Learn more at http://writingassistant.cloudapp.net/
Project details
Labs: FUSE Labs
An app that lets people check in to the commuting trips they take and communicate with their fellow travelers. The app for the locations we pass through on the way to where we're going.
Project details
Labs: FUSE Labs
This paper presents a method for acquiring dense nonrigid shape and deformation from a single monocular depth sensor. We focus on modeling the human hand, and assume that a single rough template model is available. We combine and extend existing work on model-based tracking, subdivision surface fitting, and mesh deformation to acquire detailed hand models from as few as 15 frames of depth data.
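As a deliberately simplified stand-in for the template-fitting idea (a data term balanced against a deformation-smoothness prior), the sketch below fits a 1D "template" to noisy observations by solving a regularised least-squares problem; the real method fits a subdivision-surface hand template to 3D depth data, which this toy example does not attempt.

# Toy template fitting: data term + smoothness prior on the deformation.
import numpy as np

n = 20
template = np.linspace(0.0, 1.0, n)                    # template "vertex" positions
observed = template + 0.1 * np.sin(6.0 * template)     # noisy depth-like observations
lam = 5.0                                              # deformation-smoothness weight

# Difference operator D: penalises non-smooth deviation from the template.
D = np.eye(n, k=1)[:-1] - np.eye(n)[:-1]
# Solve min_x ||x - observed||^2 + lam * ||D (x - template)||^2 in closed form.
A = np.eye(n) + lam * D.T @ D
b = observed + lam * D.T @ D @ template
fitted = np.linalg.solve(A, b)
print(np.round(fitted[:5], 3))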
Project details
Labs: Cambridge