Our research
Content type
Downloads (441)
Events (396)
Groups (150)
News (2592)
People (803)
Projects (1065)
Publications (11992)
Videos (5233)
Research areas
Algorithms and theory (81)
Communication and collaboration (100)
Computational linguistics (44)
Computational sciences (79)
Computer systems and networking (273)
Computer vision (22)
Data mining and data management (11)
Economics and computation (20)
Education (31)
Gaming (42)
Graphics and multimedia (132)
Hardware and devices (93)
Health and well-being (26)
Human-computer interaction (285)
Machine learning and intelligence (163)
Mobile computing (8)
Quantum computing (0)
Search, information retrieval, and knowledge management (194)
Security and privacy (86)
Social media (6)
Social sciences (100)
Software development, programming principles, tools, and languages (191)
Speech recognition, synthesis, and dialog systems (7)
Technology for emerging markets (0)
1–25 of 285
We explore grip and motion sensing to afford new techniques that leverage how users naturally manipulate tablet and stylus devices during pen-and-touch interaction. We can detect whether the user holds the pen in a writing grip or tucked between the fingers, distinguish bare-handed inputs such as drag and pinch gestures from touch gestures produced by the hand holding the pen, sense which hand grips the tablet, and determine the screen's orientation relative to the pen.
Project details
Labs: Redmond
Mano-a-Mano is a unique spatial augmented reality system that combines dynamic projection mapping, multiple perspective views, and device-less interaction to support face-to-face, or dyadic, interaction with 3D virtual objects. Its main advantage over more traditional AR approaches is that users can interact with 3D virtual objects, and with each other, without cumbersome devices that obstruct face-to-face interaction.
Project details
Labs: Redmond
We present a new real-time articulated hand tracker which enables new possibilities for human-computer interaction (HCI). Our system accurately reconstructs complex hand poses across a variety of subjects using only a single depth camera. It is also highly robust, continually recovering from tracking failures. The most distinctive aspect of our tracker, however, is its flexibility in terms of camera placement and operating range.
Project details
RoomAlive is a proof-of-concept prototype that transforms any room into an immersive, augmented, magical entertainment experience. RoomAlive presents a unified, scalable approach for interactive projection mapping that dynamically adapts content to any room. Users can touch, shoot, stomp, dodge and steer projected content that seamlessly co-exists with their existing physical environment.
Project details
Labs: Redmond
The physical charts are an attempt to make data and data visualisations legible to ordinary people in their daily lives. In response to the increasing sophistication of data visualisations and the seemingly unquestioning quest for novelty, the charts make playful use of long-established and highly familiar representations such as pie charts and bar graphs. Rather than estrange viewers, the objective is to enable them to engage with and comprehend data at a glance.
Project details
Labs: Cambridge
EmotoCouch is a furniture prototype that uses lights, changing patterns and haptic feedback to change its appearance and thereby convey emotion. EmotoCouch is built using the Lab of Things platform.
Project details
Quick interaction between a human teacher and a learning machine presents numerous benefits and challenges when working with web-scale data. The human teacher guides the machine towards accomplishing the task of interest. The system leverages big data to find examples that maximize the training value of its interaction with the teacher.
Project details
Labs: New York | Redmond
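A minimal sketch of the example-selection idea described above, under assumptions not stated in the summary: here the system uses uncertainty sampling, surfacing the unlabeled items whose predicted probabilities sit closest to the decision boundary so the teacher's labels carry the most training value. The function name and scoring rule are illustrative, not the project's actual method.

```python
# Hypothetical sketch: pick the unlabeled examples the current model is
# least sure about, and route those to the human teacher for labeling.
import numpy as np

def select_for_teacher(probs, k=3):
    """Return indices of the k examples whose predicted positive-class
    probability is closest to the 0.5 decision boundary."""
    uncertainty = 1.0 - np.abs(probs - 0.5) * 2.0  # 1 at p=0.5, 0 at p=0 or 1
    return np.argsort(uncertainty)[-k:][::-1]      # most uncertain first

# Example: model's predicted positive-class probability for six items.
probs = np.array([0.95, 0.48, 0.10, 0.52, 0.80, 0.30])
print(select_for_teacher(probs, k=2))  # the two items nearest p = 0.5
```

At web scale the same scoring would be applied over an index of candidate examples rather than an in-memory array, but the selection criterion is unchanged.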
We are looking for participants to engage in a personalised online shopping experience. You will receive a £40 shopping voucher for your participation and get the opportunity to purchase a book at 90% discount. The experiment involves a session of online shopping during which we will measure your eye movements and bodily responses. The shopping session is followed by an interview and we will ask you to fill out a final questionnaire to give us feedback on the study.
Project details
Labs: Cambridge
We present a machine learning technique for estimating absolute, per-pixel depth using any conventional monocular 2D camera, with minor hardware modifications. Our approach targets close-range human capture and interaction where dense 3D estimation of hands and faces is desired. We use hybrid classification-regression forests to learn how to map from near infrared intensity images to absolute, metric depth in real-time. We demonstrate a variety of human computer interaction scenarios.
Project details
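The mapping described above, from near-infrared intensity to metric depth, can be illustrated with a toy regression forest. This is a sketch on synthetic data standing in for the paper's hybrid classification-regression forests: it exploits the rough inverse-square falloff of NIR intensity with distance, whereas the real system learns from calibrated NIR/depth image pairs with richer per-pixel features.

```python
# Illustrative sketch only: regress per-pixel depth from NIR intensity
# with a small random forest, using a synthetic inverse-square model.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
depth = rng.uniform(0.2, 1.2, size=5000)                # metres (synthetic)
intensity = 1.0 / depth**2 + rng.normal(0, 0.05, 5000)  # NIR falloff + noise
X = intensity.reshape(-1, 1)

forest = RandomForestRegressor(n_estimators=30, random_state=0).fit(X, depth)
# A pixel observed at ~0.5 m produces intensity ~1/0.5^2 = 4.0.
pred = forest.predict(np.array([[4.0]]))
print(float(pred[0]))  # close to 0.5 m
```

In the real system each leaf stores a depth distribution and inference runs per pixel in real time; the forest here only shows that the intensity-to-depth mapping is learnable.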
Climatology gives you climate information for anywhere on Earth: temperature, rain, and sunniness. Whether you are finding warm, dry places to go on holiday in December, avoiding rain for your wedding, or finding out what the climate is like in Kazakhstan in April, Climatology lets you discover the information you want.
Project details
Labs: Cambridge
Microsoft Research is looking for 20 high school students to participate in a study exploring existing and potentially new uses of social media and communication technologies to stay connected with friends and share experiences.
Project details
Labs: Redmond
Embedding professional services in productivity tools
Project details
Labs: FUSE Labs
An app that lets people check in to the commuting trips they take and communicate with their fellow travelers. The app for the locations we pass through on the way to where we're going.
Project details
Labs: FUSE Labs
This paper presents a method for acquiring dense nonrigid shape and deformation from a single monocular depth sensor. We focus on modeling the human hand, and assume that a single rough template model is available. We combine and extend existing work on model-based tracking, subdivision surface fitting, and mesh deformation to acquire detailed hand models from as few as 15 frames of depth data.
Project details
Labs: Cambridge
Online 3D reconstruction is gaining newfound interest due to the availability of real-time consumer depth cameras. The basic problem takes live overlapping depth maps as input and incrementally fuses them into a single 3D model. This is particularly challenging when real-time performance is desired without trading off quality or scale. We contribute an online system for large and fine-scale volumetric reconstruction based on a memory- and speed-efficient data structure.
Project details
Labs: Cambridge
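The memory-efficiency idea behind the data structure above can be sketched as lazy voxel allocation: instead of a dense grid, voxels are created on demand in a hash map and each depth observation is fused into a running truncated-signed-distance (TSDF) average. The voxel size, key scheme, and weighting below are illustrative assumptions, not the system's actual implementation.

```python
# Hypothetical sketch: sparse TSDF fusion via a hash map keyed by
# integer voxel coordinates. Only observed voxels consume memory.
VOXEL = 0.01  # voxel size in metres (illustrative)

tsdf = {}  # (ix, iy, iz) -> (sdf_value, weight)

def fuse(point, sdf):
    """Fuse one signed-distance observation at a 3D point into the grid."""
    key = tuple(int(round(c / VOXEL)) for c in point)   # quantise to a voxel
    old_sdf, w = tsdf.get(key, (0.0, 0.0))
    tsdf[key] = ((old_sdf * w + sdf) / (w + 1.0), w + 1.0)  # running average

# Two observations of the same surface point average into one voxel.
fuse((0.103, 0.02, 0.5), +0.004)
fuse((0.104, 0.02, 0.5), -0.002)
print(len(tsdf), tsdf[(10, 2, 50)][0])
```

Production systems hash coarser voxel *blocks* and stream distant blocks out of GPU memory, but the allocate-on-observation principle is the same.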
We present Kinectrack, a new six degree-of-freedom (6-DoF) tracker which allows real-time and low-cost pose estimation using only commodity hardware.
Project details
Labs: Cambridge
We contribute a thin, transparent, and low-cost design for electric field sensing, allowing for 3D finger and hand tracking, as well as in-air gestures on mobile devices.
Project details
Labs: Cambridge
We present a new type of augmented mechanical keyboard, sensing rich and expressive motion gestures performed both on and directly above the device.
Project details
Labs: Cambridge
We present RetroDepth, a new vision-based system for accurately sensing the 3D silhouettes of hands, styluses, and other objects, as they interact on and above physical surfaces.
Project details
Labs: Cambridge
Computer Aided Language Learning (CALL)
Project details
Labs: Asia
NewsPad is a collaborative news editor designed to empower small communities to write articles collaboratively through: community sourcing, structured stories, and the ability to embed the story anywhere.
Project details
Labs: FUSE Labs
Site under construction
Project details
Labs: Redmond
ViiBoard uses vision techniques to significantly enhance the user experience on large touch displays (e.g. Microsoft Perceptive Pixels) in two directions: human-computer interaction and immersive remote collaboration.
Project details
Labs: Redmond
Eventful is a system that helps produce news reports by recruiting and guiding remote and on-location crowd workers who attend events in person to perform information-collection missions. Eventful explores, and hopes to problematize, the concept of journalism as a service. For initial details, please read the short paper; a full paper is coming soon.
Project details
Labs: FUSE Labs