NUIgraph is a prototype Windows 10 app for visually exploring data in order to discover and share insight.
To raise user awareness of third-party tracking, we have investigated designs for visualizing cookie traffic as users browse the Web.
Platform for Situated Interaction
Instructions for Mechanical Turk tasks on classifying images
We present a new interactive approach to 3D scene understanding. Our system, SemanticPaint, allows users to scan their environment while interactively segmenting the scene simply by reaching out and touching any desired object or surface. Our system continuously learns from these segmentations and labels new, unseen parts of the environment. Unlike offline systems, where capture, labeling, and batch learning often take hours or even days, our approach is fully online.
To facilitate ethics reviews for MSR research projects, we are creating a new ethics framework specific to computer science research.
The Ability team is a virtual team consisting of members of MSR's Labs who work on accessible technologies for people with disabilities.
The RoomAlive Toolkit is an open source SDK that enables developers to calibrate a network of multiple Kinect sensors and video projectors. The toolkit also provides a simple projection mapping sample that can be used as a basis to develop new immersive augmented reality experiences similar to those of the IllumiRoom and RoomAlive research projects.
Microsoft believes the Surface Hub will be as empowering and as transformative to teams and the shared work environment as the PC was to individuals and the desk. The Surface Hub creates new modalities for creating and brainstorming with its unique large-screen productivity apps and capabilities. We believe it will be a critical component for the modern workplace, home, or other venue where people need to come together to think, ideate, and produce. This RFP is now closed.
Presenter Camera is a desktop application designed to improve the quality of video seen by remote attendees of a presentation.
We envision a future Internet of Things where every human-created artifact in the world that uses electricity will be connected to the internet. We are creating new experiences and technologies for the coming convergence of digital and physical systems that this future enables.
The Eye Gaze keyboard project aims to enable people who cannot speak or use a physical keyboard to communicate using only their eyes. Our initial prototypes are based on an on-screen QWERTY keyboard, very similar to the 'taptip' keyboard built into Windows 8, extended to respond to eye gaze input from a sensor bar such as the Tobii EyeX. Our goal is to improve communication speed by 25% compared to experienced users of off-the-shelf Speech Generating Devices.
This project aims to enable people to converse with their devices. We are trying to teach devices to engage with humans using human language in ways that appear seamless and natural to humans. Our research focuses on statistical methods by which devices can learn from human-human conversational interactions and can situate responses in the verbal context and in physical or virtual environments.
Project Blush explores the materiality of digital ephemera and people's receptiveness to 'digital jewellery', investigating the materials and aesthetics that may allow wearables to become jewellables. The project originates from the Human Experience and Design group (HXD), which specialises in designing and fabricating new human experiences with computing. These play on many different kinds of human values, from amplifying efficiency and effectiveness to creating delight.
We envision using Eye Gaze technology to bring independent mobility to people living with disabilities who are unable to use a joystick.
We explore grip and motion sensing to afford new techniques that leverage how users naturally manipulate tablet and stylus devices during pen-and-touch interaction. We can detect whether the user holds the pen in a writing grip or tucked between the fingers. We can distinguish bare-handed inputs, such as drag and pinch gestures, from touch gestures produced by the hand holding the pen; we can also sense which hand grips the tablet and determine the screen's orientation relative to the pen.
Mano-a-Mano is a unique spatial augmented reality system that combines dynamic projection mapping, multiple perspective views and device-less interaction to support face-to-face, or dyadic, interaction with 3D virtual objects. Its main advantage over more traditional AR approaches is that users can interact with 3D virtual objects and with each other without cumbersome devices that obstruct face-to-face interaction.
We present a new real-time articulated hand tracker which can enable new possibilities for human-computer interaction (HCI). Our system accurately reconstructs complex hand poses across a variety of subjects using only a single depth camera. It is also highly robust, continually recovering from tracking failures. The most distinctive aspect of our tracker, however, is its flexibility in terms of camera placement and operating range.
RoomAlive is a proof-of-concept prototype that transforms any room into an immersive, augmented, magical entertainment experience. RoomAlive presents a unified, scalable approach for interactive projection mapping that dynamically adapts content to any room. Users can touch, shoot, stomp, dodge and steer projected content that seamlessly co-exists with their existing physical environment.
The physical charts are an attempt to make data and data visualisations legible to ordinary people in their daily lives. In response to the increasing sophistication of data visualisations and the seemingly unquestioning quest for novelty, the charts make playful use of long-established and highly familiar representations like pie charts and bar graphs. Rather than estranging viewers, the charts aim to let them engage with and comprehend data at a glance.
EmotoCouch is a furniture prototype that uses lights, changing patterns and haptic feedback to change its appearance and thereby convey emotion. EmotoCouch is built using the Lab of Things platform.
Quick interaction between a human teacher and a learning machine presents numerous benefits and challenges when working with web-scale data. The human teacher guides the machine towards accomplishing the task of interest. The system leverages big data to find examples that maximize the training value of its interaction with the teacher.
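One way to picture the system's example selection is an active-learning loop: the model repeatedly surfaces the unlabeled example it is least certain about and asks the teacher to label it. This uncertainty-sampling criterion, the toy 2-D data, and the simulated teacher below are all assumptions for illustration, not the project's actual method.

```python
# Sketch of an interactive machine-teaching loop using uncertainty
# sampling (a hypothetical stand-in for the system's selection criterion).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Simulated example pool: 2-D points labeled by a hidden rule (x + y > 0)
# that the human teacher knows but the machine must learn.
pool = rng.normal(size=(1000, 2))
teacher = lambda x: int(x[0] + x[1] > 0)        # stands in for the human

# Seed the model with one example of each class.
labeled_X = [np.array([-1.0, -1.0]), np.array([1.0, 1.0])]
labeled_y = [0, 1]

model = LogisticRegression()
for _ in range(20):                              # 20 teaching interactions
    model.fit(np.array(labeled_X), np.array(labeled_y))
    probs = model.predict_proba(pool)[:, 1]
    idx = int(np.argmin(np.abs(probs - 0.5)))    # most uncertain example
    labeled_X.append(pool[idx])
    labeled_y.append(teacher(pool[idx]))         # teacher supplies label
    pool = np.delete(pool, idx, axis=0)

accuracy = model.score(pool, [teacher(x) for x in pool])
print(f"accuracy after 20 teacher labels: {accuracy:.2f}")
```

The point of the sketch is the division of labor: the teacher only answers targeted queries, while the system does the work of finding the examples whose labels are most informative.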
We are looking for participants to engage in a personalised online shopping experience. You will receive a £40 shopping voucher for your participation and get the opportunity to purchase a book at 90% discount. The experiment involves a session of online shopping during which we will measure your eye movements and bodily responses. The shopping session is followed by an interview and we will ask you to fill out a final questionnaire to give us feedback on the study.
We present a machine learning technique for estimating absolute, per-pixel depth using any conventional monocular 2D camera, with minor hardware modifications. Our approach targets close-range human capture and interaction where dense 3D estimation of hands and faces is desired. We use hybrid classification-regression forests to learn how to map from near infrared intensity images to absolute, metric depth in real-time. We demonstrate a variety of human computer interaction scenarios.
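A simplified sketch of the depth-from-intensity idea: a regression forest maps small near-infrared intensity patches to a metric depth value. The synthetic data below assumes intensity under an active IR illuminant falls off roughly with the inverse square of depth; this, and the use of a plain regression forest rather than the paper's hybrid classification-regression forests, are simplifying assumptions for illustration.

```python
# Regression-forest depth estimation on synthetic IR data (illustrative
# sketch; the real system uses hybrid classification-regression forests
# trained on actual near-infrared imagery).
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)

# Synthetic ground truth: close-range depths, with observed intensity
# decaying roughly as 1/depth^2 plus sensor noise.
depth = rng.uniform(0.2, 1.0, size=5000)         # metres
intensity = 1.0 / depth**2 + rng.normal(0, 0.1, 5000)

# A 'patch' here is just the centre pixel plus two noisy neighbours.
patches = np.stack([intensity,
                    intensity + rng.normal(0, 0.05, 5000),
                    intensity + rng.normal(0, 0.05, 5000)], axis=1)

forest = RandomForestRegressor(n_estimators=50, random_state=0)
forest.fit(patches[:4000], depth[:4000])

pred = forest.predict(patches[4000:])
mae = float(np.mean(np.abs(pred - depth[4000:])))
print(f"mean absolute depth error: {mae:.3f} m")
```

In the real system the forest sees image patches rather than a single intensity value, which lets it disambiguate surface reflectance from distance; the sketch only conveys the learned intensity-to-depth mapping at its core.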