|Opening Keynote: Faculty Summit 2014
Harry Shum, executive vice president of Microsoft’s Technology and Research group, opens the Faculty Summit by highlighting major efforts at Microsoft Research. Two of significance are the integration of Microsoft Academic Search into Bing with Cortana (Microsoft’s new personal digital assistant), and major improvements in computer vision via deep learning techniques.
|Learning to Be a Depth Camera for Close-Range Human Capture and Interaction
Among Microsoft Research's contributions to SIGGRAPH 2014 is a machine learning technique for estimating absolute, per-pixel depth using any conventional monocular 2D camera with minor hardware modifications. Our approach targets close-range human capture and interaction, where dense 3D estimation of hands and faces is desired. We use hybrid classification-regression forests to learn how to map from near-infrared intensity images to absolute, metric depth in real time. We demonstrate a variety of human-computer interaction and capture scenarios. Experiments show an accuracy that outperforms a conventional light fall-off baseline and is comparable to high-quality consumer depth cameras, but with dramatically reduced cost, power consumption, and form factor.
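The core idea of learning an intensity-to-depth mapping can be illustrated with a toy sketch. This is not the paper's hybrid classification-regression forest; it is a minimal stand-in using scikit-learn's plain regression forest and synthetic data, where infrared intensity follows the inverse-square light fall-off cue the paper uses as its baseline.

```python
# Toy sketch: learn a per-pixel mapping from near-infrared intensity to
# metric depth with a regression forest. This is a simplified illustration,
# NOT the paper's hybrid classification-regression forest.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Synthetic training data: IR intensity falls off roughly with the inverse
# square of distance (plus sensor noise) -- the classic light fall-off cue.
depth = rng.uniform(0.2, 1.0, size=5000)               # metres
intensity = 1.0 / depth**2 + rng.normal(0.0, 0.05, size=depth.shape)

forest = RandomForestRegressor(n_estimators=20, random_state=0)
forest.fit(intensity.reshape(-1, 1), depth)

# Predict depth for new, noise-free intensity readings, pixel by pixel.
test_depth = np.array([0.3, 0.5, 0.8])                 # ground truth, metres
pred = forest.predict((1.0 / test_depth**2).reshape(-1, 1))
print(np.round(pred, 2))
```

The paper's real pipeline learns from image patches rather than single intensities, which lets the forest disambiguate surface albedo and orientation from distance; the single-feature version above can only recover the fall-off curve itself.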
|Haptic Feedback at the Fingertips
Presenting fingertip haptics: touch feedback on flat keyboards and touchscreens. Imagine feeling key clicks while typing on a Touch Cover or a Windows Phone, and locating a tile on a touchscreen through its unique tactile texture. Such effects are realized with piezoelectric actuators and electrostatic haptics technology.
|Quantum Computing 101
|FiRe 2014: Artificial Intelligence Helping Humans: Future Research
An interview with Peter Lee, Corporate Vice President and Head of Microsoft Research, hosted by Ed Butler, Presenter, BBC.
|Skype Translator: Breaking down language barriers
Peter Lee, Microsoft Research VP, shares insights and a sneak peek into the Skype Translator, derived from decades of research in speech recognition, automatic translation, and machine learning technologies. The Skype Translator is now being developed jointly by Skype and Microsoft Research teams, and combines voice and IM technologies with Microsoft Translator and neural network-based speech recognition to deliver near real-time cross-lingual communication. With the Skype Translator, we're one step closer to universal communication across language barriers, allowing people to connect in ways never before possible. In Lee's words, "It's truly magical."
Skype Translator demonstration from Code Conference 2014
|MonoFusion: Scanning objects in real time with a single web camera
This project offers a method for creating 3-D scans of arbitrary environments in real time, utilizing only a single RGB camera as the input sensor. The camera could be one already available in a tablet or a phone, or it could be a cheap web camera. No additional input hardware is required. This removes the need for power-intensive active sensors that do not work robustly in natural outdoor lighting. In seconds, a user can generate a compelling 3-D model, which can be used in augmented reality, for 3-D printing, or in computer-aided design.
|Motion-sensing keyboard detects hand gestures
Project Type-Hover-Swipe incorporates motion sensing into a keyboard, which detects subtle hand gestures that allow you to manipulate content on the screen. Pinch your fingers together to zoom into a map, swipe your hand across the keyboard to turn a page, steer a race car in a game using a virtual steering wheel, and more. Watch the video demonstration to see what is possible with this cool technology.
|Elevating human-computer interaction to a new level of sophistication
The Situated Interaction project, a research effort co-led by Eric Horvitz, a Microsoft distinguished scientist and managing director of Microsoft Research Redmond, and his colleague Dan Bohus, focuses on enabling many forms of complex, layered interaction between machines and humans.