At a sneak peek of Microsoft research projects, held on February 21, 2011, Microsoft and Microsoft Research showed off exciting new technologies in the area of natural user interface (NUI). Read on to get a glimpse of what's next.
Photo-Real Talking Head
See a 3-D, photo-real talking head, with freely controlled head motions and facial expressions. The 3-D talking head has many useful applications, such as voice agents, telepresence, gaming, and speech-to-speech translation.
This technology extends a high-quality, 2-D photo-real talking head to 3-D. It does this by first applying a 2-D-to-3-D reconstruction algorithm, frame by frame, to a 2-D video to construct a 3-D training database.
The 3-D talking head is animated by the geometric trajectory, while the facial expressions and articulator movements are rendered with dynamic texture sequences. Head motions and facial expression can be separately controlled by manipulating corresponding parameters.
- Easy control of head movement, illumination, and facial expressions with the versatility of the 3-D geometry model
- Dynamic texture mapping to bypass the difficulties in rendering soft tissues such as lips, tongues, eyes, and wrinkles
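The key property described above is that rigid head motion and facial expression are driven by separate parameters. The following is a minimal sketch of that idea, not the project's actual code: the mesh, the expression offsets, and the function names are all hypothetical, and real systems would use learned trajectory models and texture sequences rather than simple blend weights.

```python
import numpy as np

def head_rotation(yaw):
    """3x3 rotation about the vertical axis: the rigid head-motion parameter."""
    c, s = np.cos(yaw), np.sin(yaw)
    return np.array([[  c, 0.0,   s],
                     [0.0, 1.0, 0.0],
                     [ -s, 0.0,   c]])

def animate_frame(neutral_mesh, expression_offsets, weights, yaw):
    """Blend expression offsets onto the neutral mesh, then apply head pose.

    neutral_mesh:       (N, 3) vertex positions
    expression_offsets: (K, N, 3) per-expression displacement fields
    weights:            (K,) expression intensities (independent of head pose)
    yaw:                head rotation in radians (independent of expression)
    """
    expressive = neutral_mesh + np.tensordot(weights, expression_offsets, axes=1)
    return expressive @ head_rotation(yaw).T

# Toy example: a 2-vertex "mesh" with one smile-like displacement field.
mesh = np.array([[0.0, 0.0, 1.0], [1.0, 0.0, 0.0]])
smile = np.array([[[0.0, 0.1, 0.0], [0.0, 0.1, 0.0]]])
posed = animate_frame(mesh, smile, weights=np.array([0.5]), yaw=np.pi / 2)
```

Because the expression blend happens before the rigid transform, the same smile weights produce the same facial deformation at any head angle, which is the separate-control property the demo highlights.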
More Natural User Interface Projects
Microsoft Research and the Microsoft Applied Sciences Group often collaborate on natural user interface projects. Below are the latest projects from the Applied Sciences Group, also previewed on February 21, 2011.
The Wedge: See Smart Displays Through a New Lens
In the future, display technology will move towards being an interactive window on the digital world, where the display will know who and where you are, present content that is context aware, and enable natural interactions with the display surface.
View-Dependent and View-Sequential Autostereo 3-D
By using a special, flat optical lens (Wedge) behind an LCD monitor, a narrow beam of light is directed into each of a viewer’s eyes. A Kinect head tracker follows the user’s position relative to the display, allowing the prototype to steer that narrow beam to the user. The combination creates a 3-D image that follows the viewer without glasses or a fixed head position.
The same optical system used in the 3-D system, Wedge behind an LCD, is used to steer two separate images to two separate people rather than to two separate eyes, as in the 3-D case. A Kinect head tracker finds and tracks multiple viewers, and each viewer is sent his or her own unique image. Two people can therefore look at the same display yet see two completely different images. If the two viewers switch positions, each viewer's own image is continuously steered to follow him or her.
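The behavior described above amounts to keying each steered image to a persistent viewer identity rather than to a seat. Here is a minimal sketch of that bookkeeping, with hypothetical names throughout; the real system steers optical beams, not dictionary entries.

```python
def steer_views(viewers, assigned_images):
    """Pair each tracked viewer's current position with his or her own image.

    viewers:         dict of viewer_id -> horizontal position from the head tracker
    assigned_images: dict of viewer_id -> the image assigned to that viewer

    Because images follow persistent viewer IDs rather than fixed positions,
    each viewer keeps seeing the same content after swapping places.
    """
    return {vid: (pos, assigned_images[vid]) for vid, pos in viewers.items()}

# Two viewers, each with a private image.
images = {"alice": "movie", "bob": "spreadsheet"}
before = steer_views({"alice": -0.4, "bob": 0.4}, images)
after = steer_views({"alice": 0.4, "bob": -0.4}, images)  # they swap seats
```

After the swap, "alice" still receives "movie", now steered to her new position, which is the continuity property the demo describes.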
Seeing Through Displays for Interactions and Gestures
By using the flat Wedge optic in camera mode behind a special, transparent organic-light-emitting-diode display, images are captured that are both on and above the display. This enables touch and above-screen gesture interfaces, as well as telepresence applications.
Retro-Reflective Air-Gesture Interactive Display
Sometimes, it's better to control with gestures than with buttons. Placing a camera close to the projector in front of a retro-reflective screen makes every object cast a shadow, regardless of its color. This makes it easy to apply computer-vision algorithms to sense above-screen gestures that can be used for control, navigation, and many other applications.
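Since every object above the screen appears as a dark region against the brightly lit retro-reflective background, the vision problem reduces to finding dark pixels. A minimal sketch of that idea follows; the function names and threshold are hypothetical, and a real system would add denoising and blob tracking.

```python
import numpy as np

def shadow_mask(frame, background, threshold=40):
    """Mark pixels noticeably darker than the lit retro-reflective background.

    Any object in front of the screen casts a shadow regardless of its own
    color, so a simple brightness difference suffices to segment it.
    """
    return (background.astype(int) - frame.astype(int)) > threshold

def shadow_centroid(mask):
    """Centroid of the shadow pixels, usable as a crude pointer position."""
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        return None
    return float(xs.mean()), float(ys.mean())

# Toy frames: a uniformly bright background, then a dark 'hand' region.
bg = np.full((8, 8), 200, dtype=np.uint8)
frame = bg.copy()
frame[2:4, 5:7] = 30          # shadow cast by an object above the screen
pointer = shadow_centroid(shadow_mask(frame, bg))
```

The centroid of the shadow blob can then drive navigation or other controls, without any assumptions about skin color or lighting on the hand itself.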
View-Dependent Virtual Video Window
Using Kinect, a user’s position relative to a 3-D display is tracked to create the illusion of looking through a window. This view-dependent rendering technique is used in both the Wedge 3-D and multiview demos, but the effect is much more pronounced in this demo. The user should quickly realize the need for a multiview display, as the illusion holds for only one user on a conventional display. This technique, along with the Wedge 3-D output and 3-D input techniques being developed, is a basic building block for the ultimate telepresence display. This Magic Window is a bi-directional, light-field, interactive display that gives multiple users in a telepresence session the illusion that they are interacting and talking with each other through a simple glass window.
Academics, Enthusiasts to Get Kinect SDK
On Feb. 21, Microsoft announced plans to release this spring a non-commercial Kinect for Windows software development kit (SDK) from Microsoft Research, developed in collaboration with Microsoft’s Interactive Entertainment Business.
New, Natural User Interfaces
New user interfaces to be shown during TechFest 2011, Microsoft Research’s annual technology showcase, include one that turns the human body into a natural input device and another that provides a realistic, natural digital painting experience.