New, Natural User Interfaces
March 2, 2010 12:00 PM PT

Information technology evolves at a dizzying pace. Take the last five years, for example. In 2005, for many, online video was a balky, stuttering, frustrating experience. A new social-networking site called MySpace was just starting to make waves. And the word “twitter” was reserved mainly for birdsong.

Five years—an eon ago.

What will the next five years bring? Today’s innovators are hard at work trying to provide answers, and nowhere more so than at Microsoft Research, where thinking five or 10 years into the future is part of the job description.

One thing is for certain, though: A few years from now, we’ll be enjoying novel, advanced techniques for interacting with computers. New user interfaces—using gestures, machine anticipation, contextual awareness, and rich 3-D environments, both real and virtual and often immersive—will make computing easier, more inviting, and increasingly intuitive. Technology will move from providing tools to enhancing life in a natural, seamless way that was once unimaginable.

TechFest 2010, Microsoft Research’s annual showcase of revolutionary computer-science technology, will feature several projects aiming to enable that transition. On March 3 and 4, thousands of Microsoft employees will get a chance to view the research on display, talk with the researchers involved, and seek ways to incorporate the work into new products that could be used by millions of people worldwide.

For example, one effort focuses on transforming the human body itself into a user interface. Another gives artists a digital painting experience that rivals that of the traditional brush and canvas.

Let’s take a look:

Body Computing

“Human beings are well evolved to communicate with our environments and with other people in amazingly complex ways,” says Desney Tan, senior researcher in the Visualization and Interaction for Business and Entertainment (VIBE) group at Microsoft Research Redmond. “Unfortunately, most of our current computer input devices, such as mice and keyboards, use very little of this bandwidth and provide a communication channel that could be much richer.”

Desney Tan

For now, anyway. Tan and his colleagues in VIBE’s Computational User Experiences (CUE) group have been working, along with others across Microsoft, on natural user interfaces to bridge this gap. Work in areas such as speech technology, Microsoft Surface, and Project Natal provides insights into how sensor-enabled computers could point the way to a new era of human-computer interaction, but, Tan contends, such efforts only scratch the surface of what is possible.

“We aim to broaden this and to provide users with mobile natural user interfaces,” he says in introducing the work he’ll be demonstrating during TechFest. “Specifically, by instrumenting the body and turning it into the input device, we hope to create technologies that allow us to build a deeper connection to always-available computing.”

The project Tan will display is called Natural User Interfaces with Physiological Sensing. The demo has two parts. One utilizes electromyography—the sensing of electrical muscle activity—to infer finger gestures. The second uses bio-acoustic sensors that detect energy transmissions through the body, transforming the human body into a tap-based input device. Both parts of the work are driven by a simple armband worn on the upper forearm, which sends the signals wirelessly to a computing device.

Armband detects muscle activity and body acoustics
A simple armband worn on the upper forearm can detect muscle activity and bio-acoustic information that provide a new way to interact with computing devices.

With traditional mice and keyboards, computer input is enabled by physical transducers that exploit the natural dexterity of hands and fingers. The muscle-computer interface enables hand and finger input by means of gestures learned from an electromyographic recognizer collecting muscle signals from a band of sensors on the upper forearm.
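In broad strokes, such a recognizer windows the raw muscle signals, extracts simple features, and matches them against examples of each gesture recorded during a short calibration. The sketch below is a minimal, hypothetical illustration of that pipeline, not the team’s actual recognizer: the channel count, window length, gesture labels, and the choice of root-mean-square features with a nearest-centroid classifier are all assumptions made for the example.

import numpy as np

# Hypothetical setup: 8 EMG channels sampled in fixed-length windows.
# Each training example is one window of raw muscle-signal samples.
N_CHANNELS = 8
WINDOW = 256  # samples per channel per window

def emg_features(window):
    """Root-mean-square energy per channel, one simple EMG feature."""
    return np.sqrt(np.mean(window ** 2, axis=1))  # shape: (N_CHANNELS,)

def train(examples):
    """examples: dict mapping gesture label -> list of windows.
    Returns one centroid feature vector per gesture."""
    return {label: np.mean([emg_features(w) for w in windows], axis=0)
            for label, windows in examples.items()}

def classify(window, centroids):
    """Assign the window to the gesture whose centroid is nearest."""
    feats = emg_features(window)
    return min(centroids, key=lambda g: np.linalg.norm(feats - centroids[g]))

# Toy usage with random data standing in for real sensor readings.
rng = np.random.default_rng(0)
examples = {
    "pinch": [rng.normal(0.0, 1.0, (N_CHANNELS, WINDOW)) for _ in range(5)],
    "fist":  [rng.normal(0.0, 3.0, (N_CHANNELS, WINDOW)) for _ in range(5)],
}
centroids = train(examples)
print(classify(rng.normal(0.0, 3.0, (N_CHANNELS, WINDOW)), centroids))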

In a paper called Enabling Always-Available Input with Muscle-Computer Interfaces, presented in 2009 during the Association for Computing Machinery’s Symposium on User Interface Software and Technology, Tan—along with T. Scott Saponas and James A. Landay of the Computer Science and Engineering department at the University of Washington, Dan Morris of CUE, Jim Turner of Microsoft, and Ravin Balakrishnan of the Department of Computer Science at the University of Toronto—examines the potential for such work in situations in which people need to interact with technology but using a physical device is either impossible or impractical, such as while carrying heavy objects or when the hands are busy.

Graphical user interface projected onto a user's wrist
An armband equipped with a projector displays a user interface directly onto the wrist of a user.

A second paper, entitled Skinput: Appropriating the Body as an Input Surface—written by Tan, Morris, and Chris Harrison of the Human-Computer Interaction Institute at Carnegie Mellon University—describes how finger taps on skin create useful acoustic signals and how a bio-acoustic sensing array in an armband can listen to acoustically distinct signals from different areas of the body and classify the impacts. The armband also can include a projector able to display a dynamic graphical user interface onto, for example, the user’s wrist.
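Conceptually, classifying a tap works much the same way: each tap location produces an acoustically distinct signature at the armband’s sensors, so recognition reduces to comparing a new tap’s signature against labeled examples. The following sketch assumes a single sensor channel and invented location labels, and uses coarse spectral band energies with a nearest-neighbor match; it illustrates the idea rather than the Skinput implementation.

import numpy as np

def band_energies(signal, n_bands=10):
    """Split the magnitude spectrum of a tap recording into coarse bands
    and return the normalized energy in each band as its signature."""
    spectrum = np.abs(np.fft.rfft(signal))
    bands = np.array_split(spectrum, n_bands)
    feats = np.array([band.sum() for band in bands])
    return feats / (feats.sum() + 1e-9)  # normalize so loudness matters less

def classify_tap(signal, labeled_taps):
    """labeled_taps: list of (location_label, recorded_signal) examples.
    Returns the label of the closest example (1-nearest-neighbor)."""
    feats = band_energies(signal)
    return min(labeled_taps,
               key=lambda ex: np.linalg.norm(feats - band_energies(ex[1])))[0]

# Toy usage: synthetic recordings stand in for real bio-acoustic data.
rng = np.random.default_rng(1)
examples = [("wrist", rng.normal(size=1024)),
            ("palm", np.sin(np.linspace(0, 80, 1024)))]
new_tap = np.sin(np.linspace(0, 80, 1024)) + 0.1 * rng.normal(size=1024)
print(classify_tap(new_tap, examples))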

And what improvements will such advances provide? Tan draws a couple of real-life parallels.

“Remember a time before cellphones, when you had to make a bazillion calls to try to organize getting a group of friends to meet for a movie?” he asks. “Even more recently, remember what it meant when you were sitting at a bar arguing about the number of pro football Hall of Famers born in Seattle without being able to check it with Wikipedia?

“Just as cellphones and mobile computing have changed the way we operate in the world, so, too, will this vision of ‘intravenous computing’ revolutionize the way we use—and rely on—computers.”

Tan, named one of Technology Review’s Young Innovators Under 35 in 2007, envisions a number of ways in which such body-based computing could enhance the lives of people on the go.

“Imagine being able to seamlessly and invisibly conjure up someone’s name or their kid’s birthday as you pass them on the street,” he says. “Or getting real-time translations of restaurant signs in a foreign country and being able to pull down reviews effortlessly. In fact, you could even learn the new language by constantly immersing yourself in the dual world. Of course, you’d also want to go in and get information about the food you are about to eat—where it came from, who had come into contact with it, how healthy it is.

“We are literally trying to put all the world’s knowledge and computing at your fingertips.”

Paint or Pixels?

For the artistically inclined, Project Gustav offers immense promise, as Naga Govindaraju explains.

“Project Gustav is a realistic painting application that enables artists to become immersed in the digital painting experience,” he says. “Gustav combines a natural user interface with natural media simulation and brush modeling to deliver an easy-to-use, intuitive, and flexible tool for experienced artists and novices alike.”

Govindaraju, a senior scientist on the Applications Incubation team within Microsoft Research’s eXtreme Computing Group, is understandably proud of a project that replicates the real-life experience of painting on a digital canvas.

Digital painting with a brush
As Nelson Chu demonstrates, Project Gustav enables lifelike digital painting with a brush ...

“Project Gustav uses an elegant natural-media-simulation algorithm that lets users mix, smear, and otherwise interact with paint on the canvas,” he says. “A novel, 3-D, deformable brush model takes advantage of the physical input parameters—such as area, pressure, and orientation—offered by recent stylus- and touch-input hardware.”

New, high-powered GPUs supply the computational muscle that makes such simulation possible in real time.
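To give a concrete, if greatly simplified, sense of how stylus parameters can drive a brush model, here is a hypothetical sketch: a single brush dab whose footprint and opacity scale with pen pressure, blended into a canvas stored as a numpy array. The function name, parameters, and falloff rule are inventions for illustration; Gustav’s deformable 3-D brush and GPU-based media simulation are far more sophisticated.

import numpy as np

def dab(canvas, x, y, pressure, color, base_radius=12):
    """Blend one circular brush dab into canvas (H x W x 3 float array).
    Pressure (0..1) widens the footprint and deepens the opacity, loosely
    mimicking how a stylus reports force to the paint simulation."""
    radius = base_radius * (0.3 + 0.7 * pressure)
    h, w, _ = canvas.shape
    ys, xs = np.mgrid[0:h, 0:w]
    dist = np.sqrt((xs - x) ** 2 + (ys - y) ** 2)
    # Soft falloff from the dab's center to its edge.
    alpha = np.clip(1.0 - dist / radius, 0.0, 1.0) * pressure
    canvas[:] = (canvas * (1 - alpha[..., None])
                 + np.asarray(color) * alpha[..., None])

canvas = np.ones((128, 128, 3))  # start with a white canvas
for i, p in enumerate(np.linspace(0.2, 1.0, 20)):
    dab(canvas, 20 + 4 * i, 64, p, (0.1, 0.3, 0.8))  # a stroke pressing harder as it goes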

Digital painting with a fingertip
... or, as Bill Baxter shows, with a fingertip.

“Our team, which includes world-class digital-painting researchers such as Bill Baxter and Nelson Chu, has developed new algorithms for simulation of art media. These algorithms are carefully designed to leverage the power of recent graphics processors to deliver a new level of realism in digital painting,” Govindaraju says. “In addition, the team has developed new techniques for simulating 3-D brush dynamics and modeling the subtle interactions between brush, canvas, and the bristles of a brush.”
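One building block of this kind of media simulation is the smear: as the brush travels, it picks up some of the pigment beneath it and deposits a blend of carried and canvas paint at the next position. The sketch below shows that pickup-and-deposit loop on a plain numpy grid, with a single-pixel brush and made-up rates; it is only meant to convey the idea, while the real system runs much richer, physically based updates on the GPU.

import numpy as np

def smear(canvas, path, pickup=0.4, deposit=0.6):
    """Drag paint along path, a list of (row, col) canvas positions.
    The brush carries a running color: at each step it absorbs some of the
    paint under it and leaves some of its carried paint behind."""
    carried = canvas[path[0]].copy()
    for pos in path:
        under = canvas[pos].copy()
        carried = (1 - pickup) * carried + pickup * under        # pick up paint
        canvas[pos] = (1 - deposit) * under + deposit * carried  # lay it down

canvas = np.ones((64, 64, 3))
canvas[30:34, 10:20] = (0.9, 0.1, 0.1)           # a patch of red paint
smear(canvas, [(32, c) for c in range(10, 50)])  # drag it to the right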

The work on Project Gustav removes many of the user-interface hurdles posed by previous painting software, enabling users to concentrate on the creative process rather than having to concern themselves with computer distractions.

“An artist who usually shuns digital painting programs will have no problem getting right into Project Gustav,” Govindaraju says. “Gustav lets you focus on the task of painting or sketching, rather than on wrangling with a complex UI, as in many digital media programs. Load it up on a notebook computer and take it with you for sketching wherever you are.

“A novice interested in painting as a hobby can get going right away and obtain an almost real-life experience without purchasing art materials. Children can draw all they want and still get the feel of real-world drawing and doodling—without all the cleanup.”

And Govindaraju has seen Project Gustav work its spell firsthand.

“With the natural interaction metaphors, support for advanced input devices, and realistic modeling of paint media and painting tools,” he concludes, “you can really lose yourself in the program.

“One artist got so immersed in the demo that he tried to smear the chalk on the screen with a licked finger, forgetting that it was not a real canvas.”