Scanning Inside the Box at SIGGRAPH

SIGGRAPH 2013, the 40th International Conference and Exhibition on Computer Graphics and Interactive Techniques, takes place in Anaheim, California, from July 21 to 25. Sponsored by the Association for Computing Machinery, the conference is expected to attract 20,000 professionals eager to learn about advances across many technologies. The event has long been considered a breeding ground for significant new areas of research.

When principal researcher Andy Wilson attends SIGGRAPH 2013, he will present two contributions. The first is a hands-on demo of IllumiRoom, a proof of concept for an immersive living-room system that projects visualizations onto the areas surrounding a television screen, presented in April during CHI 2013. It’s the sort of near-science-fiction virtual-reality experience the research community has come to expect from Wilson and the Natural Interaction Research group he manages at Microsoft Research Redmond.

Andy Wilson

The second, a paper Wilson co-authored with Karl Willis of Carnegie Mellon University titled InfraStructs: Fabricating Information Inside Physical Objects for Imaging in the Terahertz Region, at first appears to be a complete departure from virtual reality. The InfraStructs project pioneers techniques for reading unique identifiers embedded within 3-D printed objects. What do such techniques have in common with the human-computer-interface work in which Wilson specializes?

On the surface, nothing. Look deeper, though, and there are areas of overlap with techniques Wilson has explored over the past decade. Look farther down the road, and there’s everything to do with the natural-interaction research for which he’s known.

A Busy Research Career

Wilson joined Microsoft Research in 2001 after completing his Ph.D. at the Massachusetts Institute of Technology (MIT).

“When I was at MIT, I was lucky to be with the MIT Media Lab,” Wilson says, referring to the research laboratory famed for interdisciplinary projects and numerous groundbreaking inventions. “That was an exciting place, an environment that could’ve been a tough act to follow. But I’ve been with Microsoft Research now for 12 years, and it’s been even better. I’ve had the good fortune to work on a wide variety of really interesting projects and to collaborate with first-rate scientists.”

Wilson’s achievements since joining Microsoft Research, though, have not resulted solely from good fortune, says Eric Horvitz, Microsoft distinguished scientist and managing co-director of Microsoft Research Redmond.

“Andy brings intellect, imagination, and a wealth of knowledge about core scientific principles to create magic,” Horvitz says. “It’s been a pleasure to follow his contributions and career trajectory at Microsoft Research.”

Although Wilson’s graduate studies focused on pattern recognition and gesture recognition, he was also becoming interested in natural user interfaces. At Microsoft Research, he jumped right into such a project.

“It was a surface-computing project, the prototype for what would become Microsoft PixelSense,” he recalls, “and that was a really interesting experience. It was an exercise in multiple research disciplines where we were asking, ‘What are the different ways we can use sensors to create new computing experiences?’

“Almost immediately after that, we began investigating depth cameras. Those were the early days, when we were still exploring the unique capabilities of the camera, building the initial demos, and exploring all the different techniques for working with the data.”

In fact, Wilson admits to hoarding a shelf full of old depth cameras. He can’t bring himself to throw them out because the collection is like a small museum of the technology’s evolution.

Since joining Microsoft Research, Wilson has authored or co-authored more than 40 conference papers and contributed numerous articles to research journals on topics ranging from multitouch and gesture-based interfaces to display technologies, depth cameras, and natural user interfaces.

And now he’s exploring the future of 3-D printing?

3-D Printing as a Research Topic

“A lot of people see 3-D printers simply as tools for rapid prototyping,” Wilson says. “We want to think about 3-D printing more deeply and approach it as a research topic. InfraStructs brings terahertz scanning into 3-D manufacturing. It opens up new possibilities for encoding hidden information as part of the 3-D fabrication process.”

THz imaging systems emit a pulse of THz radiation and measure reflections from material interfaces encountered on the outer and inner surfaces of an object.

The term terahertz (THz) refers to electromagnetic radiation with frequencies between 300 and 3,000 gigahertz (0.3 to 3 THz) and wavelengths of one millimeter or shorter. Because THz radiation passes through many common materials, such as plastics, while still resolving sub-millimeter features, it can capture accurate, high-resolution volumetric imagery from precise locations inside an object.
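
As a quick sanity check on those figures, wavelength is simply the speed of light divided by frequency. A minimal Python sketch (not from the paper) confirms the one-millimeter-and-shorter range:

```python
# Wavelength = speed of light / frequency; verify the quoted THz range.
c = 3.0e8  # speed of light in meters per second

for f_thz in (0.3, 3.0):
    wavelength_mm = c / (f_thz * 1e12) * 1e3  # convert meters to millimeters
    print(f"{f_thz} THz -> {wavelength_mm:.2f} mm")
# 0.3 THz -> 1.00 mm, 3.0 THz -> 0.10 mm
```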

“Karl (Willis), who was an intern and the first author of this paper, was intrigued by the new terahertz scanners,” Wilson says, “and I knew he would be doing something interesting with sensing or display technologies. If you think about what we do, it’s really about sensing and display.”

InfraStructs

InfraStructs are created by encoding information into a digital model that is then fabricated with material transitions inside a physical object. The object’s internal volume is imaged in the THz region and decoded into meaningful information.

The premise behind the project is that 3-D printing offers a way to embed information inside physical objects as a cost-effective part of the manufacturing process. InfraStructs is a prototype that demonstrates the feasibility of such a process, from designing and fabricating coded “tags” as unique identifiers to scanning inside a printed object to read and decode the information on those tags. The benefit of such an approach is that manufacturers can embed unique information, such as serial numbers or even simple programs, in coded tags by designing them as part of the 3-D printed product. This avoids the need to insert radio-frequency ID (RFID) tags or electronic chips, which add to manufacturing cost and complexity, or to apply bar codes, which sit on the object’s exterior and are vulnerable to tampering.
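
To make that pipeline concrete, here is a deliberately simplified sketch of how a bit string might be mapped to internal layers of material and void before printing. The layer thickness and the encoding are illustrative assumptions, not the tag designs described in the InfraStructs paper:

```python
# Hypothetical illustration only: map a bit string to a stack of layers
# (material vs. void) that a 3-D printer could fabricate inside an object.
# The layer thickness and one-bit-per-layer scheme are assumptions.

MATERIAL, VOID = "material", "void"
LAYER_MM = 0.5  # assumed layer thickness, comfortably above THz resolution

def encode_tag(bits: str):
    """Return a list of (layer_type, thickness_mm) describing the tag."""
    return [(MATERIAL if b == "1" else VOID, LAYER_MM) for b in bits]

print(encode_tag("101101"))
# [('material', 0.5), ('void', 0.5), ('material', 0.5), ...]
```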

“It has to do with the waveforms you get when the scan penetrates the object,” Wilson explains. “We are able to distinguish between transitions in the material, void or non-void, by measuring the reflection distance. We investigated a lot of the really early depth-camera technologies, and at some level, it’s all consistent with that line of research.”
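
In time-domain THz imaging, each material boundary shows up as a reflection whose arrival time corresponds to a round trip to that depth. A rough sketch of that conversion, assuming propagation at the speed of light and ignoring the refractive index of the printed material, which a real decoder would have to correct for:

```python
# Hypothetical sketch: convert reflection arrival times (in picoseconds)
# into boundary depths; the spacing between boundaries reveals the layers.
C_MM_PER_PS = 0.2998  # speed of light in millimeters per picosecond

def boundary_depths(arrival_times_ps):
    """Estimate the depth of each reflecting interface (halve the round trip)."""
    return [t * C_MM_PER_PS / 2 for t in arrival_times_ps]

print([round(d, 2) for d in boundary_depths([3.3, 6.7, 10.0])])
# roughly [0.49, 1.0, 1.5]: interfaces spaced about 0.5 mm apart
```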

Enhancing Computer Interactions

There is a clear relationship between Wilson’s previous work and InfraStructs, but how does the ability to read embedded tags relate to richer interactions between people and computers?

Wilson sees potential uses for InfraStructs beyond manufacturing, in future applications such as customized game accessories with embedded tags for location sensing; tabletop computing with tangible objects that can be sensed through other objects beneath them; and, when the technology becomes more portable, mobile robots with THz range finders that can recognize objects in the surrounding area.

What he finds exciting is the notion that objects themselves can be deeply coupled to software that has been designed to sense them. Furthermore, the tags could contain code, not just serial numbers.

“Down the road,” he says, “a program reads the object, and embedded within the object are further instructions, perhaps even code that can be read and compiled to further interrogate the object. There’s been some work in this direction using RFID tags. We talk about ‘the Internet of Things,’ and I would argue this fits into that vision.”

Viewed from this perspective, it’s clear that InfraStructs’ innovative synthesis of seemingly unrelated technologies fulfills SIGGRAPH’s primary goal: to promote leading-edge advances in computer graphics and interactive techniques.

At Home with Scientific Challenges

Pushing the boundaries of natural computing interactions is a perfect fit for Wilson. The work spans multiple computing disciplines and emerging technologies, challenges that appeal to his imagination and scientific curiosity. It’s a good thing, too, because his personal life barely allows time for outside activities.

“My wife works full-time,” he says, “and we have two small children. By the time I get home, make dinner, get the kids to bed, spend time answering emails, and read a paper or two, it’s time to go to bed. I will probably have to put off hobbies until the kids have left home, but I’m really lucky because I love my work. It’s hard to think of hobbies that could compete with all the fascinating people and technologies I encounter every day.”

During SIGGRAPH 2013, Wilson and Willis are sure to attract attention with InfraStructs. The project is leading-edge, featuring the convergence of two nascent technologies. It’s an example of how research can shape technology by pushing boundaries to discover feasible uses and applications.

For Hugues Hoppe, Microsoft Research Redmond principal researcher, research manager of the Computer Graphics Group, and a member of this year’s Technical Papers Committee, SIGGRAPH is an industry highlight.

“SIGGRAPH is one of the most exciting conferences of the year,” Hoppe says, “not only because it introduces amazing innovations across multiple disciplines of computer graphics and interactive techniques, but also because what’s presented is always of exceptional quality. Microsoft Research is proud to contribute to 19 technical papers this year that cover a wide range of research areas. We’re also very pleased that we will be sharing two projects, IllumiRoom and Foveated 3D Display, as part of SIGGRAPH’s live, hands-on Emerging Technology demos.”
