Invited Speakers

Kurt Akeley

http://www.lytro.com/team/kurt_akeley

Title: A different perspective on the Lytro light field camera

Abstract: Lytro, Inc. has begun shipping the first consumer camera that captures 4-D light field pictures, rather than 2-D images. In this talk the camera and its picture-processing system are explored in the context of computer graphics and vision science, emphasizing parallax as a unifying principle for insight into their operation.
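As a minimal illustration of the parallax principle mentioned in the abstract (a sketch only, not Lytro's actual processing pipeline), the snippet below refocuses a 4-D light field by shifting each sub-aperture view in proportion to its offset from the aperture center and averaging. The function name and the (U, V, H, W) array layout are assumptions made for this example.

# Sketch: synthetic refocusing by shift-and-add of sub-aperture views.
# The refocused plane is the depth at which parallax between views vanishes.
import numpy as np
from scipy.ndimage import shift

def refocus(light_field, alpha):
    # light_field: (U, V, H, W) array of sub-aperture views (assumed layout).
    # alpha: pixels of shift per unit aperture offset; selects the refocus depth.
    U, V, H, W = light_field.shape
    uc, vc = (U - 1) / 2.0, (V - 1) / 2.0
    out = np.zeros((H, W))
    for u in range(U):
        for v in range(V):
            out += shift(light_field[u, v], (alpha * (u - uc), alpha * (v - vc)))
    return out / (U * V)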

Bio: Kurt Akeley is the CTO at Lytro. A pioneer in the field of computer graphics and a founding member of Silicon Graphics (later known as SGI), Kurt led the development of innovative products such as SGI's RealityEngine and the OpenGL graphics system. After leaving SGI in 2001, Kurt completed his long-deferred PhD work at Stanford, working in the areas of 3-D display and human perception. He then joined Microsoft Research, where he was assistant managing director of Microsoft's research lab in Beijing, and, after returning to the US in 2007, developed a prototype light-field display with resolution sufficient to stimulate focus cues in human viewers. Kurt is a member of the National Academy of Engineering and received the ACM SIGGRAPH Computer Graphics Achievement Award in 1995.

Steven Bathiche

http://www.microsoft.com/appliedsciences/content/team/StevieBathiche.aspx

Title: Breaking Harlow’s Monkeys

Abstract: Machines cannot fulfill people's fundamental biological need to interact with other people. Simply put, people cannot truly love machines or replace human interaction with them. The future of computing is to make machines smart and immersive enough that they cease to be the object of interaction. Instead, they disappear, and allow people who are physically distant from each other to feel like they are in the same room. The computer's role will be to create as real an experience as possible while giving people the tools to digitally enhance and augment their interaction. Mr. Bathiche will explain some of these techniques and technologies (input and output) and how they will redefine the way we compute, communicate, and entertain ourselves.

Bio: Steven Bathiche has been doing applied research at Microsoft since 1999. Bathiche's interests are in creating novel human-machine interfaces, technologies, and computer form factors that embed themselves in people's daily lives to improve the way they work, play, and communicate. He established the Applied Sciences Group, a 20-person interdisciplinary team of scientists and engineers focused on developing an innovation pipeline for Windows, Surface, Mobile, and Xbox. Recent examples include algorithms for Microsoft's Kinect, PixelSense technology for Surface 2.0, anti-ghosting technology for the SideWinder X4 keyboard, and Microsoft's Multi-Touch Mouse. In the past, he has invented a number of shipping Microsoft features and products, including the SideWinder Freestyle Pro game pad, the first commercial gaming device to use accelerometers for gesture input.

Bathiche obtained his Bachelor's degree in Electrical Engineering from Virginia Tech and a Master's degree in Bioengineering from the University of Washington. While in graduate school, he developed the Mothmobile, a well-known hybrid robot that uses an insect as its control system via a neural electrical interface.

David Brady

http://www.davidbrady.net

Title: Physical layer compression for high pixel count imaging

Abstract: Multiscale lens design enables wide-field cameras with 1-100 gigapixel capacity. In current prototypes, the optical volume of these cameras is much smaller than the electronic volume. At the current standard of ~1 microjoule per pixel, these cameras may require several kilowatts for real-time operation. Reducing the electronic system's size, weight, and power is the primary barrier to widespread deployment of this technology. One solution to these challenges is to increase the sophistication of the optical coding layer, which draws no power. This talk explores the use of image-space coding in high-pixel-count cameras for power reduction and focal tomography.
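A back-of-envelope version of the power figure quoted above; the frame rate is an assumption for illustration, not a number from the abstract.

pixels_per_frame = 1e9            # 1 gigapixel
joules_per_pixel = 1e-6           # ~1 microjoule per pixel (figure from the abstract)
frames_per_second = 5             # assumed for illustration
power_watts = pixels_per_frame * joules_per_pixel * frames_per_second
print(power_watts)                # 5000.0, i.e. even a few frames per second lands in the kilowatt range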

Bio: David Brady is the Michael J. Fitzpatrick Professor of Photonics at Duke University, where he leads the Duke Imaging and Spectroscopy Program. Brady has made significant contributions to the development of compressive imaging and spectroscopy, with particular focus on tomographic applications such as coherence tomography, optical projection tomography, compressive holography, reference structure tomography, focal tomography and coded aperture snapshot spectral imaging. He has also developed multiaperture and multiscale imaging systems. He is currently the principal investigator for the DARPA AWARE Wide Field of View project, through which he has constructed multiscale gigapixel cameras. He is the author of Optical Imaging and Spectroscopy (Wiley, 2009) and is a Fellow of IEEE, SPIE and OSA.

Roger Hanlon

http://www.mbl.edu/mrc/hanlon

Title: Rapid Adaptive Camouflage in Cephalopods

Abstract: Nature has evolved elegant solutions for manipulating ambient light to create patterns and coloration for a wide range of functions such as communication, camouflage and thermoregulation. Nowhere is the diversity and speed of change in body patterning better developed than in the cephalopods (squid, octopus, cuttlefish). I will present some new discoveries and simplifying principles of how these refined biological systems operate. First, I will briefly cover visual sensing of the ambient light field, neural processing of that sensory information, and subsequent control of skin patterning. Then I will describe some details of the biophotonic structures of the skin that produce such remarkable visual diversity. Finally, I will discuss the ways we handle digital imagery (stills, video) to help unravel the visual trickery that enables the multiple functions of changeable body patterns.

Bio: Roger Hanlon is Senior Scientist at the Marine Biological Laboratory in Woods Hole, MA and Professor (MBL) of Ecology and Evolutionary Biology at Brown University. He is a diving biologist who uses digital imagery (stills, video, hyperspectral) to analyze rapid adaptive camouflage and communication in cephalopods (squid, octopus, cuttlefish) and fishes. He was trained in marine sciences at Florida State University and the University of Miami, and studied sensory ecology as a postdoctoral fellow at Cambridge University. Recently, his laboratory has focused on a highly multidisciplinary effort to quantify animal camouflage, touching subjects as varied as visual perception, psychophysics, neuroscience, behavioral ecology, image analysis, computer vision, and art. Collaborations with materials scientists and engineers aim to develop new classes of materials that change appearance based on the pigments and reflectors in cephalopod skin. He has recently conducted active public outreach featuring these charismatic marine animals with NOVA, the BBC, Discovery, National Geographic, TEDx, and The New York Times.

Shree Nayar

http://www.cs.columbia.edu/~nayar

Title: Focal Sweep Photography

Abstract: In this talk, we will explore the space-time volume captured by sweeping an image sensor along the optical axis. The result of the sweep can be either a single image with a quasi-depth-invariant PSF, or an image stack that captures scene motion as a function of focus. We will present ways to use focal sweep images and stacks to create new experiences for the user. Finally, we will show that the functionalities enabled by focal sweep can also be achieved using a dense camera array.
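As a rough sketch of the idea (an assumed setup, not the speaker's implementation), a focal sweep can be simulated by averaging a focal stack captured at many sensor positions; the time-integrated image then has approximately the same blur at every depth.

import numpy as np

def focal_sweep_image(focal_stack):
    # focal_stack: (N, H, W) array, one frame per sensor position along the sweep.
    # Averaging over the sweep yields an image with a quasi-depth-invariant PSF,
    # which a single deconvolution can then sharpen across all depths.
    return np.mean(np.asarray(focal_stack, dtype=np.float64), axis=0)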

Bio: Shree K. Nayar is the T. C. Chang Professor and Chair of Computer Science at Columbia University. He heads the Columbia Computer Vision Laboratory (CAVE). His research is focused on three areas: the creation of novel cameras, the design of physics-based models for vision, and the development of algorithms for scene understanding. His work is motivated by applications in the fields of digital imaging, computer graphics, and robotics. He has received the David Marr Prize (1990 and 1995), the David and Lucile Packard Fellowship (1992), the National Young Investigator Award (1993), the NTT Distinguished Scientific Achievement Award (1994), the Columbia Great Teacher Award (2006), and the Carnegie Mellon Alumni Achievement Award (2009). He was elected to the National Academy of Engineering in 2008 and to the American Academy of Arts and Sciences in 2011.

Steve Seitz

http://www.cs.washington.edu/homes/seitz

Title: A Trillion Photos

Abstract: Collectively, we take upwards of a trillion photos each year. These images together comprise a nearly complete visual record of the world's people, places, things and events. However, this record is massively disorganized, unlabeled, and untapped. This talk explores ways of transforming this massive, unorganized photo collection into reconstructions and visualizations of the world's sites, cities, and people. I'll focus on new research at the University of Washington, and our efforts on Internet-scale deployment at Google.

Bio: Steve Seitz is a Professor in the Department of Computer Science and Engineering at the University of Washington. He also directs a computer vision group at Google. He received his B.A. in computer science and mathematics from the University of California, Berkeley in 1991 and his Ph.D. in computer sciences from the University of Wisconsin in 1997. Following his doctoral work, he spent one year visiting the Vision Technology Group at Microsoft Research and the subsequent two years as an Assistant Professor in the Robotics Institute at Carnegie Mellon University. He joined the faculty at the University of Washington in July 2000. He was twice awarded the David Marr Prize for the best paper at the International Conference on Computer Vision, has received an NSF CAREER Award, an ONR Young Investigator Award, and an Alfred P. Sloan Fellowship, and is an IEEE Fellow. His work on Photo Tourism (joint with Noah Snavely and Rick Szeliski) formed the basis of Microsoft's Photosynth technology; at Google, he has changed how images are browsed in Maps and Picasa.

Jason Salavon

http://salavon.com

Title: Artist's Talk: On Recent Work and the Malleable Visual

Bio: Born in Indiana (1970), raised in Texas, and based in Chicago, Salavon earned his MFA at The School of the Art Institute of Chicago and his BA from The University of Texas at Austin. His work has been shown in museums and galleries around the world. Reviews of his exhibitions have appeared in publications such as Artforum, Art in America, The New York Times, and WIRED. Examples of his artwork are held in prominent public and private collections, including the Metropolitan Museum of Art, the Whitney Museum of American Art, and the Art Institute of Chicago, among many others.

Previously, he taught at The School of the Art Institute of Chicago and worked for several years as an artist and programmer in the video game industry. He is currently an assistant professor in the Department of Visual Arts and the Computation Institute at the University of Chicago.