KinectFusion: Real-Time Dynamic 3D Surface Reconstruction and Interaction

Shahram Izadi, Richard Newcombe, David Kim, Otmar Hilliges, David Molyneaux, Steve Hodges, Pushmeet Kohli, Andrew Davison, and Andrew Fitzgibbon


We present KinectFusion, a system that takes live depth data from a moving Kinect camera and creates, in real time, high-quality, geometrically accurate 3D models. Our system allows a user holding a Kinect camera to move quickly within any indoor space, and rapidly scan and create a fused 3D model of the whole room and its contents within seconds. Even small motions, caused for example by camera shake, lead to new viewpoints of the scene and thus refinements of the 3D model, similar to the effect of image super-resolution. As the camera is moved closer to objects in the scene, more detail can be added to the acquired 3D model.

To achieve this, our system continually tracks the 6DOF pose of the camera and rapidly builds a representation of the geometry of arbitrary surfaces. Novel GPU-based implementations of both camera tracking and surface reconstruction allow us to run at interactive real-time rates that have not previously been demonstrated. We define new instantiations of two well-known graphics algorithms designed specifically for parallelizable GPGPU hardware.
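To give a flavour of the surface-reconstruction step, the sketch below shows weighted truncated signed distance function (TSDF) fusion, the standard volumetric-integration technique used in KinectFusion-style pipelines: each incoming depth frame, together with the tracked 6DOF camera pose, updates a running per-voxel average of the signed distance to the nearest surface. This is a minimal CPU/NumPy illustration only; the actual system runs a massively parallel GPU implementation, and every name, parameter, and value here (e.g. `TRUNC`, `integrate_frame`) is an illustrative assumption, not the paper's code.

```python
import numpy as np

TRUNC = 0.05  # truncation distance in metres (assumed value)

def integrate_frame(tsdf, weight, voxel_centers, depth, K, cam_pose):
    """Fuse one depth frame into the voxel grid (illustrative sketch).

    tsdf, weight  : flat arrays, one entry per voxel
    voxel_centers : (N, 3) voxel centres in world coordinates
    depth         : (H, W) depth map in metres (0 = no measurement)
    K             : 3x3 camera intrinsics
    cam_pose      : 4x4 camera-to-world pose (the tracked 6DOF pose)
    """
    H, W = depth.shape
    # Transform voxel centres into the camera frame.
    world_to_cam = np.linalg.inv(cam_pose)
    pts = voxel_centers @ world_to_cam[:3, :3].T + world_to_cam[:3, 3]
    z = pts[:, 2]
    valid = z > 0  # only voxels in front of the camera
    # Project voxel centres into the depth image.
    uv = pts[valid] @ K.T
    u = np.round(uv[:, 0] / uv[:, 2]).astype(int)
    v = np.round(uv[:, 1] / uv[:, 2]).astype(int)
    inb = (u >= 0) & (u < W) & (v >= 0) & (v < H)
    idx = np.flatnonzero(valid)[inb]
    d = depth[v[inb], u[inb]]
    # Signed distance along the viewing ray, truncated to [-TRUNC, TRUNC];
    # voxels far behind the observed surface are left untouched.
    sdf = d - z[idx]
    keep = (d > 0) & (sdf > -TRUNC)
    idx = idx[keep]
    sdf = np.clip(sdf[keep], -TRUNC, TRUNC)
    # Weighted running average: new measurements refine earlier ones,
    # which is what makes extra viewpoints sharpen the model.
    w_new = weight[idx] + 1.0
    tsdf[idx] = (tsdf[idx] * weight[idx] + sdf) / w_new
    weight[idx] = w_new
    return tsdf, weight
```

The running average is why camera shake helps rather than hurts: each slightly different viewpoint contributes another depth sample per voxel, and the fused zero-crossing of the TSDF converges to the true surface.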


Publication type: Other
Book title: SIGGRAPH Talks