A Real-time Augmented Reality Processor and a Smart Glasses System

Real-time augmented reality (AR) is actively studied as a future user interface and experience (UI/UX) for smart glasses platforms. However, due to the small battery and limited computing power of current smart glasses, real-time markerless AR has not yet been realized in a glasses-type form factor. In this presentation, I propose a real-time, low-power AR processor for advanced, recognition-based AR applications. For high throughput, the processor adopts task-level pipelined SIMD-PE clusters and a congestion-aware network-on-chip (NoC); together, these features exploit data-level parallelism (DLP) and task-level parallelism (TLP) in the pipelined multicore architecture. For low power consumption, it employs a vocabulary forest accelerator and a visual attention algorithm that reduces the overall workload by removing background clutter from the input video frames, cutting unnecessary external memory accesses and core activations. The proposed processor is successfully demonstrated in a battery-powered head-mounted display platform, performing the full chain of AR operations in real time.
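The abstract does not describe the visual attention algorithm in detail; the sketch below is only a rough software analogy of the workload-reduction idea, assuming a simple tile-based frame layout and using local intensity variance as a stand-in saliency measure (both are illustrative assumptions, not the processor's actual design). Tiles judged to be flat background are skipped, so they generate no further memory traffic or processing.

/*
 * Illustrative sketch (not the processor's actual pipeline): tile-level
 * attention gating. Each frame is split into tiles; a cheap saliency proxy
 * (local intensity variance, assumed here for illustration) decides whether
 * a tile is background clutter that can be skipped, so only salient tiles
 * are fetched and processed further.
 */
#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>

#define FRAME_W 64
#define FRAME_H 48
#define TILE    16               /* tile edge in pixels (assumed)      */
#define THRESH  200.0            /* saliency threshold (assumed)       */

/* Cheap saliency proxy: intensity variance inside one tile. */
static double tile_variance(const uint8_t *frame, int tx, int ty)
{
    double sum = 0.0, sum_sq = 0.0;
    for (int y = 0; y < TILE; ++y)
        for (int x = 0; x < TILE; ++x) {
            double p = frame[(ty * TILE + y) * FRAME_W + (tx * TILE + x)];
            sum    += p;
            sum_sq += p * p;
        }
    double n = (double)(TILE * TILE);
    double mean = sum / n;
    return sum_sq / n - mean * mean;
}

int main(void)
{
    uint8_t frame[FRAME_W * FRAME_H];

    /* Synthetic frame: flat background plus one textured region. */
    for (int i = 0; i < FRAME_W * FRAME_H; ++i)
        frame[i] = 128;
    for (int y = 8; y < 24; ++y)
        for (int x = 16; x < 32; ++x)
            frame[y * FRAME_W + x] = (uint8_t)(rand() % 256);

    int processed = 0, skipped = 0;
    for (int ty = 0; ty < FRAME_H / TILE; ++ty)
        for (int tx = 0; tx < FRAME_W / TILE; ++tx) {
            if (tile_variance(frame, tx, ty) < THRESH) {
                skipped++;        /* background tile: no fetch/compute  */
                continue;
            }
            processed++;          /* salient tile: run the AR pipeline  */
        }

    printf("tiles processed: %d, skipped: %d\n", processed, skipped);
    return 0;
}

In this toy example only the textured tiles pass the threshold, so most of the frame is never touched by the downstream stages; the processor described in the talk applies the same principle in hardware to save memory bandwidth and core activations.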

Speaker Details

Gyeonghoon Kim received the B.S. and M.S. degrees from the Korea Advanced Institute of Science and Technology (KAIST), Daejeon, Korea, in 2009 and 2011, respectively, where he is currently pursuing the Ph.D. degree in electrical engineering. His current research interests include low-power digital processors with dynamic resource management for computer vision and network-on-chip (NoC)-based SoC design.

Date:
Speakers:
Gyeonghoon Kim
Affiliation:
Korea Advanced Institute of Science and Technology (KAIST)
Jeff Running

Series: Microsoft Research Talks