GlimpseData: Towards Continuous Vision-Based Personal Analytics

Seungyeop Han, Rajalakshmi, Matthai Philipose, Arvind Krishnamurthy, and David Wetherall


Emerging wearable devices provide a new opportunity for mobile context-aware applications to use continuous audio/video sensing data as primitive inputs. Because these inputs are high-datarate and compute-intensive, frameworks and applications must be designed for efficiency. We present the GlimpseData framework for collecting and analyzing data to study continuous high-datarate mobile perception. As a case study, we show that low-powered sensors can serve as a filter to avoid sensing and processing video for face detection. Our relatively simple mechanism avoids processing roughly 60% of video frames while missing only 10% of the frames that contain faces.
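The filtering idea can be illustrated with a minimal sketch. This is not the paper's actual mechanism; the sensor inputs, thresholds, and function names below are hypothetical, chosen only to show how cheap sensor readings might gate an expensive per-frame face-detection step.

```python
# Illustrative sketch (hypothetical thresholds, not GlimpseData's
# actual filter): gate expensive face detection behind cheap
# low-power sensor checks.

def should_process_frame(light_lux, motion_magnitude,
                         min_lux=10.0, max_motion=2.5):
    """Return True if a video frame is worth running face detection on.

    Heuristic: skip frames captured in near-darkness (faces are
    unlikely to be detectable) or during vigorous motion (frames
    are likely blurred).
    """
    if light_lux < min_lux:
        return False
    if motion_magnitude > max_motion:
        return False
    return True

# Example: per-frame (ambient light, accelerometer magnitude) readings.
readings = [(0.5, 0.1), (120.0, 0.3), (200.0, 5.0), (80.0, 1.0)]
processed = [should_process_frame(light, motion)
             for light, motion in readings]
# Only well-lit, low-motion frames pass through to face detection.
```

In this toy run, two of the four frames are filtered out before any video processing occurs, which is the kind of savings the paper reports at a larger scale.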


Publication type: Inproceedings
Published in: Workshop on Physical Analytics
Publisher: ACM – Association for Computing Machinery