current projects
  • Muscle-Computer Interfaces
    Many human-computer interaction technologies are currently mediated by physical transducers such as mice, keyboards, pens, dials, and touch-sensitive surfaces. While these transducers have enabled powerful interaction paradigms and leverage our human expertise in interacting with physical objects, they tether computation to a physical artifact that has to be within reach of the user.

    As computing and displays begin to integrate more seamlessly into our environment and are used in situations where the user is not always focused on the computing task, it is important to consider mechanisms for acquiring human input that may not necessarily require direct manipulation of a physical implement. We explore the feasibility of muscle-computer input: an interaction methodology that directly senses and decodes human muscular activity rather than relying on physical device actuation or user actions that are externally visible or audible.
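
    To make the sensing idea concrete, below is a minimal sketch of one way muscle-computer input could be decoded: window a multi-channel forearm EMG stream, compute per-channel RMS features, and feed them to an off-the-shelf classifier. The channel count, window length, and choice of an SVM are illustrative assumptions, not the exact pipeline from our papers.

      # Hypothetical sketch: classify gestures from multi-channel forearm EMG.
      # Channel count, window size, and the SVM are illustrative choices.
      import numpy as np
      from sklearn.svm import SVC

      N_CHANNELS = 8      # assumed number of EMG electrodes worn on the forearm
      WINDOW = 256        # samples per analysis window (~250 ms at 1 kHz)

      def features(window):
          """Per-channel root-mean-square amplitude, a common EMG feature."""
          return np.sqrt(np.mean(window ** 2, axis=1))    # shape: (N_CHANNELS,)

      def windows(emg, step=128):
          """Slide a fixed-length window over an (N_CHANNELS, T) recording."""
          for start in range(0, emg.shape[1] - WINDOW + 1, step):
              yield emg[:, start:start + WINDOW]

      def train(recordings, labels):
          """Fit a classifier from gesture-labeled EMG recordings."""
          X = [features(w) for emg in recordings for w in windows(emg)]
          y = [lab for emg, lab in zip(recordings, labels) for _ in windows(emg)]
          return SVC(kernel="rbf").fit(X, y)

      def classify(clf, emg_window):
          """Map one incoming window of EMG samples to a gesture label."""
          return clf.predict([features(emg_window)])[0]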

    Check out our conference papers exploring sensing finger gestures on a desk or table [CHI '08]; pinching with empty hands, holding objects, and gripping handles [UIST '09]; gesturing on and above interactive surfaces [ITS '09]; and pinching detected through a wireless-armband platform [CHI '10]. Also, take a peek at the video figure from our UIST 2009 paper on YouTube:

    [Video: UIST 2009 video figure]

    and a clip of me playing Wireless Air Guitar Hero from our CHI 2010 note on YouTube:

    [Video: playing Wireless Air Guitar Hero (CHI 2010)]

    For a longer clip of the Guitar Hero demo, here is a video of me playing Air Guitar Hero.

    All versions of our original UIST 2009 Video Figure are on YouTube: mine, CHI's, and TechFlash's.

  • Tongue-Computer Input
    Many patients with paralyzing injuries or medical conditions retain the use of their cranial nerves, which control the eyes, jaw, and tongue. While researchers have explored eye-tracking, speech recognition, and other technologies for these patients, we believe there is potential for directly sensing explicit tongue movement for controlling computers.

    In our work, we explore a novel approach of using infrared optical sensors embedded within a dental retainer to sense tongue gestures. Our first prototype discriminates among four simple gestures: swiping left, swiping right, tapping up, and holding up. We have demonstrated real-time control of applications with these gestures using the game Tetris. Check out our Tech Note from UIST '09 describing our approach and initial prototype system. Below is a video of someone playing Tetris with just their tongue:

    [Video: playing Tetris using only the tongue]

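    To give a feel for the recognition problem, here is a toy sketch of how readings from a few intraoral IR sensors might be mapped to the four gestures. The sensor layout, threshold, and timing constants are invented for illustration and are not the classifier from our Tech Note.

      # Hypothetical sketch of discriminating the four tongue gestures from a few
      # infrared reflectance sensors in a retainer. The sensor layout, threshold,
      # and timing constants are invented for illustration.
      THRESH = 0.5       # assumed normalized reflectance indicating tongue proximity
      HOLD_SECS = 0.4    # assumed dwell time separating "holding up" from "tapping up"

      def classify(samples):
          """samples: time-ordered (timestamp, left, right, top) IR readings
          covering one candidate gesture; returns a gesture name or None."""
          left_hits  = [t for t, l, r, u in samples if l > THRESH]
          right_hits = [t for t, l, r, u in samples if r > THRESH]
          top_hits   = [t for t, l, r, u in samples if u > THRESH]

          # Swipes: both lateral sensors fire; whichever fires first gives direction.
          if left_hits and right_hits:
              return "swipe_left" if right_hits[0] < left_hits[0] else "swipe_right"

          # Upward gestures: dwell time on the top sensor separates hold from tap.
          if top_hits:
              return "hold_up" if top_hits[-1] - top_hits[0] >= HOLD_SECS else "tap_up"
          return None

    In a Tetris-style demo, the two swipes might translate the falling piece while the tap and hold gestures rotate and drop it.
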
previous projects (from my days at UW)
  • VoiceLabel: Using Speech to Label Mobile Sensor Data
    Many mobile machine learning applications require collecting and labeling data, and a traditional GUI on a mobile device may not be an appropriate or viable method for this task. We present an alternative approach to mobile labeling of sensor data called VoiceLabel. VoiceLabel consists of two components: (1) a speech-based data collection tool for mobile devices, and (2) a desktop tool for offline segmentation of recorded data and recognition of spoken labels.
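    As a rough illustration of the offline component, the sketch below attaches each recognized spoken label to the sensor readings that follow it in time. The data layout and the most-recent-label rule are assumptions for illustration, not VoiceLabel's actual design.

      # Hypothetical sketch of offline labeling: each spoken label, as returned by
      # a speech recognizer with a timestamp, labels the sensor readings recorded
      # after it (until the next label). Data layout is an illustrative assumption.
      from bisect import bisect_right

      def label_sensor_data(sensor_samples, spoken_labels):
          """sensor_samples: list of (timestamp, reading), sorted by time.
          spoken_labels: list of (timestamp, text), sorted by time.
          Returns a list of (reading, label) pairs."""
          label_times = [t for t, _ in spoken_labels]
          labeled = []
          for t, reading in sensor_samples:
              i = bisect_right(label_times, t) - 1   # most recent label at or before t
              labeled.append((reading, spoken_labels[i][1] if i >= 0 else None))
          return labeled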
  • VoicePen: Augmenting Pen Input with Simultaneous Non-Linguistic Vocalization
    Non-linguistic vocalizations, such as vowel sounds, variation of pitch, or control of loudness, have the potential to provide fluid, continuous input concurrently with pen interaction. VoicePen is a set of interaction techniques that leverage the combination of voice and pen input for both creative drawing and object manipulation tasks.
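    As a toy illustration of the idea, the sketch below estimates voice pitch per audio frame and maps it to a continuous pen parameter such as stroke thickness. The autocorrelation pitch tracker and the linear mapping are illustrative assumptions, not VoicePen's actual techniques.

      # Hypothetical sketch: turn a non-linguistic vocal feature (pitch) into a
      # continuous stroke-thickness parameter. All constants are illustrative.
      import numpy as np

      RATE = 16000                # assumed audio sample rate (Hz)
      F_LO, F_HI = 80.0, 400.0    # plausible range of voice pitch (Hz)

      def estimate_pitch(frame):
          """Crude autocorrelation pitch estimate for one audio frame."""
          frame = frame - frame.mean()
          corr = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
          lo, hi = int(RATE / F_HI), int(RATE / F_LO)
          return RATE / (lo + np.argmax(corr[lo:hi]))   # fundamental frequency

      def pitch_to_thickness(f0, min_px=1.0, max_px=20.0):
          """Map pitch linearly onto a stroke-thickness range (pixels)."""
          t = (np.clip(f0, F_LO, F_HI) - F_LO) / (F_HI - F_LO)
          return min_px + t * (max_px - min_px)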
  • SketchWizard
    SketchWizard is a tool for wizarding sketch-based interfaces over the network.
  • Dapple
    Dapple is a designers' tool for rapidly prototyping UbiComp applications.
  • Devices That Tell On You: The Nike+iPod Sport Kit
    The growing amount of wireless gadgetry we carry can erode our privacy. In this work, we explore the privacy properties of the Nike+iPod Sport Kit.
  • Design Patterns for the Digital Home
    Whether design patterns are a useful means of disseminating design knowledge in emerging domains such as the digital home is an open question. We have developed a set of design patterns for the digital home and empirically evaluated how useful they are to design professionals in a study with 44 designers.
  • Twice
    Twice is a toolkit for wizarding UbiComp environments.
  • Ubiquitous Broadcast Computing
    As we traverse physical spaces, we seek and consume information relatively anonymously and harmlessly. We hypothesize that providers are willing to make far more information electronically available to the public within local physical areas. However, under current wireless network schemes it is challenging to provide public information while ensuring provider security and user privacy. We have developed a toolkit for broadcasting public information over private wireless networks, along with a suite of applications that employ this approach.
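    To illustrate the flavor of the approach, here is a minimal sketch of a provider broadcasting a public payload to its local subnet over UDP, and of a client listening for it. The port and message format are assumptions, and the toolkit's security mechanisms are omitted for brevity.

      # Hypothetical sketch of local-area information broadcast over UDP. The port
      # and JSON payload are illustrative; the toolkit's provider-security and
      # user-privacy mechanisms are intentionally omitted here.
      import json, socket, time

      PORT = 9999    # assumed well-known port for public broadcasts

      def broadcast(info, period_secs=5):
          """Provider side: periodically push a public payload to the subnet."""
          sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
          sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
          payload = json.dumps(info).encode("utf-8")
          while True:
              sock.sendto(payload, ("255.255.255.255", PORT))
              time.sleep(period_secs)

      def listen():
          """Client side: yield payloads from any provider on the local network."""
          sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
          sock.bind(("", PORT))
          while True:
              data, _addr = sock.recvfrom(4096)
              yield json.loads(data.decode("utf-8"))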