Motion and Attention in a Kinetic Videoconferencing Proxy

  • David Sirkin,
  • Gina Venolia,
  • George Robertson,
  • Taemie Kim,
  • Mara Sedlins,
  • Bongshin Lee,
  • Mike Sinclair

Interact 2011

Published by Springer

Compared to collocated interaction, videoconferencing disrupts the ability to use gaze and gestures to mediate interaction, direct reactions to specific people, and provide a sense of presence for the satellite (i.e., remote) participant. We developed a kinetic videoconferencing proxy with a swiveling display screen to indicate which direction the satellite participant was looking. Our goal was to compare two alternative motion control conditions, in which the satellite participant directed the display screen’s motion either explicitly (aiming the display with a mouse) or implicitly (with the screen following the satellite participant’s head turns). In a lab study, we then compared the effectiveness of this prototype to that of a typical stationary video display. We found that both motion conditions resulted in communication patterns indicating higher engagement in conversation, more accurate responses to the satellite participant’s deictic questions (e.g., “What do you think?”), and higher user rankings. We also discovered tradeoffs in attention and clarity between explicit and implicit control, a tension in how motion toward one person can exclude other people, and ways that swiveling motion provides attention awareness even without direct eye contact.
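
The explicit and implicit conditions differ only in which signal drives the screen’s pan angle: the satellite participant’s mouse position or their head yaw. The sketch below (in Python) is a minimal, hypothetical illustration of that difference, not the implementation described in the paper; the pan range, dead zone, and smoothing values are assumptions.

    # Hypothetical sketch of the two control conditions, assuming the proxy's
    # swiveling screen is driven by a single pan angle in degrees.
    PAN_MIN_DEG = -90.0   # assumed leftmost pan of the display screen
    PAN_MAX_DEG = 90.0    # assumed rightmost pan of the display screen


    def clamp(value: float, low: float, high: float) -> float:
        return max(low, min(high, value))


    def explicit_pan(mouse_x: float, screen_width: float) -> float:
        """Explicit control: the satellite participant aims the display with a mouse.

        Maps horizontal cursor position (0..screen_width pixels) linearly onto
        the pan range, so dragging the cursor right swivels the screen right.
        """
        fraction = clamp(mouse_x / screen_width, 0.0, 1.0)
        return PAN_MIN_DEG + fraction * (PAN_MAX_DEG - PAN_MIN_DEG)


    def implicit_pan(head_yaw_deg: float, prev_pan_deg: float,
                     smoothing: float = 0.2, dead_zone_deg: float = 5.0) -> float:
        """Implicit control: the screen follows the satellite participant's head turns.

        A dead zone ignores small head movements (e.g., glancing down at notes),
        and exponential smoothing keeps the swiveling motion from jittering.
        """
        target = clamp(head_yaw_deg, PAN_MIN_DEG, PAN_MAX_DEG)
        if abs(target - prev_pan_deg) < dead_zone_deg:
            return prev_pan_deg
        return prev_pan_deg + smoothing * (target - prev_pan_deg)


    if __name__ == "__main__":
        # One update step in each condition.
        print(explicit_pan(mouse_x=1440, screen_width=1920))      # 45.0 (right of center)
        print(implicit_pan(head_yaw_deg=30.0, prev_pan_deg=0.0))  # 6.0 (eases toward 30)

In either condition, the resulting angle would be sent to the proxy’s pan motor on each update; the filtering in the implicit path is one place where the attention-versus-clarity tradeoff between the two conditions can arise.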