Daniel C. Robbins

Visual Cues



The Visual Domain

In many 3D user interface designs, users are presented with an array of items (pages, objects, and controls) that float, untethered, in the 3D space. Because the typical user views these environments through a non-stereoscopic display, it becomes quite difficult for her to determine where in depth each object lies; the relationships between objects in the 3D environment are perceived as ambiguous. In the real world there are many cues that make this easier, such as shadows, parallax, atmospheric shading, binocular disparity, and physical connection. The 3D user interface designer's job is to take advantage of a set of such cues in an attempt to overcome the shortcomings of the virtual world. Within the constraints of the desktop environment, we have added the following cues:

Animation is vital to maintaining a sense of reality and tangibility. Smooth transitions between different states decrease the cognitive load on the user while taking advantage of natural perceptual abilities.
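To make the idea concrete, here is a minimal sketch of the kind of eased interpolation that produces such transitions. The function names and step count are illustrative assumptions, not taken from the Task Gallery's implementation:

```python
import math

def ease_in_out(t: float) -> float:
    """Cosine ease: motion starts and stops gently, like a real object."""
    return 0.5 - 0.5 * math.cos(math.pi * t)

def transition(start: float, end: float, steps: int = 10):
    """Yield intermediate positions so an object glides between two states
    instead of teleporting; the eye tracks the motion at no extra cost."""
    for i in range(steps + 1):
        t = i / steps                        # normalized time, 0..1
        yield start + (end - start) * ease_in_out(t)

# Example: slide a window snapshot from x=0 to x=100.
print([round(x, 1) for x in transition(0.0, 100.0)])
```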

 

  - Simplified shadows fall under each movable icon on the palette. These give each icon a sense of "groundedness" and help the user understand that the icons lie on top of the palette (see the sketch after this list).
  - Simple stands and podiums help to suggest, in a mechanical fashion, how each object relates to its surrounding environment.
  - A representation of the user's left hand is shown whenever the user glances to the left. This is independent of the user's location in the environment and helps to reinforce the "body-centric" point of view. Future versions of our interface could easily support customization, so that the user could choose what kind of representation is used for the hands.
  - Applications, documents, and windows are represented by visual snapshots, each showing a scaled-down version of the full-size item. Snapshots are much more recognizable than icons for many information types.
  - A representation of the user's right hand is shown whenever the user glances to the right. This likewise is independent of the user's location in the environment and further reinforces the "body-centric" point of view.
  - Wherever possible, high-quality textures with precomputed lighting effects are used.
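
As a rough illustration of the shadow cue above, the following sketch computes where a simplified shadow blob might sit beneath a floating icon. The plane height, scale, and opacity formulas are assumptions for illustration, not the Task Gallery's actual rendering code:

```python
def drop_shadow(icon_pos, palette_y=0.0, max_height=2.0):
    """Place a soft elliptical blob directly beneath a floating icon.

    icon_pos: (x, y, z) with y measured above the palette plane.
    Returns plane position, scale, and opacity: the shadow spreads and
    fades as the icon rises, and sits tight and dark when it rests.
    """
    x, y, z = icon_pos
    height = min(max(0.0, y - palette_y), max_height)
    lift = height / max_height                  # 0 = resting, 1 = fully lifted
    return {
        "position": (x, palette_y + 0.001, z),  # epsilon avoids z-fighting
        "scale": 1.0 + 0.5 * lift,              # shadow spreads as icon lifts
        "opacity": 0.6 * (1.0 - lift),          # ...and fades with height
    }

# A resting icon gets a tight, dark blob; a lifted icon gets a larger,
# fainter one, telling the eye how high above the palette the icon is.
print(drop_shadow((3.0, 0.0, 5.0)))
print(drop_shadow((3.0, 1.0, 5.0)))
```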

[NOTE: This off-axis view is not normally seen by users and is provided here for didactic purposes]

Users can easily change the appearance of task backgrounds, akin to changing the background "wallpaper" in the standard Windows environment. This helps users find and distinguish tasks and lends each task a note of individuality.

The Audio Domain

Interactive audio for the Task Gallery is functional, not decorative: every attempt has been made to encode useful information in the audio events. Buttons are spatialized according to their left-to-right position, and each button's timbre is subtly unique. Rollover sounds assist the user in locating specific tools.
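One way such spatialization could be computed is sketched below. The equal-power panning and per-button detune are plausible techniques chosen for illustration, not a description of the Task Gallery's audio engine, and all names are hypothetical:

```python
import math

def button_audio_params(x_norm: float, button_index: int):
    """Map a button's horizontal position to stereo gains, and give each
    button a subtly distinct timbre by detuning its playback rate.

    x_norm: button center in [0, 1], where 0 is the left screen edge.
    """
    pan = 2.0 * x_norm - 1.0                  # -1 = full left, +1 = full right
    # Equal-power panning keeps perceived loudness constant across positions.
    angle = (pan + 1.0) * math.pi / 4.0
    left_gain, right_gain = math.cos(angle), math.sin(angle)
    # A few cents of detune per button makes each one identifiable by ear.
    detune_cents = (button_index % 7 - 3) * 8
    pitch_ratio = 2.0 ** (detune_cents / 1200.0)
    return left_gain, right_gain, pitch_ratio

# A button at the far left plays almost entirely in the left channel.
print(button_audio_params(0.0, button_index=2))
```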

Objects are dragged on the palette, objects are moved on the stage, and tasks are moved in the room, each with its own distinct dragging sound that is spatially and gesturally dynamic. Volume, timbre, and stereo positioning are controlled in a way that simulates the sound of the type of object being moved: as an object moves farther from the user's viewpoint, its sound becomes more distant. When the user moves a task, a continuous sound is proportional to the user's gesture, and each surface is distinguishable: floor from ceiling, left wall from right. When the user moves a task from a wall to the floor, an additional sound confirms the discontinuity caused by that interaction. This increases the user's sense of efficacy and provides specific audio confirmation of that exact action.

Run-time synthesis of pre-rendered basis sounds binds user interaction to audio. Simply playing back audio clips synchronized to user interaction does not convey enough information to warrant the user's attention. We have tried to make each audio event convey useful information: because user interaction is coupled to the audio, users have less need to visually focus on the detail of every gesture.
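The following sketch suggests how per-frame parameters for such a continuous drag sound could be derived from object position and gesture speed. The rolloff curve, crossfade scheme, and function names are illustrative assumptions rather than the actual synthesis code:

```python
import math

def drag_sound_params(obj_pos, listener_pos, gesture_speed, max_speed=1.0):
    """Per-frame parameters for a continuous dragging sound blended from
    two pre-rendered basis loops (say, a soft scrape and a hard scrape).

    Volume follows the gesture, the basis mix follows gesture speed, and
    loudness falls off with distance so far-away objects sound far away.
    """
    dx = obj_pos[0] - listener_pos[0]
    dz = obj_pos[2] - listener_pos[2]
    distance = math.hypot(dx, dz)
    attenuation = 1.0 / max(1.0, distance)    # inverse-distance rolloff
    speed = min(gesture_speed / max_speed, 1.0)
    volume = speed * attenuation              # louder when moved briskly
    hard_mix = speed                          # fast drags sound 'harder'
    pan = max(-1.0, min(1.0, dx / 10.0))      # crude left/right placement
    return volume, hard_mix, pan

# Dragging an object slowly, far off to the user's right:
print(drag_sound_params((8.0, 0.0, 6.0), (0.0, 0.0, 0.0), gesture_speed=0.3))
```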

Validation

The validation of these cues comes in two forms: 

  1. Feedback from our user studies has been quite positive, and users have generally been able to understand and navigate our 3D environment.

  2. 3D first- and third-person perspective games are starting to take advantage of many of these cues. These cues let users focus on the "fun" aspects of the game without having to worry about navigation or disambiguating 3D object relationships.