Props-based Interface for 3D Neurosurgical Visualization

This page describes my graduate work, which was done at the University of Virginia. This project is not affiliated with Microsoft.

The props-based interface is a three-dimensional human-computer interface for neurosurgical visualization. The 3D user interface is based on the two-handed physical manipulation of hand-held tools in free space. These user interface props let the neurosurgeon transfer existing skill at manipulating tools with two hands to the operation of a user interface for visualizing 3D medical images, without the need for training.
From the surgeon's perspective, the interface is analogous to holding a miniature head in one hand that can be "sliced open" or "pointed to" using a cross-sectioning plane or a stylus tool, respectively, held in the other hand. Cross-sectioning a 3D volume, for example, simply requires the surgeon to hold a plastic plate (in the preferred hand) up to the miniature head (in the nonpreferred hand) to demonstrate the desired cross-section.
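The geometry behind this interaction can be sketched in a few lines: the cutting plane is simply the plate's pose expressed relative to the head prop's frame, so that the slice stays fixed to the miniature head as it moves. The following is a minimal sketch, not the system's actual code; the function and variable names are hypothetical, and it assumes the tracker reports each prop's pose as a 4x4 world transform.

```python
import numpy as np

def cutting_plane_in_head_coords(head_pose, plate_pose):
    """Express the plate's cutting plane in the head prop's local frame.

    head_pose, plate_pose: 4x4 world transforms for the two props
    (hypothetical representation of 6-DOF tracker readings).
    The plate's cutting plane is taken to be its local z = 0 plane.
    Returns (normal, d) such that a point p in head coordinates lies
    on the plane when normal . p == d.
    """
    # Transform taking world coordinates into the head prop's frame.
    world_to_head = np.linalg.inv(head_pose)
    # Plate pose expressed relative to the head prop.
    plate_in_head = world_to_head @ plate_pose
    # Plane normal is the plate's local z axis; the plate origin
    # is a point on the plane.
    normal = plate_in_head[:3, 2]
    point = plate_in_head[:3, 3]
    return normal, float(normal @ point)
```

Because the plane is recomputed from the relative transform every frame, the surgeon can move either prop freely and the displayed slice follows the physical demonstration.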
Our informal evaluations of over fifty neurosurgeons have shown that with a cursory introduction, surgeons can understand and use the interface within about one minute of touching the props.
This work is the result of a collaboration between the Neurosurgical Visualization Lab and the User Interface Group.
Multimedia Medical Systems is currently working to transition this research system into a commercial product.

User holding the cross-sectioning plane up to the doll's head, and the corresponding 3D graphics seen on the screen (including the appropriate slice through the patient's MRI data set).

User holding the stylus up to the doll's head, and the corresponding 3D graphics seen on the screen.
Touchscreen Interfaces: Hybrid 2D and 3D User Interfaces
Touchscreen graphical user interface for use with the interface props. The touchscreen simplifies switching between the 3D operations supported by the props and 2D operations supported by traditional GUI's.
The techniques of virtual reality have been applied to a number of real-world tasks which involve three-dimensional manipulation, such as computer-aided design, scientific visualization, architectural design, and surgical planning. While 3D manipulation is crucial in these applications, it is not the only task of interest, particularly if the system is intended for desk-top usage (that is, if it does not employ a head-mounted display). For example, in surgical planning, the surgeon needs to rotate, cross-section, and point at 3D image data, but the surgeon must also (for example) be able to select patient images from a database, browse through 2D axial, coronal, and sagittal image slices, and adjust image contrast and brightness.
Unfortunately, many previous systems have ignored the importance of such auxiliary tasks, and have focused on aspects of the interface which require direct 3D manipulation. A few ad-hoc techniques have been developed to allow, for example, menu selection using a 3D input device, but such techniques tend to be task-specific and do not necessarily scale well to more complex tasks (such as menu selection from a large hierarchy of choices). We have been exploring the use of touchscreens in conjunction with our 3D interface props as a medium which intuitively and seamlessly combines 3D input with more traditional 2D input in the same hybrid user interface, in a manner which supports the range and complexity of tasks available in modern graphical user interfaces (GUI's).
Note the ergonomic facility with which touch can be used: the user can move in 3D using the props, and then, without having to put the props down, the user can reach out and touch the screen or tablet to perform 2D tasks. One interacts gesturally with the props to perform 3D operations; one interacts gesturally with the touch screen or tablet to perform 2D operations. From the user's perspective, one is always interacting gesturally with objects in the real environment, and the user may not even be aware that 3D gestures are being digitized with a magnetic tracker while 2D gestures are being digitized with a touch-sensitive technology.
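The seamlessness described above falls out of a simple architectural property: the input device that produced an event determines how it is interpreted, so no explicit mode switch is ever required. The following is a minimal sketch of that idea, not the system's actual code; the event types and handler names are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class TrackerSample:
    """Hypothetical 6-DOF sample from the magnetic tracker."""
    prop: str                 # e.g. "head", "plate", or "stylus"
    position: tuple           # (x, y, z) in tracker coordinates
    orientation: tuple        # quaternion (x, y, z, w)

@dataclass
class TouchEvent:
    """Hypothetical 2D contact event from the touchscreen."""
    x: int
    y: int

class HybridDispatcher:
    """Route 3D prop input and 2D touch input to separate handlers.

    There is no mode switch: the source device alone decides whether
    an event drives a 3D operation or a 2D GUI operation, mirroring
    how the user simply reaches for the props or for the screen.
    """
    def __init__(self, on_3d, on_2d):
        self.on_3d = on_3d    # e.g. update the cross-section plane
        self.on_2d = on_2d    # e.g. press a GUI button

    def dispatch(self, event):
        if isinstance(event, TrackerSample):
            self.on_3d(event)
        elif isinstance(event, TouchEvent):
            self.on_2d(event)
```

Both streams can be polled in the same event loop, so 3D gestures with the props and 2D gestures on the screen interleave freely.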
Some additional examples of touchscreen interfaces can be seen below.
Left: Touchscreen GUI for our 2D surgical planning software. This snapshot shows the images just after the user has registered them into a stereotactic coordinate system. Right: Touchscreen GUI for navigating our patient image database.