A New Way to Interact with the Cloud
March 4, 2010 9:00 AM PT

Researchers are devoting much attention to the “cloud,” the Web-based infrastructure that provides a resource-rich platform for delivering applications and services to computer users.

For those capabilities to be harnessed, though, users must be able to interact with this prodigious repository. Increasingly, the combination of the client—the means of interaction—and the cloud is viewed as having transformational potential.

Microsoft Research has taken an integrated approach to cloud computing, with work in areas ranging from computer security and cryptography to operating-system design, cloud software, data-center architectures, devices, and green computing. This diversity is evident during TechFest 2010, Microsoft Research’s annual showcase of its leading-edge research projects, being held in Redmond on March 3 and 4.

One of the most fascinating research projects featured during the event is called Inside the Cloud: New Cloud-Computing Interaction, a collaboration among researchers at Microsoft Research Asia and Microsoft Research Cambridge that explores novel ways of working with this rich, evolving ecosystem.

Client + cloud computing offers the prospect of persistent, anytime/anywhere access to data, vast amounts of which become available through Web data mining, social networks, and archives of public and personal information.

“Given the resources of cloud computing,” says Richard Harper, a principal researcher with the Socio-Digital Systems group at Microsoft Research Cambridge, “a two-dimensional desktop layout is no longer sufficient to capture or convey rich, real-time relationships between data, people, schedules, or places. Cloud computing calls for new interaction metaphors, and these metaphors necessitate new input-output technologies.”

The Cloud Mouse prototype.

The Cloud Mouse, a project driven by Microsoft Research Asia in collaboration with Harper, is one such technology: a novel hardware and interaction design that illustrates the properties of a general-purpose user interface for cloud computing.

The project brought together Harper and a number of researchers from Microsoft Research Asia: Chunhui Zhang, Chunshui Zhao, and Tong Yuan of the Hardware Computing Group, and Min Wang of the Human-Computer Interaction group.

Navigating through a cloud using the Cloud Mouse.

The motivation for the Cloud Mouse came from the realization that clouds contain aggregations of data generated by huge numbers of users. Valuable insights and service opportunities are possible only if users have a way to engage effectively with such complex, dynamic information.

“The cloud allows human-computer interaction to move beyond the desktop,” Harper says. “The idea is to put the user inside the cloud to engage with the data for a richer experience.”

Basic operation of the Cloud Mouse.

The scenario the team envisions is one in which data is presented through handheld projectors or augmented eyeglasses, or displayed across multiple surface displays as 3-D visualizations. In terms of hardware, the Cloud Mouse is the key to this interactive experience. As a user moves the Cloud Mouse through these data visualizations, differentiated sensory outputs such as vibrations or sounds alert the user to locations where information can be retrieved or viewed, where it can be posted or stored, or where steering closer would reach a target.
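One way to picture this proximity-driven feedback is as a mapping from the mouse's position in the 3-D data space to a vibration intensity. The sketch below is purely illustrative — the class names, thresholds, and linear falloff are assumptions, not details of the team's actual design:

```python
# Hypothetical sketch of proximity-based haptic feedback for a
# Cloud Mouse-style device. All names and values are illustrative
# assumptions, not the research team's implementation.

from dataclasses import dataclass

@dataclass
class DataPoint:
    x: float
    y: float
    z: float
    kind: str  # e.g. "retrieve" or "store"

def distance(px, py, pz, point):
    """Euclidean distance from the mouse position to a data point."""
    return ((px - point.x) ** 2 + (py - point.y) ** 2 + (pz - point.z) ** 2) ** 0.5

def haptic_intensity(dist, max_range=5.0):
    """Vibration strength in [0, 1]: stronger as the mouse nears a target."""
    if dist >= max_range:
        return 0.0
    return 1.0 - dist / max_range

def feedback(px, py, pz, points, max_range=5.0):
    """Return (intensity, kind) for the strongest in-range target, if any."""
    best = None
    for p in points:
        i = haptic_intensity(distance(px, py, pz, p), max_range)
        if i > 0 and (best is None or i > best[0]):
            best = (i, p.kind)
    return best

points = [DataPoint(1, 0, 0, "retrieve"), DataPoint(10, 10, 10, "store")]
print(feedback(0, 0, 0, points))  # nearest in-range target drives the vibration
```

A real device would sample the mouse's pose continuously and route the intensity to a vibration motor or audio cue rather than printing it, but the shape of the computation — position in, differentiated feedback out — is the same.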

Whatever the mode of display, the Cloud Mouse is designed for virtual content and enables interaction with 360 degrees of movement, spatial depth, proximity, and geometric relationships between objects. In one demo scenario, the user views a photo as though standing inside the picture with the ability to view 360 degrees, make the mouse point up and down, and move deeper into the picture.

“The Cloud Mouse achieves a number of goals,” Harper says. “It allows the user to navigate across multiple screens with great precision. It’s tactile and feels natural to use. You can drag and drop by grasping it. You can use it to point directly at objects in the cloud space or drag and drop objects across different screens. We want users to feel as though they are right inside the cloud space.”