|Interactive belt-worn badge provides one-handed data access
An embedded LCD presents dynamic information to the wearer, and interaction is facilitated by sensing movement of the retractable string which attaches the unit to the wearer's belt. This makes it possible to interact using a single hand, providing lightweight and immediate access to a variety of information when it's not convenient to pick up, unlock and interact directly with a device like a smartphone.
|Shake ‘n’ Sense
Shake ‘n’ Sense is a novel yet simple mechanical technique for mitigating the interference that arises when two or more Kinect cameras point at the same part of a physical scene. The technique is particularly useful for Kinect, where the structured light source is not modulated. It requires only mechanical augmentation of the Kinect, without any need to modify the internal electronics, firmware or associated host software.
|Augmented Projector
Handheld projector systems have the potential to enable users to dynamically augment environments with digital graphics. With Augmented Projector we explore new parts of the design space for interaction using handheld projection in indoor spaces, in particular systems that are more 'aware' of the environment in which they are used.
|Vermeer: Direct Interaction with a 360-Degree Viewable 3D Display
Vermeer is a novel interactive 360° viewable display suitable for a tabletop form factor. It provides viewpoint-corrected stereoscopic 3D graphics to simultaneous users 360° around the display, without the need for eyewear or other user instrumentation.
|KinectFusion
We present KinectFusion, a system that takes live depth data from a moving depth camera and in real-time creates high-quality 3D models. The system allows the user to scan a whole room and its contents within seconds. As the space is explored, new views of the scene and objects are revealed and these are fused into a single 3D model. The system continually tracks the 6DOF pose of the camera and rapidly builds a volumetric representation of arbitrary scenes.
Our technique for tracking is directly suited to the point-based depth data of Kinect, and requires no feature extraction or feature tracking. Once the 3D pose of the camera is known, each depth measurement from the sensor can be integrated into a volumetric representation. We describe the benefits of this representation over mesh-based approaches. In particular, the representation implicitly encodes predictions of the geometry of surfaces within a scene, which can be extracted readily from the volume. As the camera moves through the scene, new depth data can be added to or removed from this volumetric representation, continually refining the 3D model acquired. We describe novel GPU-based implementations for both camera tracking and surface reconstruction. These take two well-understood methods from the computer vision and graphics literature as a starting point, defining new instantiations designed specifically for parallelizable GPGPU hardware. This allows for interactive real-time rates that have not previously been demonstrated.
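The volumetric fusion step described above can be sketched in a few lines. The following is a minimal NumPy illustration, assuming a simple pinhole camera model, a known world-to-camera pose, and a truncated signed-distance voxel grid; the function name, parameters and truncation value are illustrative, and KinectFusion's actual GPU implementation differs substantially.

```python
import numpy as np

def integrate_depth(tsdf, weights, depth, pose, voxel_size, fx, fy, cx, cy, trunc=0.1):
    """Fuse one depth map into a voxel grid of truncated signed distances."""
    nx, ny, nz = tsdf.shape
    # World coordinates of every voxel centre
    ii, jj, kk = np.meshgrid(np.arange(nx), np.arange(ny), np.arange(nz), indexing="ij")
    pts = np.stack([ii, jj, kk], axis=-1).reshape(-1, 3) * voxel_size
    # Transform into the camera frame (pose maps world -> camera)
    cam = pts @ pose[:3, :3].T + pose[:3, 3]
    z = cam[:, 2]
    z_safe = np.where(z > 1e-6, z, 1.0)  # avoid divide-by-zero behind the camera
    # Project each voxel into the depth image with a pinhole model
    u = np.round(cam[:, 0] * fx / z_safe + cx).astype(int)
    v = np.round(cam[:, 1] * fy / z_safe + cy).astype(int)
    h, w = depth.shape
    valid = (z > 1e-6) & (u >= 0) & (u < w) & (v >= 0) & (v < h)
    # Signed distance along the viewing ray: measured depth minus voxel depth
    sdf = np.full(pts.shape[0], -np.inf)
    sdf[valid] = depth[v[valid], u[valid]] - z[valid]
    # Keep only voxels near or in front of the observed surface, truncated
    near = sdf > -trunc
    d = np.clip(sdf[near], -trunc, trunc) / trunc
    # Weighted running average: repeated observations continually refine the model
    flat_tsdf = tsdf.reshape(-1)   # views into the grids, so updates write through
    flat_w = weights.reshape(-1)
    flat_tsdf[near] = (flat_tsdf[near] * flat_w[near] + d) / (flat_w[near] + 1)
    flat_w[near] += 1
    return tsdf, weights
```

The surface itself is implicitly encoded as the zero crossing of the stored distances, which is what makes surface predictions so easy to extract from the volume.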
We demonstrate the interactive possibilities enabled when high-quality 3D models can be acquired in real-time, including: extending multi-touch interactions to arbitrary surfaces; advanced features for augmented reality; real-time physics simulations of the dynamic model; and novel methods for segmentation and tracking of scanned objects.
Watch the team talking about this rapid prototyping device developed by Microsoft Research, now available commercially.
Microsoft .NET Gadgeteer is a new prototyping platform that makes it easier to construct, program and shape new kinds of computing objects. It comprises modular hardware, software libraries and 3D CAD support. Together, these elements support the key activities involved both in the rapid prototyping and the small-scale production of custom embedded, interactive and connected devices.
In this talk we will show (through live coding and hardware assembly) how to design, build and program working devices using .NET Gadgeteer. The aim will be to give those present a taster of how Gadgeteer can be used as a tool for researchers and students who need to prototype and build bespoke hardware.
|TUTORIAL: Microsoft .NET Gadgeteer
Microsoft .NET Gadgeteer is a new prototyping platform that makes it easier to construct, program and shape new kinds of computing objects. It comprises modular hardware, software libraries and 3D CAD support. Together, these elements support the key activities involved both in the rapid prototyping and the small-scale production of custom embedded, interactive and connected devices. This will be a tutorial and live demo where we will show (through live coding and hardware building) how to use Gadgeteer to illustrate concepts in programming via real hardware, embedded software design, networking sensors, etc. The aim will be to give those present a taster of how Gadgeteer can be used in classrooms to inspire and educate computer scientists.
|Microsoft Touch Mouse
This video introduces Microsoft's brand new Touch Mouse, a high-precision mouse with multi-touch gestures. Designed to work best with Windows 7 features, the Touch Mouse supports two- and three-finger gestures that let users manage their entire desktop.
|Building devices with .NET Gadgeteer
Microsoft .NET Gadgeteer is a rapid prototyping platform for small electronic gadgets and embedded hardware devices. Individual .NET Gadgeteer modules can be easily connected and programmed using C# to make fully functional devices.
|Mouse 2.0: Multi-Touch Meets the Mouse
This video shows 5 different prototypes of multi-touch mice. Each prototype explores a different sensing strategy, physical form factor and set of interactive capabilities.
SenseCam is a wearable digital camera that is designed to take photographs passively, without user intervention, while it is being worn. Unlike a regular digital camera or a cameraphone, SenseCam does not have a viewfinder or a display that can be used to frame photos. Instead, it is fitted with a wide-angle (fish-eye) lens that maximizes its field of view. This ensures that nearly everything in the wearer’s view is captured by the camera.
|First Look: SecondLight
It's hard to top Microsoft Surface, but the brains at Microsoft Research (with help from MS Hardware) certainly did with SecondLight. SecondLight is a variation on the Microsoft Surface computer; the twist is that it can actually project a second image through the first image, which can land on a sheet of paper, a plastic sheet, or anything else semi-transparent that you want to use as a makeshift secondary (or third, fourth, fifth) display. These displays can also have their own multi-touch capabilities in mid-air.
|PDC demo of SecondLight
SecondLight is a new type of multi-touch interaction surface which allows an image to be projected through the display surface in addition to the image which is rendered on the surface itself.
SideSight supports virtual multi-“touch” interactions around the body of a small mobile device. Optical sensors positioned along each edge of the device allow fingers to be sensed as they approach the device from the sides. In the example depicted, the device is resting on a flat surface, and fingers touching the surface on either side of the device are sensed to provide multi-touch input.
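As an illustration of how per-edge proximity readings might be turned into discrete "touch" points, here is a minimal sketch. The sensor model, threshold and clustering rule are assumptions for illustration, not SideSight's actual pipeline.

```python
def detect_fingers(readings, threshold=0.5):
    """Group above-threshold proximity sensors into runs, one finger per run.

    readings: normalised proximity values (1.0 = very close), one per
    sensor along an edge of the device. Returns the centroid sensor
    index of each run, i.e. one estimated finger position per cluster.
    """
    fingers = []
    run = []
    for i, r in enumerate(readings):
        if r >= threshold:
            run.append(i)           # sensor sees a nearby finger
        elif run:
            fingers.append(sum(run) / len(run))  # close off the cluster
            run = []
    if run:                         # cluster touching the end of the array
        fingers.append(sum(run) / len(run))
    return fingers
```

Running one pass per edge gives finger positions along each side of the device; for example, `detect_fingers([0.1, 0.8, 0.9, 0.1, 0.1, 0.7, 0.1])` returns `[1.5, 5.0]`, two fingers at different positions along one edge.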
Surface computing and beyond: More expressive interaction. 18 November 2008. Presentation at the UK-Japan HCI workshop held at the British Embassy, Tokyo, Japan.
SecondLight demo (skip forward to 84:00 minute mark). 29 October 2008. PDC 2008, LA Convention Center, Los Angeles, USA.
Surface computing: The post-PC experience. 8 October 2008. Presentation at the UKDL Event on Touch Screen Technology and Interactive Display Technology, Cambridge, UK.
ThinSight: Versatile Multi-touch Sensing for Thin Form-factor Displays. 26 September 2008. Invited paper presentation at HCI 2008, Liverpool, UK.
Casting a wider net: New applications for wireless sensing. 18 October 2007. Keynote presentation at IEEE SenseApp, Second IEEE International Workshop on Practical Issues in Building Sensor Network Applications, Dublin, Ireland.
"Casting a wider net: New applications for wireless sensing". Presented 30 May 2007 at the first SensorNet ‘North West UK Sensor Network Day’ at Lancaster University.
"wasp: A platform for prototyping ubiquitous computing devices". Presented 2 June 2006 at the 1st International Workshop on Software Engineering Challenges for Ubiquitous Computing, Lancaster University.
"Just what you need: Simplifying electronic devices". Presented 27 April 2005 at the International Forum ‘Less is more - Simple Computing in an Age of Complexity’.
"Track and trace for the global supply chain using RFID". Presented 3 November 2004 at Cambridge University Computer Lab as part of the Wednesday Seminar Series.