D. Alex Butler
  Senior Research Hardware Engineer
  Microsoft Research Cambridge

  dab [at]

Alex is a Senior Research Hardware Engineer in the Sensors & Devices Group at Microsoft Research, Cambridge UK.


Alex has accumulated extensive systems experience in designing and implementing real-time applications at all levels, from embedded microcontrollers, microprocessors and DSPs, through PC-based applications, to high-performance, highly parallel computer systems. The wide range of application areas he has worked in includes embedded systems, 3D computer graphics and imaging, CAD, computer vision, telecommunications software, and wireless and broadband video-on-demand systems. Alex has worked at all levels of real-time embedded prototype and commercial product development: raw hardware interfacing, systems integration, field debugging, low-level driver development, design validation testing (DVT), protocol monitoring and testing, and production test, through to high-level system specification and overall system architecture and design.  On the "pure consultancy" side he has helped with "new technology" technical evaluations and due-diligence work, product innovation studies, and brainstorms on new products and intellectual property (IP) exploitation.  He has also been closely involved in business innovation and business development strategies for a number of blue-chip clients and start-up technology companies around Cambridge, across Europe and in the USA.




Alex graduated in 1987 with first class honours in Computer Engineering from the University of Manchester.  His final year project work was a software simulation of a serial supercombinator graph reduction machine. Essentially, this work investigated a specific hardware architecture for the execution of referentially transparent, declarative (functional) computer languages using the principles of lambda calculus.   This was followed by an M.Sc. in Computer Science by research into free-space holographic optical interconnects and architectures for digital optical computers.   This work involved researching and building holographic optical interconnects (primarily space-variant arrays of lenslets) using two main techniques – firstly by the synthetic fabrication of computer generated holograms and secondly by the construction of a physical computer-controlled robotic-bench facility with moving optical components and moving recording plates.  Part of this research involved simple investigations and practical chemical processing/fabrication work on both commercial silver-halide-based materials and through the use of dichromated gelatin holographic materials for creating phase holograms.  He also researched digital and analogue optical-computing architectures, optical interconnection technologies and spatial light modulator technologies.


This was followed by two years of academic research on hardware and software 3D computer graphics techniques, including 3D volume rendering, medical imaging, scientific visualisation, pixel-merging (composition) architectures and parallelism exploitation in 3D computer graphics.  This work involved researching a wide range of parallel computer graphics architectures, computer graphics representations, algorithms and visual display techniques, high performance geometry and rendering pipelines, optimisation strategies, distributed hardware and software components, architectures for 3D pixel compositing and display technologies and also how the personal perception and intellectual background of the viewer forms a holistic part of any visualisation scenario.


Alex then worked at the Centre for Novel Computing (CNC) – a UK centre of excellence in parallel computation based at the University of Manchester - as a Research Associate doing research on high performance parallel computing (virtual-shared-memory parallel computer architectures). 


While at Manchester, he also co-founded the Advanced Interfaces Group (AIG) researching systems for Virtual Reality (VR) and Augmented Reality (AR).


Alex then spent about ten years doing commercial work at the high technology/business development consultancy Scientific Generics (recently rebranded Sagentia) in Cambridge, UK.  At Generics, Alex specialised in embedded real-time software product development, novel computing technologies, product innovation and new technology business development.  Many of these technologies were incorporated into a number of start-up companies.


At Generics Alex was also a key member of the team which built the fastest, cheapest, multi-user "true" video-on-demand (VOD) servers in the world in the mid-1990s. He wrote the underlying SCSI disc drive driver to support full-featured streaming of MPEG video from hard disc arrays, and helped design and implement different architectures for ATM, microwave and satellite streamed video distribution.


After Generics, Alex was a member of the team which founded Polatis Limited - a telecoms photonic optical switch start-up based in Cambridge. He was technical director of the switch development team, which built the lowest-loss fibre-optic switching platform available in the optical networking market, based on a custom piezo-actuated, millimetre-scale bulk-optics opto-mechanical architecture using fast computer-controlled feedback loops.


Immediately prior to joining Microsoft Research in Cambridge, Alex was director of eWinkle Limited - a small technology consultancy specialising in contract hardware/software development and new product concept development.


In various capacities Alex has had past technology and investment related involvements with a diverse range of organisations from start-ups to blue-chip companies including Hasbro, Mattel, Nike, Duracell, Creature Labs/CyberLife, Absolute Sensors, Flying Null, Synaptics, ICL, Thorn-EMI, Sony, Ionica, Nortel FRA, 3i, Imerge, Intrasonics, Ford, Westica, Pathfinder, Carlstedt, Memory Corporation, BSkyB, Transitive Technologies, Ferranti, Nomura, the UK government DTI, WorldPipe, Power X Networks and others.







Project links:  SideSight, Mouse 2.0, Reconfigurable Ferromagnetic Input, SecondLight, ThinSight, TouchTalk, SenseCam




Recent Research Projects






SideSight


Despite the flexibility of touch-screens, using such an input mode carries a number of tradeoffs. For many mobile devices, e.g. wristwatches and music players, a touch-screen can be impractical because there simply isn't enough screen real estate. With the continued trend towards ever-smaller devices, this problem is becoming more acute. Even when a touch-screen is practical, interacting fingers still occlude parts of the display, covering up valuable screen pixels and making it harder to see the results of an interface action.  SideSight investigates interaction around such devices as a way to ameliorate these problems, as well as to increase the active interaction space beyond the physical extent of the device itself.


SideSight is a new technique which supports multi-touch interaction on small mobile devices such as cellphones, media players and PDAs. However, the 'touch-sensitive' regions used for these interactions are not physically on the device itself, but are instead formed around its periphery. Proximity sensors embedded along the edges of the device detect the presence and position of fingers adjacent to them. To interact with a device using SideSight, it is typically first placed on a flat surface such as a table or desk. The tabletop space immediately around the device then acts like a large, virtual, multi-point track pad, so the display itself is not occluded as the user interacts. The sequence of figures below shows an example of a two-fingered rotate-and-zoom gesture being controlled using SideSight.
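A two-fingered rotate-and-zoom gesture of the kind described above can be derived from nothing more than a pair of tracked fingertip positions per frame. A minimal sketch (illustrative geometry only, not the SideSight implementation):

```python
import math

def rotate_zoom(prev, curr):
    """Derive a zoom factor and rotation angle (radians) from two
    tracked fingertip positions in consecutive frames.
    Each argument is a pair of (x, y) fingertip coordinates."""
    (p1, p2), (c1, c2) = prev, curr
    dp = (p2[0] - p1[0], p2[1] - p1[1])   # previous inter-finger vector
    dc = (c2[0] - c1[0], c2[1] - c1[1])   # current inter-finger vector
    zoom = math.hypot(*dc) / math.hypot(*dp)          # length ratio
    angle = math.atan2(dc[1], dc[0]) - math.atan2(dp[1], dp[0])
    return zoom, angle
```

For example, fingers moving from a horizontal 1-unit separation to a vertical 2-unit separation yield a zoom of 2.0 and a rotation of 90 degrees.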



In one early implementation of SideSight, infra-red (IR) proximity sensors are embedded around the periphery of the device.  These sensors detect the position of fingers and other objects placed near the device - by sensing the IR reflected off those objects - as shown in the figure below:



The intensity of the reflected IR can be measured and formatted into a depth map, as shown in the following figures, which illustrate the raw left- and right-hand data matching the two-finger rotate/zoom in the first example above:
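Turning reflected-IR intensity into a depth estimate implies inverting some reflectance model. A toy sketch, assuming a simple inverse-square model; the constant k, the model itself and the noise floor are all assumptions for illustration, not SideSight's actual calibration:

```python
def ir_to_distance(intensity, k=1.0, noise_floor=0.01):
    """Invert a toy reflected-IR model, intensity ~ k / d**2, to
    estimate the distance d of a reflecting finger (sketch)."""
    if intensity <= noise_floor:
        return None               # nothing within sensing range
    return (k / intensity) ** 0.5
```

Applying this per sensor element yields one depth sample per sensor, which together form the low-resolution depth map.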




SideSight can also enable simple virtual mouse interactions such as for driving conventional menu input as shown in the Figure below:



SideSight is also useful for enabling more natural bi-modal interaction.  In the example shown below - the dominant right hand is using the stylus to mark up a document displayed on a small mobile phone screen.  The non-dominant left hand is used for panning and scrolling the virtual canvas of the document.  The user experience is that of moving a sheet of paper around under the pen hand - and since scroll bars are not needed - this helps ensure that the useful pen input region is maximised.



SideSight may also be used in conjunction with or instead of on-display touch input.







Mouse 2.0


In the Mouse 2.0 project we explored a number of different ways to augment computer mice with multi-touch capabilities - primarily as a way to learn how to enrich traditional pointer-based desktop interactions with the touch and gestures that users may be becoming familiar with through the increasing use of touch and multi-touch interactive displays.



This was a cross-company research collaboration combining researchers and engineers from Microsoft Research Cambridge, the Microsoft Applied Sciences Group (Microsoft Hardware) and Microsoft Research in Redmond.  In all, five main research prototypes were built and trialled, using a variety of different sensing technologies and often very different form factors:





FTIR Mouse applies the principle of frustrated total internal reflection (FTIR) to illuminate a user’s fingers, and uses a camera to track multiple points of touch on its curved translucent surface:





Orb Mouse is equipped with an internal camera and a source of diffuse IR illumination, allowing it to track the user’s hand on its hemispherical surface:




Cap Mouse employs a matrix of capacitive touch-sensing electrodes to track the position of the user’s fingertips over its surface:





Side Mouse rests under the palm of the hand, allowing the fingers to touch the table surface directly in front of the device. These touches are sensed using an internal camera and an IR laser:






Arty Mouse is equipped with three high-resolution optical mouse sensors: one in the base, which rests under the user’s palm, and two under the articulated extensions that follow the movements of the index finger and thumb:





A variety of applications were explored running with our multi-touch mice.


Clockwise from top-left: manipulating Virtual Earth, 3D modeling in SolidWorks, controlling a first person shooter game and photo browsing using a desktop mockup:









Reconfigurable Ferromagnetic Input




We have experimented with novel input devices based on ferromagnetic sensing. The sensing surface of the device can be overlaid with different materials to provide distinct forms of interaction. Overlaying with ferrofluid, for example, presents the user with the tactile feeling of interacting with a compliant, gel-like material, whilst overlaying with ball bearings allows the user to interact through the direct displacement of physical objects which can be handled and lifted from the surface. This is a generic sensing technique which we hope practitioners will use to develop novel input devices and applications.




A variety of experimental user interfaces were implemented utilising the ferromagnetic input device including a virtual sculpting application shown below:














SecondLight


SecondLight is a novel technology enabling "beyond the surface" interactions.  SecondLight uses an electrically switchable diffuser as the tabletop of a rear-projected interactive surface computer.   The material used is polymer stabilised cholesteric textured (PSCT) liquid crystal - similar to that used in electrically switchable "privacy windows".  When the surface is made diffuse, we project the image to be displayed on the tabletop and also image the surface with a camera, detecting finger contact using frustrated total internal reflection (FTIR) and objects in contact with the surface using diffuse IR illumination.  This is shown by the two images below:



FTIR finger contact on the surface




Diffuse imaging just above the surface


The PSCT is then made transparent and a second image is projected through the surface.  This image can be used to illuminate objects above the tabletop, such as prism objects or even simple pieces of tracing paper held above the surface to catch the image.  The figures below show both of these examples:



By switching the surface between its diffuse and transparent states very quickly, and synchronising the projection of images and the camera capture, it becomes possible to maintain the illusion of an interactive tabletop surface with multi-touch and object interaction whilst also enabling beyond-the-surface interactions and illumination.  Alternating at 60Hz and above means the user is unaware of the switching, and SecondLight enables a kind of magic.  The figure below shows interaction on the tabletop with an ordinary paintbrush whilst simultaneously displaying beyond-the-surface text projected into the prism object placed on the tabletop:
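The synchronised alternation described above amounts to a simple per-cycle schedule. A sketch in pseudo-driver form, where set_diffuser, project and capture are hypothetical callbacks standing in for the real hardware interfaces:

```python
from dataclasses import dataclass

@dataclass
class Frame:
    surface_image: object   # image shown on the tabletop while diffuse
    through_image: object   # image projected through while clear

def secondlight_cycle(frame, set_diffuser, project, capture):
    """One switching cycle of a SecondLight-style display (sketch).
    Repeated at 60Hz+, the two phases fuse into a single experience."""
    set_diffuser("diffuse")             # PSCT scatters: tabletop visible
    project(frame.surface_image)        # draw the on-surface UI
    touch = capture("surface_camera")   # FTIR / diffuse-IR touch image
    set_diffuser("clear")               # PSCT transparent
    project(frame.through_image)        # project onto objects above
    above = capture("through_camera")   # image the scene beyond the top
    return touch, above
```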



When the PSCT is made transparent we also image objects beyond the tabletop using a second camera. 



In this way we open up the ability to capture beyond the surface (faces, objects) using conventional computer vision techniques. 


Being able to image through and beyond the tabletop enables computer vision techniques to detect and respond to gestures made away from the tabletop surface itself, as shown in the following image:



By detecting and tracking the shapes and orientations of objects above the surface - coupled with projecting appropriately distortion-corrected images - we can enable secondary mini-surfaces above and beyond the tabletop surface itself, as shown in the following figures, where a "running man" video is automatically pre-distorted so as to appear correctly on such a secondary surface:









ThinSight


This project explores a new technique for optical sensing through thin form-factor displays. It allows the detection of fingers and other physical objects close to or on the display surface. This essentially allows us to turn a regular LCD into a sensing surface that can be used for multi-touch and tangible computing applications. We are interested in both the underlying hardware and software aspects of this approach, as well as the interaction techniques and application scenarios it enables.   ThinSight is in effect a prototype of a thin interactive display with embedded optical sensing pixels as well as display pixels.


In our first ThinSight prototype, shown below, we cut a hole through the back of a conventional laptop display panel and placed three optical array sensor boards behind the LCD backlight - each PCB comprising 7x5 IR proximity sensors spaced 10mm apart.




For ThinSight, the IR emitted from the IR LED part of each proximity sensor is transmitted through the LCD.  When objects - such as fingertips or tangible items - are placed on or near the surface of the LCD, the IR light reflected from them passes back through the LCD and can be sensed by the IR photodiode portion of the proximity sensors behind the display.  By assembling the sensed results into a low-resolution image, this in effect creates a region in the centre of the LCD which can act as an optical 15 x 7 multi-touch area.


We progressed ThinSight to a full-sized screen implementation comprising 30 sensor PCBs, forming in effect a 1050-pixel camera distributed across the complete LCD surface - as shown in the figures below.  The figure on the far right shows the kind of sensing the surface is capable of after processing the raw data:



The resulting low-resolution image can be processed by a series of computer vision filters to enhance the resolution and to enable multi-touch interactions.  The sequence of figures below gives an example where raw fingertip data (top left) is scaled up with interpolation and normalized (top right), thresholded (bottom left) and finally processed using a connected-components analysis to identify individual fingertip touch points.
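The filter chain just described - scale up, normalise, threshold, connected components - can be sketched in a few lines. This is an illustrative reimplementation, not the ThinSight code; nearest-neighbour upsampling stands in here for the interpolation used in the real pipeline:

```python
def fingertips(raw, scale=4, thresh=0.6):
    """ThinSight-style pipeline sketch: upsample the low-resolution IR
    image, normalise, threshold, then label connected components and
    return one (row, col) centroid per fingertip. raw is a 2D list."""
    # 1. nearest-neighbour upsample by 'scale' in each dimension
    img = [[raw[r // scale][c // scale]
            for c in range(len(raw[0]) * scale)]
           for r in range(len(raw) * scale)]
    # 2. normalise intensities to the range 0..1
    lo = min(min(row) for row in img)
    hi = max(max(row) for row in img)
    span = (hi - lo) or 1
    img = [[(v - lo) / span for v in row] for row in img]
    # 3. threshold to a binary touch mask
    mask = [[v >= thresh for v in row] for row in img]
    # 4. connected-components analysis -> centroid per blob
    seen, tips = set(), []
    for r in range(len(mask)):
        for c in range(len(mask[0])):
            if mask[r][c] and (r, c) not in seen:
                stack, blob = [(r, c)], []
                seen.add((r, c))
                while stack:                      # flood fill one blob
                    y, x = stack.pop()
                    blob.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < len(mask) and 0 <= nx < len(mask[0])
                                and mask[ny][nx] and (ny, nx) not in seen):
                            seen.add((ny, nx))
                            stack.append((ny, nx))
                cy = sum(p[0] for p in blob) / len(blob)
                cx = sum(p[1] for p in blob) / len(blob)
                tips.append((cy, cx))
    return tips
```

A single bright sensor reading in the centre of an otherwise dark frame yields a single fingertip centroid at the centre of the upsampled image.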









TouchTalk


In the TouchTalk project we worked with a leading cellular network carrier to explore ways in which technology can support new forms of non-verbal communication between people, in particular exploring how people can keep in touch with each other more easily, more expressively or more intimately in a social context. We developed a small electronic device with a variety of unconventional input and output modalities which could communicate remotely with a similar, paired device via a mobile phone as the common gateway.  One of the motivations for TouchTalk was to explore how people can communicate specifically without using voice or text - through gesture, acoustic and vibro-tactile mechanisms instead.  With TouchTalk we developed new techniques for communicating expressions of continuous analogue form from one device to another, remote device.








SenseCam


SenseCam is a wearable digital camera designed to take photographs passively, without user intervention, while it is being worn. Unlike a regular digital camera or a cameraphone, SenseCam does not have a viewfinder or a display that can be used to frame photos. Instead, it is fitted with a wide-angle (fish-eye) lens that maximizes its field of view. This ensures that nearly everything in the wearer's view is captured by the camera, which is important because a regular wearable camera would likely produce many uninteresting images.



SenseCam also contains a number of different electronic sensors. These include light-intensity and light-colour sensors, a passive infrared (body heat) detector, a temperature sensor, and a multiple-axis accelerometer. These sensors are monitored by the camera’s microprocessor, and certain changes in sensor readings can be used to automatically trigger a photograph to be taken.
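The sensor-triggered capture described above amounts to comparing successive sensor readings against per-sensor change thresholds. A minimal sketch; the sensor names and threshold values here are illustrative assumptions, not SenseCam's actual configuration:

```python
def should_trigger(prev, curr, thresholds):
    """SenseCam-style capture trigger (sketch): take a photograph when
    any monitored sensor changes by more than its threshold. Readings
    and thresholds are dicts keyed by sensor name."""
    return any(abs(curr[s] - prev[s]) > t for s, t in thresholds.items())

# Illustrative thresholds: a sudden change in light level, temperature
# or acceleration is taken as an "interesting" moment worth capturing.
thresholds = {"light_lux": 200, "temp_c": 2.0, "accel_g": 0.5}
```

In a real device this check would run continuously on the microprocessor, with the camera fired whenever it returns true.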




The data collected by SenseCam can be periodically transferred to a Windows PC via USB and used to recreate a summarised version of the wearer's activities for the elapsed capture period.  For example, the series of photographs taken can subsequently be used to recreate a quick movie of the user's day - rather like time-lapse video - with other sensor data used to highlight or identify interesting events or anomalies which occurred during the capture period.  To support this kind of viewing, a PC application was developed to enable browsing of the SenseCam data:







Background & Technical Interests


Microsoft Research, Cambridge


Interactive Surface Technologies, Multi-touch Technologies, Life-Logging Technologies, SenseCam, User Interfaces, Sensors & Devices, Optical Technologies, Display Technologies, Virtual/Augmented Reality, Novel Hardware Technologies, Computer Mediated Living Technologies, Embedded/Ubiquitous Computing, Printable Technology, Computer Vision, Wireless Technologies, Mobile Devices.

SenseCam PCB Front




eWinkle, Cambridge   


Augmented Reality product development, Astronomy gadgets, Distributed Home Media technologies, Embedded systems



Polatis, Cambridge      


Photonic Switching, Optical Switching, Electronics, Optics, Software, Mechanical, Piezo Technologies, Optical Fibre Technologies, Fibre Bragg Gratings, Real-time Embedded Systems, Optical Management Protocols, Real-time DSP and Telecom Processing, Test Systems, Computer Vision, Capacitive Sensors.


Polatis OXC Module  




Scientific Generics (now Sagentia), Cambridge   


New business development, New technology concepts, Due Diligence on New Technologies and Start-ups, Real-time Embedded Systems, Telecoms, Datacoms, Artificial-Life Technologies, Smart Toy Technologies, Mobile Displays, Novel Computing Technologies, Distributed/Parallel Architectures, In-Vehicle Computer Vision, Spread Spectrum Acoustic Communications, Microwave Communications, Air Interface Protocols, Functional and Novel Programming Language Architectures.




IBM (UK) Laboratories, Hursley Park, Nr. Winchester   


CMOS design tools, Level Sensitive Boundary Scan Test Systems, Test Systems, Relational Databases, Computer Graphics and GUIs, Novel Programming Languages.




University of Manchester, Manchester


Optical computing architectures, Computer Generated Holography, Optical Holography, Analog Computing, Fundamentals of Computation, 3D computer graphics, Scientific Visualisation, Volume Rendering, 3D Graphics Compositing Architectures, Distributed Computation, Virtual Reality Systems, Parallel computing, Shared Memory Parallel Computer Architectures, Novel Programming Languages.





Previous (Non-Academic) Work


Commercial product and development projects have included:

·        Design and implementation of electronics and software for an experimental low-cost novel sensing technology for a world leader in touch-sensing user-interface technologies.  This work was initially implemented on a PIC-based microcontroller platform using a combination of embedded C and assembler.   This was followed by a similar implementation on DSP.  Diagnostic and control facilities, including real-time filtering algorithms and statistical performance analysis were implemented as a graphical user interface (GUI) on a PC.

·      Design and implementation of a suite of complex GUIs and communications control/management protocols for the complete life-cycle of a commercial photonic switch product. This 4-year project included basic diagnostic, performance test-and-measurement, factory calibration (e.g. coarse and fine optical calibration), design validation test (DVT) and factory production test facilities.  Final customer-facing graphical user interfaces and switch-management control protocols were also designed, implemented, documented, tested, packaged and shipped.

·        Design and implementation of embedded DSP software for hard real-time closed-loop feedback control of a micro-robotic photonic switch.  This also included all the support functionality required around the basic switching functionality, including communications code, management protocol code, low-level hardware drivers and a FLASH-based virtual file system for managing code downloads, code variants, configuration scripts and calibration data.

·      Design and implementation of an embedded SCPI command interface for test-equipment vendors to control a photonic switch over TCP/IP/Ethernet.  This protocol was implemented on a variety of embedded platforms (DSP, Motorola PowerQUICC, Lantronix) as well as a variety of PC-based engineering and customer graphical-user interface (GUI) front ends.

·       Algorithm development and implementation of a PC-based in-vehicle computer-vision system for monitoring of driver alertness by face tracking and eyelid blink observation at full video rate.  This used special infra-red illumination techniques to alleviate the effects of shadows, the use of “snakes” type vision algorithms to identify and track facial features and image analysis algorithms to monitor eye behaviour.

·       Design and implementation of telecommunications software for the Ionica and Nortel Networks wireless local-loop (WLL) Fixed Wireless Access (FWA) system - Proximity-I.   This seven-year involvement spanned all levels of base-station development - from low-level device drivers and DSP coding, through board-level code, to top-level system design and systems integration.   The Proximity-I FWA system continued to be deployed successfully around the world by Nortel as new features (e.g. data services for Internet access) were developed.   Alex was involved in the implementation of the original "Ionica Link Demonstrator" (LD1) when the company was a start-up, and saw the company grow to US$1B before its eventual collapse.

·       Design and implementation of a commercial real-time high performance SCSI-2 and MPEG-based video-on-demand (VOD) server and complete system architecture including the integration with a content preparation and distribution system employing digital broadcast satellite and E1 landline paths.  In its time, this was considered to be the highest performance, lowest cost, streaming VOD system in the world.  This technology was spun into a start-up company called Imerge Limited.

·      Design and implementation of software for a broadband wireless access system running ATM and Digital-Video-Broadcast (DVB) over custom 40GHz microwave links.  This technology was spun into a start-up company called WorldPipe Limited.

·       Design and implementation of the air-interface protocol (AIP) for a 1.4GHz microwave link radio product.  This technology was incorporated into a start-up company called Westica Limited.

·       Design and implementation of embedded Intel 8051 software for a low-cost call-routing consumer telephony product.  This technology was spun into a start-up company called Pathfinder Limited.

·      Design and implementation of software for a PC-based system used to embed and decode digital data signals inaudibly within audio (music) streams using spread-spectrum encoding techniques.  This technology was spun into a start-up company called Intrasonics Limited.


Pure consultancy projects have included:

·        Technical and commercial evaluation of a novel VLSI-based parallel computer architecture based on parallel graph reduction and declarative languages.

·         Evaluation and competitive benchmarking of a novel VLSI-based memory technology.

·         Evaluation and business development of Artificial Life (A-Life) technology based on cellular programming, neural networks and genetic programming.

·        Digital-satellite/cable broadcast-based interactive-TV architecture evaluation including novel architectures for the distribution and deployment of interactive digital content.

·        Survey of a number of novel display technologies, primarily with respect to predicting future power-consumption requirements for smart and dumb battery-powered portable devices (mobile phones and PDAs, for example).

·         Due diligence and business development of a novel re-programmable instruction set processor initially aimed as a Java machine.

·         Due diligence and business investment potential of a high performance binary-translation software technology used for optimised CPU emulation.

·         Survey of technologies for the delivery of compressed video over GSM phones using GPRS and beyond.

·         Survey of battery requirements for future mobile devices.

·         Evaluation and competitive benchmarking of data switching fabrics for carrier class networking.

·         Business development and market study of a new carrier class switching technology and business.

·         Smart Toys innovation and business development.

·        Extrapolating 10-20 year "futures" for a UK government-funded (DTI/Office of Science and Technology) "FORESIGHT" programme - helping to identify future opportunities and threats for science, engineering and technology (including likely social, economic and market trends) and the developments in science, engineering, technology and infrastructure required to best address future needs.





















Published Workshop Contributions



Technical Reports and Theses


Patents (pending or granted)






1983       The Peace Memorial Scholarship                Jardine, Matheson and Co., Hong Kong 
1986       IBM Student Bursary                               IBM UK Laboratories Ltd
1987       The Kilburn Prize in Computer Science        University of Manchester



Personal Interests


