Our research
Content type
Downloads (455)
Events (487)
Groups (150)
News (2848)
People (721)
Projects (1160)
Publications (13020)
Videos (6120)
Research areas
Algorithms and theory (89)
Communication and collaboration (108)
Computational linguistics (54)
Computational sciences (76)
Computer systems and networking (294)
Computer vision (88)
Data mining and data management (22)
Economics and computation (18)
Education (36)
Gaming (45)
Graphics and multimedia (141)
Hardware and devices (103)
Health and well-being (34)
Human-computer interaction (309)
Machine learning and intelligence (193)
Mobile computing (19)
Quantum computing (1)
Search, information retrieval, and knowledge management (207)
Security and privacy (92)
Social media (14)
Social sciences (103)
Software development, programming principles, tools, and languages (203)
Speech recognition, synthesis, and dialog systems (15)
Technology for emerging markets (6)
1–25 of 309
This project studies the problem of visualizing large-scale, high-dimensional data in a low-dimensional (typically 2D or 3D) space. Much recent success has been reported by techniques that first compute a similarity structure of the data points and then project them into a low-dimensional space while preserving that structure. Both steps carry considerable computational cost, preventing state-of-the-art methods such as t-SNE from scaling to large-scale, high-dimensional data.
Project details
Labs: Asia
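The two-step recipe described above (build a similarity structure over the points, then optimize a low-dimensional layout that preserves it) can be tried with off-the-shelf tools. Below is a minimal sketch using scikit-learn's stock t-SNE on synthetic data; it is illustrative only, not this project's scalable implementation.

    # Both steps happen inside stock t-SNE: pairwise similarities are
    # computed, then a 2D layout preserving them is optimized. This is
    # exactly the pipeline the project above aims to speed up.
    import numpy as np
    from sklearn.manifold import TSNE

    rng = np.random.default_rng(0)
    X = rng.normal(size=(1000, 50))     # 1,000 points in 50 dimensions

    embedding = TSNE(n_components=2, perplexity=30.0,
                     init="pca", random_state=0).fit_transform(X)
    print(embedding.shape)              # (1000, 2) -> ready to scatter-plot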
Technology platforms are emerging as a new kind of workplace, from crowdwork to 'peer economy' platforms. This project uses ethnographic methods to address questions such as: Who are the workers on these platforms? What are their work practices? How do they differ from traditional labor? How is the complex relationship among workers, customers, the platform provider, and the provider's algorithms experienced? What does all this imply for designing more equitable and sustainable markets for work?
Project details
Labs: India
holoportation is a new type of 3D capture technology that allows high-quality 3D models of people to be reconstructed, compressed, and transmitted anywhere in the world in real time. When combined with mixed-reality displays such as HoloLens, this technology allows users to see and interact with remote participants in 3D as if they were actually present in their physical space. Communicating and interacting with remote users becomes as simple as face-to-face communication.
Project details
Labs: Redmond
One in four people worldwide has experienced mental illness at some point in their lives. DiPsy is a digital psychologist presented as a personalized chatbot that can evaluate, diagnose, treat, and study users' mental processes through natural conversations.
Project details
Labs: Asia
Room2Room is a life-size telepresence system that leverages projected augmented reality to enable co-present interaction between two remote participants. We enable a face-to-face conversation by performing 3D capture of the local user with color + depth cameras and projecting their virtual copy into the remote space at life-size scale. This creates an illusion of the remote person’s presence in the local space, as well as a shared understanding of verbal and non-verbal cues (e.g., gaze).
Project details
Labs: Redmond
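Projected augmented reality of this kind ultimately maps captured 3D points into a projector's image. A minimal pinhole-projection sketch follows; the intrinsics matrix here is made up for illustration and is not Room2Room's calibration.

    import numpy as np

    # Made-up projector intrinsics: focal lengths (fx, fy) and
    # principal point (cx, cy), in pixels.
    K = np.array([[1000.0,    0.0, 640.0],
                  [   0.0, 1000.0, 360.0],
                  [   0.0,    0.0,   1.0]])

    def project(point_3d):
        """Project a 3D point (projector coordinates, meters) to pixels."""
        u, v, w = K @ np.array(point_3d)
        return u / w, v / w

    print(project((0.1, -0.05, 2.0)))   # pixel where this point is drawn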
This research project investigates the design of an open-source peer-economy platform created with and for service providers. It is an early prototype of a worker dispatch system.
Project details
Labs: FUSE Labs
Microsoft Research is conducting a study of a new device called Timecard. Timecard allows you to organise photos and other content around a timeline and display this on a dedicated screen in your home.
Project details
Labs: Cambridge
NUIgraph is a prototype Windows 10 app for visually exploring data in order to discover and share insight.
Project details
Labs: Redmond
In order to increase user awareness of third party tracking we have investigated designs for visualizing cookie traffic as the users browse the Web.
Project details
Labs: Cambridge
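As a toy illustration of the measurement behind such a visualization (not the project's actual tooling), a cookie can be flagged as third party when the domain setting it differs from the site being visited. The log below is invented.

    # Count third-party cookie setters across page visits. A cookie is
    # "third party" when its domain differs from the visited site.
    from collections import Counter

    browsing_log = [                    # (visited site, cookie domain)
        ("news.example.org", "news.example.org"),
        ("news.example.org", "ads.tracker.example"),
        ("shop.example.com", "ads.tracker.example"),
        ("shop.example.com", "cdn.metrics.example"),
    ]

    third_party = Counter(cookie for site, cookie in browsing_log
                          if cookie != site)
    for domain, hits in third_party.most_common():
        print(f"{domain}: set cookies on {hits} page visit(s)")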
Platform for Situated Interaction
Project details
Labs: Redmond
Instructions for Mechanical Turk tasks on classifying images
Project details
Labs: Redmond
We present a new interactive approach to 3D scene understanding. Our system, SemanticPaint, allows users to simultaneously scan their environment, whilst interactively segmenting the scene simply by reaching out and touching any desired object or surface. Our system continuously learns from these segmentations, and labels new unseen parts of the environment. Unlike offline systems, where capture, labeling and batch learning often takes hours or even days to perform, our approach is fully online.
Project details
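The "fully online" loop can be pictured as incremental learning: each touch yields labeled examples, the model updates immediately, and newly scanned surfaces are labeled on the fly. In the hypothetical sketch below, scikit-learn's SGDClassifier stands in for SemanticPaint's actual model, and features and labels are random placeholders.

    import numpy as np
    from sklearn.linear_model import SGDClassifier

    rng = np.random.default_rng(0)
    model = SGDClassifier(loss="log_loss")
    classes = np.array([0, 1])                 # e.g. "floor" vs "table"

    for step in range(10):                     # each step = one user touch
        feats = rng.normal(size=(20, 8))       # features near the touch (fake)
        labels = rng.integers(0, 2, size=20)   # labels implied by touch (fake)
        model.partial_fit(feats, labels, classes=classes)

        new_geometry = rng.normal(size=(5, 8))    # newly scanned, unlabeled
        print(step, model.predict(new_geometry))  # labeled immediately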
The Ability team is a virtual team consisting of members of MSR's Labs who work on accessible technologies.
Project details
The RoomAlive Toolkit is an open source SDK that enables developers to calibrate a network of multiple Kinect sensors and video projectors. The toolkit also provides a simple projection mapping sample that can be used as a basis to develop new immersive augmented reality experiences similar to those of the IllumiRoom and RoomAlive research projects.
Project details
Labs: Redmond
Microsoft believes the Surface Hub will be as empowering and as transformative to teams and the shared work environment as the PC was to individuals and the desk. The Surface Hub creates new modalities for creating and brainstorming with its unique large-screen productivity apps and capabilities. We believe it will be a critical component for the modern workplace, home, or other venue where people need to come together to think, ideate, and produce. This RFP is now closed.
Project details
Labs: Redmond
Presenter Camera is a desktop application designed to improve the quality of video seen by remote attendees of a presentation.
Project details
Labs: Redmond
The Eye Gaze keyboard is a project to enable people who are unable to speak or use a physical keyboard to communicate using only their eyes. Our initial prototypes are based around an on-screen QWERTY keyboard, very similar to the 'taptip' keyboard built into Windows 8, extended to respond to eye gaze input from a sensor bar such as the Tobii EyeX. Our goal is to improve communication speed by 25% compared to experienced users of off-the-shelf Speech Generating Devices.
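A common selection mechanism in gaze keyboards is dwell: a key is committed once the gaze rests on it long enough. The blurb does not say which mechanism this project uses, so the sketch below is purely illustrative, with an invented dwell threshold and gaze samples.

    DWELL_S = 0.6                             # assumed dwell threshold

    def dwell_select(samples, dwell_s=DWELL_S):
        """samples: (timestamp_seconds, key-under-gaze or None) pairs."""
        typed, current, start = [], None, None
        for t, key in samples:
            if key != current:                # gaze moved to a different key
                current, start = key, t
            elif key is not None and t - start >= dwell_s:
                typed.append(key)             # held long enough: commit key
                current, start = None, None   # require re-fixation to repeat
        return "".join(typed)

    samples = [(0.00, "h"), (0.30, "h"), (0.65, "h"),   # dwell on 'h'
               (0.90, "i"), (1.20, "i"), (1.55, "i")]   # dwell on 'i'
    print(dwell_select(samples))              # -> "hi"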
This project aims to enable people to converse with their devices. We are trying to teach devices to engage with humans using human language in ways that appear seamless and natural to humans. Our research focuses on statistical methods by which devices can learn from human-human conversational interactions and can situate responses in the verbal context and in physical or virtual environments.
Project details
Labs: Redmond
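One simple statistical approach in this family is retrieval: index human-human exchanges and answer a new utterance with the response whose recorded context matches best. The toy sketch below uses an invented three-line corpus and is not this project's method.

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    # Tiny invented corpus of human-human (context, response) pairs.
    pairs = [
        ("what time is it", "it is almost noon"),
        ("is it raining outside", "no, the sky is clear"),
        ("play some music", "sure, starting your playlist"),
    ]
    contexts = [c for c, _ in pairs]

    vec = TfidfVectorizer().fit(contexts)
    ctx_matrix = vec.transform(contexts)

    def respond(utterance):
        sims = cosine_similarity(vec.transform([utterance]), ctx_matrix)
        return pairs[int(sims.argmax())][1]

    print(respond("is it raining"))     # -> "no, the sky is clear"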
Project Blush explores the materiality of digital ephemera and people's receptiveness to 'digital jewellery', exploring the materials and aesthetics that may allow wearables to become jewellables. Project Blush is a research project that originates from the Human Experience and Design group (HXD). HXD specialise in designing and fabricating new human experiences with computing. These play on many different kinds of human values, from amplifying efficiency and effectiveness to creating delight.
Project details
Labs: Cambridge
Team Three Rs is a group of Microsoft researchers working on the Global Learning XPRIZE challenge, which aims to create software to help children in the developing world achieve success in learning the "Three Rs" (reading, writing, and arithmetic).
Project details
We explore grip and motion sensing to afford new techniques that leverage how users naturally manipulate tablet and stylus devices during pen-and-touch interaction. We can detect whether the user holds the pen in a writing grip or tucked between their fingers. We can distinguish bare-handed inputs, such as drag and pinch gestures, from touch gestures produced by the hand holding the pen, and we can sense which hand grips the tablet and determine the screen's relative orientation to the pen.
Project details
Labs: Redmond
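The grip-detection piece can be framed as classifying a sensor frame (for example, a summary of the capacitive grip map plus pen orientation) into grip types. Everything in the sketch below, including the feature names, values, and labels, is invented for illustration; a real system would train on logged sensor frames from the hardware described above.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(0)
    n = 200
    # Fake features: [contact_area, finger_count, pen_tilt_deg, motion_energy]
    writing = rng.normal([0.30, 3.0, 55.0, 0.10], 0.1, size=(n, 4))
    tucked  = rng.normal([0.60, 4.0, 10.0, 0.40], 0.1, size=(n, 4))

    X = np.vstack([writing, tucked])
    y = np.array(["writing"] * n + ["tucked"] * n)

    clf = RandomForestClassifier(random_state=0).fit(X, y)
    print(clf.predict([[0.32, 3.1, 50.0, 0.12]]))   # likely ['writing']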