This project focuses on rural government maternal health workers in India, called Accredited Social Health Activists (ASHAs). Our tool, ASHA Assist, provides interactive videos on mobile phones that ASHAs use to engage their clients in persuasive discussions about a range of maternal health topics during counseling.
Spoken language understanding (SLU) is an emerging field at the intersection of speech processing and natural language processing. The term spoken language understanding has largely been coined for the targeted understanding of human speech directed at machines. This project covers our research on SLU tasks such as domain detection, intent determination, and slot filling, using data-driven methods.
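To make the three tasks concrete, here is a toy illustration using hand-written rules in place of the data-driven models the project studies. The domain, intent, and slot names are hypothetical examples, not the project's actual schema:

```python
def understand(utterance: str) -> dict:
    """Toy SLU pipeline: domain detection, intent determination, slot filling."""
    tokens = utterance.lower().split()
    result = {"domain": None, "intent": None, "slots": {}}
    if "flight" in tokens:
        result["domain"] = "travel"           # domain detection
        result["intent"] = "book_flight"      # intent determination
        if "to" in tokens:                    # slot filling
            result["slots"]["destination"] = tokens[tokens.index("to") + 1]
    return result

print(understand("Book a flight to Boston"))
```

In practice each of these decisions is made by a statistical model trained on labeled utterances rather than by keyword rules.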
With the proliferation of ubiquitous access to information, the question arises of how distracting it can be to process information in social settings, especially during a face-to-face conversation. In this paper, we investigate how much information users can consume during a conversation and which information delivery mode, audio or visual aids, helps them more effectively conceal the fact that they are receiving information.
Data is all the buzz. It's being seen in everything and found everywhere. But what are the consequences of this vision of a data-rich world for those of us on the street? What impact, if any, does it have on our everyday experiences and on the things that matter most to us? Here, we aim to reflect on the rise of (big) data and investigate what it means for us, and what it could come to mean.
Automatic program verification tool for proving termination and other liveness properties
One Click Access evaluation at NTCIR
Subtopic Mining and Diversified Search evaluation at NTCIR
Evaluating summaries, ranked retrieval and sessions seamlessly
This research project in MSR SVC aims to answer the following question: Can we allow programmers to write cloud applications as though they are accessing centralized, strongly consistent data while at the same time allowing them to specify their consistency/availability/performance (CAP) requirements in terms of service-level agreements (SLAs) that are enforced by the cloud storage system at runtime?
The XCG Lab Security and Cryptography teams do development, applied research, and theoretical research in the fields of systems security and cryptography. These teams include the Cryptography Research team, the Security & Cryptography team, and the Systems Incubation team.
Labs: eXtreme Computing Group
We investigate how people's behaviour online can be characterized in terms of psychometric measurements such as the Big-5 personality traits (openness, conscientiousness, extraversion, agreeableness, and neuroticism), as well as general intelligence and satisfaction with life. We examine patterns of Facebook usage, website preferences, query logs, and Facebook Likes, looking for correlations that can be used to predict users' behaviours, preferences, or characteristics.
In order to render a high-quality, versatile 3D talking head, we constructed a stable, high-frame-rate AV data acquisition system. It captures the 3D position, surface orientation, and albedo texture of the talking head, along with the corresponding speech signals.
We propose a new photo-realistic talking head that is driven by voice only (i.e., no linguistic information about the voice input is needed).
Two important performance metrics in collaborative systems are local and remote response times. These response times depend on three important factors: processing architecture, communication architecture, and scheduling of tasks dictated by these two architectures. We show that it is possible to create a system that improves response times by dynamically adjusting these three system parameters in response to changes to collaboration parameters.
We conducted a study comparing avatar conferencing with video and audio conferencing for work scenarios. We studied nine four-person teams using a within-subjects design that measured users’ perceptions and preferences across the three conferencing conditions.
People sometimes miss small parts of meetings and need to quickly catch up without disrupting the rest of the meeting. We developed an Accelerated Instant Replay (AIR) Conferencing system for videoconferencing that enables users to catch up on missed content while the meeting is ongoing. AIR can replay parts of the conference using four different modalities: audio, video, conversation transcript, and shared workspace.
Code Digger is a Microsoft® Visual Studio® 2012 extension that analyzes possible execution paths through your .NET code. The result is a table in which each row shows a unique behavior of your code. The table helps you understand the behavior of the code, and it may also uncover hidden bugs.
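The following sketch only illustrates the idea of the input/behavior table. Code Digger analyzes .NET code with symbolic techniques; here we stand in for that analysis with naive input enumeration over a hypothetical toy function:

```python
def classify(x: int) -> str:
    # Toy function with three distinct execution paths.
    if x < 0:
        return "negative"
    if x % 2 == 0:
        return "even"
    return "odd"

# For each distinct behavior, remember one input that triggers it,
# mimicking the one-row-per-behavior table Code Digger produces.
rows = {}
for x in (-5, -1, 0, 2, 3, 7):
    rows.setdefault(classify(x), x)
for result, example_input in rows.items():
    print(f"input={example_input!r} -> {result!r}")
```

The real tool does not enumerate inputs blindly; it derives inputs that cover each path through the code, so every row of the table corresponds to a genuinely different execution path.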
In recent years the Web has evolved substantially, transforming from a place where we primarily find information into a place where we also leave, share, and keep it. This presents a fresh set of challenges for the management of personal information, including how to support greater awareness of, and more control over, digital belongings and other personally meaningful content hosted online.
Optimus is a framework for dynamically rewriting an execution plan graph in distributed data-parallel computing at runtime. It enables optimizations that require knowledge of the semantics of the computation, such as language customizations for domain-specific computations including matrix algebra. We address several problems arising in distributed execution including data skew, dynamic data re-partitioning, unbounded iterative computations, and fault tolerance.
The great thing about large displays is their size. But their size is also the bad news in terms of conventional interface design: conventional UI elements may be too far away to reach conveniently, or to reach at all. This work explores alternative modes of interaction that bring the interaction to the user, rather than the reverse, using various techniques and technologies. Emerging from this are new insights into how to work in natural, appropriate, and engaging ways.
An increased dependence on medical imaging for patient diagnosis and treatment places new challenges upon the clinical community. Existing image processing workflows struggle to keep up with the pace at which imaging technology is developing. Microsoft Research is working with top research institutes around the world to make available data and tools and advance the state of the art in automatic analysis of medical scans.
Distribution Modeller (temporary name only!) is CEES' end-to-end browser tool that lets researchers rapidly import data, supplement that data with environmental information from FetchClimate, specify an arbitrary model by point-and-click or in code, parameterize the model against the data using Filzbach, and make and visualize predictions with full propagation of parameter uncertainty, then package and share everything in a way that is inspectable, repeatable, and modifiable.
We demonstrate a novel method for real-time 3D scene capture and reconstruction. Using several live color images, we build a high resolution voxelization of the visible surfaces. The key to our approach is an efficient sparse voxel representation ideally suited to Graphics Processing Unit (GPU) acceleration. We store only those voxels that contain the visible surfaces, leading to a compact representation for the 3D model.
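The key design choice above is storing only occupied surface voxels rather than a dense volume. A minimal sketch of that idea, using a hash map keyed by integer grid coordinates (the class and field names here are illustrative, not the project's actual data structure):

```python
from dataclasses import dataclass

@dataclass
class Voxel:
    # Color sampled from the live camera images.
    r: int
    g: int
    b: int

class SparseVoxelGrid:
    """Stores only voxels that contain visible surfaces; absent keys mean
    empty space, so memory scales with surface area rather than volume."""

    def __init__(self):
        self._voxels = {}  # (x, y, z) -> Voxel

    def set(self, x: int, y: int, z: int, voxel: Voxel) -> None:
        self._voxels[(x, y, z)] = voxel

    def get(self, x: int, y: int, z: int):
        return self._voxels.get((x, y, z))  # None for empty cells

    def __len__(self) -> int:
        return len(self._voxels)

grid = SparseVoxelGrid()
grid.set(1, 2, 3, Voxel(255, 0, 0))
```

A GPU implementation would replace the Python dictionary with a parallel-friendly structure (e.g., a GPU hash table or block pool), but the compactness argument is the same: only surface cells consume memory.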
Real-time information about businesses, such as the current occupancy and music levels, or even the type of music or exact song playing now, can be an important factor in the local search decision process. In this work, we propose to automatically crowdsource such rich, real-time business metadata through user check-in events.
With the emergence of abundant online content, cloud computing, and electronic reading devices, textbooks are poised for transformative changes. Taking into account the vast number of existing textbooks designed for the traditional printed medium, and the potential for enabling new kinds of functionality through the medium of electronic textbooks, we present the results of our research into algorithmically diagnosing and enhancing the quality of textbooks.