FaST-LMM (Factored Spectrally Transformed Linear Mixed Models) is a set of tools for performing genome-wide association studies (GWAS) on large data sets. FaST-LMM runs on both Windows and Linux, and contains code for (1) univariate GWAS, (2) testing sets of SNPs, (3) feature selection for background correction, (4) epistatic association scans, and (5) correcting for cellular heterogeneity in methylation and similar data.
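To give a flavor of the univariate workflow, here is a minimal sketch using the fastlmm Python package; the input file names are hypothetical placeholders for PLINK-format genotype and phenotype files, and the exact output columns may vary by version.

```python
# Minimal sketch of a univariate GWAS with the fastlmm Python package.
# "example.bed" and "pheno.txt" are hypothetical placeholder file names.
from fastlmm.association import single_snp

# Test each SNP for association with the phenotype while a linear mixed
# model corrects for relatedness and population structure.
results = single_snp(test_snps="example.bed", pheno="pheno.txt")

# Results come back as a pandas DataFrame, sorted by p-value.
print(results[["SNP", "Chr", "PValue"]].head())
```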
We envision using Eye Gaze technology to bring independent mobility to people living with disabilities who are unable to use a joystick.
Parasail is a novel approach to parallelizing a large class of seemingly sequential applications wherein dependencies are, at runtime, treated as symbolic values. The efficiency of parallelization, then, depends on the efficiency of the symbolic computation, an active area of research in static analysis, verification, and partial evaluation. This is exciting as advances in these fields can translate to novel parallel algorithms for sequential computation.
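To make the idea concrete, here is an illustrative Python sketch, not Parasail itself: a running sum looks inherently sequential, but if each chunk treats its incoming total as a symbolic unknown, the chunk's effect collapses to a simple composable function, so the chunks can be summarized in parallel.

```python
# Illustrative sketch (not Parasail itself): a prefix sum seems sequential
# because each element depends on the running total.  Treating the total
# entering each chunk as a symbolic unknown x, the chunk's effect reduces
# to f(x) = x + c, and such functions compose associatively, so the chunk
# summaries can be computed in parallel.
from concurrent.futures import ProcessPoolExecutor
from itertools import accumulate

def summarize(chunk):
    # "Symbolic execution" of the chunk: its effect on the unknown
    # incoming total is simply to add the chunk's own sum c.
    return sum(chunk)

def parallel_prefix_sums(data, n_chunks=4):
    size = (len(data) + n_chunks - 1) // n_chunks
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    with ProcessPoolExecutor() as pool:
        constants = list(pool.map(summarize, chunks))  # parallel phase
    # Resolve the symbols: the total entering chunk i is the composition
    # of the functions of chunks 0..i-1, applied to the initial value 0.
    entries = [0] + list(accumulate(constants))
    out = []
    for entry, chunk in zip(entries, chunks):
        total = entry
        for value in chunk:
            total += value
            out.append(total)
    return out

if __name__ == "__main__":
    print(parallel_prefix_sums(list(range(1, 9))))  # [1, 3, 6, ..., 36]
```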
NLPwin is a software project at Microsoft Research that aims to provide Natural Language Processing tools for Windows (hence, NLPwin). The project was started in 1991, just as Microsoft inaugurated the Microsoft Research group; although active development of NLPwin continued only through 2002, it is still updated regularly, primarily in service of Machine Translation.
We explore grip and motion sensing to afford new techniques that leverage how users naturally manipulate tablet and stylus devices during pen-and-touch interaction. We can detect whether the user holds the pen in a writing grip or tucked between the fingers. We can distinguish bare-handed inputs, such as drag and pinch gestures, from touch gestures produced by the hand holding the pen; we can also sense which hand grips the tablet and determine the screen's orientation relative to the pen.
Mano-a-Mano is a unique spatial augmented reality system that combines dynamic projection mapping, multiple perspective views, and device-less interaction to support face-to-face, or dyadic, interaction with 3D virtual objects. Its main advantage over more traditional AR approaches is that users can interact with 3D virtual objects and with each other without cumbersome devices that obstruct face-to-face interaction.
We present a new real-time articulated hand tracker that enables new possibilities for human-computer interaction (HCI). Our system accurately reconstructs complex hand poses across a variety of subjects using only a single depth camera. It is also highly robust, continually recovering from tracking failures. The most distinctive aspect of our tracker, however, is its flexibility in terms of camera placement and operating range.
An Ironclad App lets a user securely transmit her data to a remote machine with the guarantee that every instruction executed on that machine adheres to a formal abstract specification of the app's behavior. This does more than eliminate implementation vulnerabilities such as buffer overflows, parsing errors, or data leaks; it tells the user exactly how the app will behave at all times.
We want to use eye gaze and face pose to understand what users are looking at and attending to, and to use this information to improve speech recognition. Any sort of language constraint makes speech recognition and understanding easier, since we then know which words might come next. Our work has shown significant performance improvements in all stages of the speech-processing pipeline, including addressee detection, speech recognition, and spoken-language understanding.
RoomAlive is a proof-of-concept prototype that transforms any room into an immersive, augmented, magical entertainment experience. RoomAlive presents a unified, scalable approach for interactive projection mapping that dynamically adapts content to any room. Users can touch, shoot, stomp, dodge and steer projected content that seamlessly co-exists with their existing physical environment.
We propose to use a deep bidirectional LSTM for audio/visual modeling in our photo-real talking head system.
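As a rough illustration of this architecture class, here is a minimal PyTorch sketch of a bidirectional LSTM mapping per-frame audio features to visual parameters; every dimension and name below is an illustrative assumption, not the talking head system's actual configuration.

```python
# Hedged sketch: a bidirectional LSTM mapping per-frame audio features to
# visual (e.g., lip-shape) parameters.  All dimensions are illustrative
# assumptions, not the talking head system's actual configuration.
import torch
import torch.nn as nn

class AudioToVisualBLSTM(nn.Module):
    def __init__(self, audio_dim=40, hidden_dim=128, visual_dim=30):
        super().__init__()
        self.blstm = nn.LSTM(audio_dim, hidden_dim, num_layers=2,
                             bidirectional=True, batch_first=True)
        # Forward and backward states are concatenated, hence 2 * hidden_dim.
        self.proj = nn.Linear(2 * hidden_dim, visual_dim)

    def forward(self, audio_frames):          # (batch, time, audio_dim)
        states, _ = self.blstm(audio_frames)  # (batch, time, 2 * hidden_dim)
        return self.proj(states)              # (batch, time, visual_dim)

model = AudioToVisualBLSTM()
dummy = torch.randn(1, 100, 40)               # one utterance, 100 frames
print(model(dummy).shape)                      # torch.Size([1, 100, 30])
```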
Here are some of the events I've been involved in since joining the group!
The physical charts are an attempt to make data and data visualisations legible to ordinary people in their daily lives. In response to the increasing sophistication of data visualisations and the seemingly unquestioning quest for novelty, the charts make playful use of long-established and highly familiar representations like pie charts and bar graphs. Rather than estranging viewers, the objective is to enable them to engage with and comprehend data at a glance.
Zero-Effort Payments (ZEP) is a seamless mobile computing system designed to accept payments with no effort on the customer's part beyond a one-time opt-in. With ZEP, customers need not present cards or operate smartphones to convey their identities. ZEP uses three complementary identification technologies: face recognition, proximate device detection, and human assistance.
EmotoCouch is a furniture prototype that uses lights, changing patterns, and haptic feedback to alter its appearance and thereby convey emotion. EmotoCouch is built using the Lab of Things platform.
NL-SPARQL is a data set freely available for research purposes. It includes ....
Quick interaction between a human teacher and a learning machine presents numerous benefits and challenges when working with web-scale data. The human teacher guides the machine towards accomplishing the task of interest. The system leverages big data to find examples that maximize the training value of its interaction with the teacher.
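One common way to realize "examples that maximize training value" is uncertainty sampling from active learning. The scikit-learn sketch below illustrates that generic idea only; it is not the project's actual algorithm, and all data and names in it are synthetic.

```python
# Illustrative sketch of uncertainty sampling, one generic way a learner
# can surface the pool examples whose labels would teach it the most.
# NOT the project's actual algorithm; all data here is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(10_000, 20))            # stand-in for a web-scale pool
y = (X[:, 0] + X[:, 1] > 0).astype(int)      # hidden ground-truth labels

# Small seed set "labeled by the teacher", with both classes represented.
labeled = list(np.where(y == 1)[0][:5]) + list(np.where(y == 0)[0][:5])

for round_ in range(5):
    model = LogisticRegression().fit(X[labeled], y[labeled])
    unlabeled = np.setdiff1d(np.arange(len(X)), labeled)
    # The example the model is least sure about has the most training value.
    margins = np.abs(model.predict_proba(X[unlabeled])[:, 1] - 0.5)
    query = int(unlabeled[np.argmin(margins)])
    labeled.append(query)                    # ask the teacher for this label
    print(f"round {round_}: queried #{query}, "
          f"accuracy={model.score(X, y):.3f}")
```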
ATL Cairo is requesting graduation project proposal submissions in the areas of “Computer Vision”, “Natural Language Processing & Social Analytics”, and “Speech”. While submissions in the general area of any of these tracks are welcome, we particularly encourage proposals addressing the following project ideas:
Interspeech 2014 Tutorial Web Page
An overhead-constrained logging system
We are looking for participants to engage in a personalised online shopping experience. You will receive a £40 shopping voucher for your participation and get the opportunity to purchase a book at 90% discount. The experiment involves a session of online shopping during which we will measure your eye movements and bodily responses. The shopping session is followed by an interview and we will ask you to fill out a final questionnaire to give us feedback on the study.
CityNoise is a project led by Dr. Yu Zheng at Microsoft Research. The project aims to diagnose a city's noise pollution with crowdsensing and ubiquitous data. It reveals the fine-grained noise situation throughout a city and analyzes the composition of noise at a particular location, using 311 complaint data together with road network data, points of interest, and social media.
Animated computer graphics are projected onto the base of a fiber optic tree to create a sparse 3D display within the tree. This was created as an entry for Microsoft Research's MakeFest and demonstrated on 1/10/2014 to the MSRMakeFest community.
OSLO is a .NET and Silverlight class library for the numerical solution of ordinary differential equations (ODEs). The library enables numerical integration to be performed in C#, F#, and Silverlight applications. OSLO implements Runge-Kutta and backward differentiation formulae (BDF) methods for non-stiff and stiff initial value problems.