Teaching computers to understand the visual world
The goal of computer vision is to make computers efficiently perceive, process, and understand visual data such as images and videos. The ultimate goal is for computers to emulate the striking perceptual capability of human eyes and brains, or even to surpass and assist humans in certain ways.
Within Microsoft Research, our computer-vision research includes investigations into:
- Imaging and Photogrammetry, including high-resolution cameras, radiometric calibration, photometric stereo, 3-D imaging and video, 3-D scene reconstruction from images and video, and image and video enhancement.
- Pattern Recognition and Statistical Learning, including data clustering and classification, manifold learning, and high-dimensional geometry and statistics.
- Object Detection and Recognition, including face detection, alignment, and tagging; video-based face recognition; and sparsity-based robust face recognition. We also investigate general object-class recognition and advanced medical-image analysis.
- Image and Video Editing and Enhancement, including denoising and deblurring, novel representations for images and video, techniques for content-aware edits such as in-painting, and object removal.
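Among the pattern-recognition techniques listed above, data clustering is one of the most widely used. As a minimal illustrative sketch (not Microsoft Research code; the function and sample data are hypothetical), the classic k-means algorithm alternates between assigning each point to its nearest centroid and recomputing each centroid as the mean of its assigned points:

```python
import random

def kmeans(points, k, iters=20, seed=0):
    """Minimal k-means sketch: alternate nearest-centroid assignment
    with recomputing centroids as cluster means."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)  # initialize centroids from the data
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        # Assignment step: each point joins its nearest centroid's cluster.
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(
                range(k),
                key=lambda i: sum((a - b) ** 2 for a, b in zip(p, centroids[i])),
            )
            clusters[nearest].append(p)
        # Update step: move each centroid to the mean of its cluster
        # (an empty cluster keeps its previous centroid).
        centroids = [
            tuple(sum(dim) / len(cl) for dim in zip(*cl)) if cl else centroids[i]
            for i, cl in enumerate(clusters)
        ]
    return centroids, clusters

# Two well-separated 2-D blobs: centroids should converge near each blob's mean.
points = [(0.0, 0.1), (0.2, -0.1), (-0.1, 0.0),
          (10.0, 10.1), (9.8, 9.9), (10.1, 10.0)]
centroids, clusters = kmeans(points, 2)
```

In image applications the "points" are typically pixel colors or learned feature vectors rather than 2-D coordinates, but the alternating assign/update structure is the same.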
J. Margeta, A. Criminisi, D. C. Lee, and N. Ayache, Recognizing Cardiac Magnetic Resonance Acquisition Planes using Fine-tuned Convolutional Neural Networks, in Computer Methods in Biomechanics and Biomedical Engineering, December 2015 (to appear)
Toby Sharp, Cem Keskin, Duncan Robertson, Jonathan Taylor, Jamie Shotton, David Kim, Christoph Rhemann, Ido Leichter, Alon Vinnikov, Yichen Wei, Daniel Freedman, Pushmeet Kohli, Eyal Krupka, Andrew Fitzgibbon, and Shahram Izadi, Accurate, Robust, and Flexible Real-time Hand Tracking, CHI, April 2015
Nathan Wiebe, Ashish Kapoor, and Krysta M. Svore, Quantum Nearest-neighbor Algorithms for Machine Learning, in Quantum Information and Computation, vol. 15, no. 3&4, pp. 0318-0358, Rinton Press, March 2015
Kalin Ovtcharov, Olatunji Ruwase, Joo-Young Kim, Jeremy Fowers, Karin Strauss, and Eric S. Chung, Accelerating Deep Convolutional Neural Networks Using Specialized Hardware, 23 February 2015
- Fully Articulated Hand Tracking
- Learning to be a depth camera for close-range human capture and interaction
- Sparse Reflections Analysis
- User-Specific Hand Modeling from Monocular Depth Sequences
- Real-Time RGB-D Camera Relocalization
- Real-time 3D Reconstruction at Scale using Voxel Hashing
- Kinectrack: Agile 6-DoF Tracking Using a Projected Dot Pattern
- RetroDepth: 3D Silhouette Sensing for High-Precision Input On and Above Physical Surfaces
- Microsoft 3-Handpose dataset
- Eye-Gaze Tracking for Improved Natural User Interaction
- ViiBoard: Vision-enhanced Immersive Interaction with Touch Board
- Alternating Minimization for Non-convex Optimization Problems
- Sketch2Cartoon: Composing Cartoon Images by Sketching
- Sketch2Tag: Automatic Hand-Drawn Sketch Recognition
- MSR-Bing Image Retrieval Challenge (IRC)