Teaching computers to understand the visual world
We want to change the way you interact with visual data. We want to make your photos magical, and to deeply understand images and videos from cameras everywhere: in your phone, on your Xbox, in your fridge, on robots, in cars, anywhere. We want you to be able to find your stuff, answer questions, and create fantastic new images. We do this by inventing new algorithms and devising new mathematical models of how images come to be.
J. Margeta, A. Criminisi, D. C. Lee, and N. Ayache, Recognizing Cardiac Magnetic Resonance Acquisition Planes using Fine-tuned Convolutional Neural Networks, in Computer Methods in Biomechanics and Biomedical Engineering (to appear), December 2015.
H. Lombaert, A. Criminisi, and N. Ayache, Spectral Forests: Learning of Surface Data, Application to Cortical Parcellation, in Medical Image Computing and Computer Assisted Intervention (MICCAI), Springer, October 2015.
G. Pons-Moll, J. Taylor, J. Shotton, A. Hertzmann, and A. Fitzgibbon, Metric Regression Forests for Correspondence Estimation, in International Journal of Computer Vision (IJCV), Springer, August 2015.
J. Valentin, V. Vineet, M.-M. Cheng, D. Kim, J. Shotton, P. Kohli, M. Niessner, A. Criminisi, S. Izadi, and P. Torr, SemanticPaint: Interactive 3D Labeling and Learning at your Fingertips, in ACM Transactions on Graphics (TOG), ACM, August 2015.
H. Fang, S. Gupta, F. Iandola, R. Srivastava, L. Deng, P. Dollar, J. Gao, X. He, M. Mitchell, J. Platt, L. Zitnick, and G. Zweig, From Captions to Visual Concepts and Back, in Proceedings of CVPR, IEEE, June 2015.
Join us! Do you love to turn mathematics into code? Do you want to build the future? Then apply here.
- RoomAlive Toolkit
- Depth from Time-of-Flight
- From Captions to Visual Concepts and Back
- Eye Gaze Keyboard
- Human activity detection in RGBD videos
- Fully Articulated Hand Tracking
- ATL Cairo GPSP - Project Ideas
- Learning to be a depth camera for close-range human capture and interaction
- Sparse Reflections Analysis
- User-Specific Hand Modeling from Monocular Depth Sequences
See our research page.