Multimedia Search and Mining

The Multimedia Search and Mining (MSM) group focuses on a wide variety of multimedia-related research and projects, including understanding, analysis, search, data mining, and applications. We work on research problems in image understanding, video analytics, large-scale visual (image and video) indexing and search, 3D reconstruction, and more.

  • 3D Object Reconstruction and Recognition
    We study the problem of 3D object reconstruction and recognition. For reconstruction, we aim to develop algorithms and systems that lower the barrier of 3D reconstruction for everyday users, so that a world-class 3D object repository can be collected through crowdsourcing. For recognition, we aim to handle large-scale tasks (e.g., identifying thousands of objects) with real-time performance.
  • Dog Recognition
    Dogs are among humans' closest companions: an estimated 400 million dogs of hundreds of breeds live around the world. Given this large number of breeds, most people find it hard to recognize more than a few. We therefore developed a dog recognizer to help users learn more about dogs.
  • Food Recognition
    We study the problem of food image recognition via deep learning techniques. Our goal is to develop a robust service that recognizes thousands of popular Asian and Western foods. Several prototypes have been developed to support diverse applications. We are also developing a prototype called Im2Calories, which automatically estimates calories and performs nutrition analysis from a dish image.
  • Image Chat
    Images are becoming a popular medium for user communication on social networks, so it is natural to enable chatbots to chat about images as well as textual inputs. Based on MS XiaoIce (微软小冰), we explore the direction of image chat and have iterated through several rounds to enhance her ability to converse about images.
  • Image2Text
    We study the problem of image captioning, i.e., automatically describing an image with a sentence. This is a challenging problem: unlike other computer vision tasks such as image classification and object detection, image captioning requires not only understanding the image but also knowledge of natural language. We formulate image captioning as a multimodal translation task and develop novel algorithms to solve it.
  • Large Scale Weakly Supervised Learning
    Search engines accumulate click-through data in which rich connections between images and semantics have been built up by massive numbers of user clicks. The data comes for free as search engines serve their users, and it naturally scales to millions or even billions of examples. Unlike carefully constructed datasets, however, click-through data is noisy, unstructured, and unbalanced. In this project, we target the effective use of click-through data to solve image understanding problems.
  • Network Morphism
    We propose a novel learning scheme called network morphism. It morphs a parent network into a child network, allowing fast knowledge transfer: the child network achieves the performance of the parent network immediately, and its performance continues to improve as training goes on. The proposed scheme supports any network morphism in an expanding mode for arbitrary non-linear neurons, including depth, width, kernel-size, and subnet morphing operations.
  • Photo Story
    The capability to manage personal photos is becoming crucial. In this work, we have attempted to solve the following pain points for mobile users: 1) intelligent photo tagging, best-photo selection, event segmentation, and album naming; 2) speech recognition and parsing of user intent over time, location, people attributes, and objects; 3) search by arbitrary queries.
  • Video and Language
    Automatically describing video content with natural language is a fundamental challenge for computer vision. Recurrent Neural Networks (RNNs), which model sequence dynamics, have attracted increasing attention for visual interpretation. In this project, we present a novel unified framework, Long Short-Term Memory with visual-semantic Embedding (LSTM-E), which simultaneously explores LSTM learning and visual-semantic embedding.
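The key property of the network morphism project above is that the child network starts at exactly the parent's function. A minimal NumPy sketch of a width morph illustrates this; the layer sizes, random weights, and the specific duplicate-and-halve rule here are illustrative assumptions, not the project's actual algorithm.

```python
import numpy as np

# Parent network: y = W2 @ relu(W1 @ x), with 3 inputs, 4 hidden units,
# 2 outputs. Sizes and weights are arbitrary for illustration.
rng = np.random.default_rng(0)
W1 = rng.standard_normal((4, 3))
W2 = rng.standard_normal((2, 4))

def forward(W1, W2, x):
    return W2 @ np.maximum(W1 @ x, 0.0)

def widen(W1, W2, idx):
    """Width morph: duplicate hidden unit `idx` and split its outgoing
    weight in half, so the child computes the same function."""
    W1_child = np.vstack([W1, W1[idx:idx + 1]])     # copy the unit's row
    W2_child = np.hstack([W2, W2[:, idx:idx + 1]])  # copy its out-column
    W2_child[:, idx] *= 0.5                         # halve original copy
    W2_child[:, -1] *= 0.5                          # halve the new copy
    return W1_child, W2_child

x = rng.standard_normal(3)
W1c, W2c = widen(W1, W2, idx=1)

# Function-preserving: the widened child matches the parent exactly,
# so training can continue from the parent's performance.
assert np.allclose(forward(W1, W2, x), forward(W1c, W2c, x))
```

Because the duplicated unit receives the same pre-activation as the original, halving both outgoing copies sums back to the parent's contribution, even through the ReLU non-linearity.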
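Among the Photo Story pain points above, event segmentation is the most mechanical: photos taken close together in time likely belong to one event. A minimal sketch of time-gap segmentation follows; the three-hour threshold and sample timestamps are illustrative assumptions, and a real system would also use location and visual similarity.

```python
from datetime import datetime, timedelta

def segment_events(timestamps, gap=timedelta(hours=3)):
    """Split photo timestamps into events wherever two consecutive
    photos are more than `gap` apart (threshold is an assumption)."""
    events = []
    for t in sorted(timestamps):
        if events and t - events[-1][-1] <= gap:
            events[-1].append(t)   # close in time: same event
        else:
            events.append([t])     # large gap: start a new event
    return events

shots = [
    datetime(2016, 5, 1, 9, 0),    # morning photos
    datetime(2016, 5, 1, 9, 40),
    datetime(2016, 5, 1, 19, 15),  # evening photos, hours later
    datetime(2016, 5, 1, 19, 30),
]
events = segment_events(shots)
# → two events: the morning pair and the evening pair
```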
Past Projects
  • MindFinder: Finding Images by Sketching
    Sketch-based image search is a well-known and difficult problem; little progress was made over the past decade toward a large-scale, practical sketch-based search engine. We have revisited this problem and developed a scalable solution. The MindFinder system indexes more than two million web images to enable efficient sketch-based image retrieval, and many creative applications can be built on top of it to advance the state of the art.
  • Mobile Video Search
    Mobile video is quickly becoming a mass consumer phenomenon. More and more people use their smartphones to search and browse video content while on the move. This project develops an innovative instant mobile video search system through which users can discover videos by simply pointing their phones at a screen to capture a few seconds of what they are watching.
  • Multimedia Advertising
    The ever-increasing multimedia content on the Internet has become a primary opportunity for more effective online advertising. Conventional advertising systems treat multimedia content the same as general text, without automatically monetizing the rich content of images and videos. This research direction leverages content analysis and understanding to enable more effective and efficient advertising on multimedia content, whether on the Internet or on mobile devices.
  • Picto: A large scale visual indexing and recognition system
    In this project, we focus on developing algorithms for large-scale image indexing and recognition. Our research covers low-level image features, mid-level image representations, and indexing and ranking algorithms.
  • Video Collage
    Video Collage is a kind of synthesized image that enables users to quickly browse video content. Given a video, Video Collage selects the most representative images from the video, extracts salient regions of interest (ROIs) from those images, and seamlessly arranges the ROIs on a given canvas. Video Collage can be used in Windows Vista Explorer, Live Search Video, and MSN Soapbox.
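The first step of the Video Collage pipeline above, selecting the most representative frames, can be sketched with greedy farthest-point sampling: each new frame is the one most dissimilar to those already chosen. The 2-D feature vectors and Euclidean distance here are illustrative assumptions standing in for real visual descriptors.

```python
import math

def dist(a, b):
    # Euclidean distance between two feature vectors (an assumption;
    # a real system would compare visual descriptors).
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def select_representative(frames, k):
    """Greedily pick k frames: each pick maximizes its minimum
    distance to the frames already chosen, favoring diversity."""
    chosen = [frames[0]]  # seed with the first frame
    while len(chosen) < k:
        best = max(
            (f for f in frames if f not in chosen),
            key=lambda f: min(dist(f, c) for c in chosen),
        )
        chosen.append(best)
    return chosen

# Four frame features: two near-duplicates plus two distinct shots.
frames = [(0.0, 0.0), (0.1, 0.0), (5.0, 5.0), (0.0, 6.0)]
picks = select_representative(frames, k=3)
# → the near-duplicate (0.1, 0.0) is skipped in favor of distinct shots
```

The greedy rule naturally drops near-duplicate frames, which is the behavior a collage needs before ROI extraction and canvas layout.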