Microsoft Research announced the 10 recipients of the Virtual Earth Academic Research Collaboration awards, totaling $400,000 (USD) in funding. The awards focus on advancing academic research and publication in Internet technologies and services, particularly computer vision, location-based search, and information discovery and sharing. To assist awardees, Microsoft made available street imagery data from one residential area and the downtown area of San Francisco, including: 1) color satellite imagery covering the area; 2) street-side images, with their estimated position and orientation, captured at 4 m intervals; and 3) models of the houses in the area, with some buildings textured. Microsoft's geographic imagery, in combination with the Interactive Virtual Earth software development kit, will enable researchers to explore potential applications of location-based Web search.
Cartographers have long realized that they must abstract (or generalize) the information presented in a map to emphasize the details that are essential for navigation while de-emphasizing or omitting information that is irrelevant. We are building a system for automatically generating abstracted tourist maps of a given location. Such maps typically contain visual representations of landmarks, such as buildings, to assist navigation and to remind viewers of points of interest. We are developing techniques to automatically extract building icons from street-side and aerial photographs, selecting the prominent landmarks from among all the building icons and then placing the icons on a map. We believe that such tourist maps are far more effective as navigational tools than current online maps.
Creating models of the real world currently requires massive manual labor to ensure reliable mapping, navigation, and geographic positioning for users on the Internet. The goal of this project is to automate the process of creating such models. A rich literature and a variety of methods and projects exist for obtaining 3D models of buildings and entire cities, and 3D reconstruction methods such as stereo or laser scanning could almost fully automate the process. Their drawback, however, is that they yield only point clouds and textured 3D models: a car, house, or traffic sign in the model carries no meaningful labeling. To properly interpret a scene and to build better 3D models, a deeper level of knowledge about the objects is required. This project aims at providing novel recognition methods to detect, separate, and define meaningful information about the objects in a scene.
To implement City Capture, a project that aims to enable a user to navigate the world in three dimensions, we are installing several Microsoft Research-pioneered GigaPixel sensors (high-resolution, long-focal-length camera lenses mounted on modified telescope pan-tilt rigs) throughout a major city to capture its evolution over time. In addition, we are developing a Web-based interface to interact with the resulting treasure trove of information, developing the technology to integrate the captured data with Microsoft Virtual Earth, and using the captured data as source imagery for the 4D Cities project at Georgia Tech.
This study investigates matching ordinary still photographs with the Digital Terrain Model (DTM) data available from Microsoft Virtual Earth. From the end-user's perspective, such matching makes it possible to perform a variety of enhancements, such as removing fog and haze, removing unwanted objects or adding new ones, and adding resolution and detail in the distance. Other creative manipulations might include changing the viewing position and simulating the appearance of the scene at different times of day or under different weather conditions. From Microsoft Virtual Earth's standpoint, such matching will provide a mechanism for updating and adding detail to the relevant terrain textures.
Efficient Image Correspondence and Indexing Methods for Urban Scene and Object Recognition
Kristen Grauman, Trevor Darrell
University of Texas at Austin, U.S.; Massachusetts Institute of Technology, U.S.
Image matching and category recognition can enable exploitation of geographic image databases. With this in mind, we are developing an efficient procedure to match an observed image from a mobile camera to an existing urban image in a database, and we want to devise methods that automatically identify instances of objects and categories of interest. Due to the increasing volume and refresh rate of databases like Microsoft's Virtual Earth collection, we believe that computational complexity is a foremost concern in our designs. To accommodate robust image retrieval from very large databases and recognition of a large number of categories, we have developed a sub-linear-time randomized hashing algorithm for correspondence-based search with local image feature representations. Our techniques enable fast indexing and category recognition over a very large database of examples represented by sets of local features. We are using the Microsoft Virtual Earth street scene database to demonstrate and evaluate our randomized hashing method.
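The flavor of sub-linear-time randomized hashing described above can be illustrated with a generic locality-sensitive hashing sketch. This is not the authors' implementation; the dimensions, hash length, and random-hyperplane scheme below are illustrative stand-ins:

```python
import numpy as np

rng = np.random.default_rng(0)

DIM, N_BITS = 64, 16  # feature dimension and hash length (illustrative)
planes = rng.standard_normal((N_BITS, DIM))  # random hyperplanes

def lsh_key(vec):
    """Hash a feature vector to a tuple of bits via random hyperplanes."""
    return tuple((planes @ vec) > 0)

# Index a database of local image features into hash buckets.
database = rng.standard_normal((10_000, DIM))
buckets = {}
for idx, feat in enumerate(database):
    buckets.setdefault(lsh_key(feat), []).append(idx)

# A query inspects only one bucket instead of scanning all 10,000 features.
query = database[42] + 0.01 * rng.standard_normal(DIM)
candidates = buckets.get(lsh_key(query), [])
```

Because similar vectors tend to fall on the same side of each random hyperplane, near-duplicate features usually share a bucket, so lookup cost depends on bucket size rather than database size.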
One common Web search with a strong local component is the search for hotels and restaurants. Users try to identify hotels and restaurants that satisfy particular criteria, such as excellent food quality, service, and so on. One important component of such a search is the location of the hotel or restaurant: everything else being equal, a hotel next to the beach is typically more desirable than a hotel separated from the beachfront by a highway. Our goal is to build on our existing EconoMining project and identify the economic value of different location characteristics, given the associated local infrastructure. All else equal, restaurants located next to highways typically have lower prices than restaurants nestled in greenery, which in turn have different prices than those in the middle of a city's downtown, where there is a higher density of businesses and a higher concentration of human activity. Similarly, hotels next to high-traffic areas can charge higher prices than those on the outskirts of a city or in sparsely populated suburbs and still make a sale. By reversing the logic of this analysis, we are identifying the important location-based characteristics that influence the desirability of a particular venue, thereby improving the quality of local search for such venues.
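The kind of analysis sketched above, inferring the economic value of location characteristics from observed prices, can be illustrated as a simple least-squares regression. The data, feature names, and effect sizes below are fabricated for illustration and are not from the EconoMining project:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500

# Hypothetical binary location features for n restaurants.
near_highway = rng.integers(0, 2, n)
near_beach = rng.integers(0, 2, n)

# Simulated prices: beach proximity adds value, highway proximity subtracts.
price = 30.0 - 5.0 * near_highway + 8.0 * near_beach + rng.normal(0, 2, n)

# Ordinary least squares with an intercept column.
X = np.column_stack([np.ones(n), near_highway, near_beach])
coef, *_ = np.linalg.lstsq(X, price, rcond=None)

# coef[1] estimates the highway discount, coef[2] the beach premium.
```

Run in reverse, the fitted coefficients indicate which location characteristics raise or lower a venue's value, which is the signal a local-search ranker could exploit.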
A wide variety of maps is available from online sources, including street maps, property survey maps, maps of oil and natural gas fields, and so on. However, there is no automatic method for determining the coverage of a map or for fusing a map with an aerial image. In this project we are developing a general approach to finding online maps and aligning them with aerial imagery. For instance, aligning abstract street maps with known road networks and other maps, and integrating them with Virtual Earth, will enable users to select individual map 'layers' and display them on available aerial images of a particular region, making the result as specific or as general as suits their needs.
This project builds a New View on News: a geo-tagger identifies the geographical references in news articles, which are then displayed on a map interface such as the one provided by Microsoft Virtual Earth. Articles are grouped by topic, and topics are associated with their corresponding markers on the map interface. News topics associated with a marker are ordered by how popular the stories are with the general public, and the zoom level of the interface lets users choose markers on the map and browse news from international down to local events. Design issues in the geo-tagger are being investigated as well. The front-end of the system uses sophisticated spatial querying functionality like that currently available in the Spatial And Non-spatial Database (SAND) Internet Browser, developed at the University of Maryland, College Park, with functionality defined by the different types of tables and attributes it supports.
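The pipeline described, tag articles with places, attach them to map markers, and rank each marker's stories by popularity, can be sketched as follows. The gazetteer, articles, and view counts here are toy stand-ins, not the project's actual geo-tagger:

```python
from collections import defaultdict

# Tiny hypothetical gazetteer mapping place names to (lat, lon) markers.
GAZETTEER = {"paris": (48.86, 2.35), "tokyo": (35.68, 139.69)}

articles = [
    {"title": "Election results in Paris", "views": 900},
    {"title": "Paris transit strike continues", "views": 400},
    {"title": "Tokyo markets rally", "views": 700},
]

def geotag(article):
    """Return the markers whose place names appear in the article title."""
    words = article["title"].lower().split()
    return [GAZETTEER[w] for w in words if w in GAZETTEER]

# Group articles under each map marker, most popular story first.
markers = defaultdict(list)
for art in articles:
    for marker in geotag(art):
        markers[marker].append(art)
for stories in markers.values():
    stories.sort(key=lambda a: a["views"], reverse=True)
```

A real geo-tagger must also disambiguate place names (Paris, Texas vs. Paris, France), which is one of the design issues the project investigates.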
We are reconstructing accurate, high-resolution surface models of the world's sites, cities, and landscapes. Much of the requisite imagery is already available, or becoming available, on the Internet. We envision an automated Web geometry crawler that processes images on the Internet, computes matches with other images of the same site, and integrates them, together with satellite and aerial imagery, into Microsoft Virtual Earth-style models. Ultimately, we aim to reconstruct models of sites all over the world; given the vast and growing range of imagery available, the models could eventually cover large areas of the earth's surface. We plan to build the underlying technology that will enable this massive-scale reconstruction.
On Testing Non-Testable Information Retrieval Systems with Geographic Components on the Web
Zhi Quan Zhou
University of Wollongong, Australia;
The University of Hong Kong, China;
Beijing University of Aeronautics and Astronautics, China;
Swinburne University of Technology, Australia
Testing and debugging account for over fifty percent of the total cost of software development. Compared with other quality aspects of information retrieval systems on the Web, such as performance and capacity, functional correctness is more fundamental, but its verification suffers from the oracle problem: it is very expensive or even impossible to decide whether the outcomes of executions on real-world data are correct. For example, how can testers decide whether the route returned by a shortest-route system is indeed the shortest among all possible routes, or whether the results returned by a Web search engine are complete? Using a metamorphic testing method that verifies necessary properties of software applications, we have detected various failures in the Microsoft search engine Live Search, as well as in other major search engines such as Google, Yahoo!, and Lycos. Based on these preliminary research findings, we are developing a novel method for detecting failures in Web search engines, and we shall develop a fully automatic method for testing information retrieval systems with geographic components on the Web.
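A metamorphic relation of the kind this approach relies on can be illustrated with a toy search engine: results for a conjunctive query must be a subset of the results for either term alone, so a violation signals a failure without any oracle for the "correct" result set. The searcher and corpus below are illustrative stand-ins, not any real engine:

```python
def search(corpus, *terms):
    """Toy searcher: return documents containing all query terms."""
    return [doc for doc in corpus if all(t in doc.split() for t in terms)]

corpus = [
    "virtual earth imagery",
    "earth imagery archive",
    "street imagery",
]

# Metamorphic relation: results for "imagery AND earth" must be a subset
# of results for "imagery" alone; no ground-truth result set is needed.
single = search(corpus, "imagery")
conjunctive = search(corpus, "imagery", "earth")
violated = not set(conjunctive) <= set(single)
```

Checking such necessary properties across many follow-up queries is what allowed failures to be detected in deployed search engines despite the absence of a test oracle.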