Detecting Nearly Duplicated Records in Location Datasets

  • Yu Zheng,
  • Xixuan Feng,
  • Xing Xie,
  • Shuang Peng,
  • James Fu

Proceedings of the 18th ACM SIGSPATIAL International Conference on Advances in Geographic Information Systems

Best Paper Award

The quality of a local search engine, such as Google or Bing Maps, relies heavily on its geographic datasets. Typically, these datasets are obtained from multiple sources, e.g., different vendors or public yellow-page websites. As a result, the same location entity, such as a restaurant, may have multiple records with slightly different representations of its title and address across data sources. For instance, ‘Seattle Premium Outlets’ and ‘Seattle Premier Outlet Mall’ describe the same outlet mall at the same place, although their titles are not identical. This leads to many nearly duplicated records in a location database, which complicates data management and confuses users with multiple search results for a single query. To detect these nearly duplicated records, we propose a machine-learning-based approach comprising three steps: candidate selection, feature extraction, and training/inference. Three key features, name similarity, address similarity, and category similarity, together with corresponding metrics, are proposed to model the differences between two entity records. We evaluate our method with extensive experiments on a large-scale real dataset; both the precision and recall of our method exceed 90%.
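
To make the three-step pipeline concrete, the sketch below extracts the three similarity features for a single candidate pair and prints the resulting feature vector. It is a minimal illustration, not the paper's implementation: the metrics (token-level Jaccard for names, a character-level ratio for addresses, set overlap for categories), the record schema (title, address, categories), and the example addresses are all assumptions made for the sketch; in the full approach, a classifier trained on labeled pairs would consume this vector at the inference step.

    # A minimal sketch, assuming illustrative similarity metrics and a
    # hypothetical record schema; the paper's own metrics and classifier
    # are not reproduced here.
    from difflib import SequenceMatcher

    def name_similarity(a: str, b: str) -> float:
        """Token-level Jaccard similarity between two titles (stand-in metric)."""
        ta, tb = set(a.lower().split()), set(b.lower().split())
        return len(ta & tb) / len(ta | tb) if ta | tb else 0.0

    def address_similarity(a: str, b: str) -> float:
        """Character-level similarity ratio between two addresses (stand-in metric)."""
        return SequenceMatcher(None, a.lower(), b.lower()).ratio()

    def category_similarity(a: set, b: set) -> float:
        """Jaccard overlap between two category sets (stand-in metric)."""
        return len(a & b) / len(a | b) if a | b else 0.0

    def extract_features(r1: dict, r2: dict) -> list:
        """Step 2 (feature extraction): map a candidate record pair to a feature vector."""
        return [
            name_similarity(r1["title"], r2["title"]),
            address_similarity(r1["address"], r2["address"]),
            category_similarity(set(r1["categories"]), set(r2["categories"])),
        ]

    # Hypothetical candidate pair; step 1 (candidate selection) would normally
    # limit comparisons to spatially nearby records before this point.
    r1 = {"title": "Seattle Premium Outlets",
          "address": "1000 Example Pkwy",       # hypothetical address
          "categories": ["shopping"]}
    r2 = {"title": "Seattle Premier Outlet Mall",
          "address": "1000 Example Parkway",    # hypothetical address
          "categories": ["shopping", "mall"]}

    # Step 3 (training/inference): the paper trains a classifier on labeled
    # pairs; here we only print the feature vector such a model would consume.
    print(extract_features(r1, r2))

For this pair, the name similarity is low (the titles share only one token) while the address and category similarities are high, which is exactly the kind of pattern a trained classifier can learn to label as a near-duplicate.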