Fast Approximate k-Means via Cluster Closures

Jing Wang1, Jingdong Wang2, Qifa Ke3, Gang Zeng1, and Shipeng Li2
1Peking University  2Microsoft Research Asia  3Microsoft Research Silicon Valley
 
Overview
 

Abstract

K-means, a simple and effective clustering algorithm, is one of the most widely used algorithms in the computer vision community. Traditional k-means is an iterative algorithm — in each iteration new cluster centers are computed and each data point is re-assigned to its nearest center. The cluster re-assignment step becomes prohibitively expensive when the numbers of data points and cluster centers are large.
    In this paper, we propose a novel approximate k-means algorithm that greatly reduces the computational complexity of the assignment step. Our approach is motivated by the observation that most active points — those that change their cluster assignment in an iteration — lie on or near cluster boundaries. The idea is to efficiently identify those active points by pre-assembling the data into groups of neighboring points using multiple random spatial partition trees, and to use this neighborhood information to construct a closure for each cluster, so that only a small number of candidate clusters needs to be considered when assigning a data point to its nearest cluster. Through complexity analysis, clustering experiments on real data, and applications to image retrieval, we show that our approach outperforms state-of-the-art approximate k-means algorithms in both clustering quality and efficiency.
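The assignment scheme described in the abstract can be illustrated with a minimal sketch. This is not the paper's implementation: the function names are hypothetical, the random spatial partition trees are approximated here by recursive random-hyperplane splits, and the initial exact assignment pass is only affordable at toy scale. The key step is that each point considers only the clusters owned by its pre-computed neighbors (the cluster-closure idea), rather than all k centers.

```python
import numpy as np

def random_partition_groups(X, leaf_size, rng):
    """Split points with random hyperplanes until each group is small;
    a cheap stand-in for the paper's random spatial partition trees."""
    groups, stack = [], [np.arange(len(X))]
    while stack:
        idx = stack.pop()
        if len(idx) <= leaf_size:
            groups.append(idx)
            continue
        d = rng.standard_normal(X.shape[1])        # random projection direction
        proj = X[idx] @ d
        med = np.median(proj)
        left, right = idx[proj <= med], idx[proj > med]
        if len(left) == 0 or len(right) == 0:      # degenerate split: stop here
            groups.append(idx)
            continue
        stack.extend([left, right])
    return groups

def closure_kmeans(X, k, n_trees=3, leaf_size=20, n_iter=10, seed=0):
    """Hypothetical sketch of approximate k-means with cluster closures."""
    rng = np.random.default_rng(seed)
    n = len(X)
    # Neighborhood of each point: union of its leaf groups over all trees.
    neighbors = [set() for _ in range(n)]
    for _ in range(n_trees):
        for g in random_partition_groups(X, leaf_size, rng):
            members = g.tolist()
            for i in members:
                neighbors[i].update(members)
    centers = X[rng.choice(n, k, replace=False)].copy()
    # One exact pass to initialize assignments (fine at sketch scale).
    assign = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
    for _ in range(n_iter):
        for i in range(n):
            # Candidate clusters: those whose closure contains point i,
            # i.e. clusters owning at least one neighbor of i.
            cand = {assign[j] for j in neighbors[i]}
            cand.add(assign[i])
            cand = np.fromiter(cand, dtype=int)
            d2 = ((centers[cand] - X[i]) ** 2).sum(-1)
            assign[i] = cand[np.argmin(d2)]
        for c in range(k):                          # standard center update
            mask = assign == c
            if mask.any():
                centers[c] = X[mask].mean(0)
    return centers, assign
```

Because interior points have neighbors that all share one label, their candidate set collapses to a single cluster and their distance computations become trivial; only boundary points compare against several centers, which is where the speedup over exact Lloyd iterations comes from.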

 

Download

| PDF |
 

Bibtex

@inproceedings{WangWKZL12,
  author    = "Jing Wang and Jingdong Wang and Qifa Ke and Gang Zeng and Shipeng Li",
  title     = "Fast Approximate k-Means via Cluster Closures",
  booktitle = "CVPR",
  year      = "2012",
}
 

Results

Clustering performance comparison
 
Performance Comparison 1
Comparison1
Clustering performance in terms of within-cluster sum of squared distortions (WCSSD) vs. time. The first row shows results of clustering the 1M SIFT dataset into 0.5K, 2K, and 10K clusters, respectively;
the second row shows results on the 1M tiny image dataset.
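The WCSSD metric used in these plots is the standard k-means objective. A minimal sketch (the function name is illustrative, not from the paper):

```python
import numpy as np

def wcssd(X, centers, assign):
    """Within-cluster sum of squared distortions: total squared Euclidean
    distance from each point to its assigned cluster center."""
    return float(((X - centers[assign]) ** 2).sum())
```

Lower is better; comparing WCSSD at equal wall-clock time is what makes the speed/quality trade-off of approximate k-means variants visible.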
 
Performance Comparison 2
Comparison2
Clustering performance in terms of normalized mutual information (NMI) vs. time, on datasets of 200K tiny images, 500K tiny images, and 200K shopping images.
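NMI compares a clustering against ground-truth labels. A self-contained sketch using the common normalization I(A;B) / sqrt(H(A)·H(B)) — one of several normalizations in use, and not necessarily the exact variant used in the paper:

```python
import numpy as np

def nmi(labels_a, labels_b):
    """Normalized mutual information between two labelings:
    I(A;B) / sqrt(H(A) * H(B)), with natural logarithms."""
    a = np.asarray(labels_a)
    b = np.asarray(labels_b)
    n = len(a)
    ua, ia = np.unique(a, return_inverse=True)
    ub, ib = np.unique(b, return_inverse=True)
    # Joint distribution of the two labelings.
    joint = np.zeros((len(ua), len(ub)))
    np.add.at(joint, (ia, ib), 1.0)
    joint /= n
    pa, pb = joint.sum(1), joint.sum(0)
    nz = joint > 0
    mi = (joint[nz] * np.log(joint[nz] / (pa[:, None] * pb[None])[nz])).sum()
    ha = -(pa[pa > 0] * np.log(pa[pa > 0])).sum()
    hb = -(pb[pb > 0] * np.log(pb[pb > 0])).sum()
    return mi / np.sqrt(ha * hb) if ha > 0 and hb > 0 else 0.0
```

NMI is 1 for labelings that match up to a permutation of cluster ids and 0 for statistically independent labelings, which makes it robust to the arbitrary numbering of k-means clusters.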
 
Performance Comparison 3
Example3
Performance comparison of our approach, HKM, and AKM using different codebook sizes on the Oxford 5K dataset.
 
Visual Cluster results
 
Visual Example 1
Example1
Clustering results on the 200K product image dataset. Each cluster is represented by two rows of images randomly sampled from it.
 
Visual Example 2
Example2
Clustering results on the 500K tiny image dataset. Each cluster is represented by two rows of images randomly sampled from it.
 
Visual Example 3
Example3
Example retrieval results on the Oxford 5K dataset: the first image in each row is the query; the following images are the top-ranked results.
©Copyright Jingdong Wang 2012