In this paper we show how the complexity of nearest neighbor search (NNS)
on a metric space is related to the expansion of the metric space. Given a
metric space, we look at the graph obtained by connecting every pair of points
within a certain distance r. We then look at various notions of expansion in
this graph, relating them to the cell probe complexity of NNS for randomized and
deterministic, exact and approximate algorithms. For example, if the graph has
node expansion Φ, then we show that any deterministic t-probe data
structure for n points must use space S where (St/n)^{t} > Φ. We
show similar results for randomized algorithms as well. These relationships can
be used to derive most of the known lower bounds in well-known metric spaces
such as l1, l2, and l∞ by simply computing their expansion. In
the process, we strengthen and generalize our previous results [19].
Additionally, we unify the approach of [19] with the communication-complexity-based
approach. Our work reduces the problem of proving cell probe lower bounds
for near neighbor search to computing the appropriate expansion parameter.
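As a concrete illustration of the object at the heart of this reduction, the following sketch (the point set, the metric, and both helper functions are hypothetical, not from the paper) builds the r-neighborhood graph of a small metric space and estimates its node expansion by brute force over small subsets:

```python
from itertools import combinations

def neighborhood_graph(points, r, dist):
    """Adjacency sets of the graph connecting every pair of points within distance r."""
    adj = {i: set() for i in range(len(points))}
    for i, j in combinations(range(len(points)), 2):
        if dist(points[i], points[j]) <= r:
            adj[i].add(j)
            adj[j].add(i)
    return adj

def node_expansion(adj):
    """Brute-force vertex expansion: min over sets A with |A| <= n/2 of |N(A) \\ A| / |A|."""
    n = len(adj)
    best = float("inf")
    for k in range(1, n // 2 + 1):
        for subset in combinations(range(n), k):
            A = set(subset)
            boundary = set().union(*(adj[v] for v in A)) - A
            best = min(best, len(boundary) / len(A))
    return best

# Toy metric space: points on a line under |x - y|, connected when within r = 1.
points = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]
adj = neighborhood_graph(points, 1.0, lambda x, y: abs(x - y))
phi = node_expansion(adj)  # for this path graph: 1/3, attained by A = {0, 1, 2}
```

This exhaustive computation is exponential in the number of points and is only meant to make the definition concrete; the point of the paper is that computing (or bounding) this parameter for a metric space yields cell probe lower bounds.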
In our results, as in all previous results, the dependence on t is weak; that is,
the bound drops
exponentially in t. We show a much stronger (tight) time-space tradeoff for the
class of dynamic low-contention data structures. These are data structures that
support updates to the data set and that do not probe any single cell too
often.