Semidefinite Programming by Perceptron Learning

Thore Graepel, Ralf Herbrich, Andriy Kharechko, and John Shawe-Taylor

Abstract

We present a modified version of the perceptron learning algorithm (PLA) which solves semidefinite programs (SDPs) in polynomial time. The algorithm is based on the following three observations: (i) Semidefinite programs are linear programs with infinitely many (linear) constraints; (ii) every linear program can be solved by a sequence of constraint satisfaction problems with linear constraints; (iii) in general, the perceptron learning algorithm solves a constraint satisfaction problem with linear constraints in finitely many updates. Combining the PLA with a probabilistic rescaling algorithm (which, on average, increases the size of the feasible region) results in a probabilistic algorithm for solving SDPs that runs in polynomial time. We present preliminary results which demonstrate that the algorithm works, but is not competitive with state-of-the-art interior point methods.
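Observation (iii) refers to the classical perceptron convergence result: if a set of homogeneous linear constraints is feasible with positive margin, repeated additive updates on violated constraints terminate after finitely many steps. The sketch below (a minimal illustration, not the paper's algorithm; the function name and toy data are made up for this example) shows that basic update rule on a finite constraint set. The paper's method extends the same idea to the infinitely many linear constraints induced by the positive-semidefiniteness requirement, combined with a rescaling step.

```python
import numpy as np

def perceptron_feasibility(A, max_updates=10000):
    """Perceptron-style solver for the homogeneous feasibility problem:
    find w such that A @ w > 0 holds for every row of A.
    Returns a feasible w, or None if none is found within the update budget."""
    n_constraints, dim = A.shape
    w = np.zeros(dim)
    for _ in range(max_updates):
        margins = A @ w
        violated = np.where(margins <= 0)[0]
        if violated.size == 0:
            return w              # every constraint strictly satisfied
        w = w + A[violated[0]]    # classic perceptron update on a violated constraint
    return None

# Toy usage: find a vector with positive inner product with each constraint row.
A = np.array([[1.0, 2.0], [2.0, -1.0], [0.5, 0.5]])
w = perceptron_feasibility(A)
print(w, A @ w)
```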

Details

Publication type: Inproceedings
Published in: Advances in Neural Information Processing Systems 16
Pages: 457–464
Publisher: MIT Press