Finding Dense Subgraphs

A basic primitive in graph optimization is finding small induced subgraphs of a given graph with “many” edges. Questions of this nature arise often in practice, for instance in finding “communities” in social networks and in clustering-type problems. On the theory side, it captures questions such as “small-set expansion”, which has proved to be an intriguing open problem.

A simple formalization I will talk about is the Densest-k-subgraph problem, where the objective, given a graph G, is to find an induced subgraph on k vertices with as many edges as possible. There is a vast gap in our understanding of this problem in terms of the approximation ratio we can achieve. I will discuss the state of the art in approximation algorithms and try to point out why it is difficult to write convex programming relaxations for the problem. This is akin to difficulties that arise in other problems that have a “small-support” constraint, such as those arising in compressed sensing. An interesting aspect of the Densest-k-subgraph problem is that “random instances” seem to be the hardest for obtaining approximation algorithms.
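
For concreteness, here is a minimal brute-force sketch of the Densest-k-subgraph objective (an illustration added for this page, not material from the talk). The adjacency-dictionary format, function name, and example graph are assumptions made for the sketch, and the exhaustive search is only practical for very small graphs, since the problem is NP-hard in general.

    from itertools import combinations

    def densest_k_subgraph_bruteforce(adj, k):
        """adj: dict mapping each vertex to the set of its neighbours (undirected).
        Returns (vertex_set, edge_count) with the most induced edges over all k-subsets."""
        best_set, best_edges = None, -1
        for subset in combinations(list(adj), k):
            chosen = set(subset)
            # Count edges with both endpoints in the chosen set (u < v avoids double counting).
            edges = sum(1 for u in chosen for v in adj[u] if v in chosen and u < v)
            if edges > best_edges:
                best_set, best_edges = chosen, edges
        return best_set, best_edges

    # Example: a 5-cycle (vertices 0-4) with a triangle (5, 6, 7) attached at vertex 4;
    # the densest 3-vertex subgraph is the triangle, with 3 edges.
    adj = {
        0: {1, 4}, 1: {0, 2}, 2: {1, 3}, 3: {2, 4},
        4: {3, 0, 5}, 5: {4, 6, 7}, 6: {5, 7}, 7: {5, 6},
    }
    print(densest_k_subgraph_bruteforce(adj, 3))  # -> ({5, 6, 7}, 3)

The approximation algorithms discussed in the talk aim to do well on this objective efficiently, without such exhaustive search.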

I will then talk about related “continuous” optimization questions, such as q-to-p norms of matrices and singular values of matrices and tensors, and discuss potential future directions.
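
For reference, the q-to-p norm mentioned above has a standard definition (stated here for context, not taken from the abstract): for a matrix A,

    \[ \|A\|_{q \to p} \;=\; \max_{x \neq 0} \frac{\|Ax\|_p}{\|x\|_q}, \]

and the special case q = p = 2 recovers the largest singular value of A.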

Speaker Details

Aditya Bhaskara is a PhD candidate in Computer Science at Princeton University, advised by Moses Charikar. His research focuses on approximation algorithms for graph and matrix optimization problems and on understanding the power of mathematical programming relaxations. He is also interested in techniques from probability theory and convex geometry with applications to Computer Science. Previously, he obtained a B.Tech. in Computer Science from the Indian Institute of Technology, Bombay.

Speakers:
Aditya Bhaskara
Affiliation:
Princeton University