Are Neural Networks Optimal Approximation Algorithms?


This NeurIPS, we presented our work on the question of whether neural networks can be efficiently used to design optimal approximation algorithms. The answer is a conditional yes. Assuming the Unique Games Conjecture, there is a general semidefinite program (SDP) that achieves optimal approximation guarantees for all maximum constraint satisfaction problems. In our paper, we show that this SDP can be solved efficiently using a rather simple graph neural network (GNN) architecture. More specifically, gradient descent on the quadratically penalized Lagrangian of the SDP leads to message passing iterations, which we execute through a neural network that we call OptGNN.
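To make the "gradient descent becomes message passing" idea concrete, here is a minimal sketch using the Max-Cut SDP relaxation as an illustrative example (not the paper's exact formulation). Each vertex carries a vector `v_i`, forming a low-rank, Burer-Monteiro-style factorization of the SDP matrix, and the gradient of the quadratically penalized Lagrangian with respect to `v_i` turns out to be an aggregation of neighbor vectors plus a local penalty term. Names such as `penalty`, `step`, and `optgnn_style_iterations` are our own, not from the paper.

```python
# Sketch: gradient descent on a quadratically penalized SDP Lagrangian
# for Max-Cut. Objective (to minimize):
#     sum_{(i,j) in E} v_i . v_j  +  penalty * sum_i (|v_i|^2 - 1)^2
# The gradient w.r.t. v_i is  sum_{j in N(i)} v_j + 4*penalty*(|v_i|^2 - 1)*v_i,
# so each descent step is a message pass over the graph.
import numpy as np

def optgnn_style_iterations(edges, n, rank=8, step=0.05, penalty=1.0, iters=200):
    rng = np.random.default_rng(0)
    V = rng.normal(size=(n, rank))
    V /= np.linalg.norm(V, axis=1, keepdims=True)

    # adjacency lists for neighbor aggregation
    nbrs = [[] for _ in range(n)]
    for i, j in edges:
        nbrs[i].append(j)
        nbrs[j].append(i)

    for _ in range(iters):
        # message passing step: sum of neighbor embeddings
        msg = np.stack([V[nb].sum(axis=0) if nb else np.zeros(rank)
                        for nb in nbrs])
        # gradient of the penalized Lagrangian w.r.t. each v_i
        norms2 = (V * V).sum(axis=1, keepdims=True)
        grad = msg + 4.0 * penalty * (norms2 - 1.0) * V
        V -= step * grad
    return V

def round_cut(V):
    # Goemans-Williamson-style hyperplane rounding of the vectors
    r = np.random.default_rng(1).normal(size=V.shape[1])
    return (V @ r > 0).astype(int)

# toy usage: a 5-cycle
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
assignment = round_cut(optgnn_style_iterations(edges, n=5))
print(assignment)
```

In OptGNN, these hand-derived updates are replaced by learned message passing layers of the same shape, which is what lets a GNN execute the SDP solver.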

This leads to an architecture that achieves strong empirical results across several NP-hard combinatorial optimization problems. The paper has also been featured on MIT CSAIL news, so feel free to check that out for some additional commentary.