Optimal Embedding: HypOp

Combinatorial optimization is ubiquitous across science and industry. In recent years, the integration of artificial intelligence (AI) into scientific discovery has become increasingly seamless, providing means to enhance and accelerate research. One approach is to leverage machine learning (ML) methods to expedite and improve combinatorial optimization techniques for extremely challenging optimization tasks. Many combinatorial optimization problems are provably NP-hard, rendering most existing solvers non-scalable. Moreover, the continually growing size of today's datasets makes existing optimization methods inadequate for constrained optimization problems at such scales. To enable scalable and fast optimization algorithms, various learning-based approaches have been proposed in the literature.

In [Heydaribeni et al., Nature Machine Intelligence 2024], Koushanfar's team builds upon the unsupervised learning-based optimization method for quadratic-cost combinatorial optimization problems on graphs introduced by Schuetz et al. and presents HypOp, a new scalable solver for a wide range of constrained combinatorial optimization problems with arbitrary cost functions. Our approach handles problems with higher-order constraints by modeling them as hypergraphs; HypOp then trains hypergraph neural networks (HyperGNNs), a generalization of the graph neural networks employed by Schuetz et al. HypOp further combines the unsupervised HyperGNN with another generic optimization method, simulated annealing (SA), to boost its performance: incorporating SA helps mitigate the subpar local optima that HyperGNN training alone may produce.
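The pipeline can be made concrete with a minimal sketch, assuming an illustrative problem and architecture of our own choosing (this is not the HypOp implementation): a toy hypergraph, a tiny HyperGNN trained without labels on a differentiable relaxation of a hitting-set objective, rounding of the resulting node probabilities, and a simulated-annealing pass that refines the rounded assignment. The penalty weights, network size, and annealing schedule below are all illustrative assumptions.

```python
# A minimal sketch, not the authors' implementation: the task, architecture,
# and penalty weights are illustrative assumptions.
# Example task: minimum hitting set on a random hypergraph -- select as few
# nodes as possible so that every hyperedge contains at least one selected node.
import math, random
import torch
import torch.nn as nn

torch.manual_seed(0); random.seed(0)

n_nodes = 30
hyperedges = [random.sample(range(n_nodes), k=random.randint(2, 4)) for _ in range(40)]

# Incidence matrix (nodes x hyperedges) drives a simple two-step message pass,
# nodes -> hyperedges -> nodes, one way to generalize graph convolutions.
H = torch.zeros(n_nodes, len(hyperedges))
for j, e in enumerate(hyperedges):
    H[e, j] = 1.0
Dv = H.sum(1, keepdim=True).clamp(min=1)   # node degrees
De = H.sum(0, keepdim=True).clamp(min=1)   # hyperedge sizes

class TinyHyperGNN(nn.Module):
    def __init__(self, dim=16):
        super().__init__()
        self.emb = nn.Embedding(n_nodes, dim)
        self.lin1, self.lin2 = nn.Linear(dim, dim), nn.Linear(dim, 1)

    def forward(self):
        x = self.emb.weight
        e = (H.t() @ x) / De.t()                 # aggregate node features per hyperedge
        x = torch.relu(self.lin1((H @ e) / Dv))  # scatter hyperedge messages back to nodes
        return torch.sigmoid(self.lin2(x)).squeeze(-1)  # node selection probabilities

def relaxed_loss(p):
    # Differentiable surrogate: expected number of selected nodes plus a
    # penalty prod_{v in e}(1 - p_v) for each hyperedge uncovered in expectation.
    uncovered = torch.stack([torch.prod(1 - p[e]) for e in hyperedges]).sum()
    return p.sum() + 10.0 * uncovered

model = TinyHyperGNN()
opt = torch.optim.Adam(model.parameters(), lr=0.01)
for _ in range(300):                 # unsupervised training: no labels, only the relaxed cost
    opt.zero_grad()
    relaxed_loss(model()).backward()
    opt.step()

x = (model().detach() > 0.5).int().tolist()   # round probabilities to a 0/1 assignment

def hard_cost(assign):
    violated = sum(all(assign[v] == 0 for v in e) for e in hyperedges)
    return sum(assign) + 100 * violated

# Simulated-annealing refinement seeded with the HyperGNN solution, to escape
# subpar local optima left by training alone.
T = 1.0
for _ in range(2000):
    cand = x.copy()
    cand[random.randrange(n_nodes)] ^= 1      # flip one node in or out of the set
    d = hard_cost(cand) - hard_cost(x)
    if d < 0 or random.random() < math.exp(-d / T):
        x = cand
    T *= 0.999
print("selected nodes:", sum(x), "violated hyperedges:",
      sum(all(x[v] == 0 for v in e) for e in hyperedges))
```

The design choice this sketch tries to convey is that the penalty-based relaxation makes the objective differentiable, so standard gradient training applies without labeled data, while SA operates on the hard cost and only needs a good starting point.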

To build a scalable solver and expedite the optimization process, HypOp introduces two algorithms for parallel and distributed HyperGNN training. First, it develops a distributed training algorithm in which the hypergraph is partitioned across a number of servers, each holding only a local view of the hypergraph; the servers then collaboratively train the HyperGNN and solve the optimization problem. Second, HypOp proposes a parallel HyperGNN training approach in which the costs associated with the constraints are computed in a scalable manner. We further demonstrate the transferability of our models, showing that a model trained on one optimization problem can solve different problems on the same hypergraph through transfer learning; this not only highlights the generalizability of HypOp but also significantly accelerates the optimization process. We evaluate HypOp with comprehensive experiments designed to provide insight into unsupervised learning-based optimization algorithms and their effectiveness across diverse problem types, and we validate its scalability on very large graphs, showing that parallel and distributed training yield high-quality solutions even at that scale. With the efficiency and accuracy of HypOp established, we will apply it to optimal embedding (as well as to other optimization problems) in chip design.
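Both the parallel and the distributed variants rest on the fact that the constraint costs decompose over hyperedges: each worker, or each server that holds only its partition of the hypergraph, can evaluate the penalties of its own hyperedges, and only the partial sums need to be combined. The self-contained sketch below illustrates that sharding pattern under our own assumptions (random data, hypothetical function names, a hitting-set-style penalty); the full distributed algorithm additionally trains local HyperGNNs that are synchronized across servers, which is beyond this sketch.

```python
# Illustrative sharding of per-constraint (per-hyperedge) cost evaluation.
# Data, names, and shard layout are our assumptions, not the paper's code.
import random
from concurrent.futures import ProcessPoolExecutor

random.seed(0)
N_NODES = 1000
HYPEREDGES = [tuple(random.sample(range(N_NODES), random.randint(2, 5)))
              for _ in range(20000)]

def shard_penalty(args):
    """Penalty contributed by one shard of hyperedges for a 0/1 assignment x:
    the number of hyperedges with no selected node (constraint violated)."""
    x, shard = args
    return sum(all(x[v] == 0 for v in e) for e in shard)

def total_penalty(x, n_workers=4):
    # Each shard could equally live on a separate server with only a local
    # view of the hypergraph; only the scalar partial sums are exchanged.
    shards = [HYPEREDGES[i::n_workers] for i in range(n_workers)]
    with ProcessPoolExecutor(max_workers=n_workers) as pool:
        return sum(pool.map(shard_penalty, [(x, s) for s in shards]))

if __name__ == "__main__":
    x = [random.randint(0, 1) for _ in range(N_NODES)]
    print("violated constraints:", total_penalty(x))
```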

Team Members

Farinaz Koushanfar1

Collaborators

Nasimeh Heydaribeni1

1. UC San Diego

Publications

Heydaribeni et al., Nature Machine Intelligence (2024)