Graph Representation Learning
Graph neural networks (GNNs) have become a highly successful architecture across many domains. In [Barzilay et al., Nature Reviews Methods Primers 2024], we contributed an introductory survey on GNNs in a high-profile journal.
Scalability. Many graph algorithms, including several for graph representation learning, are expensive to scale to large graphs. In [Le et al., ICLR 2024] we develop a signal sampling theory for graph limits (i.e., graphons): we derive a criterion for an optimal sampling set and an algorithm, based on the limit graph, that selects such a set. We then show that the selected set applies consistently to graphs of varying sizes. Our results rely on a connection to spectral clustering and Gaussian elimination. An illustrative sampling routine is sketched below.
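To make the sampling-set idea concrete, here is a minimal sketch of selecting nodes so that bandlimited (smooth) graph signals remain identifiable from their samples. This is a standard greedy spectral heuristic written for illustration only; it is not the graphon-based algorithm of [Le et al., ICLR 2024], and all function names and parameters are assumptions for this sketch.

```python
# Illustrative sketch (not the algorithm from [Le et al., ICLR 2024]):
# greedily pick a sampling set so bandlimited signals stay recoverable.
import numpy as np


def greedy_sampling_set(adjacency: np.ndarray, bandwidth: int, num_samples: int) -> list[int]:
    """Greedily select nodes so that the sampled rows of the low-frequency
    Laplacian eigenvectors stay well-conditioned (a common heuristic for
    sampling bandlimited graph signals)."""
    degrees = adjacency.sum(axis=1)
    laplacian = np.diag(degrees) - adjacency
    # Low-frequency eigenvectors span the space of smooth (bandlimited) signals.
    _, eigvecs = np.linalg.eigh(laplacian)
    basis = eigvecs[:, :bandwidth]            # shape (n_nodes, bandwidth)

    selected: list[int] = []
    for _ in range(num_samples):
        best_node, best_score = None, -np.inf
        for node in range(adjacency.shape[0]):
            if node in selected:
                continue
            rows = basis[selected + [node], :]
            # The smallest singular value measures how stably a bandlimited
            # signal can be recovered from this candidate sampling set.
            score = np.linalg.svd(rows, compute_uv=False)[-1]
            if score > best_score:
                best_node, best_score = node, score
        selected.append(best_node)
    return selected


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = (rng.random((30, 30)) < 0.2).astype(float)
    A = np.triu(A, 1); A = A + A.T            # random undirected graph
    print(greedy_sampling_set(A, bandwidth=5, num_samples=8))
```

The sketch works directly on a finite graph; the point of the paper is precisely to move this kind of criterion to the graph limit, so that one sampling set can be reused across graphs of different sizes.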
Sign equivariance for spectral graph representation learning and point clouds. Recent work has shown the utility of machine learning models that respect the structure and symmetries of eigenvectors. These works promote sign invariance, since for any eigenvector v, the negation -v is also an eigenvector. However, in [Lim et al., NeurIPS 2023] we show that sign invariance is theoretically limited for tasks such as building orthogonally equivariant models and learning node positional encodings for link prediction in graphs. We demonstrate the benefits of sign equivariance for these tasks. To obtain these benefits, we develop novel sign-equivariant neural network architectures. Our models are based on a new analytic characterization of sign-equivariant polynomials and thus inherit provable expressiveness properties. Controlled synthetic experiments show that our networks achieve the theoretically predicted benefits of sign-equivariant models; a toy sign-equivariant layer is sketched below.
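To illustrate what sign equivariance means in practice, the following toy layer maps each eigenvector column v to v multiplied elementwise by a function of its squares, so flipping the sign of any input column flips the sign of the corresponding output column. This is a minimal sketch under assumed shapes and names, not the architecture of [Lim et al., NeurIPS 2023].

```python
# Minimal sign-equivariant map on eigenvector features (a toy layer, not the
# architecture from [Lim et al., NeurIPS 2023]); shapes and names are assumptions.
import numpy as np


def sign_equivariant_layer(V: np.ndarray, W1: np.ndarray, W2: np.ndarray) -> np.ndarray:
    """Map eigenvector features V of shape (n_nodes, k) so that flipping the
    sign of any column of V flips the sign of the corresponding output column.

    Construction: out[:, i] = V[:, i] * gate[:, i], where the gate depends only
    on the elementwise squares of V and is therefore invariant to sign flips.
    """
    squares = V ** 2                       # sign-invariant features, shape (n, k)
    hidden = np.tanh(squares @ W1)         # any function of the squares works
    gate = hidden @ W2                     # shape (n, k): one gate per column
    return V * gate                        # odd in each column => sign-equivariant


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n, k = 10, 4
    V = rng.normal(size=(n, k))
    W1, W2 = rng.normal(size=(k, 16)), rng.normal(size=(16, k))

    out = sign_equivariant_layer(V, W1, W2)
    flips = np.array([1.0, -1.0, 1.0, -1.0])          # flip columns 1 and 3
    out_flipped = sign_equivariant_layer(V * flips, W1, W2)
    assert np.allclose(out * flips, out_flipped)       # equivariance check
```

The key design choice is that all learned transformations act on sign-invariant quantities (the squares), and the sign information re-enters only through the final elementwise product, which makes the map odd in each eigenvector column.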
Team Members
Hamed Hassani¹
Stefanie Jegelka²
Amin Karbasi³
Yusu Wang⁴
Collaborators
Regina Barzilay²
Gabriele Corso²
Tommi Jaakkola²
Haggai Maron⁵
Joshua Robinson⁶
Hannes Stärk²
1. University of Pennsylvania
2. MIT
3. Yale University
4. UC San Diego
5. NVIDIA/Technion
6. Stanford University
Publications
Barzilay et al., Nature Reviews Methods Primers 2024
Le et al., ICLR 2024
Lim et al., NeurIPS 2023