Parallelization

A major challenge in Federated Learning is handling Byzantine machines, which can behave completely arbitrarily. This can happen due to software or hardware crashes, poor communication links between local machines and the center machine, stalled computations, or even coordinated or malicious attacks by a third party. In order to fit complex machine […]
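
One common line of defence against such machines is to replace plain averaging at the center machine with a robust aggregator. The sketch below is a plain NumPy illustration (the worker setup and the coordinate-wise median are generic examples, not the specific method studied here); it shows how a single Byzantine report corrupts the mean but not the median:

```python
# Sketch: robust aggregation at the center machine (illustration only).
# `local_updates` is a list of update vectors reported by the local machines;
# a Byzantine machine may report an arbitrary vector.
import numpy as np

def mean_aggregate(local_updates):
    # Plain averaging: a single Byzantine machine can shift the result arbitrarily.
    return np.mean(local_updates, axis=0)

def median_aggregate(local_updates):
    # Coordinate-wise median: robust as long as fewer than half the machines are Byzantine.
    return np.median(local_updates, axis=0)

rng = np.random.default_rng(0)
honest = [rng.normal(loc=1.0, scale=0.1, size=5) for _ in range(9)]
byzantine = [np.full(5, 1e6)]                     # one machine reports garbage
updates = honest + byzantine

print(mean_aggregate(updates))    # dominated by the Byzantine report
print(median_aggregate(updates))  # stays close to the honest updates (around 1.0)
```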

Neural Networks that Learn Algorithms Implicitly

The remarkable capability of Transformers to show reasoning and few-shot abilities, without any fine-tuning, is widely conjectured to stem from their ability to implicitly simulate multi-step algorithms, such as gradient descent, with their weights in a single forward pass. Recently, there has been progress in understanding this complex phenomenon from an […]
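
To make the conjecture concrete, the sketch below spells out the reference algorithm a Transformer is believed to emulate in-context: a few steps of gradient descent on least squares over the prompt examples. It is a plain NumPy illustration of that algorithm, not a Transformer implementation, and the function name and parameters are hypothetical:

```python
# Sketch of the algorithm a Transformer is conjectured to emulate in-context:
# a few gradient-descent steps on least squares over the prompt examples.
import numpy as np

def gd_in_context_prediction(X, y, x_query, steps=10, lr=0.1):
    """Predict y at x_query by running `steps` GD steps on 0.5*||Xw - y||^2 from w = 0."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        w -= lr * X.T @ (X @ w - y) / len(y)
    return x_query @ w

rng = np.random.default_rng(0)
w_true = rng.normal(size=4)
X = rng.normal(size=(16, 4))          # in-context (prompt) examples
y = X @ w_true
x_query = rng.normal(size=4)

print(gd_in_context_prediction(X, y, x_query))  # moves toward x_query @ w_true as `steps` grows
print(x_query @ w_true)
```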

Learning for Optimization

Recently, neural approaches have shown promise in tackling (combinatorial) optimization problems in a data-driven manner. At the same time, for many problems, especially geometric optimization problems, fields such as theoretical computer science and computational geometry have developed many beautiful geometric ideas and algorithmic insights. Our goal is to infuse geometric and algorithmic ideas […]

Powerful Learning Models for Graphs and Hypergraphs

In practice, depending on the type of data and the problem at hand, we often need to design suitable neural architectures to produce efficient and effective learning models. Many practical problems from our use-domains operate on (hyper-)graph types of data. Wang’s team has explored the following: […]
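
As background, the sketch below shows a minimal message-passing layer on a graph in plain NumPy. It is a generic illustration of learning models on graph data, not one of the specific architectures explored in the project; hypergraph variants would aggregate over hyperedges instead of pairwise edges.

```python
# Minimal message-passing layer on a graph (generic illustration only).
# Each node averages its neighbours' features, applies a learned linear map, then a ReLU.
import numpy as np

def message_passing_layer(A, H, W):
    """A: (n, n) adjacency matrix, H: (n, d) node features, W: (d, d_out) weights."""
    A_hat = A + np.eye(A.shape[0])            # add self-loops
    deg = A_hat.sum(axis=1, keepdims=True)
    messages = (A_hat @ H) / deg              # mean-aggregate neighbour features
    return np.maximum(messages @ W, 0.0)      # linear map + ReLU

rng = np.random.default_rng(0)
A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)  # a 3-node path graph
H = rng.normal(size=(3, 4))
W = rng.normal(size=(4, 8))
print(message_passing_layer(A, H, W).shape)   # (3, 8)
```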

Nonlinear Feature Learning in Neural Networks

Learning non-linear features from data is thought to be one of the fundamental reasons for the success of deep neural networks. This has been observed in a wide range of domains, including computer vision and natural language processing. Among the many theoretical approaches to studying neural nets, much work has […]

Deep Learning with Symmetries

Sample Complexity Gain of Invariances: In practice, encoding invariances into models improves sample complexity. In [Tahmasebi & Jegelka, NeurIPS 2023], we study this phenomenon from a theoretical perspective. In particular, we provide minimax optimal rates for kernel ridge regression on compact manifolds, with a target function that is invariant to a […]
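
The mechanism can be illustrated with a small sketch: averaging a base kernel over a finite symmetry group yields an invariant kernel that can be plugged into kernel ridge regression. The two-element group and the data below are made up for illustration; the paper's setting (compact manifolds, minimax optimal rates) is more general.

```python
# Sketch: an invariant kernel via group averaging, used in kernel ridge regression.
import numpy as np

def rbf(x, z, gamma=1.0):
    return np.exp(-gamma * np.sum((x - z) ** 2))

def invariant_kernel(x, z, group):
    # Average the base kernel over the symmetry group acting on one argument.
    return np.mean([rbf(x, g @ z) for g in group])

# Symmetry group: sign flips of the first coordinate (a two-element group).
group = [np.diag([1.0, 1.0]), np.diag([-1.0, 1.0])]

rng = np.random.default_rng(0)
X = rng.normal(size=(30, 2))
y = np.cos(X[:, 0]) + X[:, 1]          # target is invariant to flipping the first coordinate

K = np.array([[invariant_kernel(a, b, group) for b in X] for a in X])
alpha = np.linalg.solve(K + 1e-3 * np.eye(len(X)), y)    # kernel ridge regression
x_test = np.array([0.5, -0.2])
pred = sum(a * invariant_kernel(x_test, xi, group) for a, xi in zip(alpha, X))
print(pred, np.cos(0.5) - 0.2)         # prediction vs. true value of the invariant target
```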

Graph Representation Learning

Graph neural networks (GNNs) have been a very successful class of architectures in many domains. With [Barzilay et al., Nature Reviews Methods Primers 2024] we wrote an introductory survey on GNNs in a high-profile journal. Scalability. Many graph algorithms, including some for graph representation learning, are expensive to scale to large graphs. In [Le […]
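
One common way to cope with this cost, sketched below, is to aggregate over a small random sample of neighbours rather than all of them. This is a generic illustration of neighbour sampling, not necessarily the approach taken in the cited work, and the helper and graph are made up:

```python
# Sketch: neighbour sampling for scalable graph learning (generic illustration).
import numpy as np

def sample_neighbours(adj_list, node, k, rng):
    """Return at most k uniformly sampled neighbours of `node`."""
    nbrs = adj_list[node]
    if len(nbrs) <= k:
        return list(nbrs)
    return list(rng.choice(nbrs, size=k, replace=False))

rng = np.random.default_rng(0)
adj_list = {0: [1, 2, 3, 4, 5], 1: [0], 2: [0], 3: [0], 4: [0], 5: [0]}  # a star graph
print(sample_neighbours(adj_list, 0, k=2, rng=rng))  # two of node 0's five neighbours
```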

Differentiable Extensions with Rounding Guarantees for Combinatorial Optimization over Permutations and Trees

Continuously extending combinatorial optimization objectives is a powerful technique commonly applied to the optimization of set functions. However, few such methods exist for extending functions on permutations, despite the fact that many combinatorial optimization problems, such as the traveling salesperson problem (TSP), are […]
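
A widely used version of the relax-and-round idea for permutations, sketched below, relaxes permutation matrices to doubly stochastic matrices via Sinkhorn normalization and rounds back with a linear assignment. It illustrates the general setting only; it is not the specific differentiable extension with rounding guarantees developed in this project.

```python
# Sketch of relax-and-round over permutations: Sinkhorn relaxation plus assignment rounding.
import numpy as np
from scipy.optimize import linear_sum_assignment

def sinkhorn(logits, iters=50):
    """Project a score matrix onto (approximately) doubly stochastic matrices."""
    P = np.exp(logits)
    for _ in range(iters):
        P /= P.sum(axis=1, keepdims=True)   # normalize rows
        P /= P.sum(axis=0, keepdims=True)   # normalize columns
    return P

def round_to_permutation(P):
    """Round a doubly stochastic matrix to a permutation via maximum assignment."""
    rows, cols = linear_sum_assignment(-P)
    perm = np.zeros_like(P)
    perm[rows, cols] = 1.0
    return perm

rng = np.random.default_rng(0)
logits = rng.normal(size=(4, 4))
P = sinkhorn(logits)            # differentiable, lives in the Birkhoff polytope
print(np.round(P, 2))
print(round_to_permutation(P))  # a valid permutation matrix
```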
