Parallelization

A major challenge in Federated Learning is handling Byzantine machines, whose behavior can be completely arbitrary. This can happen due to software or hardware crashes, poor communication links between local machines and the center machine, stalled computations, and even coordinated or malicious attacks by a third party. In order to fit complex machine […]
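The excerpt does not specify the defense studied in this project; as a minimal illustration of Byzantine robustness, the sketch below replaces the usual averaging of worker updates with a coordinate-wise median, a standard baseline that tolerates a minority of arbitrarily corrupted updates. The function name and toy setup are ours, not from the project.

```python
import numpy as np

def coordinate_wise_median(updates):
    """Aggregate worker updates by the median of each coordinate, a
    standard Byzantine-robust alternative to plain averaging."""
    return np.median(np.stack(updates), axis=0)

# Toy round: 4 honest workers send similar gradients, 1 Byzantine worker
# sends an arbitrary vector; the median stays close to the honest mean.
rng = np.random.default_rng(0)
honest = [np.ones(3) + 0.1 * rng.standard_normal(3) for _ in range(4)]
byzantine = [np.array([1e6, -1e6, 1e6])]
print(coordinate_wise_median(honest + byzantine))  # close to [1, 1, 1]
```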


Neural Networks that Learn Algorithms Implicitly

The remarkable capability of Transformers to exhibit reasoning and few-shot abilities without any fine-tuning is widely conjectured to stem from their ability to implicitly simulate multi-step algorithms, such as gradient descent, with their weights in a single forward pass. Recently, there has been progress in understanding this complex phenomenon from an […]
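One concrete instance of this view, in the spirit of known constructions for in-context linear regression (the excerpt does not spell out which construction the project studies): the prediction of a single unnormalized linear self-attention readout coincides with the prediction after one gradient-descent step on the in-context least-squares loss, started from zero weights. The sketch below checks this identity numerically on a toy example of our own.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n, lr = 5, 20, 0.01
w_true = rng.standard_normal(d)
X = rng.standard_normal((n, d))   # in-context inputs
y = X @ w_true                    # in-context targets
x_q = rng.standard_normal(d)      # query input

# One gradient-descent step on the in-context least-squares loss,
# starting from w = 0: w_1 = lr * sum_i y_i x_i.
w_1 = lr * X.T @ y
gd_pred = w_1 @ x_q

# The same prediction written as an (unnormalized) linear self-attention
# readout: the query attends to the context inputs with values y.
attn_pred = lr * (x_q @ X.T) @ y

print(np.allclose(gd_pred, attn_pred))  # True
```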


Learning for Optimization

Recently, neural approaches have shown promise in tackling (combinatorial) optimization problems in a data-driven manner. At the same time, for many problems, especially geometric optimization problems, beautiful geometric ideas and algorithmic insights have been developed in fields such as theoretical computer science and computational geometry. Our goal is to infuse geometric and algorithmic ideas […]
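The excerpt names no specific method; one minimal illustration of the data-driven framing is a greedy tour constructor whose next-city scoring function is pluggable, so a learned model could stand in for the hand-coded heuristic. Everything below (the function names and the distance-based placeholder score) is a hypothetical sketch, not a method from this project.

```python
import numpy as np

def greedy_tour(coords, score):
    """Build a TSP tour greedily, picking at each step the unvisited
    city with the highest score(current, candidate). A learned model
    would supply `score`; here it is a pluggable function."""
    n = len(coords)
    tour, visited = [0], {0}
    while len(tour) < n:
        cur = tour[-1]
        nxt = max((j for j in range(n) if j not in visited),
                  key=lambda j: score(coords[cur], coords[j]))
        tour.append(nxt)
        visited.add(nxt)
    return tour

# Placeholder "learned" score: negative Euclidean distance, which
# recovers the classical nearest-neighbor heuristic.
rng = np.random.default_rng(0)
pts = rng.random((8, 2))
print(greedy_tour(pts, lambda a, b: -np.linalg.norm(a - b)))
```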


Powerful Learning Models for Graphs and Hypergraphs

In practice, depending on the type of data and the problem at hand, we often need to design a suitable neural architecture to produce efficient and effective learning models. Many practical problems from our use-domains operate on (hyper-)graph data. Wang’s team has explored the following: […]
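The team's specific architectures are elided above; for background, the sketch below shows the generic message-passing template that most graph learning models instantiate: each layer mixes a node's own features with an aggregate of its neighbors' features. The layer, weights, and toy graph are illustrative assumptions, not the team's designs.

```python
import numpy as np

def gnn_layer(adj, feats, w_self, w_neigh):
    """One message-passing layer: each node combines its own features
    with the mean of its neighbors' features, then applies a ReLU."""
    deg = adj.sum(axis=1, keepdims=True).clip(min=1)
    neigh_mean = (adj @ feats) / deg
    return np.maximum(feats @ w_self + neigh_mean @ w_neigh, 0.0)

# Toy graph: a 4-node path with 3-dimensional node features.
adj = np.array([[0, 1, 0, 0],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [0, 0, 1, 0]], dtype=float)
rng = np.random.default_rng(0)
feats = rng.standard_normal((4, 3))
w_self, w_neigh = rng.standard_normal((3, 3)), rng.standard_normal((3, 3))
print(gnn_layer(adj, feats, w_self, w_neigh).shape)  # (4, 3)
```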


Deep Learning with Symmetries

Sample Complexity Gain of Invariances: In practice, encoding invariances into models improves sample complexity. In [Tahmasebi & Jegelka, NeurIPS 2023], we study this phenomenon from a theoretical perspective. In particular, we provide minimax optimal rates for kernel ridge regression on compact manifolds, with a target function that is invariant to a […]
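The paper's setting (compact manifolds) is beyond a short snippet, but the mechanism it analyzes can be illustrated in a simpler form: averaging a base kernel over a finite group yields a kernel ridge regressor that encodes the invariance exactly. In the sketch below the group is cyclic coordinate shifts and the target is shift-invariant; all of it is our toy stand-in, not the paper's construction.

```python
import numpy as np

def rbf(a, b, gamma=1.0):
    return np.exp(-gamma * np.sum((a - b) ** 2))

def invariant_kernel(a, b, group):
    """Average the base kernel over a finite group acting on one input;
    for an isometric action this stays symmetric and positive definite."""
    return np.mean([rbf(g(a), b) for g in group])

# Group: all cyclic shifts of the coordinates (an isometric action).
d = 4
group = [lambda x, s=s: np.roll(x, s) for s in range(d)]

rng = np.random.default_rng(0)
X = rng.standard_normal((30, d))
y = np.sin(X).sum(axis=1)  # target is invariant under coordinate shifts

# Kernel ridge regression with the group-averaged kernel.
K = np.array([[invariant_kernel(a, b, group) for b in X] for a in X])
alpha = np.linalg.solve(K + 1e-3 * np.eye(len(X)), y)

x_new = rng.standard_normal(d)
k_new = np.array([invariant_kernel(x_new, b, group) for b in X])
print(k_new @ alpha, np.sin(x_new).sum())  # prediction vs. ground truth
```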


Differentiable Extensions with Rounding Guarantees for Combinatorial Optimization over Permutations and Trees

Continuously extending combinatorial optimization objectives is a powerful technique commonly applied to the optimization of set functions. However, few such methods exist for extending functions on permutations, despite the fact that many combinatorial optimization problems, such as the traveling salesperson problem (TSP), are […]
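The project's specific extension and its rounding guarantees are not in the excerpt; as generic background, the sketch below shows the most common relaxation of this kind: Sinkhorn normalization maps a score matrix into the Birkhoff polytope of doubly stochastic matrices, a differentiable surrogate for permutations, and the Hungarian algorithm rounds the result back to a hard permutation (here without the guarantees the title refers to).

```python
import numpy as np
from scipy.special import logsumexp
from scipy.optimize import linear_sum_assignment

def sinkhorn(logits, n_iter=50):
    """Alternately normalize rows and columns in log space, projecting a
    score matrix onto (approximately) doubly stochastic matrices."""
    log_p = logits.copy()
    for _ in range(n_iter):
        log_p -= logsumexp(log_p, axis=1, keepdims=True)
        log_p -= logsumexp(log_p, axis=0, keepdims=True)
    return np.exp(log_p)

# Relax permutations to doubly stochastic matrices (a differentiable
# surrogate), then round back to a hard permutation via the Hungarian
# algorithm (maximum-weight assignment).
rng = np.random.default_rng(0)
scores = rng.standard_normal((5, 5))
soft_perm = sinkhorn(scores)                      # differentiable
rows, cols = linear_sum_assignment(-soft_perm)    # rounding step
print(soft_perm.sum(axis=0).round(3), cols)       # columns sum to ~1
```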
