Extrapolation

An important question in learning for optimization, and in deep learning more generally, is extrapolation, i.e., the behavior of a model under distribution shifts. We analyzed conditions under which graph neural networks for sparse graphs extrapolate to larger graphs, and we drew connections between in-context learning and adaptation to different environments. Can […]
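
One concrete distribution shift here is graph size. As a minimal sketch of why size extrapolation is even well-defined for message passing: the layer's parameters are shared across nodes, so the same weights apply to a graph of any size. The layer and graph-sampling helpers below are illustrative, not taken from the original work.

```python
import numpy as np

rng = np.random.default_rng(0)

def mp_layer(A, X, W_self, W_neigh):
    """One mean-aggregation message-passing step. Parameters are shared
    across nodes, so the layer is defined for graphs of any size."""
    deg = A.sum(axis=1, keepdims=True).clip(min=1)  # guard isolated nodes
    neigh = (A @ X) / deg                           # mean over neighbors
    return np.tanh(X @ W_self + neigh @ W_neigh)

def random_graph(n, p):
    """Sparse undirected Erdos-Renyi graph as a stand-in test distribution."""
    A = (rng.random((n, n)) < p).astype(float)
    A = np.triu(A, 1)
    return A + A.T

d = 8
W_self = rng.normal(size=(d, d)) / np.sqrt(d)
W_neigh = rng.normal(size=(d, d)) / np.sqrt(d)

# The same parameters run on a small "training-size" graph ...
H_small = mp_layer(random_graph(20, 0.2), rng.normal(size=(20, d)), W_self, W_neigh)
# ... and unchanged on a much larger, sparser graph.
H_large = mp_layer(random_graph(500, 0.008), rng.normal(size=(500, d)), W_self, W_neigh)
print(H_small.shape, H_large.shape)  # (20, 8) (500, 8)
```

Whether the outputs remain useful on the larger graphs, rather than merely computable, is exactly the extrapolation question the analysis addresses.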


Neural Networks that Learn Algorithms Implicitly

The remarkable capability of Transformers to exhibit reasoning and few-shot abilities, without any fine-tuning, is widely conjectured to stem from their ability to implicitly simulate multi-step algorithms, such as gradient descent, within their weights in a single forward pass. Recently, there has been progress in understanding this complex phenomenon from an […]
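
To make the conjecture concrete, a standard construction from the literature shows that a single un-normalized linear-attention layer can reproduce one step of gradient descent on an in-context linear-regression prompt. The numerical check below is a sketch of that equivalence; the learning rate eta and all variable names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
d, n, eta = 4, 32, 0.05

# In-context linear regression: the prompt holds pairs (x_i, y_i) and a query x_q.
w_true = rng.normal(size=d)
X = rng.normal(size=(n, d))
y = X @ w_true
x_q = rng.normal(size=d)

# One gradient-descent step on L(w) = 0.5 * sum_i (w @ x_i - y_i)**2 from w = 0
# gives w_1 = eta * sum_i y_i * x_i, hence the prediction eta * sum_i y_i * <x_i, x_q>.
w_1 = eta * (y @ X)
pred_gd = w_1 @ x_q

# Un-normalized linear attention computes the same quantity: the query x_q
# attends to keys x_i with scores <x_i, x_q>, and the values are eta * y_i.
pred_attn = (eta * y) @ (X @ x_q)

print(np.allclose(pred_gd, pred_attn))  # True
```

Stacking such layers corresponds to taking multiple gradient steps, which is the sense in which a deep forward pass can simulate a multi-step algorithm.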


Learning for Optimization

Recently, neural approaches have shown promise in tackling (combinatorial) optimization problems in a data-driven manner. At the same time, for many problems, especially geometric optimization problems, beautiful geometric ideas and algorithmic insights have been developed in fields such as theoretical computer science and computational geometry. Our goal is to infuse geometric and algorithmic ideas […]
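
As one illustration of the kind of algorithmic insight involved, the greedy nearest-neighbor heuristic for the Euclidean TSP below is a classical geometric baseline; learned routing decoders often mirror its structure, replacing the hand-coded "pick the nearest unvisited city" rule with a learned policy. The code is a generic sketch, not taken from the project itself.

```python
import numpy as np

def nearest_neighbor_tour(points):
    """Greedy nearest-neighbor heuristic for the Euclidean TSP:
    repeatedly extend the partial tour to the closest unvisited city."""
    unvisited = set(range(1, len(points)))
    tour = [0]
    while unvisited:
        cur = points[tour[-1]]
        nxt = min(unvisited, key=lambda j: np.linalg.norm(points[j] - cur))
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

def tour_length(points, tour):
    """Total length of the closed tour."""
    return sum(np.linalg.norm(points[tour[i]] - points[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

rng = np.random.default_rng(2)
pts = rng.random((50, 2))  # 50 random cities in the unit square
print(f"tour length: {tour_length(pts, nearest_neighbor_tour(pts)):.3f}")
```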


Powerful Learning Models for Graphs and Hypergraphs

In practice, depending on the type of data and the problem at hand, we often need to design suitable neural architectures to obtain efficient and effective learning models. Many practical problems from our use-domains operate on (hyper-)graph-structured data. Wang’s team has explored the following: […]
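
For orientation, many hypergraph networks share a two-stage aggregation core: node features are pooled into hyperedge features through the incidence matrix, then scattered back to the nodes. The sketch below shows that common pattern under simple mean pooling; it is a generic illustration, not the specific architectures explored by the team.

```python
import numpy as np

rng = np.random.default_rng(3)

def hypergraph_layer(H, X, W):
    """Two-stage hypergraph message passing: nodes -> hyperedges -> nodes.
    H is the |V| x |E| incidence matrix (H[v, e] = 1 iff node v is in edge e)."""
    edge_deg = H.sum(axis=0, keepdims=True).clip(min=1)  # nodes per hyperedge
    node_deg = H.sum(axis=1, keepdims=True).clip(min=1)  # hyperedges per node
    E = (H.T @ X) / edge_deg.T   # stage 1: each hyperedge averages its members
    M = (H @ E) / node_deg       # stage 2: each node averages its hyperedges
    return np.tanh(M @ W)

# Toy hypergraph: 5 nodes, 3 hyperedges (one of size 3, two of size 2).
H = np.array([[1, 0, 0],
              [1, 1, 0],
              [1, 0, 1],
              [0, 1, 0],
              [0, 0, 1]], dtype=float)
X = rng.normal(size=(5, 4))
W = rng.normal(size=(4, 4)) / 2.0
print(hypergraph_layer(H, X, W).shape)  # (5, 4)
```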
