Data-driven Adaptive Network Models: Quantitative Group Testing
The quantitative group testing (QGT) problem aims to learn an underlying binary vector x of length n with sparsity parameter k. Information about x is acquired by conducting pooled measurements, also known as group tests, where the outcome […]
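The QGT setup above can be illustrated with a minimal simulation (our sketch; the pooling design, sizes, and variable names are illustrative assumptions, not the project's construction). In QGT, unlike binary group testing, each test reports the *count* of active items in the pool:

```python
import numpy as np

rng = np.random.default_rng(0)

n, k, m = 12, 3, 6  # items, sparsity, number of pooled tests (illustrative sizes)

# Hidden binary vector x with exactly k ones.
x = np.zeros(n, dtype=int)
x[rng.choice(n, size=k, replace=False)] = 1

# Random pooling matrix A: A[t, i] = 1 if item i is included in test t.
A = rng.integers(0, 2, size=(m, n))

# Each QGT outcome is the number of active items in the pool, i.e. y = A x.
y = A @ x
```

Recovering x from (A, y) with as few tests m as possible is the learning problem the project studies.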
Learning to Slice Wi-Fi Networks: A State-Augmented Primal-Dual Approach
In enterprise settings, it is vital to manage network operations so as to support multiple use cases with different requirements. Moreover, 3GPP specifies architectures for integrating Wi-Fi into converged (5G + Wi-Fi) enterprise connectivity. Network slicing allows an access point (AP) to allocate the network resources across […]
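The primal-dual idea behind constrained slicing can be sketched in miniature (our toy example, not the paper's state-augmented method): split one unit of airtime between two slices to maximize a log-utility while guaranteeing slice 1 a minimum share, with a projected dual ascent step enforcing the constraint.

```python
# Toy primal-dual loop: maximize log(p) + log(1 - p) subject to p >= 0.4,
# where p is slice 1's airtime share. The threshold 0.4 and step size are
# illustrative assumptions.
p, lam, step = 0.2, 0.0, 0.02
for _ in range(2000):
    # Primal gradient ascent on the Lagrangian L = log p + log(1-p) + lam*(p - 0.4).
    grad_p = 1.0 / p - 1.0 / (1.0 - p) + lam
    p = min(max(p + step * grad_p, 1e-3), 1 - 1e-3)
    # Projected dual ascent: lam grows while the minimum-share constraint is violated.
    lam = max(0.0, lam + step * (0.4 - p))
```

Here the unconstrained optimum p = 0.5 already satisfies the constraint, so the dual variable decays to zero; if the minimum share were binding, lam would settle at a positive value that prices the constraint into the allocation.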
Nonconvex Optimization in Deep Learning
Sharpness-Aware Minimization (SAM) is a recently proposed gradient-based optimizer (Foret et al., ICLR 2021) that greatly improves the prediction performance of deep neural networks. Consequently, there has been a surge of interest in explaining its empirical success. In their work on the crucial role of normalization in sharpness-aware minimization, Suvrit […]
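The SAM update itself is compact: perturb the weights along the normalized gradient to an (approximate) worst-case point within a radius-rho ball, then descend using the gradient taken there. A minimal NumPy sketch on a toy quadratic loss (the loss, rho, and eta are our illustrative choices):

```python
import numpy as np

def grad(w):
    # Gradient of the toy loss L(w) = ||w||^2 / 2.
    return w

w = np.array([2.0, -1.0])
rho, eta = 0.05, 0.1  # perturbation radius and learning rate (illustrative)
for _ in range(100):
    g = grad(w)
    # Ascent step: move to the approximate worst-case point in the rho-ball,
    # along the normalized gradient direction (the normalization step whose
    # role the work above analyzes).
    eps = rho * g / (np.linalg.norm(g) + 1e-12)
    # Descent step: apply the gradient evaluated at the perturbed point.
    w = w - eta * grad(w + eps)
```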
Dynamic Decisions Under Uncertainty
The Effect of Delayed Feedback for Reinforcement Learning with Function Approximation
Recent studies in reinforcement learning (RL) have made significant progress by leveraging function approximation to alleviate the sample-complexity hurdle and achieve better performance. Despite this success, existing provably efficient algorithms typically rely on the accessibility of immediate feedback upon taking […]
Learning Ultrametric Trees for Optimal Transport Regression
Optimal transport provides a metric that quantifies the dissimilarity between probability measures. For measures supported on discrete metric spaces, computing the optimal transport distance has cubic time complexity in the size of the space. However, measures supported on trees admit a closed-form optimal transport which can be computed in […]
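The tree closed form is worth seeing concretely: on a tree metric, the 1-Wasserstein distance is the sum over edges of edge length times the absolute difference in mass that mu and nu place below that edge. A small sketch (the tree, edge lengths, and measures are our illustrative example):

```python
# A tiny rooted tree: node 0 is the root; parent[i] is the parent of node i,
# and length[i] is the length of the edge (i, parent[i]).
parent = {1: 0, 2: 0, 3: 1, 4: 1}
length = {1: 1.0, 2: 2.0, 3: 0.5, 4: 0.5}

# Two probability measures on the nodes.
mu = {0: 0.0, 1: 0.0, 2: 0.5, 3: 0.5, 4: 0.0}
nu = {0: 0.0, 1: 0.0, 2: 0.0, 3: 0.0, 4: 1.0}

def subtree_mass(p, node):
    """Total mass of p in the subtree rooted at `node`."""
    children = [c for c, par in parent.items() if par == node]
    return p[node] + sum(subtree_mass(p, c) for c in children)

# Closed-form tree Wasserstein-1: sum over edges of
# edge_length * |mu(subtree below edge) - nu(subtree below edge)|.
w1 = sum(length[i] * abs(subtree_mass(mu, i) - subtree_mass(nu, i))
         for i in parent)
```

Each edge is crossed by exactly the net mass imbalance between the two sides it separates, which is why the optimal plan needs no linear program here and the whole distance is computable in linear time in the tree size.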
Nonconvex Optimization and Transformer Architectures
Deciding whether saddle points exist or are approximable for nonconvex-nonconcave problems is usually intractable. Zhang, Zhang & Sra [SIAM Journal on Optimization 2023] take a step toward understanding a broad class of nonconvex-nonconcave minimax problems that do remain tractable. Specifically, they study minimax problems in geodesic metric spaces. The first […]
Sampling for Constrained Distributions with Applications
Mangoubi and Vishnoi [COLT 2023] consider the problem of approximating a d×d covariance matrix M by a rank-k matrix under a differential privacy constraint. The authors present and analyze a complex variant of the Gaussian mechanism and give the optimal bound on the Frobenius norm of the difference between the […]
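For orientation, the baseline this improves on can be sketched with the standard (real) Gaussian mechanism, not the paper's complex-matrix variant: add symmetric Gaussian noise to the covariance, then project to rank k via a truncated eigendecomposition. The noise scale sigma and sizes below are illustrative assumptions, not calibrated privacy parameters.

```python
import numpy as np

rng = np.random.default_rng(1)

d, k, sigma = 6, 2, 0.1
X = rng.standard_normal((100, d))
M = X.T @ X / 100                      # empirical covariance matrix

# Standard Gaussian mechanism: symmetric Gaussian noise added to M.
E = rng.standard_normal((d, d)) * sigma
M_priv = M + (E + E.T) / 2

# Rank-k projection: keep the k largest eigenpairs of the noisy matrix.
vals, vecs = np.linalg.eigh(M_priv)
top = np.argsort(vals)[-k:]
M_k = vecs[:, top] @ np.diag(vals[top]) @ vecs[:, top].T
```

The quantity bounded in the paper is then the Frobenius error between such a private rank-k output and the best rank-k approximation of M itself.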
Extrapolation
An important question in learning for optimization, and in deep learning more generally, is extrapolation, i.e., the behavior of a model under distribution shift. We analyzed conditions under which graph neural networks for sparse graphs extrapolate to larger graphs, and we drew connections between in-context learning and adaptation to different environments. Can […]
Deep Learning with Symmetries
Sample Complexity Gain of Invariances
In practice, encoding invariances into models improves sample complexity. In [Tahmasebi & Jegelka, NeurIPS 2023], we study this phenomenon from a theoretical perspective. In particular, we provide minimax optimal rates for kernel ridge regression on compact manifolds, with a target function that is invariant to a […]
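One standard way to encode an invariance into a kernel method, shown here as a toy illustration (our construction, not the paper's setting), is to average the kernel over a finite symmetry group; the averaged kernel is then exactly invariant, so the hypothesis space only contains invariant functions:

```python
import numpy as np

def k(x, y, bw=1.0):
    # Base Gaussian kernel on the real line; bw is an illustrative bandwidth.
    return np.exp(-np.abs(x - y) ** 2 / (2 * bw ** 2))

def k_inv(x, y):
    # Average over the two-element group {identity, negation}: the resulting
    # kernel satisfies k_inv(-x, y) == k_inv(x, y) for all x, y.
    return 0.5 * (k(x, y) + k(-x, y))
```

Restricting kernel ridge regression to such an invariant kernel is what yields the sample-complexity gains that the work above quantifies with minimax rates.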
Graph Representation Learning
Graph neural networks (GNNs) have been a very successful architecture in many domains. With [Barzilay et al., Nature Reviews Methods Primers 2024], we wrote an introductory survey on GNNs in a high-profile journal. Scalability. Many graph algorithms, including some for graph representation learning, are expensive to scale to large graphs. In [Le […]
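The core GNN computation is a message-passing layer: each node aggregates its neighbors' features and passes the result through a learned transformation. A minimal mean-aggregation sketch in NumPy (a generic illustration, not a specific architecture from the work above):

```python
import numpy as np

def gnn_layer(A, H, W):
    """A: (n, n) adjacency with self-loops; H: (n, d) node features; W: (d, d') weights."""
    deg = A.sum(axis=1, keepdims=True)
    H_agg = (A @ H) / deg            # mean of each node's (self + neighbor) features
    return np.maximum(H_agg @ W, 0)  # linear transform + ReLU

# Tiny 3-node path graph 0-1-2, with self-loops on the diagonal.
A = np.array([[1., 1., 0.],
              [1., 1., 1.],
              [0., 1., 1.]])
H = np.eye(3)                        # one-hot initial features
W = np.ones((3, 2))
out = gnn_layer(A, H, W)
```

The scalability challenge mentioned above comes from stacking such layers on large graphs, where the `A @ H` aggregation touches every edge at every layer.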