Sampling for Constrained Distributions with Applications

Mangoubi and Vishnoi [COLT 2023] consider the problem of approximating a d×d covariance matrix M with a rank-k matrix under a differential privacy constraint. The authors present and analyze a complex variant of the Gaussian mechanism and give the optimal bound on the Frobenius norm of the difference between the matrix output by this mechanism and the best rank-k approximation to M. Their analysis leverages the fact that the eigenvalues of complex matrix Brownian motion repel more strongly than in the real case, and uses Dyson's stochastic differential equations governing the evolution of its eigenvalues to show that the eigenvalues of the matrix perturbed by complex Gaussian noise have large gaps with high probability. The results contribute to the analysis of low-rank approximations under average-case perturbations and to an understanding of eigenvalue gaps for random matrices, which may be of independent interest.
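As a rough illustration (not the authors' exact mechanism), the sketch below perturbs a symmetric covariance matrix with Hermitian complex Gaussian noise and returns the best rank-k approximation of the perturbed matrix. The noise scale sigma is a placeholder; calibrating it to the (ε, δ)-privacy parameters is the technical content of the paper and is not reproduced here.

```python
# Minimal sketch: complex Gaussian perturbation followed by rank-k truncation.
# `sigma` is an illustrative placeholder, NOT a privacy-calibrated noise level.
import numpy as np

def complex_gaussian_rank_k(M: np.ndarray, k: int, sigma: float, rng=None) -> np.ndarray:
    """Rank-k approximation of a symmetric d x d matrix M after Hermitian complex Gaussian perturbation."""
    rng = np.random.default_rng(rng)
    d = M.shape[0]
    # Hermitian (GUE-style) complex Gaussian noise matrix.
    G = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    N = sigma * (G + G.conj().T) / 2.0
    H = M.astype(complex) + N
    # Eigendecomposition of the Hermitian perturbed matrix; keep the top-k eigenpairs.
    w, V = np.linalg.eigh(H)
    idx = np.argsort(w)[::-1][:k]
    Vk, wk = V[:, idx], w[idx]
    # Return the (real part of the) rank-k reconstruction.
    return np.real(Vk @ np.diag(wk) @ Vk.conj().T)

# Example usage:
# A = np.random.randn(50, 8); M = A @ A.T / 50
# M_k = complex_gaussian_rank_k(M, k=3, sigma=0.1)
```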

Given a Lipschitz or smooth convex function f on an explicitly specified bounded polytope K, Mangoubi and Vishnoi [NeurIPS 2023] consider the problem of sampling from the log-concave distribution π(θ) ∝ exp[-f(θ)] constrained to K. Interest in this problem derives from its applications to Bayesian inference and differential privacy. The authors present a generalization of the Dikin walk to this setting that improves on the running time of prior works for a range of structured settings important for the aforementioned inference and privacy applications. Technically, the work departs from previous Dikin walks by adding a soft-threshold regularizer, derived from the Lipschitz or smoothness properties of f, to a barrier function for K; this allows their version of the Dikin walk to propose updates that have a high Metropolis acceptance ratio for f while remaining inside the polytope K.
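The following sketch (an illustration under simplifying assumptions, not the paper's exact algorithm) shows one Metropolis-filtered step of a Dikin walk whose local metric is the Hessian of the log-barrier for K = {θ : Aθ ≤ b} plus a regularizer. The regularization weight lam, standing in for the Lipschitz/smoothness-derived soft threshold, and the step size r are illustrative placeholders.

```python
# Minimal sketch of a regularized Dikin walk step for pi(theta) ∝ exp(-f(theta)) on
# K = {theta : A theta <= b}. `lam` and `r` are placeholder parameters.
import numpy as np

def dikin_hessian(A, b, theta, lam):
    """Hessian of the log-barrier at theta, plus a regularizer lam * I."""
    s = b - A @ theta                      # slacks; must be positive inside K
    H = (A / s[:, None]**2).T @ A          # sum_i a_i a_i^T / s_i^2
    return H + lam * np.eye(len(theta))

def dikin_step(f, A, b, theta, r=0.1, lam=1.0, rng=None):
    """One Metropolis-filtered step of the regularized Dikin walk."""
    rng = np.random.default_rng(rng)
    H = dikin_hessian(A, b, theta, lam)
    # Propose z ~ N(theta, r^2 * H^{-1}) via a Cholesky factor of H.
    L = np.linalg.cholesky(H)
    z = theta + r * np.linalg.solve(L.T, rng.normal(size=len(theta)))
    if np.any(b - A @ z <= 0):             # reject proposals outside K
        return theta
    Hz = dikin_hessian(A, b, z, lam)
    def log_q(x, y, Hx):                   # log N(y; x, r^2 Hx^{-1}) up to constants
        d = y - x
        return 0.5 * np.linalg.slogdet(Hx)[1] - 0.5 * (d @ Hx @ d) / r**2
    # Metropolis ratio: target term times the ratio of proposal densities.
    log_alpha = (f(theta) - f(z)) + log_q(z, theta, Hz) - log_q(theta, z, H)
    return z if np.log(rng.uniform()) < log_alpha else theta
```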

Mangoubi and Vishnoi [ICLR 2024] consider the problem of sampling from a log-concave distribution π(θ) ∝ exp[-f(θ)] constrained to a polytope K. The authors present a nearly-optimal implementation of the Dikin walk Markov chain for this problem, with per-step complexity that is roughly the number of non-zero entries of the matrix specifying K. The key technical ingredients are to (1) show that the matrices that arise in this Dikin walk change slowly, (2) deploy efficient linear solvers that can leverage this slow change to speed up matrix inversion by reusing information computed in previous steps, and (3) speed up the computation of the determinantal term in the Metropolis filter step via a randomized Taylor series-based estimator. This result directly improves the runtime for applications that involve sampling from Gibbs distributions constrained to polytopes arising in Bayesian statistics and private optimization.
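The determinantal speedup in (3) can be illustrated with the following sketch, which assumes (reflecting the slowly changing matrices in (1)) that consecutive Hessians H_old and H_new are close: the log-determinant ratio in the Metropolis filter is written as the trace of a matrix logarithm and estimated with Hutchinson probes and a truncated Taylor series. The probe and truncation counts are illustrative placeholders, not the paper's parameter choices.

```python
# Minimal sketch: randomized Taylor-series estimator for the log-determinant ratio
# log det(H_new) - log det(H_old). Assumes H_old, H_new are symmetric positive
# definite and close enough that the relative perturbation has spectral norm < 1.
import numpy as np

def logdet_ratio_estimate(H_old, H_new, n_probes=10, n_terms=20, rng=None):
    """Estimate log det(H_new) - log det(H_old) = tr log(I + E),
    where E = L^{-1} (H_new - H_old) L^{-T} and H_old = L L^T,
    via Hutchinson trace probes and a truncated Taylor series of log(I + E)."""
    rng = np.random.default_rng(rng)
    d = H_old.shape[0]
    L = np.linalg.cholesky(H_old)
    D = H_new - H_old

    def E_matvec(v):
        # E v = L^{-1} D L^{-T} v, using solves instead of explicit inverses.
        return np.linalg.solve(L, D @ np.linalg.solve(L.T, v))

    total = 0.0
    for _ in range(n_probes):
        g = rng.choice([-1.0, 1.0], size=d)      # Rademacher probe vector
        v, sign = g.copy(), 1.0
        # g^T log(I + E) g = sum_{k>=1} (-1)^{k+1} g^T E^k g / k, truncated at n_terms.
        for k in range(1, n_terms + 1):
            v = E_matvec(v)
            total += sign * (g @ v) / k
            sign = -sign
    return total / n_probes

# The determinantal term det(H_new)^{1/2} / det(H_old)^{1/2} in the Metropolis
# filter is then exp(0.5 * logdet_ratio_estimate(H_old, H_new)).
```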

Team Members

Stefanie Jegelka1
Melvin Leok2
Arya Mazumdar2
Suvrit Sra1
Nisheeth Vishnoi3
Yusu Wang2

Collaborators

Oren Mangoubi4

1. MIT
2. UC San Diego
3. Yale
4. Worcester Polytechnic Institute

Publications

COLT 2023
NeurIPS 2023
ICLR 2024