Old Tricks for New Problems: Nonconvex Optimization and Langevin Dynamics (Armin Eftekhari - EPFL)

16.04.2019 14:00 – 15:00

The Augmented Lagrangian Method (ALM) and its variants, such as the popular Alternating Direction Method of Multipliers (ADMM), are the workhorses of convex constrained optimization. The simple but powerful idea of enforcing constraints using penalty functions has interesting applications beyond convex optimization, and this talk focuses on two of them.
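For concreteness (this is standard background rather than material specific to the talk), for an equality-constrained problem

\[
\min_x \, f(x) \quad \text{subject to} \quad Ax = b,
\]

the augmented Lagrangian adds a quadratic penalty to the Lagrangian,

\[
\mathcal{L}_\beta(x, y) \;=\; f(x) \;+\; \langle y, \, Ax - b \rangle \;+\; \frac{\beta}{2}\, \| Ax - b \|^2,
\]

and ALM alternates (approximate) minimization of \(\mathcal{L}_\beta\) over \(x\) with the dual update \(y \leftarrow y + \beta\,(Ax - b)\); ADMM further splits \(x\) into blocks that are minimized alternately.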

In the first part of the talk, we design efficient and intuitive ALM and ADMM variants for nonconvex optimization problems, with provable local (and sometimes global) sublinear convergence rates. We show how the ubiquitous Slater's condition of convex optimization gives way to a geometric condition reminiscent of the Mangasarian-Fromovitz constraint qualification. We then apply these new algorithms to a variety of applications, including clustering, generalized eigenvalue decomposition, sparse regression, and even robustness in deep neural networks. We also show how the sublinear rates can be improved to linear rates using the concepts of restricted strong convexity and smoothness.
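To make the ALM template above concrete, here is a minimal sketch of a generic ALM loop on a toy nonconvex problem; the objective, penalty parameter, and step size are illustrative choices, not the specific variants analyzed in the talk.

import numpy as np

# Generic ALM sketch for  min f(x)  subject to  A @ x = b,
# on a toy nonconvex objective; parameters are illustrative only.

def f_grad(x):
    # Toy nonconvex objective f(x) = sum(x**4 - x**2); returns its gradient.
    return 4.0 * x**3 - 2.0 * x

A = np.array([[1.0, 1.0]])
b = np.array([1.0])

x = np.zeros(2)       # primal variable
y = np.zeros(1)       # dual variable (multiplier estimate)
beta = 10.0           # penalty parameter
step = 1e-2           # primal gradient step size

for outer in range(200):
    # Approximately minimize the augmented Lagrangian
    #   L(x, y) = f(x) + <y, A x - b> + (beta / 2) * ||A x - b||^2  over x:
    for _ in range(100):
        residual = A @ x - b
        x -= step * (f_grad(x) + A.T @ y + beta * A.T @ residual)
    # Dual ascent step on the multiplier:
    y += beta * (A @ x - b)

print(x, A @ x - b)   # expect a near-feasible stationary point, x near [0.5, 0.5]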

In the second part of the talk, we use penalty functions and homotopy techniques to address a key weakness of the popular Langevin Monte Carlo technique: sampling from probability measures that are not absolutely continuous (with respect to the Lebesgue measure). Specifically, we design efficient algorithms with provable guarantees for sampling from distributions supported on an affine subspace or a convex polytope. Some of our results are new, and the rest considerably improve the best-known iteration complexities for sampling.
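As a rough illustration of the penalty-plus-homotopy idea (a sketch under ad hoc assumptions, not the talk's algorithms or guarantees), the following runs unadjusted Langevin dynamics on a penalized potential whose penalty weight grows over time, driving the iterates toward an affine subspace.

import numpy as np

# Unadjusted-Langevin sketch for sampling x ~ exp(-f(x)) restricted to
# the affine subspace {x : A @ x = b}, via a quadratic penalty whose
# weight beta grows along a simple homotopy. The target, schedule, and
# step size are illustrative choices only.

rng = np.random.default_rng(0)

def grad_f(x):
    return x  # toy potential f(x) = ||x||^2 / 2 (standard Gaussian)

A = np.array([[1.0, 1.0]])
b = np.array([0.0])
op2 = np.linalg.norm(A, 2) ** 2   # ||A||^2, used in the smoothness bound

x = rng.standard_normal(2)
samples = []

for k in range(50_000):
    beta = 1.0 + 0.01 * k              # homotopy: stiffen the penalty
    L = 1.0 + beta * op2               # smoothness of the penalized potential
    step = 0.5 / L                     # shrink the step as beta grows
    grad = grad_f(x) + beta * A.T @ (A @ x - b)
    x = x - step * grad + np.sqrt(2.0 * step) * rng.standard_normal(2)
    samples.append(x.copy())

tail = np.array(samples[-10_000:])
print(np.abs(A @ tail.T - b[:, None]).mean())  # small: iterates hug the subspace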

This is joint work with the LIONS lab at EPFL.

Location

Room 623, Séminaire d'analyse numérique

Organized by

Section de mathématiques

Speakers

Armin Eftekhari, EPFL

Free admission

Classification

Category: Seminar

Keywords: numerical analysis