Random matrix theory of high-dimensional optimization - Lecture 6

Speaker: Elliot Paquette

Date: Tue, Jul 9, 2024

Location: CRM, Montreal

Conference: 2024 CRM-PIMS Summer School in Probability

Subject: Mathematics, Probability

Class: Scientific

Abstract:

Please note: Due to a problem with the Zoom configuration, there is no video associated with this lecture; only the audio was recorded.

 

Optimization theory seeks to characterize the performance of algorithms that find the (or a) minimizer x∈ℝd of an objective function. The dimension d of the parameter space has long been known to be a source of difficulty, both in designing good algorithms and in analyzing the objective function landscape. With the rise of machine learning in recent years, practice has shown this difficulty to be manageable, but why? One explanation is that the high dimensionality is mollified by three essential types of randomness acting simultaneously: the data are random, the optimization algorithms are stochastic gradient methods, and the model parameters are randomly initialized (and much of this randomness persists). The resulting loss surfaces defy low-dimensional intuitions, especially in nonconvex settings.
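To make these three sources of randomness concrete, here is a minimal numpy sketch (not from the lecture) on a hypothetical least-squares objective f(x) = (1/2n)‖Ax − b‖²: the data (A, b) are random, the iterate is randomly initialized, and each step of stochastic gradient descent uses a single randomly chosen sample.

```python
# Minimal sketch (illustrative only): three sources of randomness in
# high-dimensional optimization, on a least-squares objective
#   f(x) = (1/2n) * ||A x - b||^2   with random data A, b.
import numpy as np

rng = np.random.default_rng(0)
d, n = 500, 1000                      # parameter dimension, sample count

# 1) random data: Gaussian design matrix and noisy targets
A = rng.standard_normal((n, d)) / np.sqrt(d)
x_star = rng.standard_normal(d)
b = A @ x_star + 0.1 * rng.standard_normal(n)

# 2) random initialization of the model parameters
x = rng.standard_normal(d)

# 3) stochastic gradient method: one randomly chosen sample per step
step = 0.5
for t in range(5000):
    i = rng.integers(n)               # random data index
    a_i = A[i]
    grad = (a_i @ x - b[i]) * a_i     # gradient of the i-th summand
    x -= step * grad

print("final objective:", 0.5 * np.mean((A @ x - b) ** 2))
```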
Random matrix theory and spin glass theory provide a toolkit for the analysis of these landscapes when the dimension d becomes large. In this course, we will show:

- how random matrices can be used to describe high-dimensional inference,
- nonconvex landscape properties, and
- high-dimensional limits of stochastic gradient methods.
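As one illustration of the random-matrix viewpoint (a sketch under the same least-squares assumption as above, not the lecture's own example): the Hessian of f is the sample covariance matrix (1/n)AᵀA, and for standard Gaussian data its eigenvalue distribution converges to the Marchenko–Pastur law as d, n → ∞ with d/n fixed, which governs the curvature of the landscape and the behavior of gradient methods on it.

```python
# Minimal sketch (illustrative only): the Hessian of the random least-squares
# landscape is a sample covariance matrix, whose spectrum follows the
# Marchenko-Pastur law in high dimensions.
import numpy as np

rng = np.random.default_rng(1)
d, n = 500, 1000
ratio = d / n

A = rng.standard_normal((n, d))
H = A.T @ A / n                        # Hessian of f(x) = (1/2n)||A x - b||^2
eigs = np.linalg.eigvalsh(H)

# Marchenko-Pastur support edges for aspect ratio d/n
lam_minus = (1 - np.sqrt(ratio)) ** 2
lam_plus = (1 + np.sqrt(ratio)) ** 2
print("empirical eigenvalue range:", eigs.min(), eigs.max())
print("Marchenko-Pastur support:  ", lam_minus, lam_plus)
```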