Kernel Approximation of Wasserstein and Fisher-Rao Gradient Flows

Speaker: Jia-Jie Zhu

Date: Mon, Dec 16, 2024

Location: PIMS, University of British Columbia, Zoom

Conference: Kantorovich Initiative Seminar

Subject: Mathematics

Class: Scientific

CRG: Pacific Interdisciplinary Hub on Optimal Transport

Abstract:

Gradient flows have emerged as a powerful framework for analyzing machine learning and statistical inference algorithms. Motivated by applications in statistical inference, generative modeling, and the generalization and robustness of learning algorithms, I will present several new results on the kernel approximation of gradient flows, including a hidden link between the gradient flows of the kernel maximum mean discrepancy (MMD) and of relative entropies. These findings advance our theoretical understanding and also provide practical tools for improving machine learning algorithms. I will showcase inference and sampling algorithms built on a new kernel approximation of Wasserstein-Fisher-Rao (a.k.a. Hellinger-Kantorovich) gradient flows, which enjoy sharper convergence characterizations and improved computational performance.
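For readers who want a concrete picture of the objects the abstract mentions, below is a minimal, illustrative sketch of a particle discretization of a Wasserstein-Fisher-Rao gradient flow of the squared MMD; it is not the speaker's construction. The Gaussian kernel, the step sizes, and the names `gauss_kernel` and `wfr_mmd_step` are assumptions made for illustration. Particle positions evolve under the Wasserstein (transport) part of the flow, while particle weights evolve under the Fisher-Rao (reaction) part via a birth-death-style multiplicative reweighting.

```python
import numpy as np

def gauss_kernel(X, Y, sigma=1.0):
    """Gaussian kernel matrix: k(x, y) = exp(-||x - y||^2 / (2 sigma^2))."""
    d2 = np.sum((X[:, None, :] - Y[None, :, :]) ** 2, axis=-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def wfr_mmd_step(X, w, Y, step=0.1, sigma=1.0):
    """One explicit Euler step of a particle scheme for the WFR flow of MMD^2
    between the weighted particles (X, w) and empirical target samples Y.
    This is a hypothetical sketch, not the method from the talk."""
    Kxx = gauss_kernel(X, X, sigma)
    Kxy = gauss_kernel(X, Y, sigma)
    # Wasserstein (transport) part: move particles along minus the spatial
    # gradient of the first variation V(x) = sum_j w_j k(x, x_j) - mean_j k(x, y_j).
    # For the Gaussian kernel, grad_x k(x, y) = -((x - y) / sigma^2) * k(x, y).
    diff_xx = X[:, None, :] - X[None, :, :]            # shape (n, n, d)
    diff_xy = X[:, None, :] - Y[None, :, :]            # shape (n, m, d)
    grad = (-(diff_xx * Kxx[:, :, None] * w[None, :, None]).sum(axis=1)
            + (diff_xy * Kxy[:, :, None]).mean(axis=1)) / sigma ** 2
    X_new = X - step * grad
    # Fisher-Rao (reaction) part: multiplicative reweighting by the centered
    # first variation, then renormalization to conserve total mass.
    V = Kxx @ w - Kxy.mean(axis=1)
    w_new = w * np.exp(-step * (V - w @ V))
    w_new /= w_new.sum()
    return X_new, w_new

# Usage sketch: flow 50 weighted particles toward 200 target samples.
rng = np.random.default_rng(0)
Y = rng.normal(loc=2.0, size=(200, 2))   # target samples
X = rng.normal(size=(50, 2))             # initial particle positions
w = np.full(50, 1.0 / 50)                # uniform initial weights
for _ in range(500):
    X, w = wfr_mmd_step(X, w, Y, step=0.5)
```

The reweighting step is the ingredient the pure Wasserstein MMD flow lacks: it lets the scheme reallocate mass between regions rather than only transporting it, which is the intuition behind the improved convergence behavior the abstract refers to.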
