Deep JKO: Time-implicit particle methods for general nonlinear gradient flows

Wonjun Lee, Li Wang, Wuchen Li

Research output: Contribution to journal › Article › peer-review

Abstract

We develop novel neural network-based implicit particle methods to compute high-dimensional Wasserstein-type gradient flows with linear and nonlinear mobility functions. The main idea is to use the Lagrangian formulation in the Jordan–Kinderlehrer–Otto (JKO) framework, in which the velocity field is approximated by a neural network. We leverage the neural ordinary differential equation (neural ODE) formulation, in the context of continuous normalizing flows, for efficient density computation. Additionally, we make use of an explicit recurrence relation for computing derivatives, which greatly streamlines the backpropagation process. Our methodology demonstrates versatility in handling a wide range of gradient flows, accommodating various potential functions and nonlinear mobility scenarios. Extensive experiments demonstrate the efficacy of our approach, including an illustrative example from Bayesian inverse problems. This underscores that our scheme provides a viable alternative solver for the Kalman–Wasserstein gradient flow.
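To make the time-implicit idea concrete, the following is a minimal sketch of one JKO step in Lagrangian (particle) form, under strong simplifying assumptions: a linear velocity field `v(x) = W x + b` stands in for the paper's neural network, the energy is a simple quadratic potential `V(x) = ||x||^2 / 2` (so the density/entropy term handled via continuous normalizing flows in the paper is omitted), and the inner proximal problem is solved by plain gradient descent with hand-derived gradients. All function names and parameter values here are illustrative, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def potential_grad(y):
    # V(y) = ||y||^2 / 2 (quadratic confining potential), so grad V(y) = y.
    return y

def jko_step(X, tau=0.5, lr=0.1, iters=500):
    """One time-implicit (JKO) step on particles X (shape n x d).

    Fits a linear velocity field v(x) = W x + b (a stand-in for the
    neural-network field in the paper) by minimizing the proximal objective
        J(W, b) = (tau/2) * mean_i ||v(x_i)||^2 + mean_i V(x_i + tau v(x_i)),
    then pushes the particles forward by x <- x + tau v(x).
    """
    n, d = X.shape
    W = np.zeros((d, d))
    b = np.zeros(d)
    for _ in range(iters):
        U = X @ W.T + b                           # u_i = v(x_i)
        Y = X + tau * U                           # candidate pushed particles
        G = (tau / n) * (U + potential_grad(Y))   # dJ/du_i, stacked row-wise
        W -= lr * (G.T @ X)                       # dJ/dW = sum_i g_i x_i^T
        b -= lr * G.sum(axis=0)                   # dJ/db = sum_i g_i
    return X + tau * (X @ W.T + b)

# Particles start away from the potential's minimizer at the origin.
X0 = rng.normal(size=(256, 2)) + 2.0
X = X0.copy()
for _ in range(10):
    X = jko_step(X)

# For this V, each exact JKO step contracts particles toward the origin
# by the factor 1 / (1 + tau).
print(np.mean(np.sum(X**2, axis=1)))
```

For this quadratic potential the inner problem has the closed-form solution `v(x) = -x / (1 + tau)`, so the linear parameterization is expressive enough and the sketch can be checked against the exact contraction; in the general nonlinear-mobility setting of the paper, the neural-network field and the CNF-based density evaluation replace these shortcuts.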

Original language: English (US)
Article number: 113187
Journal: Journal of Computational Physics
Volume: 514
DOIs
State: Published - Oct 1 2024

Bibliographical note

Publisher Copyright:
© 2024 Elsevier Inc.

Keywords

  • Density estimation
  • JKO scheme
  • Neural ODE
  • Nonlinear gradient flows
  • Nonlinear mobility
  • Particle methods
