A07 – Model order reduction for Bayesian inference

This project started with the second funding period in July 2021.

When compared to classical methods that rely exclusively on point estimates or regularisation techniques, Bayesian inference provides a richer description of uncertainty. However, this richer description comes at a price: Bayesian methods tend to be computationally more costly. The high computational cost is often due to expensive evaluations of the likelihood or forward model. In addition, in many inverse problems where the unknown is a function, accurate discretisations of the unknown lead to high-dimensional spaces. In high dimensions, sampling algorithms tend to perform poorly, and covariance matrices can be too large to store or compute with. There is therefore a need for Bayesian inference methods based on computationally efficient approximate forward models or likelihoods, for a rigorous analysis of the associated approximate posteriors, and for fast, effective numerical linear algebra techniques.
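To make the cost concrete, here is a minimal toy sketch in Python/NumPy (the forward model, dimensions, and step size are illustrative choices, not taken from the project): a random-walk Metropolis sampler evaluates the forward model once per proposal, so the cost of a single forward solve multiplies directly into the cost of inference.

```python
import numpy as np

def forward(u):
    # Toy nonlinear forward model (an illustrative placeholder for an
    # expensive PDE solve); each call would dominate the cost in practice.
    forward.calls += 1
    return np.array([np.sum(u**2), np.sum(np.sin(u))])
forward.calls = 0

def log_post(u, y, noise_var=0.1, prior_var=1.0):
    # Unnormalised log-posterior: Gaussian likelihood plus Gaussian prior.
    misfit = y - forward(u)
    return -0.5 * misfit @ misfit / noise_var - 0.5 * u @ u / prior_var

rng = np.random.default_rng(0)
y = np.array([2.0, 0.5])            # synthetic data
u = np.zeros(50)                    # 50-dimensional unknown
lp = log_post(u, y)
n_steps = 1000
for _ in range(n_steps):
    prop = u + 0.05 * rng.standard_normal(u.size)  # random-walk proposal
    lp_prop = log_post(prop, y)
    if np.log(rng.uniform()) < lp_prop - lp:       # Metropolis accept/reject
        u, lp = prop, lp_prop

# One forward evaluation per proposal, plus one for the initial state:
print(forward.calls)  # 1001
```

In realistic PDE-constrained problems each `forward` call is a full PDE solve; replacing it with a cheap MOR surrogate is precisely what makes such sampling loops affordable.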

The aim of this project is to tackle the challenge of analysing and designing computationally efficient algorithms for Bayesian inference, by using ideas and approaches from dimension reduction and model order reduction (MOR). We will consider both linear and nonlinear inverse problems, and address both theoretical and algorithmic problems. In doing so, we aim to strengthen the bridge between Bayesian inference and MOR. This will make a wider range of computationally efficient algorithms available for Bayesian inference, and thus improve the applicability of Bayesian methods for inverse problems.

Concretely, the project concentrates on the following theoretical and algorithmic aspects:

  1. We will develop a theory of approximate Bayesian inference using dimension reduction based on low-rank updates and approximate forward models obtained by MOR. In particular, we aim to prove bounds on the error in the approximate posterior with respect to suitable measures of the error associated with the MOR method.
  2. We will develop and analyse computationally efficient algorithms based on low-rank approximations and MOR algorithms. A guiding principle will be to exploit the structure of specific inverse problems and of the numerical linear algebra problems associated with these algorithms.
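Both ingredients can be illustrated on a toy linear-Gaussian problem (a hedged sketch in Python/NumPy, not the project's actual algorithms): with a linear forward operator, Gaussian prior, and Gaussian noise, the posterior is Gaussian in closed form, its covariance is a low-rank update of the prior covariance, and truncating the operator's SVD serves as a simple stand-in for a MOR surrogate whose approximate posterior can be compared with the full one.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, r = 200, 20, 8                     # unknown dim, data dim, reduced rank

# Ill-posed linear forward operator with rapidly decaying singular values.
U, _ = np.linalg.qr(rng.standard_normal((m, m)))
V, _ = np.linalg.qr(rng.standard_normal((n, m)))
s = 2.0 ** -np.arange(m)
A = U @ np.diag(s) @ V.T                 # m x n

sigma2 = 1e-2                            # noise variance
u_true = rng.standard_normal(n)
y = A @ u_true + np.sqrt(sigma2) * rng.standard_normal(m)

def gaussian_posterior(A):
    # Closed-form posterior for prior N(0, I) and noise N(0, sigma2 I):
    # H = A^T A / sigma2 + I,  mean = H^{-1} A^T y / sigma2,  cov = H^{-1}.
    H = A.T @ A / sigma2 + np.eye(n)
    mean = np.linalg.solve(H, A.T @ y / sigma2)
    return mean, np.linalg.inv(H)

mean_full, cov_full = gaussian_posterior(A)

# Rank-r surrogate of the forward operator via truncated SVD
# (a simple stand-in for a MOR-reduced forward model).
Ua, sa, Vta = np.linalg.svd(A, full_matrices=False)
A_r = Ua[:, :r] @ np.diag(sa[:r]) @ Vta[:r]
mean_red, cov_red = gaussian_posterior(A_r)

# The approximate posterior mean stays close because the discarded
# singular values are small, and the posterior covariance differs from
# the prior covariance (the identity) only by an update of rank <= m.
err = np.linalg.norm(mean_full - mean_red) / np.linalg.norm(mean_full)
rank_update = np.linalg.matrix_rank(np.eye(n) - cov_full, tol=1e-8)
print(f"relative mean error: {err:.3f}, update rank: {rank_update}")
```

The low-rank structure of the prior-to-posterior update is what makes the full n-by-n posterior covariance unnecessary to store explicitly; bounding the posterior error in terms of the discarded singular values is a simple linear analogue of the error bounds sought in point 1.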

 

  • König, J., Pfeffer, M. and Stoll, M. (2023). Efficient training of Gaussian processes with tensor product structure. arXiv:2312.15305.

  • Mach, T. and Freitag, M.A. (2023). Solving the Parametric Eigenvalue Problem by Taylor Series and Chebyshev Expansion. arXiv:2302.03661.

  • König, J. and Freitag, M.A. (2022). Time-limited Balanced Truncation for Data Assimilation Problems. arXiv:2212.07719.

  • M.A. Freitag, J.M. Nicolaus, M. Redmann (2023). Model order reduction methods applied to neural network training. Proceedings in Applied Mathematics and Mechanics, e202300078. https://doi.org/10.1002/pamm.202300078

  • M.A. Freitag, P. Kriz, T. Mach, J.M. Nicolaus (2023). Can one hear the depth of the water? Proceedings in Applied Mathematics and Mechanics, e202300122. https://doi.org/10.1002/pamm.202300122

  • König, J. and Freitag, M.A. (2023). Time-Limited Balanced Truncation for Data Assimilation Problems. Journal of Scientific Computing, Volume 97, Number 47, doi:10.1007/s10915-023-02358-4.

  • J. König & M.A. Freitag (2023). Time-limited Balanced Truncation within Incremental Four-Dimensional Variational Data Assimilation. Proceedings in Applied Mathematics and Mechanics, e202300019. https://doi.org/10.1002/pamm.202300019

  • Birzhan Ayanbayev, Ilja Klebanov, Han Cheng Lie and T J Sullivan (2021). Gamma-convergence of Onsager–Machlup functionals: II. Infinite product measures on Banach spaces. Inverse Problems, Volume 38, Number 2, doi:10.1088/1361-6420/ac3f82.

  • M. Redmann, M.A. Freitag (2021). Optimization based model order reduction for stochastic systems. Applied Mathematics and Computation, 398.

  • H.C. Lie, M. Stahn, T.J. Sullivan (2022). Randomised one-step time integration methods for deterministic operator differential equations. Calcolo, Volume 59, Number 13, doi:10.1007/s10092-022-00457-6.

  • M.A. Freitag, S. Reich (2022). Datenassimilation: Die nahtlose Verschmelzung von Daten und Modellen. Mitteilungen der Deutschen Mathematiker-Vereinigung, De Gruyter, Band 30, Seiten 108–112.

  • Alqahtani, A., Mach, T. and Reichel, L. (2023). Solution of Ill-posed Problems with Chebfun. Numerical Algorithms, doi:10.1007/s11075-022-01390-z. arXiv:2007.16137.

  • Mach, T., Reichel, L. and Van Barel, M. (2023). Adaptive cross approximation for Tikhonov regularization in general form. Numerical Algorithms, doi:10.1007/s11075-022-01395-8. arXiv:2204.05740.

  • Birzhan Ayanbayev, Ilja Klebanov, Han Cheng Lie and T J Sullivan (2021). Gamma-convergence of Onsager–Machlup functionals: I. With applications to maximum a posteriori estimation in Bayesian inverse problems. Inverse Problems, Volume 38, Number 2, doi:10.1088/1361-6420/ac3f81.