A07 – Model order reduction for Bayesian inference

This project started with the second funding period in July 2021.

Compared to classical methods that rely exclusively on point estimates or regularisation techniques, Bayesian inference provides a richer description of uncertainty. This richer description comes at a price, however: Bayesian methods tend to be computationally more costly. The cost is often dominated by expensive evaluations of the likelihood or the forward model. In addition, in many inverse problems the unknown is a function, and accurate discretisations of the unknown lead to high-dimensional spaces. In high dimensions, sampling algorithms tend to perform poorly, and covariance matrices can become too large to store or compute with. There is therefore a need for Bayesian inference methods based on computationally efficient approximate forward models or likelihoods, for a rigorous analysis of the associated approximate posteriors, and for fast, effective numerical linear algebra techniques.
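
To make the cost concrete, consider the linear-Gaussian case, where the posterior is available in closed form (a standard textbook identity; the notation $G$, $\Sigma_0$, $\Gamma$ is illustrative):

$$
y = Gu + \eta, \qquad u \sim \mathcal{N}(m_0, \Sigma_0), \qquad \eta \sim \mathcal{N}(0, \Gamma),
$$
$$
\Sigma_{\mathrm{post}} = \bigl(G^\top \Gamma^{-1} G + \Sigma_0^{-1}\bigr)^{-1}, \qquad
m_{\mathrm{post}} = \Sigma_{\mathrm{post}} \bigl(G^\top \Gamma^{-1} y + \Sigma_0^{-1} m_0\bigr).
$$

For an $n$-dimensional discretisation of the unknown $u$, forming and factorising these $n \times n$ matrices requires $\mathcal{O}(n^2)$ storage and $\mathcal{O}(n^3)$ work, which becomes prohibitive for fine discretisations even before nonlinear forward models enter the picture.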

The aim of this project is to tackle the challenge of analysing and designing computationally efficient algorithms for Bayesian inference, using ideas and approaches from dimension reduction and model order reduction (MOR). We will consider both linear and nonlinear inverse problems, and address theoretical as well as algorithmic questions. In doing so, we aim to strengthen the bridge between Bayesian inference and MOR. This will make a wider range of computationally efficient algorithms available for Bayesian inference, and thus improve the applicability of Bayesian methods for inverse problems.

The project concentrates on the following theoretical and algorithmic aspects:

  1. We will develop a theory of approximate Bayesian inference using dimension reduction based on low-rank updates and approximate forward models obtained by MOR. In particular, we aim to prove bounds on the error in the approximate posterior in terms of suitable measures of the error of the underlying MOR method (see the first sketch after this list).
  2. We will develop and analyse computationally efficient algorithms based on low-rank approximations and MOR algorithms. A guiding principle will be to exploit the structure of specific inverse problems and of the numerical linear algebra problems that arise in these algorithms (see the second sketch after this list).
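
The following Python sketch illustrates the idea behind the first point in the simplest setting: a linear-Gaussian inverse problem in which the expensive forward map is replaced by a low-rank surrogate (here a truncated SVD stands in for a genuine MOR model), so that the posterior error can be compared against the forward-model error. All names and problem sizes are illustrative assumptions, not taken from the project.

```python
# Minimal sketch: Gaussian posterior under a reduced forward model.
# A truncated SVD stands in for a genuine MOR surrogate; all sizes and
# names (G, Sigma0, Gamma) are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n, m, r = 200, 100, 10                      # unknown dim, data dim, reduced rank

# Synthetic forward map with rapidly decaying singular values (MOR-friendly).
U, _ = np.linalg.qr(rng.standard_normal((m, m)))
V, _ = np.linalg.qr(rng.standard_normal((n, m)))
G = U @ np.diag(np.exp(-0.3 * np.arange(m))) @ V.T

Sigma0 = np.eye(n)                          # prior covariance (zero prior mean)
Gamma = 0.01 * np.eye(m)                    # noise covariance
y = G @ rng.standard_normal(n) + rng.multivariate_normal(np.zeros(m), Gamma)

def gaussian_posterior(A):
    """Posterior mean/covariance for y = A u + eta, Gaussian prior and noise."""
    Sigma_post = np.linalg.inv(A.T @ np.linalg.inv(Gamma) @ A + np.linalg.inv(Sigma0))
    return Sigma_post @ (A.T @ np.linalg.inv(Gamma) @ y), Sigma_post

# Rank-r surrogate of the forward map.
Ur, sr, Vrt = np.linalg.svd(G, full_matrices=False)
G_r = Ur[:, :r] @ np.diag(sr[:r]) @ Vrt[:r, :]

m_full, S_full = gaussian_posterior(G)
m_red, S_red = gaussian_posterior(G_r)

# The posterior error is controlled by the forward-model error ||G - G_r||.
print("forward-model error :", np.linalg.norm(G - G_r, 2))
print("posterior-mean error:", np.linalg.norm(m_full - m_red))
print("posterior-cov  error:", np.linalg.norm(S_full - S_red, 2))
```

Bounds of exactly this flavour, relating the posterior error to the forward-model error of a genuine MOR surrogate, are what the first work item aims to establish rigorously.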

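For the second point, a standard building block is the low-rank update of the prior covariance: in the linear-Gaussian setting the posterior covariance differs from the prior by a term whose numerical rank is governed by the spectrum of the prior-preconditioned data-misfit Hessian, so only its leading eigenpairs need to be computed. The sketch below follows the well-known construction studied by Flath et al. and Spantini et al.; it is illustrative and not the project's own code.

```python
# Minimal sketch: rank-k posterior covariance via a low-rank update of the
# prior, using leading eigenpairs of the prior-preconditioned Hessian.
# Sizes and names are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
n, m, k = 300, 80, 15                       # unknown dim, data dim, update rank

# Forward map with decaying singular values; noise Gamma = sigma2 * I,
# prior Sigma0 = I (so the prior square root is the identity).
U, _ = np.linalg.qr(rng.standard_normal((m, m)))
V, _ = np.linalg.qr(rng.standard_normal((n, m)))
G = U @ np.diag(np.exp(-0.3 * np.arange(m))) @ V.T
sigma2 = 0.1

# Prior-preconditioned data-misfit Hessian Hpp = Sigma0^{1/2} G^T Gamma^{-1} G Sigma0^{1/2}.
Hpp = G.T @ G / sigma2

# In practice one computes only k matrix-vector products with Hpp (Lanczos or
# randomised eigensolvers); a dense eigendecomposition keeps the sketch short.
lam, W = np.linalg.eigh(Hpp)
lam, W = lam[::-1][:k], W[:, ::-1][:, :k]   # k leading eigenpairs

# Rank-k update:  Sigma_post ≈ Sigma0 - W diag(lam / (1 + lam)) W^T,
# accurate whenever the spectrum of Hpp decays quickly.
Sigma_post_lr = np.eye(n) - W @ np.diag(lam / (1.0 + lam)) @ W.T

# Exact posterior covariance (I + Hpp)^{-1} for comparison.
Sigma_post = np.linalg.inv(np.eye(n) + Hpp)
print("rank-k covariance error:", np.linalg.norm(Sigma_post - Sigma_post_lr, 2))
```
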
Publications

  • Correnty, S., Freitag, M.A. and Soodhalter, K.M. (2023). Chebyshev HOPGD with sparse grid sampling for parameterized linear systems. arXiv:2309.14178

  • König, J., Pfeffer, M. and Stoll, M. (2023). Efficient training of Gaussian processes with tensor product structure. arXiv:2312.15305

  • Mach, T. and Freitag, M.A. (2023). Solving the Parametric Eigenvalue Problem by Taylor Series and Chebyshev Expansion. arXiv:2302.03661

  • Quinn, P.D., Landmann, M.S., Davis, T., Freitag, M.A., Gazzola, S. and Dolgov, S. (2024). Optimal Sparse Energy Sampling for X-ray Spectro-Microscopy: Reducing the X-ray Dose and Experiment Time Using Model Order Reduction. Chem. Biomed. Imaging. doi: 10.1021/cbmi.3c00116

  • Kaya, A. and Freitag, M.A. (2024). Low-rank solutions to the stochastic Helmholtz equation. Journal of Computational and Applied Mathematics. doi: 10.1016/j.cam.2024.115925

  • Freitag, M.A., Nicolaus, J.M., and Redmann, M. (2023). Model order reduction methods applied to neural network training. Proceedings in Applied Mathematics and Mechanics, e202300078. doi: 10.1002/pamm.202300078

  • Freitag, M.A., Kriz, P., Mach, T., and Nicolaus, J.M. (2023). Can one hear the depth of the water? Proceedings in Applied Mathematics and Mechanics, e202300122. doi: 10.1002/pamm.202300122

  • König, J. and Freitag, M.A. (2023). Time-Limited Balanced Truncation for Data Assimilation Problems. Journal of Scientific Computing, Volume 97, Number 47. doi: 10.1007/s10915-023-02358-4

  • König, J. and Freitag, M.A. (2023). Time-limited Balanced Truncation within Incremental Four-Dimensional Variational Data Assimilation. Proceedings in Applied Mathematics and Mechanics, e202300019. doi: 10.1002/pamm.202300019

  • Hijazi, S., Freitag, M.A., and Landwehr, N. (2023). POD-Galerkin reduced order models and physics-informed neural networks for solving inverse problems for the Navier-Stokes equations. Adv. Model. Simul. Eng. Sci. doi: 10.1186/s40323-023-00242-2

  • Ayanbayev, B., Klebanov, I., Lie, H.C. and Sullivan, T.J. (2021). Gamma-convergence of Onsager–Machlup functionals: II. Infinite product measures on Banach spaces. Inverse Problems, Volume 38, Number 2. doi: 10.1088/1361-6420/ac3f82

  • Redmann, M. and Freitag, M.A. (2021). Optimization based model order reduction for stochastic systems. Appl. Math. Comput., 398. doi: 10.1016/j.amc.2020.125783

  • Lie, H.C., Stahn, M. and Sullivan, T.J. (2022). Randomised one-step time integration methods for deterministic operator differential equations. Calcolo, Volume 59, Number 13. doi: 10.1007/s10092-022-00457-6

  • Freitag, M.A. and Reich, S. (2022). Datenassimilation: Die nahtlose Verschmelzung von Daten und Modellen [Data assimilation: the seamless merging of data and models]. Mitteilungen der Deutschen Mathematiker-Vereinigung, Volume 30, 108–112. De Gruyter. doi: 10.1515/dmvm-2022-0037

  • Alqahtani, A., Mach, T., and Reichel, L. (2023). Solution of Ill-posed Problems with Chebfun. Numerical Algorithms. doi: 10.1007/s11075-022-01390-z, arXiv:2007.16137

  • Mach, T., Reichel, L., and Van Barel, M. (2023). Adaptive cross approximation for Tikhonov regularization in general form. Numerical Algorithms. doi: 10.1007/s11075-022-01395-8, arXiv:2204.05740

  • Ayanbayev, B., Klebanov, I., Lie, H.C. and Sullivan, T.J. (2021). Gamma-convergence of Onsager–Machlup functionals: I. With applications to maximum a posteriori estimation in Bayesian inverse problems. Inverse Problems, Volume 38, Number 2. doi: 10.1088/1361-6420/ac3f81