A04 – Nonlinear statistical inverse problems with random observations
Objectives
Many mathematical models of time-dependent processes are used to make predictions. These models often take the form of ordinary differential equation initial value problems, where the vector field depends on a finite-dimensional vector of parameters. Estimating the parameters is essential for making predictions. However, in some practical problems the parameter cannot be observed directly. Instead, it is known that the parameter is related to a vector of "covariates", which can be observed directly. For example, in pharmacology, the model parameters may be blood flow rates, organ volumes, or enzymatic activity, and the covariates may be body weight, body mass index, or genetic disposition. The relationship is encoded in a so-called “covariate-to-parameter map”. Inferring the unknown covariate-to-parameter map from pairs of observed covariates and the associated measurements of the time-dependent process constitutes an ill-posed nonlinear inverse problem. This project aims to develop nonparametric statistical methods for solving this inverse problem, with a focus on problems that feature the constraint of random design, i.e. where the observed covariates are i.i.d. copies of a random variable.
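To make the setup concrete, the following minimal sketch simulates such data: i.i.d. covariates (random design), a latent parameter produced by a covariate-to-parameter map, and noisy observations of the resulting ODE solution. Everything here is an illustrative assumption, not part of the project: the scalar decay ODE, the hypothetical map `g_true`, the uniform covariate distribution, and the Gaussian noise.

```python
import math
import random

def euler_solve(theta, y0=1.0, t_end=1.0, n_steps=100):
    """Forward model: explicit Euler integration of dy/dt = -theta * y."""
    y, dt = y0, t_end / n_steps
    for _ in range(n_steps):
        y += dt * (-theta * y)
    return y

def g_true(x):
    """Hypothetical covariate-to-parameter map (the unknown target of inference)."""
    return 1.0 + x * x

def simulate(n, noise=0.05, seed=0):
    """Draw i.i.d. covariates and pair each with a noisy end-point observation.

    The parameter theta = g_true(x) is latent: only (x, observation) is returned,
    mirroring the random-design constraint described above.
    """
    rng = random.Random(seed)
    data = []
    for _ in range(n):
        x = rng.uniform(0.0, 1.0)           # observed covariate
        theta = g_true(x)                   # latent parameter, never observed
        obs = euler_solve(theta) + rng.gauss(0.0, noise)
        data.append((x, obs))
    return data
```

The estimation task would then be to recover `g_true` nonparametrically from the returned `(x, obs)` pairs alone.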
In the second funding period, we will focus on the problems of adaptivity and inference, and tackle the inverse problem from both the frequentist and Bayesian points of view. Adaptivity means that regularisation parameters, such as the penalty weight in the Tikhonov functional or the iteration number in gradient descent, are chosen in a data-driven way, without prior smoothness assumptions on the covariate-to-parameter map. For the frequentist part, we will study methods that perform adaptive regularisation by early stopping, together with the construction of honest and optimal confidence sets for early-stopping estimators. For the Bayesian part, we will study adaptive methods that work under the random design constraint, as well as the posterior concentration properties of these methods. We will also investigate Bayesian credible sets and their frequentist coverage properties. We aim to test the developed methods on problems from pharmacology.
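As a toy illustration of adaptive regularisation by early stopping, the sketch below applies a discrepancy-principle stopping rule to gradient descent on a linear least-squares problem: iteration stops as soon as the residual norm falls below a multiple of the noise level, so the iteration number is chosen by the data rather than fixed in advance. This is a generic textbook-style sketch, not the project's method; the stopping constant `tau`, the step size choice, and all names are illustrative assumptions.

```python
import numpy as np

def early_stopped_gd(A, y, delta, tau=1.1, step=None, max_iter=10_000):
    """Gradient descent for min_f ||A f - y||^2, stopped by the discrepancy
    principle: iterate until the residual norm drops below tau * delta,
    where delta is the (assumed known) noise level."""
    if step is None:
        step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1 / ||A||^2 keeps the iteration stable
    f = np.zeros(A.shape[1])
    for k in range(max_iter):
        r = A @ f - y
        if np.linalg.norm(r) <= tau * delta:      # data-driven stopping rule
            return f, k
        f -= step * (A.T @ r)                     # gradient step for the least-squares loss
    return f, max_iter
```

On an ill-conditioned problem, stopping early regularises: components aligned with large singular values are fitted first, while slowly converging small-singular-value components, which would otherwise amplify the noise, are left close to zero.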
Preprints
Lie, H. C. (2024). Bayesian inference of covariate-parameter relationships for population modelling. arXiv: 2407.09640.
Cvetkovic, N., Lie, H. C., Bansal, H. and Veroy-Grepl, K. (2023). Choosing observation operators to mitigate model error in Bayesian inverse problems. arXiv: 2301.04863.
Rastogi, A. and Mathé, P. (2020). Inverse learning in Hilbert scales. arXiv: 2002.10208.
Celisse, A. and Wahl, M. (2020). Analyzing the discrepancy principle for kernelized spectral filter learning algorithms. arXiv: 2004.08436.
Duval, C. and Mariucci, E. (2020). Non-asymptotic control of the cumulative distribution function of Lévy processes. arXiv: 2003.09281.
Blanchard, G., Mathé, P. and Mücke, N. (2019). Lepskii principle in supervised learning. arXiv: 1905.10764.
Wahl, M. (2019). A note on the prediction error of principal component regression. arXiv: 1811.02998.
Carpentier, A., Duval, C. and Mariucci, E. (2019). Total variation distance for discretely observed Lévy processes: a Gaussian approximation of the small jumps. arXiv: 1810.02998.
Duval, C. and Mariucci, E. (2019). Compound Poisson approximation to estimate the Lévy density. arXiv: 1702.08787.
Jirak, M. and Wahl, M. (2018). Perturbation bounds for eigenspaces under a relative gap condition. arXiv: 1803.03868.
Jirak, M. and Wahl, M. (2018). Relative perturbation bounds with applications to empirical covariance operators. arXiv: 1802.02869.
Publications
Cvetkovic, N., Lie, H. C., Bansal, H. and Veroy-Grepl, K. (2024). Choosing observation operators to mitigate model error in Bayesian inverse problems. SIAM/ASA Journal on Uncertainty Quantification 12(3): 723-758. arXiv: 2301.04863, doi: 10.1137/23M1602140.
Stankewitz, B. (2024). Early stopping for L2-boosting in high-dimensional linear models. Annals of Statistics 52(2): 491-518. arXiv: 2210.07850.
Lie, H. C., Rudolf, D., Sprungk, B. and Sullivan, T. J. (2023). Dimension-independent Markov chain Monte Carlo on the sphere. Scandinavian Journal of Statistics 50(4): 1818-1858. arXiv: 2112.12185.
Stankewitz, B., Mücke, N. and Rosasco, L. (2023). From inexact optimization to learning via gradient concentration. Computational Optimization and Applications 84: 265-294. arXiv: 2106.05397.
Lie, H. C., Stahn, M. and Sullivan, T. J. (2022). Randomised one-step time integration methods for deterministic operator differential equations. Calcolo 59, article 13. arXiv: 2103.16506, doi: 10.1007/s10092-022-00457-6.
Hartung, N., Wahl, M., Rastogi, A. and Huisinga, W. (2021). Nonparametric goodness-of-fit tests for parametric covariate models in pharmacometric analyses. CPT: Pharmacometrics & Systems Pharmacology 10: 564-576. arXiv: 2011.07539.
Rastogi, A. (2020). Tikhonov regularization with oversmoothing penalty for nonlinear statistical inverse problems. Communications on Pure & Applied Analysis 19(8): 4111-4126. arXiv: 2002.01303.
Rastogi, A., Blanchard, G. and Mathé, P. (2020). Convergence analysis of Tikhonov regularization for non-linear statistical inverse learning problems. Electronic Journal of Statistics 14(2): 2798-2841. arXiv: 1902.05404v2.
Milbradt, C. and Wahl, M. (2020). High-probability bounds for the reconstruction error of PCA. Statistics & Probability Letters 161. arXiv: 1909.10787.
Reiß, M. and Wahl, M. (2020). Non-asymptotic upper bounds for the reconstruction error of PCA. Annals of Statistics 48(2): 1098-1123. arXiv: 1609.03779.
Rastogi, A. (2019). Tikhonov regularization with oversmoothing penalty for linear statistical inverse learning problems. AIP Conference Proceedings 2183(1): 110004.
Gugushvili, S., Mariucci, E. and van der Meulen, F. (2019). Decompounding discrete distributions: a non-parametric Bayesian approach. To appear in Scandinavian Journal of Statistics. arXiv: 1903.11142.
Blanchard, G., Neuvial, P. and Roquain, E. (2019). Post hoc inference via joint family-wise error rate control. To appear in Annals of Statistics. arXiv: 1703.02307.
Blanchard, G., Hoffmann, M. and Reiß, M. (2018). Early stopping for statistical inverse problems via truncated SVD estimation. Electronic Journal of Statistics 12(2): 3204-3231. arXiv: 1710.07278, doi: 10.1214/18-EJS1482.
Blanchard, G., Hoffmann, M. and Reiß, M. (2018). Optimal adaptation for early stopping in statistical inverse problems. SIAM/ASA Journal on Uncertainty Quantification 6(3): 1043-1075. arXiv: 1606.07702, doi: 10.1137/17M1154096.
Bachoc, F., Blanchard, G. and Neuvial, P. (2018). On the post selection inference constant under restricted isometry properties. Electronic Journal of Statistics 12(2): 3736-3757. doi: 10.1214/18-EJS1490.