Uncertainty Quantification from a Statistics Perspective

February 18, 2026 - February 20, 2026

Organizers:

Snigdhansu Chatterjee
UMBC
Abba Gumel
University of Maryland
Eric Slud
University of Maryland

Uncertainty Quantification (UQ) is a broad field making rapid advances in characterizing levels of error in applied mathematical models in the physical, social, and biological sciences. In this workshop, we will explore how the statistical viewpoint and available statistical techniques arrive at meaningful, and sometimes verifiable, quantifications of variability, in the sense of variance and other descriptors of the probability distributions of outcome random variables in scientific models. The "statistics viewpoint" implies that the investigator has in mind probabilistic data-generating mechanisms that propagate through dynamical and transformation mechanisms to produce observable data.

One of the central issues to discuss is the scientific meaning of "variability" quantified within misspecified models. Especially when the models are high-dimensional or contain artificial elements such as priors, the probabilistic data-generating mechanisms may bear little resemblance to the mechanisms in Nature that produce the data; in what sense, then, are the resulting estimated variances valid? The "statistics perspective" at least suggests that simulation of the data-generating mechanism, combined with analytical methodology, could provide gold-standard variance quantification.
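
As a concrete illustration of that last point, the sketch below simulates a fully specified data-generating mechanism and uses the empirical spread of an estimator across simulated replicates as a reference ("gold-standard") variance, against which a single model-based variance estimate can be checked. The linear model, sample size, and parameter values are illustrative assumptions chosen only for this example, not anything prescribed by the workshop.

```python
# Sketch: Monte Carlo reference variance from a known data-generating
# mechanism, compared with a model-based variance estimate.  The linear
# DGP and all constants below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def simulate_data(n=200, beta=1.5, sigma=2.0):
    """One draw from the assumed data-generating mechanism."""
    x = rng.uniform(-1.0, 1.0, size=n)
    y = beta * x + rng.normal(scale=sigma, size=n)
    return x, y

def ols_slope_and_model_var(x, y):
    """OLS slope and its usual model-based variance estimate."""
    xc = x - x.mean()
    slope = np.dot(xc, y) / np.dot(xc, xc)
    resid = y - y.mean() - slope * xc
    s2 = np.dot(resid, resid) / (len(x) - 2)
    return slope, s2 / np.dot(xc, xc)

# Model-based variance from a single "observed" dataset.
x_obs, y_obs = simulate_data()
slope_hat, var_model = ols_slope_and_model_var(x_obs, y_obs)

# Monte Carlo reference: empirical variance of the estimator across
# many replications of the (known) data-generating mechanism.
reps = np.array([ols_slope_and_model_var(*simulate_data())[0]
                 for _ in range(2000)])
var_monte_carlo = reps.var(ddof=1)

print(f"model-based variance : {var_model:.5f}")
print(f"Monte Carlo variance : {var_monte_carlo:.5f}")
```

When the model is correctly specified, the two numbers agree up to simulation noise; under misspecification the gap between them is one way to see the validity question raised above.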

The Workshop will draw together sessions on the following topics:

  • examples from Survey Sampling, where variance estimation for design-based inference from surveys uses resampled or reweighted data replicates, and where, in current applications, the reweighting may incorporate machine-learning or network methodologies;
  • UQ in mechanistic dynamical-system models arising in mathematical epidemiology, incorporating interacting disease-transmission and human behavioral effects, where uncertainty enters through noisy data, stochastic dynamics, and the parameterization and calibration of the model;
  • bootstrap and other resampling methods in artificial-intelligence and machine-learning use cases, including ensemble learning, resampling for robust learning, and resampling in the context of generative AI (see the sketch after this list);
  • uncertainty quantification in Bayesian and variational Bayes methods, including applications to deep neural network models;
  • nascent uncertainty quantification methods for inference from networks, using asymptotic statistical properties of estimators and other methods to account for the complex dependency structure of network data.
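
As a minimal illustration of the resampling theme shared by the survey-replication and machine-learning items above, the sketch below computes a nonparametric bootstrap variance for a generic statistic. The toy data and the choice of a trimmed mean as the statistic are assumptions made purely for the example.

```python
# Sketch: nonparametric bootstrap variance for a generic statistic, the
# kind of with-replacement resampling underlying replicate-weight and
# ensemble-style UQ.  Data and statistic are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)

def trimmed_mean(x, prop=0.1):
    """Mean after dropping the smallest and largest `prop` fraction."""
    x = np.sort(x)
    k = int(len(x) * prop)
    return x[k:len(x) - k].mean()

def bootstrap_variance(data, statistic, n_boot=2000):
    """Variance of `statistic` over resamples drawn with replacement."""
    n = len(data)
    stats = np.array([statistic(data[rng.integers(0, n, size=n)])
                      for _ in range(n_boot)])
    return stats.var(ddof=1)

data = rng.standard_t(df=3, size=150)   # heavy-tailed toy sample
var_hat = bootstrap_variance(data, trimmed_mean)
print(f"trimmed mean        : {trimmed_mean(data):.4f}")
print(f"bootstrap variance  : {var_hat:.4f}")
print(f"bootstrap std error : {np.sqrt(var_hat):.4f}")
```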

