Emilie Fons, Isabel L. McCoy, Tom Beucler, David Neubauer, Ulrike Lohmann
Biomass burning aerosols (BBAs) from Southern Africa seasonally overlie the semi-permanent South-East Atlantic (SEA) stratocumulus deck, impacting the region's energy budget through complex aerosol-cloud-radiation-meteorology interactions. Climate model intercomparison initiatives, like the Aerosol Comparisons between Observations and Models (AeroCom), have highlighted the large inter-model variability in BBA radiative effects, especially over the SEA, due to the parameterization of emissions and smoke properties. Observational constraints are needed to reduce these uncertainties, but correlative observational studies are typically affected by confounding meteorological influences. We propose a physically informed statistical approach, based on causal graphs applied to satellite observations, to disentangle BBA influences on shortwave radiation over the SEA and to identify the main sources of statistical bias plaguing observational studies. We find that, during the fire season, BBAs cause a regional shortwave cooling of -2.5 W m$^{-2}$, which can be decomposed into equal contributions from three physical pathways: aerosol-radiation interactions (ARI), adjustments to ARI, and aerosol-cloud interactions (ACI). We also perform ablation experiments with graph variants to investigate the main sources of confounding -- like large-scale winds, humidity-biased retrievals, or spatial aggregation of data -- and show that they result in biased radiative effect estimates (between -50\% and +15\%). Once free of such biases, our derived causal estimates of smoke radiative effects can be used as observational constraints to improve climate models.
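To illustrate the statistical principle behind the graph-based approach, the sketch below contrasts a naive regression of shortwave flux on aerosol loading with a backdoor-adjusted regression that controls for meteorological confounders. All variable names and coefficients are synthetic stand-ins, not the paper's actual causal graph or data.

```python
# Minimal sketch of confounder adjustment for an aerosol effect estimate.
# Hypothetical variables: 'aod' (aerosol optical depth), 'sw' (shortwave
# flux), and two meteorological confounders.
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 10_000
subsidence = rng.normal(size=n)            # large-scale meteorology (confounder)
humidity = rng.normal(size=n)              # free-tropospheric humidity (confounder)
aod = 0.5 * subsidence + 0.3 * humidity + rng.normal(size=n)
sw = -2.0 * aod + 1.5 * subsidence - 1.0 * humidity + rng.normal(size=n)

df = pd.DataFrame({"aod": aod, "sw": sw,
                   "subsidence": subsidence, "humidity": humidity})

# Naive (confounded) estimate: regress sw on aod alone.
naive = LinearRegression().fit(df[["aod"]], df["sw"]).coef_[0]

# Backdoor-adjusted estimate: include the confounders identified by the graph.
adjusted = LinearRegression().fit(
    df[["aod", "subsidence", "humidity"]], df["sw"]).coef_[0]

print(f"naive: {naive:+.2f}, adjusted: {adjusted:+.2f} (true effect: -2.00)")
```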
Filippo Quarenghi, Ryan Cotsakis, Tom Beucler
The ``differentiability gap'' is a primary bottleneck in Earth system deep learning: because models cannot be trained directly on non-differentiable scientific metrics, they must rely on smooth proxies (e.g., MSE) and often fail to capture high-frequency details, yielding ``blurry'' outputs. We develop a framework that bridges this gap using two different methods to deal with non-differentiable functions: the first analytically relaxes the original non-differentiable function into a differentiable counterpart; the second learns differentiable surrogates for scientific functionals. We formulate the analytical relaxation by replacing discrete topological operations with temperature-controlled sigmoids and continuous logical operators. Conversely, our neural emulator uses Lipschitz-constrained convolutional neural networks to stabilize gradient learning via: (1) spectral normalization to bound the Lipschitz constant; and (2) hard architectural constraints enforcing geometric principles. We demonstrate this framework's utility by developing the Minkowski image loss, a differentiable equivalent of the integral-geometric measures of surface precipitation fields (area, perimeter, connected components). Validated on the EUMETNET OPERA dataset, our constrained neural surrogate achieves high emulation accuracy, completely eliminating the geometric violations observed in unconstrained baselines. However, applying these differentiable surrogates to a deterministic super-resolution task reveals a fundamental trade-off: while strict Lipschitz regularization ensures optimization stability, it inherently over-smooths gradient signals, restricting the recovery of highly localized convective textures. This work highlights the necessity of coupling such topological constraints with stochastic generative architectures to achieve full morphological realism.
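As a minimal sketch of the analytical-relaxation method, the code below replaces the hard exceedance threshold in two Minkowski functionals (area and perimeter) with a temperature-controlled sigmoid, making both measures differentiable; the threshold, temperature, and finite-difference perimeter estimator are illustrative choices rather than the paper's exact formulation.

```python
# Sketch: temperature-controlled sigmoid relaxation of two Minkowski
# functionals (area and perimeter) of a 2D precipitation field.
import torch

def soft_area_perimeter(field, threshold=1.0, temperature=0.1):
    # Soft exceedance indicator: sigmoid((x - t) / T) -> step function as T -> 0.
    soft_mask = torch.sigmoid((field - threshold) / temperature)
    area = soft_mask.sum()
    # Perimeter proxy: total variation of the soft mask (sum of |gradients|).
    dy = (soft_mask[1:, :] - soft_mask[:-1, :]).abs().sum()
    dx = (soft_mask[:, 1:] - soft_mask[:, :-1]).abs().sum()
    return area, dx + dy

field = torch.randn(64, 64, requires_grad=True)
area, perim = soft_area_perimeter(field)
(area + perim).backward()       # differentiable: gradients flow to the field
print(area.item(), perim.item(), field.grad.abs().mean().item())
```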
Milton Gomez, Marie McGraw, Saranya Ganesh S., Frederick Iat-Hin Tam, Ilia Azizi, Samuel Darmon, Monika Feldmann, Stella Bourdin, Louis Poulain--Auzéau, Suzana J. Camargo, Jonathan Lin, Dan Chavas, Chia-Ying Lee, Ritwik Gupta, Andrea Jenney, Tom Beucler
TCBench is a benchmark for evaluating global, short- to medium-range (1-5 day) forecasts of tropical cyclone (TC) track and intensity. To allow a fair and model-agnostic comparison, TCBench builds on the IBTrACS observational dataset and formulates TC forecasting as predicting the time evolution of an existing tropical system conditioned on its initial position and intensity. TCBench includes state-of-the-art dynamical (TIGGE) and neural weather models (AIFS, Pangu-Weather, FourCastNet v2, GenCast). When tracks are not readily available, baselines are derived consistently from model outputs using the TempestExtremes library. For evaluation, TCBench provides deterministic and probabilistic storm-following metrics. On 2023 test cases, neural weather models skillfully forecast TC tracks, while skillful intensity forecasts require additional steps such as post-processing. Designed for accessibility, TCBench helps AI practitioners tackle domain-relevant TC challenges and equips tropical meteorologists with data-driven tools and workflows to improve prediction and TC process understanding. By lowering barriers to reproducible, process-aware evaluation of extreme events, TCBench aims to democratize data-driven TC forecasting.
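A deterministic storm-following metric can be as simple as the great-circle distance between forecast and observed storm centers at matched lead times; the sketch below shows one such track-error computation with hypothetical positions (function and variable names are illustrative, not TCBench's API).

```python
# Sketch of a storm-following track-error metric: great-circle distance (km)
# between forecast and IBTrACS-observed TC positions at matched lead times.
import numpy as np

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between (lat, lon) points given in degrees."""
    r = 6371.0  # mean Earth radius, km
    p1, p2 = np.radians(lat1), np.radians(lat2)
    dphi = p2 - p1
    dlmb = np.radians(lon2 - lon1)
    a = np.sin(dphi / 2) ** 2 + np.cos(p1) * np.cos(p2) * np.sin(dlmb / 2) ** 2
    return 2 * r * np.arcsin(np.sqrt(a))

# Hypothetical forecast vs. observed positions at 0, 24, and 48 h lead time.
fcst = np.array([[14.0, 135.0], [15.2, 132.8], [16.5, 130.1]])
obs = np.array([[14.0, 135.0], [15.0, 133.0], [16.0, 130.5]])
track_error = haversine_km(fcst[:, 0], fcst[:, 1], obs[:, 0], obs[:, 1])
print(np.round(track_error, 1))  # km at each lead time
```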
Max Defez, Filippo Quarenghi, Mathieu Vrac, Stephan Mandt, Tom Beucler
Deep-learning video super-resolution has progressed rapidly, but climate applications typically super-resolve (increase resolution in) either space or time, and joint spatiotemporal models are often designed for a single pair of super-resolution (SR) factors (the spatial and temporal upscaling ratios between the low-resolution and high-resolution sequences), limiting transfer across spatial resolutions and temporal cadences (frame rates). We present a scale-adaptive framework that reuses the same architecture across factors by decomposing spatiotemporal SR into a deterministic, attention-based prediction of the conditional mean and a residual conditional diffusion model, with an optional mass-conservation transform (matching precipitation totals between inputs and outputs) to preserve aggregated amounts. Assuming that larger SR factors primarily increase underdetermination (and hence the required context and residual uncertainty) rather than changing the conditional-mean structure, scale adaptivity is achieved by retuning three factor-dependent hyperparameters before retraining: the diffusion noise schedule amplitude $\beta$ (larger for larger factors, to increase diversity); the temporal context length $L$ (set to maintain comparable attention horizons across cadences); and, optionally, the mass-conservation function $f$ (tapered to limit the amplification of extremes at large factors). Demonstrated on reanalysis precipitation over France (Comephore), the same architecture spans super-resolution factors from 1 to 25 in space and 1 to 6 in time, yielding a reusable architecture and tuning recipe for joint spatiotemporal super-resolution across scales.
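A minimal sketch of one possible mass-conservation transform: rescale each super-resolved block so its mean matches the coarse pixel it refines. The exact function $f$ used in the paper (including the tapering of extremes) may differ.

```python
# Sketch: enforce that each s-by-s high-resolution block averages to the
# coarse-resolution pixel it refines, so aggregated totals are preserved.
import numpy as np

def conserve_mass(hi_res, lo_res, s):
    """Rescale hi_res (H*s, W*s) so each s-by-s block averages to lo_res (H, W)."""
    H, W = lo_res.shape
    blocks = hi_res.reshape(H, s, W, s)
    block_means = blocks.mean(axis=(1, 3), keepdims=True)
    eps = 1e-8  # avoid division by zero in dry regions
    return (blocks * (lo_res[:, None, :, None] / (block_means + eps))
            ).reshape(H * s, W * s)

lo = np.random.rand(4, 4)       # coarse precipitation field
hi = np.random.rand(20, 20)     # raw super-resolved output, s = 5
hi_c = conserve_mass(hi, lo, s=5)
# Aggregated totals now match the low-resolution field:
assert np.allclose(hi_c.reshape(4, 5, 4, 5).mean(axis=(1, 3)), lo)
```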
Tom Beucler, Stephan Rasp, Michael Pritchard, Pierre Gentine
Artificial neural networks have the potential to emulate cloud processes with higher accuracy than the semi-empirical emulators currently used in climate models. However, neural-network models do not intrinsically conserve energy and mass, which is an obstacle to using them for long-term climate predictions. Here, we propose two methods to enforce linear conservation laws in neural-network emulators of physical models: constraining (1) the loss function or (2) the architecture of the network itself. Applied to the emulation of explicitly resolved cloud processes in a prototype multi-scale climate model, we show that architecture constraints can enforce conservation laws to satisfactory numerical precision, while all constraints help the neural network better generalize to conditions outside of its training set, such as global warming.
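A minimal sketch of the architecture-based method for a single linear constraint: predict all but one output freely and solve for the remaining output so the constraint holds exactly. The constraint weights and layer sizes below are illustrative.

```python
# Sketch of an architecture-level linear constraint: given outputs y that
# must satisfy sum(w * y) = b (e.g., column energy conservation), predict
# only n-1 outputs and compute the last one from the constraint.
import torch

class ConservingHead(torch.nn.Module):
    def __init__(self, n_in, n_out, w):
        super().__init__()
        self.free = torch.nn.Linear(n_in, n_out - 1)
        self.register_buffer("w", w)  # constraint weights, length n_out

    def forward(self, x, b):
        y_free = self.free(x)
        # Residual output chosen so the weighted sum equals b exactly.
        y_last = (b - (self.w[:-1] * y_free).sum(-1)) / self.w[-1]
        return torch.cat([y_free, y_last.unsqueeze(-1)], dim=-1)

head = ConservingHead(8, 4, w=torch.ones(4))
x, b = torch.randn(32, 8), torch.zeros(32)
y = head(x, b)
print(torch.allclose((y * torch.ones(4)).sum(-1), b, atol=1e-5))  # True
```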
Griffin Mooers, Jens Tuyls, Stephan Mandt, Michael Pritchard, Tom Beucler
While cloud-resolving models can explicitly simulate the details of small-scale storm formation and morphology, these details are often ignored by climate models for lack of computational resources. Here, we explore the potential of generative modeling to cheaply recreate small-scale storms by designing and implementing a Variational Autoencoder (VAE) that performs structural replication, dimensionality reduction, and clustering of high-resolution vertical velocity fields. Trained on $\sim 6\times10^{6}$ samples spanning the globe, the VAE successfully reconstructs the spatial structure of convection, performs unsupervised clustering of convective organization regimes, and identifies anomalous storm activity, confirming the potential of generative modeling to power stochastic parameterizations of convection in climate models.
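For orientation, a minimal VAE sketch in PyTorch is given below: an encoder producing a small latent distribution, a reparameterized sample, a decoder, and the ELBO loss. Layer sizes, latent width, and the flattened field shape are illustrative, far smaller than the paper's model.

```python
# Minimal VAE sketch for flattened 2D vertical-velocity snapshots.
import torch
import torch.nn.functional as F

class VAE(torch.nn.Module):
    def __init__(self, n_pix=30 * 128, n_latent=8):
        super().__init__()
        self.enc = torch.nn.Linear(n_pix, 2 * n_latent)   # -> (mu, log_var)
        self.dec = torch.nn.Linear(n_latent, n_pix)

    def forward(self, x):
        mu, log_var = self.enc(x).chunk(2, dim=-1)
        z = mu + torch.exp(0.5 * log_var) * torch.randn_like(mu)  # reparam.
        return self.dec(z), mu, log_var

def elbo_loss(x, x_hat, mu, log_var):
    recon = F.mse_loss(x_hat, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + log_var - mu.pow(2) - log_var.exp())
    return recon + kl

model = VAE()
x = torch.randn(64, 30 * 128)              # batch of flattened w fields
x_hat, mu, log_var = model(x)
loss = elbo_loss(x, x_hat, mu, log_var)
loss.backward()
```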
Jerry Lin, Mohamed Aziz Bhouri, Tom Beucler, Sungduk Yu, Michael Pritchard
Accurate and computationally viable representations of clouds and turbulence are a long-standing challenge for climate model development. Traditional parameterizations that crudely but efficiently approximate these processes are a leading source of uncertainty in long-term projected warming and precipitation patterns. Machine learning (ML)-based parameterizations have long been hailed as a promising alternative with the potential to yield higher accuracy at a fraction of the cost of more explicit simulations. However, these ML variants are often unpredictably unstable and inaccurate in \textit{coupled} testing (i.e., in a downstream hybrid simulation task where they dynamically interact with the large-scale climate model). These issues are exacerbated in out-of-distribution climates. Certain design decisions, such as ``climate-invariant'' feature transformations for moisture inputs, input vector expansion, and the incorporation of temporal history, have been shown to improve coupled performance, but they may be insufficient for coupled out-of-distribution generalization. If feature selection and transformations can inoculate hybrid physics-ML climate models against non-physical, out-of-distribution extrapolation in a changing climate, there is far greater potential for extrapolating from observational data; otherwise, training on multiple simulated climates becomes an inevitable necessity. While our results show generalization benefits from these design decisions, the obtained improvement does not obviate the need for multi-climate simulated training data.
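As an example of a ``climate-invariant'' moisture transformation, the sketch below converts specific humidity to relative humidity using the Bolton (1980) saturation vapor pressure approximation; the transform actually used in the paper may differ in detail.

```python
# Sketch of a "climate-invariant" moisture transform: map specific humidity
# to relative humidity, whose distribution shifts less under warming.
import numpy as np

def specific_to_relative_humidity(q, T, p):
    """q: specific humidity (kg/kg), T: temperature (K), p: pressure (Pa)."""
    T_c = T - 273.15
    e_s = 611.2 * np.exp(17.67 * T_c / (T_c + 243.5))  # sat. vapor pressure (Pa)
    q_s = 0.622 * e_s / (p - 0.378 * e_s)              # sat. specific humidity
    return q / q_s

# Example: a mid-tropospheric grid cell.
print(specific_to_relative_humidity(q=0.004, T=270.0, p=5e4))
```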
Francesco Zanetta, Daniele Nerini, Tom Beucler, Mark A. Liniger
Weather forecasting centers currently rely on statistical postprocessing methods to minimize forecast error. This improves skill but can lead to predictions that violate physical principles or disregard dependencies between variables, which can be problematic for downstream applications and for the trustworthiness of postprocessing models, especially when they are based on new machine learning approaches. Building on recent advances in physics-informed machine learning, we propose to achieve physical consistency in deep learning-based postprocessing models by integrating meteorological expertise in the form of analytic equations. Applied to the postprocessing of surface weather in Switzerland, we find that constraining a neural network to enforce thermodynamic state equations yields physically consistent predictions of temperature and humidity without compromising performance. Our approach is especially advantageous when data is scarce, and our findings suggest that incorporating domain expertise into postprocessing models makes it possible to optimize weather forecast information while satisfying application-specific requirements.
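A minimal sketch of the idea: let the network predict temperature and dewpoint, then diagnose relative humidity through the Magnus formula as a final analytic layer, so the three outputs are thermodynamically consistent by construction (constants and wiring below are illustrative, not the paper's exact configuration).

```python
# Sketch of embedding an analytic thermodynamic relation as a network layer.
import torch

A, B = 17.625, 243.04  # Magnus coefficients (T in deg C, over water)

def magnus_rh(t_c, td_c):
    """Relative humidity (0-1) consistent with temperature and dewpoint."""
    e = torch.exp(A * td_c / (B + td_c))      # vapor pressure (unnormalized)
    e_s = torch.exp(A * t_c / (B + t_c))      # saturation vapor pressure
    return e / e_s

class ConsistentPostprocessor(torch.nn.Module):
    def __init__(self, n_in):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Linear(n_in, 64), torch.nn.ReLU(), torch.nn.Linear(64, 2))

    def forward(self, x):
        t_c, td_c = self.net(x).unbind(-1)
        td_c = torch.minimum(td_c, t_c)       # dewpoint cannot exceed T
        return t_c, td_c, magnus_rh(t_c, td_c)

model = ConsistentPostprocessor(n_in=10)
t, td, rh = model(torch.randn(5, 10))
print(rh)  # guaranteed in (0, 1] and consistent with (t, td)
```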
Gunnar Behrens, Tom Beucler, Fernando Iglesias-Suarez, Sungduk Yu, Pierre Gentine, Michael Pritchard, Mierk Schwabe, Veronika Eyring
Deep learning is a powerful tool to represent subgrid processes in climate models, but many application cases have so far used idealized settings and deterministic approaches. Here, we develop stochastic parameterizations with calibrated uncertainty quantification to learn subgrid convective and turbulent processes and surface radiative fluxes of a superparameterization (SP) embedded in an Earth System Model (ESM). We explore three methods to construct stochastic parameterizations: 1) a single Deep Neural Network (DNN) with Monte Carlo Dropout; 2) a multi-member parameterization; and 3) a Variational Encoder Decoder with latent space perturbation. We show that the multi-member (MM) parameterization improves the representation of convective processes, especially in the planetary boundary layer, compared to individual DNNs. The respective uncertainty quantification illustrates that methods 2) and 3) are advantageous compared to a dropout-based DNN parameterization regarding the spread of convective processes. Hybrid simulations with our best-performing MM parameterizations remain challenging and crash within the first few days. Therefore, we develop a pragmatic partial coupling strategy relying on the SP for condensate emulation. Partial coupling reduces the computational efficiency of hybrid Earth-like simulations but enables model stability over 5 months with our MM parameterizations. However, our hybrid simulations exhibit biases in thermodynamic fields and differences in precipitation patterns. Nevertheless, the MM parameterizations improve the reproduction of tropical extreme precipitation compared to a traditional convection parameterization. Despite these challenges, our results indicate the potential of a new generation of MM machine learning parameterizations that leverage uncertainty quantification to better represent the stochasticity of subgrid effects.
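A minimal sketch of method 1): keep dropout active at inference and sample the network repeatedly to obtain a predictive mean and spread. The architecture, dropout rate, and sample count below are illustrative.

```python
# Sketch of Monte Carlo Dropout for uncertainty quantification.
import torch

net = torch.nn.Sequential(
    torch.nn.Linear(64, 128), torch.nn.ReLU(),
    torch.nn.Dropout(p=0.1),
    torch.nn.Linear(128, 26),    # e.g., a vertical profile of heating rates
)

x = torch.randn(8, 64)
net.train()                      # keep dropout stochastic at inference time
with torch.no_grad():
    samples = torch.stack([net(x) for _ in range(100)])
mean, spread = samples.mean(0), samples.std(0)
print(mean.shape, spread.shape)  # calibrating the spread needs further tuning
```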
Arthur Grundner, Tom Beucler, Pierre Gentine, Veronika Eyring
A promising method for improving the representation of clouds in climate models, and hence climate projections, is to develop machine learning-based parameterizations using output from global storm-resolving models. While neural networks can achieve state-of-the-art performance within their training distribution, they can make unreliable predictions outside of it. Additionally, they often require post-hoc tools for interpretation. To avoid these limitations, we combine symbolic regression, sequential feature selection, and physical constraints in a hierarchical modeling framework. This framework allows us to discover new equations diagnosing cloud cover from coarse-grained variables of global storm-resolving model simulations. These analytical equations are interpretable by construction and easily transferable to other grids or climate models. Our best equation balances performance and complexity, achieving a performance comparable to that of neural networks ($R^2=0.94$) while remaining simple (with only 11 trainable parameters). It reproduces cloud cover distributions more accurately than the Xu-Randall scheme across all cloud regimes (Hellinger distances $<0.09$), and matches neural networks in condensate-rich regimes. When applied and fine-tuned to the ERA5 reanalysis, the equation exhibits superior transferability to new data compared to all other optimal cloud cover schemes. Our findings demonstrate the effectiveness of symbolic regression in discovering interpretable, physically-consistent, and nonlinear equations to parameterize cloud cover.
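A hedged sketch of the symbolic-regression step, using the open-source PySR library on synthetic stand-in data; the operator set, settings, and variables are illustrative, not the paper's exact setup.

```python
# Sketch: search for compact analytic expressions mapping coarse-grained
# variables to cloud cover with PySR (symbolic regression).
import numpy as np
from pysr import PySRRegressor

X = np.random.rand(1000, 3)        # e.g., RH, temperature, cloud condensate
y = np.clip(X[:, 0] ** 2 + 0.1 * X[:, 2], 0, 1)   # synthetic stand-in target

model = PySRRegressor(
    niterations=40,
    binary_operators=["+", "-", "*", "/"],
    unary_operators=["exp", "square"],
    maxsize=20,                    # cap equation complexity
)
model.fit(X, y)
print(model.get_best())            # best equation on the accuracy/size front
```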
Arthur Grundner, Tom Beucler, Julien Savre, Axel Lauer, Manuel Schlund, Veronika Eyring
Cloud-related parameterizations remain a leading source of uncertainty in climate projections. Although machine learning holds promise for Earth system models (ESMs), many data-driven parameterizations lack interpretability, physical consistency, and smooth integration into ESMs. Here, a two-step method is presented to improve a climate model with data-driven parameterizations. First, we incorporate a physically consistent cloud cover parameterization -- derived from storm-resolving simulations via symbolic regression, preserving interpretability while enhancing accuracy -- into the ICON global atmospheric model. Second, we apply the gradient-free Nelder-Mead optimizer to automatically recalibrate the hybrid model against Earth observations, tuning in nested stages (2-, 7-, 30- and 365-day runs) to ensure stability and tractability. The tuned hybrid model substantially reduces long-standing biases in cloud cover -- particularly over the Southern Ocean (by 75%) and subtropical stratocumulus regions (by 44%) -- and remains robust under +4K surface warming. These results demonstrate that interpretable machine-learned parameterizations, paired with practical tuning, can efficiently and transparently strengthen ESM fidelity.
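A minimal sketch of the recalibration step using SciPy's Nelder-Mead implementation, with a placeholder cost function standing in for short ICON runs scored against observations.

```python
# Sketch of gradient-free recalibration with Nelder-Mead.
import numpy as np
from scipy.optimize import minimize

def cost(params):
    """Placeholder: run the (hybrid) model with `params`, score vs. obs."""
    target = np.array([1.0, 0.5, 2.0])     # stand-in for observed targets
    return float(np.sum((params - target) ** 2))

result = minimize(cost, x0=np.array([0.8, 0.8, 1.5]), method="Nelder-Mead",
                  options={"xatol": 1e-3, "fatol": 1e-3, "maxiter": 200})
print(result.x, result.fun)   # tuned parameters and final cost
```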
Jerry Lin, Zeyuan Hu, Tom Beucler, Katherine Frields, Hannah Christensen, Walter Hannah, Helge Heuer, Peter Ukkonnen, Laura A. Mansfield, Tian Zheng, Liran Peng, Ritwik Gupta, Pierre Gentine, Yusef Al-Naher, Mingjiang Duan, Kyo Hattori, Weiliang Ji, Chunhan Li, Kippei Matsuda, Naoki Murakami, Shlomo Ron, Marec Serlin, Hongjian Song, Yuma Tanabe, Daisuke Yamamoto, Jianyao Zhou, Mike Pritchard
Subgrid machine-learning (ML) parameterizations have the potential to introduce a new generation of climate models that incorporate the effects of higher-resolution physics without incurring the prohibitive computational cost associated with more explicit physics-based simulations. However, important issues, ranging from online instability to inconsistent online performance, have limited their operational use for long-term climate projections. To more rapidly drive progress in solving these issues, domain scientists and machine learning researchers opened up the offline aspect of this problem to the broader machine learning and data science community with the release of ClimSim, a NeurIPS Datasets and Benchmarks publication, and an associated Kaggle competition. This paper reports on the downstream results of the Kaggle competition by coupling emulators inspired by the winning teams' architectures to an interactive climate model (including full cloud microphysics, a regime historically prone to online instability) and systematically evaluating their online performance. Our results demonstrate that online stability in the low-resolution, real-geography setting is reproducible across multiple diverse architectures, which we consider a key milestone. All tested architectures exhibit strikingly similar offline and online biases, though their responses to architecture-agnostic design choices (e.g., expanding the list of input variables) can differ significantly. Multiple Kaggle-inspired architectures achieve state-of-the-art (SOTA) results on certain metrics such as zonal mean bias patterns and global RMSE, indicating that crowdsourcing the essence of the offline problem is one path to improving online performance in hybrid physics-AI climate simulation.
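One of the reported metrics, global RMSE, is typically computed with cosine-latitude area weighting so that polar grid cells do not dominate the score; a minimal sketch (with an illustrative grid) follows.

```python
# Sketch of area-weighted global RMSE on a regular lat-lon grid.
import numpy as np

def global_rmse(pred, truth, lat_deg):
    """pred, truth: (n_lat, n_lon) fields; lat_deg: (n_lat,) latitudes."""
    w = np.cos(np.radians(lat_deg))              # area weight per latitude row
    w2d = np.broadcast_to(w[:, None], pred.shape)
    mse = np.sum(w2d * (pred - truth) ** 2) / np.sum(w2d)
    return np.sqrt(mse)

lat = np.linspace(-90, 90, 96)
pred, truth = np.random.rand(96, 144), np.random.rand(96, 144)
print(global_rmse(pred, truth, lat))
```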
Sungduk Yu, Zeyuan Hu, Akshay Subramaniam, Walter Hannah, Liran Peng, Jerry Lin, Mohamed Aziz Bhouri, Ritwik Gupta, Björn Lütjens, Justus C. Will, Gunnar Behrens, Julius J. M. Busecke, Nora Loose, Charles I. Stern, Tom Beucler, Bryce Harrop, Helge Heuer, Benjamin R. Hillman, Andrea Jenney, Nana Liu, Alistair White, Tian Zheng, Zhiming Kuang, Fiaz Ahmed, Elizabeth Barnes, Noah D. Brenowitz, Christopher Bretherton, Veronika Eyring, Savannah Ferretti, Nicholas Lutsko, Pierre Gentine, Stephan Mandt, J. David Neelin, Rose Yu, Laure Zanna, Nathan Urban, Janni Yuval, Ryan Abernathey, Pierre Baldi, Wayne Chuang, Yu Huang, Fernando Iglesias-Suarez, Sanket Jantre, Po-Lun Ma, Sara Shamekh, Guang Zhang, Michael Pritchard
Modern climate projections lack adequate spatial and temporal resolution due to computational constraints, leading to inaccuracies in representing critical processes like thunderstorms that occur on the sub-resolution scale. Hybrid methods combining physics with machine learning (ML) offer faster, higher fidelity climate simulations by outsourcing compute-hungry, high-resolution simulations to ML emulators. However, these hybrid ML-physics simulations require domain-specific data and workflows that have been inaccessible to many ML experts. As an extension of the ClimSim dataset (Yu et al., 2024), we present ClimSim-Online, which also includes an end-to-end workflow for developing hybrid ML-physics simulators. The ClimSim dataset includes 5.7 billion pairs of multivariate input/output vectors, capturing the influence of high-resolution, high-fidelity physics on a host climate simulator's macro-scale state. The dataset is global and spans ten years at a high sampling frequency. We provide a cross-platform, containerized pipeline to integrate ML models into operational climate simulators for hybrid testing. We also implement various ML baselines, alongside a hybrid baseline simulator, to highlight the ML challenges of building stable, skillful emulators. The data (https://huggingface.co/datasets/LEAP/ClimSim_high-res) and code (https://leap-stc.github.io/ClimSim and https://github.com/leap-stc/climsim-online) are publicly released to support the development of hybrid ML-physics and high-fidelity climate simulations.
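To start exploring the data, the repository can be queried with the huggingface_hub client; the sketch below fetches only metadata, since the full dataset is very large. The file pattern is an assumption; consult the dataset card at the URL above for the actual layout.

```python
# Sketch: download only the repository metadata from the Hugging Face Hub.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="LEAP/ClimSim_high-res",
    repo_type="dataset",
    allow_patterns=["README*"],   # start small: metadata only (assumption)
)
print(local_dir)
```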
Fernando Iglesias-Suarez, Pierre Gentine, Breixo Solino-Fernandez, Tom Beucler, Michael Pritchard, Jakob Runge, Veronika Eyring
Climate models are essential to understand and project climate change, yet long-standing biases and uncertainties in their projections remain. This is largely associated with the representation of subgrid-scale processes, particularly clouds and convection. Deep learning can learn these subgrid-scale processes from computationally expensive storm-resolving models while retaining many features at a fraction of the computational cost. Yet, climate simulations with embedded neural network parameterizations are still challenging and depend strongly on the deep learning solution. This is likely associated with spurious non-physical correlations learned by the neural networks due to the complexity of the physical dynamical system. Here, we show that combining causality with deep learning helps remove spurious correlations and optimize the neural network algorithm. To this end, we apply a causal discovery method to unveil causal drivers in the set of input predictors of atmospheric subgrid-scale processes of a superparameterized climate model in which deep convection is explicitly resolved. The resulting causally-informed neural networks are coupled to the climate model, replacing the superparameterization and radiation scheme. We show that climate simulations with causally-informed neural network parameterizations retain many convection-related properties and accurately generate the climate of the original high-resolution climate model, while retaining generalization capabilities to unseen climates similar to those of the non-causal approach. The combination of causal discovery and deep learning is a new and promising approach that leads to stable and more trustworthy climate simulations and paves the way towards more physically-based causal deep learning approaches in other scientific disciplines as well.
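A hedged sketch of the causal-discovery step in the spirit of the paper, using the tigramite library's PCMCI with a partial-correlation test on synthetic data; variable names and settings are illustrative, and module paths vary across tigramite versions.

```python
# Sketch: discover lagged causal links among candidate NN input predictors.
import numpy as np
from tigramite import data_processing as pp
from tigramite.pcmci import PCMCI
from tigramite.independence_tests.parcorr import ParCorr

rng = np.random.default_rng(0)
data = rng.standard_normal((2000, 4))      # (time, variables) stand-in
data[1:, 3] += 0.6 * data[:-1, 0]          # plant a lagged causal link 0 -> 3

frame = pp.DataFrame(data, var_names=["q", "T", "u", "heating"])
pcmci = PCMCI(dataframe=frame, cond_ind_test=ParCorr())
results = pcmci.run_pcmci(tau_max=2, pc_alpha=0.05)
# p_matrix[i, j, tau]: significance of variable i -> j at lag tau; significant
# links define the causally-informed input set for each network output.
print((results["p_matrix"] < 0.01).sum(), "candidate causal links")
```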
Gunnar Behrens, Tom Beucler, Pierre Gentine, Fernando Iglesias-Suarez, Michael Pritchard, Veronika Eyring
Deep learning can accurately represent sub-grid-scale convective processes in climate models, learning from high-resolution simulations. However, deep learning methods usually lack interpretability due to their large internal dimensionality, resulting in reduced trustworthiness of these methods. Here, we use Variational Encoder Decoder structures (VED), a non-linear dimensionality reduction technique, to learn and understand convective processes in an aquaplanet superparameterized climate model simulation, in which deep convective processes are simulated explicitly. We show that, similar to previous deep learning studies based on feed-forward neural nets, the VED is capable of learning and accurately reproducing convective processes. In contrast to past work, we show this can be achieved by compressing the original information into only five latent nodes. As a result, the VED can be used to understand convective processes and delineate modes of convection through the exploration of its latent dimensions. A close investigation of the latent space enables the identification of different convective regimes: a) stable conditions are clearly distinguished from deep convection with low outgoing longwave radiation and strong precipitation; b) high optically thin cirrus-like clouds are separated from low optically thick cumulus clouds; and c) shallow convective processes are associated with large-scale moisture content and surface diabatic heating. Our results demonstrate that VEDs can accurately represent convective processes in climate models, while enabling interpretability and a better understanding of sub-grid-scale physical processes, paving the way to increasingly interpretable machine learning parameterizations with promising generative properties.
Griffin Mooers, Tom Beucler, Mike Pritchard, Stephan Mandt
Despite the importance of quantifying how the spatial patterns of extreme precipitation will change with warming, we lack tools to objectively analyze the storm-scale outputs of modern climate models. To address this gap, we develop an unsupervised machine learning framework to quantify how storm dynamics affect changes in precipitation extremes, without sacrificing spatial information. For the upper precipitation quantiles (above the 80th percentile), we find that the spatial patterns of extreme precipitation changes are dominated by spatial shifts in storm dynamical regimes rather than changes in how these storm regimes produce precipitation. Our study shows how unsupervised machine learning, paired with domain knowledge, may allow us to better understand the physics of the atmosphere and anticipate the changes associated with a warming world.
Tom Beucler, Tristan Abbott, Timothy Cronin, Michael Pritchard
Idealized convection-permitting simulations of radiative-convective equilibrium (RCE) have become a popular tool for understanding the physical processes leading to horizontal variability of tropical water vapor and rainfall. However, the applicability of idealized simulations to nature is still unclear, given that important processes are typically neglected, such as lateral vapor advection by extratropical intrusions or interactive ocean coupling. Here, we exploit spectral analysis to compactly summarize the multi-scale processes supporting convective aggregation. By applying this framework to high-resolution reanalysis data and satellite observations in addition to idealized simulations, we compare convective-aggregation processes across horizontal scales and data sets. The results affirm the validity of the RCE simulations as an analog to the real world. Column moist static energy tendencies share similar signs and scale-selectivity in convection-permitting models and observations: radiation increases variance at wavelengths above 1,000 km, while advection damps variance across wavelengths, and surface fluxes mostly reduce variance between 1,000 km and 10,000 km.
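The spectral budget boils down to a co-spectrum: a tendency grows variance at a given wavelength when it is in phase with the moist static energy anomaly at that wavelength. A 1D synthetic sketch:

```python
# Sketch: scale-wise variance growth of column moist static energy h from a
# given tendency (radiation, advection, or surface fluxes), via the
# co-spectrum d|h_k|^2/dt = 2 Re[conj(h_k) f_k]. 1D and synthetic for clarity.
import numpy as np

n, dx = 512, 50e3                  # 512 points, 50 km spacing (illustrative)
x = np.arange(n) * dx
h = np.cos(2 * np.pi * x / 2_000e3) + 0.1 * np.random.randn(n)  # MSE anomaly
tend = 0.5 * np.cos(2 * np.pi * x / 2_000e3)   # stand-in radiative tendency

h_k, f_k = np.fft.rfft(h), np.fft.rfft(tend)
wavelength = 1 / np.fft.rfftfreq(n, d=dx)[1:]          # meters; skip k = 0
# Positive co-spectrum: the tendency amplifies variance at that scale.
growth = 2 * np.real(np.conj(h_k) * f_k)[1:] / n**2
print(wavelength[np.argmax(growth)] / 1e3, "km: scale of strongest growth")
```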
Tom Beucler, Michael Pritchard, Stephan Rasp, Jordan Ott, Pierre Baldi, Pierre Gentine
Neural networks can emulate nonlinear physical systems with high accuracy, yet they may produce physically inconsistent results that violate fundamental constraints. Here, we introduce a systematic way of enforcing nonlinear analytic constraints in neural networks via constraints in the architecture or the loss function. Applied to convective processes for climate modeling, architectural constraints enforce conservation laws to within machine precision without degrading performance. Enforcing constraints also reduces errors in the subsets of the outputs most impacted by the constraints.
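A minimal sketch of the loss-function route for a nonlinear constraint $g(x, y) = 0$: add a weighted penalty on the constraint residual. The toy constraint below is illustrative, not the paper's conservation laws.

```python
# Sketch: penalize the residual of a nonlinear analytic constraint in the loss.
import torch

def constrained_loss(y_pred, y_true, x, alpha=0.5):
    mse = torch.mean((y_pred - y_true) ** 2)
    # Toy nonlinear constraint: the product of the first two outputs must
    # equal the first input feature, g = y0 * y1 - x0.
    g = y_pred[:, 0] * y_pred[:, 1] - x[:, 0]
    return (1 - alpha) * mse + alpha * torch.mean(g ** 2)

x = torch.randn(16, 4)
y_pred = torch.randn(16, 3, requires_grad=True)
loss = constrained_loss(y_pred, torch.randn(16, 3), x)
loss.backward()   # the penalty pushes predictions toward the constraint surface
```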
Frederick Iat-Hin Tam, Tom Beucler, James H. Ruppert
Cloud radiative feedback impacts early tropical cyclone (TC) intensification, but limitations in existing diagnostic frameworks make them unsuitable for studying asymmetric or transient radiative heating. We propose a linear Variational Encoder-Decoder (VED) to learn the hidden relationship between radiation and the surface intensification of realistic simulated TCs. Limiting the VED's inputs enables us to use its uncertainty to identify periods when radiation is more important for intensification. A close examination of the extracted 3D radiative structures suggests that longwave radiative forcing from inner-core deep convection and shallow clouds both contribute to intensification, with deep convection having the largest impact overall. We find that deep convection downwind of the shallow clouds is critical to the intensification of Typhoon Haiyan. Our work demonstrates that machine learning can discover thermodynamic-kinematic relationships without relying on axisymmetric or deterministic assumptions, paving the way towards the objective discovery of processes leading to TC intensification in realistic conditions.
Milton Gomez, Louis Poulain--Auzeau, Alexis Berne, Tom Beucler
Numerical Weather Prediction (NWP) models that integrate coupled physical equations forward in time are the traditional tools for simulating atmospheric processes and forecasting weather. With recent advancements in deep learning, AI-based Weather Prediction models that rely on neural network architectures -- Neural Weather Models (NeWMs) -- have emerged as competent medium-range NWP emulators, with performance that compares favorably to state-of-the-art NWP models. However, they are commonly trained on reanalyses with limited spatial resolution (e.g., 0.25° horizontal grid spacing), which smooths out key features of weather systems. For example, tropical cyclones (TCs) -- among the most impactful weather events due to their devastating effects on human activities -- are challenging to forecast, as extrema are smoothed in deterministic forecasts at 0.25° resolution. To address this, we use our best observational estimates of wind gusts and minimum sea level pressure to train a hierarchy of post-processing models on NeWM outputs. Applied to Pangu-Weather and FourCastNet v2, the post-processing models produce accurate and reliable forecasts of TC intensity up to five days ahead. Our post-processing algorithm is tracking-independent, preventing full misses, and we demonstrate that even linear models extract predictive information from NeWM outputs beyond what is encoded in their initial conditions. While spatial masking improves probabilistic forecast consistency, we do not find clear advantages of convolutional architectures over simple multilayer perceptrons for our NeWM post-processing purposes. Overall, by combining the efficiency of NeWMs with a lightweight, tracking-independent post-processing framework, our approach improves the accessibility of global TC intensity forecasts, marking a step toward their democratization.
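As an illustration of the lowest rung of such a hierarchy, the sketch below fits a linear model mapping hypothetical NeWM-derived features near the storm center to observed intensity; features and coefficients are synthetic stand-ins.

```python
# Sketch: linear post-processing of NeWM outputs for TC intensity.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 500
# Stand-ins for features extracted from NeWM fields near the storm center,
# e.g., minimum sea-level pressure, maximum 10-m wind, mid-level humidity.
X = rng.normal(size=(n, 3))
y = 35 + 8 * X[:, 1] - 5 * X[:, 0] + rng.normal(scale=2, size=n)  # max wind (kt)

model = LinearRegression().fit(X[:400], y[:400])
pred = model.predict(X[400:])
print(np.sqrt(np.mean((pred - y[400:]) ** 2)), "kt RMSE on held-out cases")
```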