Sungduk Yu, Zeyuan Hu, Akshay Subramaniam, Walter Hannah, Liran Peng, Jerry Lin, Mohamed Aziz Bhouri, Ritwik Gupta, Björn Lütjens, Justus C. Will, Gunnar Behrens, Julius J. M. Busecke, Nora Loose, Charles I. Stern, Tom Beucler, Bryce Harrop, Helge Heuer, Benjamin R. Hillman, Andrea Jenney, Nana Liu, Alistair White, Tian Zheng, Zhiming Kuang, Fiaz Ahmed, Elizabeth Barnes, Noah D. Brenowitz, Christopher Bretherton, Veronika Eyring, Savannah Ferretti, Nicholas Lutsko, Pierre Gentine, Stephan Mandt, J. David Neelin, Rose Yu, Laure Zanna, Nathan Urban, Janni Yuval, Ryan Abernathey, Pierre Baldi, Wayne Chuang, Yu Huang, Fernando Iglesias-Suarez, Sanket Jantre, Po-Lun Ma, Sara Shamekh, Guang Zhang, Michael Pritchard
Modern climate projections lack adequate spatial and temporal resolution due to computational constraints, leading to inaccuracies in representing critical processes like thunderstorms that occur on the sub-resolution scale. Hybrid methods combining physics with machine learning (ML) offer faster, higher fidelity climate simulations by outsourcing compute-hungry, high-resolution simulations to ML emulators. However, these hybrid ML-physics simulations require domain-specific data and workflows that have been inaccessible to many ML experts. As an extension of the ClimSim dataset (Yu et al., 2024), we present ClimSim-Online, which also includes an end-to-end workflow for developing hybrid ML-physics simulators. The ClimSim dataset includes 5.7 billion pairs of multivariate input/output vectors, capturing the influence of high-resolution, high-fidelity physics on a host climate simulator's macro-scale state. The dataset is global and spans ten years at a high sampling frequency. We provide a cross-platform, containerized pipeline to integrate ML models into operational climate simulators for hybrid testing. We also implement various ML baselines, alongside a hybrid baseline simulator, to highlight the ML challenges of building stable, skillful emulators. The data (https://huggingface.co/datasets/LEAP/ClimSim_high-res) and code (https://leap-stc.github.io/ClimSim and https://github.com/leap-stc/climsim-online) are publicly released to support the development of hybrid ML-physics and high-fidelity climate simulations.
Zeyuan Hu, Akshay Subramaniam, Zhiming Kuang, Jerry Lin, Sungduk Yu, Walter M. Hannah, Noah D. Brenowitz, Josh Romero, Michael S. Pritchard
Modern climate projections often suffer from inadequate spatial and temporal resolution due to computational limitations, resulting in inaccurate representations of sub-grid processes. A promising technique to address this is the Multiscale Modeling Framework (MMF), which embeds a kilometer-resolution cloud-resolving model within each atmospheric column of a host climate model to replace traditional convection and cloud parameterizations. Machine learning (ML) offers a unique opportunity to make MMF more accessible by emulating the embedded cloud-resolving model and reducing its substantial computational cost. Although many studies have demonstrated proof-of-concept success in achieving stable hybrid simulations, it remains a challenge to achieve near operational-level success with real geography and comprehensive variable emulation that includes, for example, explicit cloud condensate coupling. In this study, we present a stable hybrid model capable of integrating for at least 5 years with near operational-level complexity, including coarse-grid geography, seasonality, explicit cloud condensate and wind predictions, and land coupling. Our model demonstrates skillful online performance, achieving a 5-year zonal mean tropospheric temperature bias within 2 K, a water vapor bias within 1 g/kg, and a precipitation RMSE of 0.96 mm/day. Key factors contributing to our online performance include an expressive U-Net architecture and physical thermodynamic constraints for microphysics. With microphysical constraints mitigating unrealistic cloud formation, our work is the first to demonstrate realistic multi-year cloud condensate climatology under the MMF framework. Despite these advances, online diagnostics reveal persistent biases in certain regions, highlighting the need for innovative strategies to further optimize online performance.
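The thermodynamic microphysics constraints mentioned in the abstract can be illustrated with a minimal sketch. The exact constraints used in the hybrid model are not given here; the non-negativity clipping and linear temperature-based liquid/ice partitioning below are illustrative assumptions, as are the function and parameter names.

```python
import numpy as np

# Minimal sketch of a thermodynamic microphysics constraint of the kind the
# abstract alludes to. The non-negativity clipping and linear temperature-based
# liquid/ice partitioning are illustrative assumptions, not the paper's exact
# formulation.
def constrain_condensate(q_liq, q_ice, temperature,
                         t_all_liquid=273.15, t_all_ice=253.15):
    """Clip negative condensate and repartition liquid vs. ice by temperature."""
    q_total = np.clip(q_liq, 0.0, None) + np.clip(q_ice, 0.0, None)
    # Linear ramp: all ice below t_all_ice, all liquid above t_all_liquid.
    liq_frac = np.clip((temperature - t_all_ice) / (t_all_liquid - t_all_ice),
                       0.0, 1.0)
    return liq_frac * q_total, (1.0 - liq_frac) * q_total
```

Constraints of this shape can be applied to an emulator's raw outputs before coupling back to the host model, which is one way to prevent unphysical (e.g., negative or supercooled-liquid-only) cloud states from accumulating online.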
Tao Ge, Jaideep Pathak, Akshay Subramaniam, Karthik Kashinath
Data-driven models, such as FourCastNet (FCN), have shown exemplary performance in high-resolution global weather forecasting. This performance, however, is based on supervision on mesh-gridded weather data without the utilization of raw climate observational data, the gold-standard ground truth. In this work, we develop a methodology to correct, remap, and fine-tune the uniform gridded forecasts of FCN so they can be directly compared against observational ground truth, which is sparse and non-uniform in space and time. This is akin to bias correction and post-processing of numerical weather prediction (NWP), a routine operation at meteorological and weather forecasting centers across the globe. The Adaptive Fourier Neural Operator (AFNO) architecture is used as the backbone to learn continuous representations of the atmosphere. The spatially and temporally non-uniform output is evaluated by the non-uniform discrete inverse Fourier transform (NUIDFT) given the output query locations. We call this network the Deep-Learning-Corrector-Remapper (DLCR). The improvement in DLCR's performance against the gold-standard ground truth over the baseline's performance shows its potential to correct, remap, and fine-tune mesh-gridded forecasts under the supervision of observations.
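The NUIDFT evaluation step can be sketched in one dimension: given a spectrum, evaluate the Fourier series by direct summation at arbitrary (non-uniform) query locations. The conventions below follow NumPy's FFT; DLCR's actual operator and normalization may differ.

```python
import numpy as np

# Minimal 1D sketch of a non-uniform discrete inverse Fourier transform
# (NUIDFT): evaluate a Fourier series at arbitrary query locations by direct
# summation. Conventions follow np.fft; DLCR's operator may differ.
def nuidft_1d(coeffs, query_points, period=2 * np.pi):
    """Evaluate the Fourier series with spectrum `coeffs` (np.fft.fft order)
    at arbitrary (non-uniform) locations in [0, period)."""
    n = len(coeffs)
    k = np.fft.fftfreq(n, d=1.0 / n)           # integer wavenumbers
    x = np.asarray(query_points)[:, None]      # shape (Q, 1)
    basis = np.exp(2j * np.pi * k[None, :] * x / period)
    return basis @ coeffs / n                  # matches np.fft.ifft scaling

# On a uniform grid this reduces to the ordinary inverse FFT.
signal = np.random.default_rng(0).standard_normal(16)
uniform_x = np.linspace(0, 2 * np.pi, 16, endpoint=False)
recovered = nuidft_1d(np.fft.fft(signal), uniform_x).real
```

Because the query locations enter only through the complex exponential basis, the same machinery evaluates the learned continuous representation at scattered observation sites, which is what enables direct comparison against sparse ground truth.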
Akshay Subramaniam, Dale Durran, David Pruitt, Nathaniel Cresswell-Clay, William Yik
Forecasting weather accurately and efficiently is a critical capability for adapting to climate change. Data-driven approaches to this problem have enjoyed much success recently, providing forecasts with accuracy comparable to physics-based numerical prediction models but at significantly reduced computational expense. However, these models typically do not incorporate any physics priors. In this work, we demonstrate improved skill of data-driven weather prediction approaches by incorporating physical constraints, specifically in the context of the DLWP model (Karlbauer et al., 2024). Near-hydrostatic balance, between the vertical pressure gradient and gravity, is one of the most fundamental and well-satisfied constraints on atmospheric motions. We impose this balance through both hard and soft constraints, and demonstrate that the soft constraint improves the RMSE of many forecast fields, particularly at lead times beyond 7-10 days. The positive influence of hydrostatic balance is also clearly evident in improving the physicality and strength of a 10-day forecast for Hurricane Irma. These results show that adding appropriate physical constraints can improve the skill and fidelity of data-driven weather models without imposing any significant additional memory capacity or scalability challenges.
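A soft constraint of the kind described above can be sketched as a penalty on the residual of hydrostatic balance in pressure coordinates, d(Phi)/dp = -R*T/p, added to the training loss. The variable choices, discretization, and weighting here are illustrative assumptions, not DLWP's exact formulation.

```python
import numpy as np

# Minimal sketch of a *soft* hydrostatic-balance constraint: penalize the
# residual of d(Phi)/dp = -R*T/p alongside the data-fit loss. Details here
# are illustrative assumptions, not the paper's exact formulation.
R_DRY = 287.0  # gas constant for dry air, J kg^-1 K^-1

def hydrostatic_residual(phi, temp, p_levels):
    """Residual of d(Phi)/dp + R*T/p (zero under exact hydrostatic balance).

    phi: geopotential (m^2 s^-2), temp: temperature (K), both shaped (..., L);
    p_levels: pressure levels (Pa), shape (L,).
    """
    dphi_dp = np.gradient(phi, p_levels, axis=-1)
    return dphi_dp + R_DRY * temp / p_levels

def soft_constraint_loss(phi, temp, p_levels, weight=1e-3):
    """Penalty term to be added to the usual data-fit loss during training."""
    res = hydrostatic_residual(phi, temp, p_levels)
    return weight * np.mean(res ** 2)
```

A hard constraint would instead rebuild one of the fields (e.g., geopotential) by vertically integrating the balance relation, guaranteeing zero residual at the cost of removing a degree of freedom from the network's output.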
Noah D. Brenowitz, Tao Ge, Akshay Subramaniam, Peter Manshausen, Aayush Gupta, David M. Hall, Morteza Mardani, Arash Vahdat, Karthik Kashinath, Michael S. Pritchard
Climate modeling is reaching unprecedented resolution, producing petabytes of data. AI climate model emulators offer a path to computationally cheap analysis, enabling new scientific insight and scenario planning. Recent advances show promise in faithfully emulating climate data. However, prevailing auto-regressive paradigms are difficult to train on climate time horizons due to drifts, instabilities, and component-coupling challenges. They are hard to scale to high resolution and require sifting through troves of output to identify rare extremes of interest. We present Climate in a Bottle (cBottle), a generative diffusion-based framework emulating global 5 km climate simulations and reanalysis on the HEALPix grid. cBottle samples directly from the full distribution of atmospheric states, avoiding auto-regressive rollout, and is the first to reach this 12.5M-pixel global resolution. It consists of two stages: a coarse-resolution generator conditioned on sea surface temperatures and solar position, followed by a patch-based 16x super-resolution stage. cBottle passes a battery of tests, including diurnal-to-seasonal variability, large-scale modes of variability, tropical cyclone statistics, and trends of climate change and weather extremes. It is a step toward a foundation model: bridging data modalities (reanalysis and simulation), enabling zero-shot bias correction, downscaling, and data infilling. It also enables new interactivity via guided diffusion. For example, we train a tropical cyclone (TC) classifier alongside the generator, guide towards TC states, and obtain physically credible samples. This opens the door to guidance methods for a wide array of user queries and new ways of interacting with climate data.
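The guided-diffusion interaction described above (steering samples toward TC-like states) follows the standard classifier-guidance recipe: the sampler's score is shifted by the gradient of a classifier's log-probability. The sketch below uses a finite-difference gradient and stand-in function names; it is not cBottle's networks or noise schedule.

```python
import numpy as np

# Minimal sketch of classifier guidance: shift the base score by the gradient
# of a classifier's log-probability so samples drift toward the desired class
# (e.g., TC-like states). Finite differences and names are illustrative
# stand-ins, not cBottle's implementation.
def guided_score(x, score_fn, classifier_log_prob, weight=1.0, eps=1e-4):
    """Base score plus `weight` times grad_x log p(y | x) (finite differences)."""
    grad = np.zeros_like(x)
    for i in range(x.size):
        dx = np.zeros_like(x)
        dx.flat[i] = eps
        grad.flat[i] = (classifier_log_prob(x + dx)
                        - classifier_log_prob(x - dx)) / (2 * eps)
    return score_fn(x) + weight * grad
```

In practice the classifier gradient comes from backpropagation rather than finite differences, and the guided score is used inside each denoising step of the sampler.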
Oliver Hennigh, Susheela Narasimhan, Mohammad Amin Nabian, Akshay Subramaniam, Kaustubh Tangsali, Max Rietmann, Jose del Aguila Ferrandis, Wonmin Byeon, Zhiwei Fang, Sanjay Choudhry
We present SimNet, an AI-driven multi-physics simulation framework, to accelerate simulations across a wide range of disciplines in science and engineering. Compared to traditional numerical solvers, SimNet addresses a wide range of use cases: coupled forward simulations without any training data, as well as inverse and data assimilation problems. SimNet offers fast turnaround time by enabling a parameterized system representation that solves for multiple configurations simultaneously, as opposed to traditional solvers that solve for one configuration at a time. SimNet is integrated with parameterized constructive solid geometry as well as STL modules to generate point clouds. Furthermore, it is customizable with APIs that enable user extensions to geometry, physics, and network architecture. It has advanced network architectures that are optimized for high-performance GPU computing, and offers scalable performance for multi-GPU and multi-node implementations with accelerated linear algebra as well as FP32, FP64, and TF32 computations. In this paper we review the neural network solver methodology, the SimNet architecture, and the various features that are needed for effective solution of the PDEs. We present real-world use cases that range from challenging forward multi-physics simulations with turbulence and complex 3D geometries, to industrial design optimization and inverse problems that are not addressed efficiently by traditional solvers. Extensive comparisons of SimNet results with open-source and commercial solvers show good correlation.
Akshay Subramaniam, Man Long Wong, Raunak D Borker, Sravya Nimmagadda, Sanjiva K Lele
Generative Adversarial Networks (GANs) have been widely used for generating photo-realistic images. A variant of GANs called super-resolution GAN (SRGAN) has already been used successfully for image super-resolution, where low-resolution images can be upsampled to a $4\times$ larger image that is perceptually more realistic. However, when such generative models are used for data describing physical processes, there are additional known constraints that models must satisfy, including governing equations and boundary conditions. In general, these constraints may not be obeyed by the generated data. In this work, we develop physics-based methods for generative enrichment of turbulence. We incorporate a physics-informed learning approach through a modification to the loss function that minimizes the residuals of the governing equations for the generated data. We analyze two trained physics-informed models: a supervised model based on convolutional neural networks (CNN) and a generative model based on SRGAN, the Turbulence Enrichment GAN (TEGAN), and show that they both outperform simple bicubic interpolation in turbulence enrichment. We also show that physics-informed learning significantly improves the model's ability to generate data that satisfies the physical governing equations. Finally, we show that the enriched data from TEGAN recovers statistical metrics of the flow field, including energy metrics as well as inter-scale energy dynamics and flow morphology.
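The loss modification described above can be sketched as a governing-equation residual penalty added to the data-fit term. Only the incompressible continuity equation (div u = 0) is shown; the full physics-informed loss also involves the momentum equations, and the discretization and weighting here are illustrative assumptions.

```python
import numpy as np

# Minimal sketch of the physics-informed loss modification: add the residual
# of a governing equation, evaluated on the generated fields, to the data-fit
# loss. Only the incompressible continuity equation (div u = 0) is used here;
# discretization and weighting are illustrative assumptions.
def divergence_residual(u, v, w, dx):
    """Central-difference divergence of a periodic 3D velocity field."""
    dudx = (np.roll(u, -1, axis=0) - np.roll(u, 1, axis=0)) / (2 * dx)
    dvdy = (np.roll(v, -1, axis=1) - np.roll(v, 1, axis=1)) / (2 * dx)
    dwdz = (np.roll(w, -1, axis=2) - np.roll(w, 1, axis=2)) / (2 * dx)
    return dudx + dvdy + dwdz

def physics_informed_loss(generated, target, dx, lam=0.1):
    """MSE data-fit term plus a continuity-residual penalty on the generated fields."""
    mse = sum(np.mean((g - t) ** 2) for g, t in zip(generated, target))
    residual = np.mean(divergence_residual(*generated, dx) ** 2)
    return mse + lam * residual
```

During training, the residual term penalizes generated velocity fields that violate mass conservation even when they match the target pixel-wise, which is the mechanism by which physics-informed learning improves physical consistency.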
Hang Song, Kristen V. Matsuno, Jacob R. West, Akshay Subramaniam, Aditya S. Ghate, Sanjiva K. Lele
A scalable algorithm for solving compact banded linear systems on distributed memory architectures is presented. The proposed method factorizes the original system into two levels of memory hierarchies, and solves it using parallel cyclic reduction on both distributed and shared memory. This method has a lower communication footprint across distributed memory partitions compared to conventional algorithms involving data transpose or re-partitioning. The algorithm developed in this work is generalized to cyclic compact banded systems with flexible data decompositions. For cyclic compact banded systems, the method is a direct solver with a deterministic operation and communication counts depending on the matrix size, its bandwidth, and the partition strategy. The implementation and runtime configuration details are discussed for performance optimization. Scalability is demonstrated on the linear solver as well as on a representative fluid mechanics application problem, in which the dominant computational cost is solving the cyclic tridiagonal linear systems of compact numerical schemes on a 3D periodic domain. The algorithm is particularly useful for solving the linear systems arising from the application of compact finite difference operators to a wide range of partial differential equation problems, such as but not limited to the numerical simulations of compressible turbulent flows, aeroacoustics, elastic-plastic wave propagation, and electromagnetics. It alleviates obstacles to their use on modern high performance computing hardware, where memory and computational power are distributed across nodes with multi-threaded processing units.
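The parallel cyclic reduction kernel at the heart of the method can be sketched serially: at each step, every equation eliminates its couplings to neighbors at distance `stride`, doubling the stride until all equations decouple. This NumPy version is for illustration only; the paper's implementation maps the same recurrence onto distributed and shared GPU memory.

```python
import numpy as np

# Minimal serial sketch of parallel cyclic reduction (PCR) for one tridiagonal
# system, the per-partition kernel behind the distributed solver described
# above. The paper's implementation targets distributed/shared GPU memory.
def pcr_tridiagonal(a, b, c, d):
    """Solve a[i]*x[i-1] + b[i]*x[i] + c[i]*x[i+1] = d[i] with a[0] = c[-1] = 0."""
    a, b, c, d = (np.asarray(v, dtype=float).copy() for v in (a, b, c, d))
    n = len(b)

    def shift(v, s, fill):
        # out[i] = v[i - s] where defined; `fill` plays the identity row.
        out = np.full(n, fill, dtype=float)
        if 0 < s < n:
            out[s:] = v[:-s]
        elif -n < s < 0:
            out[:s] = v[-s:]
        return out

    stride = 1
    while stride < n:
        # Eliminate couplings to i-stride and i+stride simultaneously.
        alpha = -a / shift(b, stride, 1.0)
        gamma = -c / shift(b, -stride, 1.0)
        b_next = b + alpha * shift(c, stride, 0.0) + gamma * shift(a, -stride, 0.0)
        d_next = d + alpha * shift(d, stride, 0.0) + gamma * shift(d, -stride, 0.0)
        a, c = alpha * shift(a, stride, 0.0), gamma * shift(c, -stride, 0.0)
        b, d = b_next, d_next
        stride *= 2
    return d / b  # fully decoupled: each equation is b[i]*x[i] = d[i]
```

Every equation is updated independently within a step, which is what makes the recurrence amenable to fine-grained parallelism, and the fixed log-depth loop is the source of the deterministic operation and communication counts mentioned above.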
Xiaopo Cheng, Akshay Subramaniam, Shixun Wu, Noah Brenowitz
HEALPix (Hierarchical Equal Area isoLatitude Pixelization) is a widely adopted spherical grid system in astrophysics, cosmology, and Earth sciences. Its equal-area, iso-latitude structure makes it particularly well-suited for large-scale data analysis on the sphere. However, implementing high-performance spherical harmonic transforms (SHTs) on HEALPix grids remains challenging due to irregular pixel geometry, latitude-dependent alignments, and the demands for high-resolution transforms at scale. In this work, we present cuHPX, an optimized CUDA library that provides functionality for spherical harmonic analysis and related utilities on HEALPix grids. Beyond delivering substantial performance improvements, cuHPX ensures high numerical accuracy, analytic gradients for integration with deep learning frameworks, out-of-core memory-efficient optimization, and flexible regridding between HEALPix, equiangular, and other common spherical grid formats. Through evaluation, we show that cuHPX achieves rapid spectral convergence and delivers over 20 times speedup compared to existing libraries, while maintaining numerical consistency. By combining accuracy, scalability, and differentiability, cuHPX enables a broad range of applications in climate science, astrophysics, and machine learning, effectively bridging optimized GPU kernels with scientific workflows.
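For readers unfamiliar with the grid, the basic HEALPix arithmetic is compact: 12 base pixels, each subdivided into Nside x Nside equal-area pixels arranged on iso-latitude rings. These are standard properties of the pixelization itself, not cuHPX's API.

```python
import math

# Standard HEALPix grid arithmetic (properties of the pixelization, not
# cuHPX's API): 12 base pixels, each subdivided into Nside x Nside equal-area
# pixels, arranged on 4*Nside - 1 iso-latitude rings.
def nside2npix(nside):
    return 12 * nside ** 2

def pixel_area_sr(nside):
    # Equal-area property: every pixel covers the same solid angle.
    return 4 * math.pi / nside2npix(nside)

def n_rings(nside):
    return 4 * nside - 1
```

The iso-latitude structure is what makes fast SHTs possible on HEALPix: Fourier transforms can be taken ring by ring before the latitude-dependent Legendre stage.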
Aayush Gupta, Akshay Subramaniam, Michael S. Pritchard, Karthik Kashinath, Sergey Frolov, Kelsey Lieberman, Christopher Miller, Nicholas Silverman, Noah D. Brenowitz
AI weather models now rival leading numerical weather prediction (NWP) systems in medium-range skill. However, almost all still rely on NWP data assimilation (DA) to provide initial conditions, tying them to expensive infrastructure and limiting the practical speed and accuracy gains of ML. More recently, ML-based DA systems have been proposed, which are often trained and evaluated end-to-end with a forecast model, making it difficult to assess the quality of their analysis fields. We introduce HealDA, a global ML-based DA system that maps a short window of satellite and conventional observations directly to a 1° atmospheric state on the HEALPix grid, using a smaller sensor suite than operational NWP and no background forecast at runtime. We treat HealDA strictly as a DA module: its analyses are used to initialize off-the-shelf ML forecast models without any fine-tuning of either. For a variety of off-the-shelf ML forecast models, including FourCastNet3 (FCN3), Aurora, and FengWu, HealDA-initialized forecasts lose less than one day of effective lead time when scored against ERA5. HealDA-initialized FCN3 ensembles similarly trail those of the ECMWF IFS ENS system by < 24 h. We find that forecast error growth in these models is unchanged by HealDA initialization, and the skill gap primarily arises from the larger initial error of the HealDA analysis. Spectral analysis reveals that this stems from overfitting to the large scales and upper-tropospheric fields. We also demonstrate that small changes in the verification setup can shift apparent skill by 12-24 h, underscoring the need for consistent scoring. Taken together, these results clarify the current performance of ML-based DA systems and show that a relatively simple, background-free network can already provide initial conditions that are usable by state-of-the-art ML forecast models with only modest loss in medium-range skill.
Guang Chao Wang, Kenny Gross, Akshay Subramaniam
Deploying big-data Machine Learning (ML) services in a cloud environment presents the cloud vendor with the challenge of sizing the cloud container configuration for any given customer use case. Oracle Labs has developed an automated framework that uses nested-loop Monte Carlo simulation to autonomously scale customer ML use cases of any size across the range of cloud CPU-GPU "Shapes" (configurations of CPUs and/or GPUs in cloud containers available to end customers). Moreover, the Oracle Labs and NVIDIA authors have collaborated on an ML benchmark study which analyzes the compute cost and GPU acceleration of any ML prognostic algorithm and assesses the reduction of compute cost in a cloud container comprising conventional CPUs and NVIDIA GPUs.
Morteza Mardani, Noah Brenowitz, Yair Cohen, Jaideep Pathak, Chieh-Yu Chen, Cheng-Chin Liu, Arash Vahdat, Mohammad Amin Nabian, Tao Ge, Akshay Subramaniam, Karthik Kashinath, Jan Kautz, Mike Pritchard
The state of the art for physical hazard prediction from weather and climate requires expensive km-scale numerical simulations driven by coarser-resolution global inputs. Here, a generative diffusion architecture is explored for downscaling such global inputs to km-scale, as a cost-effective machine learning alternative. The model is trained to predict 2 km data from a regional weather model over Taiwan, conditioned on a 25 km global reanalysis. To address the large resolution ratio, the different physics involved at different scales, and the prediction of channels beyond those in the input data, we employ a two-step approach where a UNet predicts the mean and a corrector diffusion (CorrDiff) model predicts the residual. CorrDiff exhibits encouraging skill in bulk MAE and CRPS scores. The predicted spectra and distributions from CorrDiff faithfully recover important power-law relationships in the target data. Case studies of coherent weather phenomena show that CorrDiff can help sharpen wind and temperature gradients that co-locate with intense rainfall in cold fronts, and can help intensify typhoons and synthesize rain-band structures. Calibration of model uncertainty remains challenging. The prospect of unifying methods like CorrDiff with coarser-resolution global weather models implies a potential for global-to-regional multi-scale machine learning simulation.
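The two-step decomposition can be sketched abstractly: a deterministic model predicts the conditional mean, and a generative model samples only the residual, so all ensemble spread comes from the residual stage. The toy stand-ins below (nearest-neighbor upsampling, Gaussian noise) are assumptions for illustration, not the paper's UNet or diffusion model.

```python
import numpy as np

# Minimal sketch of the two-step mean + residual idea. `toy_mean_model` and
# `toy_sample_residual` are illustrative stand-ins, not the paper's networks.
def two_step_predict(coarse_input, mean_model, sample_residual, n_samples=4):
    """Return an ensemble of fine-scale fields: mean + independently sampled residuals."""
    mu = mean_model(coarse_input)
    return np.stack([mu + sample_residual(coarse_input) for _ in range(n_samples)])

def toy_mean_model(x):
    # Nearest-neighbor 2x upsampling stands in for the learned mean.
    return np.repeat(np.repeat(x, 2, axis=0), 2, axis=1)

def toy_sample_residual(x, rng=np.random.default_rng(0)):
    # Small Gaussian noise stands in for a sampled diffusion residual.
    return 0.1 * rng.standard_normal((x.shape[0] * 2, x.shape[1] * 2))
```

Splitting the task this way lets the diffusion stage model only the small, roughly zero-mean residual distribution rather than the full dynamic range of the target fields, which eases training across the large resolution ratio.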
A. Subramaniam, M. L. Wong, S. K. Lele
We present an improved high-order weighted compact high resolution (WCHR) scheme that extends the idea of weighted compact nonlinear schemes (WCNS's), using nonlinear interpolations in conjunction with compact finite difference schemes for shock-capturing in compressible turbulent flows. The proposed scheme has better resolution properties than previous WCNS's. This is achieved by using a compact (or spatially implicit) form instead of the traditional fully explicit form for the nonlinear interpolation. Since compact interpolation schemes tend to have lower dispersion errors than explicit interpolation schemes, the proposed scheme can resolve more fine-scale features while still providing sufficiently localized dissipation to capture shocks and discontinuities robustly. Approximate dispersion relation characteristics of this scheme are analyzed to show its superior resolution properties compared to other WCNS's of similar orders of accuracy. Conservative and high-order accurate boundary schemes are also proposed for non-periodic problems. Further, a new conservative flux-difference form for compact finite difference schemes is derived that allows for the use of positivity-preserving limiters for improved robustness. Different test cases demonstrate the ability of this scheme to capture discontinuities in a robust and stable manner while also localizing the required numerical dissipation only to regions containing discontinuities and very high-wavenumber features, hence preserving smooth flow features better than WCNS's.