Zhongwei Zhang, Elias Krainski, Peng Zhong, Håvard Rue, Raphaël Huser
This paper describes the methodology used by team RedSea in the data competition organized for the EVA 2021 conference. We develop a novel two-part model that jointly describes the wildfire count data and burnt area data provided by the competition organizers, using covariates. Our proposed methodology relies on the integrated nested Laplace approximation combined with the stochastic partial differential equation (INLA-SPDE) approach. In the first part, a binary non-stationary spatio-temporal model describes the underlying process that determines whether or not a wildfire occurs at a specific time and location. In the second part, we consider a non-stationary model based on log-Gaussian Cox processes for positive wildfire count data, and a non-stationary log-Gaussian model for positive burnt area data. Dependence between the positive count data and the positive burnt area data is captured by a shared spatio-temporal random effect. Our two-part modeling approach performs well in terms of the prediction score criterion chosen by the competition organizers. Moreover, our model results show that surface pressure is the most influential driver of wildfire occurrence, whilst surface net solar radiation and surface pressure are the key drivers of large numbers of wildfires, and temperature and evaporation are the key drivers of large burnt areas.
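In generic notation (not the paper's exact specification), a two-part formulation of this kind separates the occurrence process from the positive magnitudes:

```latex
Z(s,t) \sim \mathrm{Bernoulli}\{p(s,t)\}, \qquad \operatorname{logit} p(s,t) = \eta_1(s,t),
\qquad
Y(s,t) \mid Z(s,t)=1 \sim \mathrm{Poisson}_{>0}\{\lambda(s,t)\}, \qquad \log \lambda(s,t) = \eta_2(s,t),
```

where the linear predictors \(\eta_1, \eta_2\) contain covariates and spatio-temporal random effects, part of which is shared between the positive-count and burnt-area components; the exact submodels used by the team are given in the paper.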
Finn Lindgren, Haakon Bakka, David Bolin, Elias Krainski, Håvard Rue
Gaussian random fields with Matérn covariance functions are popular models in spatial statistics and machine learning. In this work, we develop a spatio-temporal extension of the Gaussian Matérn fields formulated as solutions to a stochastic partial differential equation. The spatially stationary subset of the models has marginal spatial Matérn covariances, and the model also extends to Whittle-Matérn fields on curved manifolds and to more general non-stationary fields. In addition to the parameters of the spatial dependence (variance, smoothness, and practical correlation range), the model has parameters controlling the practical correlation range in time, the smoothness in time, and the type of non-separability of the spatio-temporal covariance. Through the separability parameter, the model also allows for separable covariance functions. We provide a sparse representation based on a finite element approximation, which is well suited for statistical inference and is implemented in the R-INLA software. The flexibility of the model is illustrated in an application to spatio-temporal modeling of global temperature data.
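For reference, the spatial building block is the standard Whittle-Matérn SPDE, whose stationary solutions on \(\mathbb{R}^d\) have Matérn covariance (a standard result, stated here only for context):

```latex
(\kappa^2 - \Delta)^{\alpha/2}\, \tau u(s) = \mathcal{W}(s), \qquad s \in \mathbb{R}^d,
```

where \(\mathcal{W}\) is Gaussian white noise, \(\Delta\) is the Laplacian, the Matérn smoothness is \(\nu = \alpha - d/2\), and the practical correlation range is approximately \(\sqrt{8\nu}/\kappa\). The spatio-temporal extension developed in the paper adds a temporal differential operator to this equation; its precise form is given there.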
Anna Freni Sterrantino, Denis Rustand, Janet van Niekerk, Elias Teixeira Krainski, Håvard Rue
In this work, we present a new approach for constructing models for correlation matrices with a user-defined graphical structure. The graphical structure makes correlation matrices interpretable and avoids the quadratic increase of parameters as a function of the dimension. We suggest an automatic approach to define a prior using a natural sequence of simpler models within the Penalized Complexity framework for the unknown parameters in these models. We illustrate this approach with three applications: a multivariate linear regression of four biomarkers, a multivariate disease mapping, and a multivariate longitudinal joint modelling. Each application underscores our method's intuitive appeal, signifying a substantial advancement toward a more cohesive and enlightening model that facilitates a meaningful interpretation of correlation matrices.
Haakon Bakka, Håvard Rue, Geir-Arne Fuglstad, Andrea Riebler, David Bolin, Elias Krainski, Daniel Simpson, Finn Lindgren
Coming up with Bayesian models for spatial data is easy, but performing inference with them can be challenging. Writing fast inference code for a complex spatial model with realistically-sized datasets from scratch is time-consuming, and if changes are made to the model, there is little guarantee that the code performs well. The key advantages of R-INLA are the ease with which complex models can be created and modified, without the need to write complex code, and the speed at which inference can be done even for spatial problems with hundreds of thousands of observations. R-INLA handles latent Gaussian models, where fixed effects and structured and unstructured Gaussian random effects are combined linearly in a linear predictor, and the elements of the linear predictor are observed through one or more likelihoods. The structured random effects can be both standard areal models, such as the Besag and the BYM models, and geostatistical models from a subset of the Matérn Gaussian random fields. In this review, we discuss the large success of spatial modelling with R-INLA and the types of spatial models that can be fitted, we give an overview of recent developments for areal models, and we give an overview of the stochastic partial differential equation (SPDE) approach and some of the ways it can be extended beyond the assumptions of isotropy and separability. In particular, we describe how slight changes to the SPDE approach lead to straightforward approaches for non-stationary spatial models and non-separable space-time models.
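As context, the latent Gaussian model class handled by R-INLA has a generic three-stage structure (standard formulation, with notation introduced here for illustration):

```latex
y_i \mid \eta_i, \theta \sim \pi(y_i \mid \eta_i, \theta), \qquad
\eta_i = \alpha + \sum_{j} \beta_j z_{ij} + \sum_{k} f^{(k)}(u_{ik}),
```

where the intercept \(\alpha\), the fixed effects \(\beta\), and the structured/unstructured random effects \(f^{(k)}\) are jointly Gaussian given the hyperparameters \(\theta\), and each observation \(y_i\) is linked to the latent field only through its linear predictor \(\eta_i\).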
Esmail Abdul Fattah, Elias Krainski, Håvard Rue
Bayesian inference often relies on Markov chain Monte Carlo (MCMC) methods, particularly for models with non-Gaussian data. When dealing with complex hierarchical models, the MCMC approach can be computationally demanding in workflows that require repeated model fitting, or when working with high-dimensional models on limited hardware resources. Integrated Nested Laplace Approximations (INLA) is a deterministic alternative for models with non-Gaussian data that belong to the class of latent Gaussian models (LGMs), yielding accurate approximations to posterior marginals in many applied settings. The INLA method is implemented in C as a standalone program, inla, which is widely used in R through the INLA package. This paper introduces PyINLA, a dedicated Python package that provides a Pythonic interface directly to the inla program. PyINLA thereby enables specifying LGMs, running INLA-based inference, and accessing posterior summaries directly from Python, while leveraging the established INLA implementation. We describe the package design and illustrate its use on representative models, including generalized linear mixed models, time series forecasting, disease mapping, and geostatistical prediction, demonstrating how deterministic Bayesian inference can be performed in Python using INLA in a way that integrates naturally with common scientific computing workflows.
Janet van Niekerk, Elias Krainski, Denis Rustand, Håvard Rue
Integrated Nested Laplace Approximations (INLA) has been a successful approximate Bayesian inference framework since its proposal by Rue et al. (2009). Its computational efficiency and accuracy, compared with sampling-based methods for Bayesian inference such as MCMC, are among the contributors to its success. Ongoing research in the INLA methodology, and its implementation in the R package R-INLA, ensures continued relevance for practitioners and improved performance and applicability of INLA. The era of big data and some recent research developments present an opportunity to reformulate some aspects of the classic INLA formulation to achieve even faster inference, improved numerical stability, and scalability. The improvement is especially noticeable for data-rich models. We demonstrate the efficiency gains with various examples of data-rich models, such as Cox's proportional hazards model, an item-response theory model, a spatial model including prediction, and a 3-dimensional model for fMRI data.
Lisa Gaedke-Merzhäuser, Elias Krainski, Radim Janalik, Håvard Rue, Olaf Schenk
Bayesian inference tasks continue to pose a computational challenge. This especially holds for spatio-temporal modeling, where high-dimensional latent parameter spaces are ubiquitous. The methodology of integrated nested Laplace approximations (INLA) provides a framework for performing Bayesian inference applicable to a large subclass of additive Bayesian hierarchical models. In combination with the stochastic partial differential equations (SPDE) approach, it gives rise to an efficient method for spatio-temporal modeling. In this work we build on the INLA-SPDE approach by putting forward a performant distributed-memory variant, INLA-DIST, for large-scale applications. To perform the arising computational kernel operations, consisting of Cholesky factorizations, solving linear systems, and selected matrix inversions, we present two numerical solver options: a sparse CPU-based library and a novel blocked GPU-accelerated approach. We leverage the recurring nonzero block structure in the arising precision (inverse covariance) matrices, which allows us to employ dense subroutines within a sparse setting. Both versions of INLA-DIST are highly scalable and capable of performing inference on models with millions of latent parameters. We demonstrate their accuracy and performance on synthetic as well as real-world climate dataset applications.
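The selected-inversion kernel mentioned above can be sketched with the classical Takahashi recursions: given the Cholesky factor \(L\) of the precision matrix \(Q = LL^{\mathsf{T}}\), the required entries of \(\Sigma = Q^{-1}\) satisfy

```latex
\Sigma_{ij} = \frac{\delta_{ij}}{L_{ii}^2} - \frac{1}{L_{ii}} \sum_{k=i+1}^{n} L_{ki}\, \Sigma_{kj},
\qquad j \ge i, \quad i = n, \dots, 1,
```

so only the entries of \(\Sigma\) matching the sparsity pattern of \(L\) need to be computed. This is a standard identity given for context; the blocked GPU-accelerated variant proposed in the paper organizes such computations over the recurring nonzero block structure of \(Q\).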
Esmail Abdul Fattah, Elias Krainski, Janet van Niekerk, Håvard Rue
This paper aims to extend the Besag model, a widely used Bayesian spatial model in disease mapping, to a non-stationary spatial model for irregular lattice-type data. The goal is to improve the model's ability to capture complex spatial dependence patterns and to increase interpretability. The proposed model uses multiple precision parameters, accounting for different intensities of spatial dependence in different sub-regions. We derive a joint penalized complexity prior for the flexible local precision parameters to prevent overfitting and ensure contraction to the stationary model at a user-defined rate. The proposed methodology can serve as a basis for the development of various other non-stationary effects over other domains, such as time. An accompanying R package, 'fbesag', equips the reader with the necessary tools for immediate use and application. We illustrate the novelty of the proposal by modeling the risk of dengue in Brazil, where the stationary spatial assumption fails and interesting risk profiles are estimated when accounting for spatial non-stationarity.
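For context, the stationary Besag model with a single precision parameter \(\tau\), which the paper generalizes to multiple local precisions, is defined through its full conditionals (standard form):

```latex
u_i \mid u_{-i}, \tau \sim \mathcal{N}\!\left( \frac{1}{n_i} \sum_{j \sim i} u_j, \; \frac{1}{n_i \tau} \right),
```

where \(j \sim i\) denotes that regions \(i\) and \(j\) are neighbors on the lattice and \(n_i\) is the number of neighbors of region \(i\); each \(u_i\) is shrunk toward the average of its neighbors with strength governed by \(\tau\).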
Chiara Forlani, Samir Bhatt, Michela Cameletti, Elias Krainski, Marta Blangiardo
In air pollution studies, dispersion models provide estimates of concentration at grid level covering the entire spatial domain, and are then calibrated against measurements from monitoring stations. However, these different data sources are misaligned in space and time, and if this misalignment is not accounted for, it can bias the predictions. We aim to demonstrate how the combination of multiple data sources, such as dispersion model outputs, ground observations, and covariates, leads to more accurate predictions of air pollution at grid level. We consider nitrogen dioxide (NO2) concentration in Greater London and its surroundings for the years 2007-2011, and combine two different dispersion models. Different sets of spatial and temporal effects are included in order to obtain the best predictive capability. Our proposed model sits between calibration and Bayesian melding techniques for data fusion. Unlike other examples, we jointly model the response (concentration level at monitoring stations) and the dispersion model outputs on different scales, accounting for the different sources of uncertainty. Our spatio-temporal model allows us to reconstruct the latent fields of each model component and to predict daily pollution concentrations. We compare the predictive capability of our proposed model with other established methods that account for misalignment (e.g., bilinear interpolation), showing that in our case study the joint model is a better alternative.
Danilo Alvares, Janet van Niekerk, Elias Teixeira Krainski, Håvard Rue, Denis Rustand
This tutorial shows how various Bayesian survival models can be fitted using the integrated nested Laplace approximation in a clear, legible, and comprehensible manner using the INLA and INLAjoint R-packages. Such models include accelerated failure time, proportional hazards, mixture cure, competing risks, multi-state, frailty, and joint models of longitudinal and survival data, originally presented in the article "Bayesian survival analysis with BUGS" (Alvares et al., 2021). In addition, we illustrate the implementation of a new joint model for a longitudinal semicontinuous marker, recurrent events, and a terminal event. Our proposal aims to provide the reader with syntax examples for implementing survival models using a fast and accurate approximate Bayesian inferential approach.
Denis Rustand, Janet van Niekerk, Elias Teixeira Krainski, Håvard Rue
This paper introduces the R package INLAjoint, designed as a toolbox for fitting a diverse range of regression models addressing both longitudinal and survival outcomes. INLAjoint relies on the computational efficiency of the integrated nested Laplace approximations methodology, an efficient alternative to Markov chain Monte Carlo for Bayesian inference, ensuring both speed and accuracy in parameter estimation and uncertainty quantification. The package facilitates the construction of complex joint models by treating individual regression models as building blocks, which can be assembled to address specific research questions. Joint models are relevant in biomedical studies where the collection of longitudinal markers alongside censored survival times is common. They have gained significant interest in recent literature, demonstrating the ability to rectify biases present in separate modeling approaches, such as informative censoring by a survival event or confounding bias due to population heterogeneity. We provide a comprehensive overview of the joint modeling framework embedded in INLAjoint with illustrative examples. Through these examples, we demonstrate the practical utility of INLAjoint in handling complex data scenarios encountered in biomedical research.
Denis Rustand, Janet van Niekerk, Elias Teixeira Krainski, Håvard Rue, Cécile Proust-Lima
Modeling longitudinal and survival data jointly offers many advantages, such as addressing measurement error and missing data in the longitudinal processes, understanding and quantifying the association between the longitudinal markers and the survival events, and predicting the risk of events based on the longitudinal markers. A joint model involves multiple submodels (one for each longitudinal/survival outcome), usually linked together through correlated or shared random effects. Their estimation is computationally expensive (particularly due to a multidimensional integration of the likelihood over the random effects distribution), so that inference methods rapidly become intractable, which restricts applications of joint models to a small number of longitudinal markers and/or random effects. We introduce a Bayesian approximation based on the Integrated Nested Laplace Approximation algorithm, implemented in the R package R-INLA, to alleviate the computational burden and allow the estimation of multivariate joint models with fewer restrictions. Our simulation studies show that R-INLA substantially reduces the computation time and the variability of the parameter estimates compared to alternative estimation strategies. We further apply the methodology to analyze 5 longitudinal markers (3 continuous, 1 count, 1 binary, and 16 random effects) and competing risks of death and transplantation in a clinical trial on primary biliary cholangitis. R-INLA provides a fast and reliable inference technique for applying joint models to the complex multivariate data encountered in health research.
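The multidimensional integral responsible for the computational cost has, in generic notation, the form

```latex
L_i(\theta) = \int_{\mathbb{R}^q} \left[\prod_{k=1}^{K} f_k\!\left(y_{ik} \mid b_i, \theta\right)\right]
f\!\left(T_i, \delta_i \mid b_i, \theta\right)\, f\!\left(b_i \mid \theta\right)\, \mathrm{d}b_i,
```

where \(b_i\) collects the \(q\) subject-specific random effects, the first factor gathers the \(K\) longitudinal submodels, and the second the survival (here, competing-risks) submodel; INLA sidesteps sampling-based evaluation of this integral through nested Laplace approximations.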
Abylay Zhumekenov, Elias T. Krainski, Håvard Rue
Performing Bayesian inference on large spatio-temporal models requires extracting inverse elements of large sparse precision matrices for marginal variances, as well as estimating model hyperparameters. Although direct matrix factorizations can be used for the inversion, such methods fail to scale well for distributed problems when run on large computing clusters. In contrast, Krylov subspace methods for the selected inversion have been gaining traction. We propose a parallel hybrid approach based on domain decomposition, which extends the Rao-Blackwellized Monte Carlo estimator to distributed precision matrices. Our approach exploits the strength of Krylov subspace methods as global solvers and the efficiency of direct factorizations as base-case solvers to compute the marginal variances and the derivatives required for hyperparameter estimation using a divide-and-conquer strategy. By introducing subdomain overlaps, one can achieve greater accuracy at an increased computational effort, with little to no additional communication. We demonstrate the speed improvements and efficient hyperparameter inference on both simulated models and a massive US daily temperature dataset.