Sherly Alfonso-Sánchez, Cristián Bravo, Kristina G. Stankova
Geographic context is often considered relevant to motor insurance risk, yet public actuarial datasets provide limited location identifiers, constraining how this information can be incorporated and evaluated in claim-frequency models. This study examines how geographic information from alternative data sources can be incorporated into actuarial models for Motor Third Party Liability (MTPL) claim prediction under such constraints. Using the BeMTPL97 dataset, we adopt a zone-level modeling framework and evaluate predictive performance on unseen postcodes. Geographic information is introduced through two channels: environmental indicators from OpenStreetMap and CORINE Land Cover, and orthoimagery released by the Belgian National Geographic Institute for academic use. We evaluate the predictive contribution of coordinates, environmental features, and image embeddings across three baseline models: generalized linear models (GLMs), regularized GLMs, and gradient-boosted trees, while raw imagery is modeled using convolutional neural networks. Our results show that augmenting actuarial variables with constructed geographic information improves accuracy. Across experiments, both linear and tree-based models benefit most from combining coordinates with environmental features extracted at a 5 km scale, while smaller neighborhoods also improve baseline specifications. Generally, image embeddings do not improve performance when environmental features are available; however, when such features are absent, pretrained vision-transformer embeddings enhance accuracy and stability for regularized GLMs. Overall, the results indicate that the predictive value of geographic information in zone-level MTPL frequency models depends less on model complexity than on how geography is represented, and illustrate that geographic context can be incorporated despite limited individual-level spatial information.
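The zone-level frequency modeling described above can be illustrated with a minimal, hypothetical sketch: a Poisson GLM with an exposure offset whose design matrix concatenates actuarial covariates with coordinates and environmental features. The column names and feature set below are assumptions for illustration, not the BeMTPL97 schema or the authors' code.

```python
# Minimal sketch (not the paper's code): zone-level Poisson GLM for claim
# frequency with an exposure offset, combining actuarial covariates with
# coordinates and hypothetical environmental features.
import numpy as np
import pandas as pd
import statsmodels.api as sm

def fit_frequency_glm(df: pd.DataFrame, feature_cols: list):
    """Poisson GLM: E[claims] = exposure * exp(X @ beta)."""
    X = sm.add_constant(df[feature_cols])
    model = sm.GLM(
        df["n_claims"],                      # claim counts per policy/zone
        X,
        family=sm.families.Poisson(),
        offset=np.log(df["exposure"]),       # policy-years of exposure
    )
    return model.fit()

# Example feature sets mirroring the paper's two channels (names are assumptions):
actuarial = ["driver_age", "vehicle_power", "bonus_malus"]
geographic = ["lat", "lon", "road_density_5km", "urban_share_5km"]
# result = fit_frequency_glm(train_df, actuarial + geographic)
# print(result.summary())
```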
Nikeethan Selvaratnam, Dorinel Bastide, Clément Fernandes, Wojciech Pieczynski
Apr 23, 2026·q-fin.RM·PDF Predicting future operational risk losses poses a significant challenge due to the heterogeneous and time-dependent structures present in real-world data. Furthermore, stress test exercises require examining the relationship between macroeconomic conditions and operational losses. To capture this relationship, we propose to use an extension of Hidden Markov Models to multivariate observations. This model introduces a third auxiliary variable designed to accommodate the economic covariates in the time-series data. We detail the unique aspects of operational risk data and describe how model calibration is achieved via the Expectation-Maximization (EM) algorithm. Additionally, we provide the calibration results for the various risk-event types and analyze the relevance of the inclusion of the macroeconomic covariates.
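As a point of reference for the modeling approach, the sketch below fits a plain multivariate Gaussian HMM by EM (Baum-Welch) on stacked loss and macro-covariate series using hmmlearn. The paper's model adds a third auxiliary variable for the economic covariates; that extension is not implemented here, and the data are synthetic placeholders.

```python
# Baseline sketch only: a standard multivariate Gaussian HMM calibrated by EM
# on stacked loss and macro-covariate series. The paper's auxiliary-variable
# extension is NOT implemented here.
import numpy as np
from hmmlearn.hmm import GaussianHMM

rng = np.random.default_rng(0)
# Hypothetical monthly data: column 0 = aggregated operational losses,
# columns 1-2 = macroeconomic covariates (e.g. GDP growth, unemployment).
X = rng.normal(size=(240, 3))

model = GaussianHMM(n_components=3, covariance_type="full", n_iter=200, random_state=0)
model.fit(X)                       # EM calibration
states = model.predict(X)          # most likely hidden regime path
print(model.means_.round(2))       # per-regime means of losses and covariates
```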
Mariko I. Ito, Hiroyuki Hasada, Yudai Honma, Takaaki Ohnishi, Tsutomu Watanabe, Kazuyuki Aihara
Market instability has been extensively studied using mathematical approaches to characterize complex trading dynamics and detect structural change points. This study explores the potential for early warning of market instability by applying the Dynamical Network Marker (DNM) theory to order placement and execution data from the Tokyo Stock Exchange. DNM theory identifies indicators associated with critical slowing down -- a precursor to critical transitions -- in high-dimensional systems of many interacting elements. In this study, market participants are identified using virtual server IDs from the trading system, and multivariate time series representing their trading activities are constructed. This framework treats each participant as an interacting element, thereby enabling the application of DNM theory to the resulting time series. The results suggest that early warning signals of large price movements can be detected on a daily time scale. These findings highlight the potential to develop practical DNM-based early-warning systems for large price movements by further refining forecasting horizons and integrating multiple time series capturing different aspects of trading behavior.
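For intuition, the sketch below computes one widely used composite form of a dynamical-network-marker index on a rolling window: average within-group standard deviation times average within-group correlation, divided by average between-group correlation. The grouping rule and the exact index specification used in the study may differ; the data are synthetic.

```python
# Hedged sketch of a DNM-style index: I = (avg within-group SD) *
# (avg |within-group corr|) / (avg |between-group corr|).
import numpy as np

def dnm_index(window: np.ndarray, group: np.ndarray) -> float:
    """window: (T, n) multivariate series; group: boolean mask of the candidate group."""
    sd_in = window[:, group].std(axis=0).mean()
    corr = np.corrcoef(window, rowvar=False)
    in_block = corr[np.ix_(group, group)]
    out_block = corr[np.ix_(group, ~group)]
    pcc_in = np.abs(in_block[~np.eye(group.sum(), dtype=bool)]).mean()  # off-diagonal
    pcc_out = np.abs(out_block).mean() if out_block.size else np.nan
    return sd_in * pcc_in / pcc_out

rng = np.random.default_rng(0)
series = rng.normal(size=(250, 20))          # e.g. per-participant activity series
mask = np.zeros(20, dtype=bool); mask[:5] = True
print(round(dnm_index(series, mask), 3))
```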
Alexander Barzykin, Axel Ciceri
Apr 22, 2026·q-fin.RM·PDF We study OTC bond market making on a size ladder with quadratic inventory penalty and a running target on the dealer's size-weighted hit ratio within a stochastic optimal control approach. We demonstrate that the corresponding reduced Hamilton-Jacobi-Bellman (HJB) equation remains separable by dualizing the hit ratio target term and provides the exact optimal controls through the inverse of the fill-probability function and the Hamiltonian derivative. We then focus on the quadratic approximation à la Bergault et al., which yields a Riccati equation for the inventory curvature while retaining the exact quote map. In its linearized form, this approximation produces explicit quote decompositions into riskless spread, inventory-risk correction, and hit-ratio correction. The formulation is general and applies to multi-bond, multi-client-tier scenarios, with special cases obtained by restricting the targeted tiers, their bond coverage, and their associated targets.
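As a generic illustration of the Riccati step mentioned above, the sketch below integrates a scalar terminal-value Riccati ODE backward in time with scipy. The coefficients, terminal condition, and functional form are placeholders and are not taken from the paper.

```python
# Generic illustration only: backward integration of a scalar terminal-value
# Riccati ODE of the kind that arises for an inventory-curvature term.
# Coefficients a, b, c and the terminal condition are placeholders.
import numpy as np
from scipy.integrate import solve_ivp

a, b, c, T = 0.5, 0.1, 2.0, 1.0     # illustrative constants

def riccati_rhs(t, v):
    # dv/dt = -(a - b*v + c*v**2), integrated from t = T back to t = 0
    return -(a - b * v + c * v ** 2)

sol = solve_ivp(riccati_rhs, (T, 0.0), [0.0], dense_output=True, max_step=0.01)
t_grid = np.linspace(0.0, T, 11)
print(sol.sol(t_grid).ravel().round(4))   # curvature v(t) on a coarse grid
```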
Robert Flassig, Emrah Gülay, Daniel Guterding
Apr 21, 2026·q-fin.CP·PDF The Nelson-Siegel-Svensson (NSS) interest rate curve model yields a separable nonlinear least-squares problem whose inner linear block is often ill-conditioned because the basis functions become nearly collinear. We analyze this instability via an exact orthogonal reparametrization of the design matrix. A thin QR decomposition produces orthogonal linear parameters for which, conditional on the nonlinear parameters, the Fisher information matrix is diagonal. We also derive a finite-horizon analytical orthogonalization: on $[0,T]$, the $4\times 4$ continuous Gram matrix has closed-form entries involving exponentials, logarithms, and the exponential integral $E_1$, yielding an explicit horizon-dependent orthogonal NSS basis. Together with Jacobian-rank and profile-likelihood arguments, this representation clarifies the degenerate manifold $\lambda_1=\lambda_2$, where the Svensson extension loses two degrees of freedom. Orthogonalization leaves the least-squares fit and uncertainty of the original linear parameters unchanged, but isolates the conditioning structure. When the decay parameters are estimated jointly, the full first-order covariance in orthogonal coordinates admits an explicit Schur-complement form. The approach also yields a scalar identifiability diagnostic through the QR element $R_{44}$ and separates model reduction from numerical instability. Synthetic experiments confirm that orthogonal parametrization eliminates correlations among the linear parameters and keeps their conditional uncertainty uniform. A daily U.S. Treasury study on a reduced fixed 9-tenor grid from 1981 to 2026 shows smoother orthogonal parameter series than classical NSS parameters while the moving QR basis remains nearly constant.
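The numerical step at the core of this analysis is easy to reproduce in outline: build the standard Svensson design matrix for fixed decay parameters and apply a thin QR decomposition. The sketch below does only that (it does not derive the paper's closed-form finite-horizon basis); tenors and decay values are illustrative.

```python
# Sketch: standard Svensson (NSS) design matrix for fixed decay parameters,
# orthogonalized by a thin QR decomposition.
import numpy as np

def nss_design(tenors: np.ndarray, lam1: float, lam2: float) -> np.ndarray:
    t = np.asarray(tenors, dtype=float)
    f2 = (1 - np.exp(-lam1 * t)) / (lam1 * t)
    f3 = f2 - np.exp(-lam1 * t)
    f4 = (1 - np.exp(-lam2 * t)) / (lam2 * t) - np.exp(-lam2 * t)
    return np.column_stack([np.ones_like(t), f2, f3, f4])

tenors = np.array([0.25, 0.5, 1, 2, 3, 5, 7, 10, 30])
A = nss_design(tenors, lam1=0.5, lam2=0.45)        # nearly collinear columns
Q, R = np.linalg.qr(A, mode="reduced")             # thin QR: A = Q R
print(np.linalg.cond(A), np.linalg.cond(Q))        # conditioning before/after
print(abs(R[3, 3]))                                # R_44: identifiability diagnostic
```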
Shintaro Mori
Apr 20, 2026·q-fin.RM·PDF Can contagion be inferred from aggregated default data? We study this as a problem of identifiability, asking whether contagion generates components in default count distributions that remain distinct from those induced by macroeconomic fluctuations. We compare three dependence structures: cumulative contagion in the Lo-Davis model, threshold-type contagion in the Torri model, and common-factor dependence in the Vasicek model. Under an i.i.d. specification, the Vasicek model provides the best overall fit, especially in the tail, indicating that a smooth mixture structure captures annual default clustering more effectively than threshold-type contagion at the aggregate level. We then allow the default probability to vary across years through a hierarchical specification. Under this extension, most of the variation in annual default counts is explained by cross-year movements in default conditions rather than by within-year contagion. What remains, however, depends on the interaction mechanism. In the Torri model, threshold-type contagion does not leave a stable component that can be separated from macroeconomic heterogeneity after aggregation. In the Lo-Davis model, by contrast, a small but persistent component remains visible in both the variance decomposition and the tail behavior. These results clarify when contagion can still be inferred from coarse-grained data and when it is effectively absorbed into macroeconomic variation.
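Of the three dependence structures compared, the Vasicek common-factor benchmark is the most standard and is sketched below: annual default counts simulated from the one-factor Gaussian mixture. The Lo-Davis and Torri contagion mechanisms are not implemented, and all parameter values are illustrative.

```python
# Sketch of the common-factor benchmark only: annual default counts from the
# one-factor Vasicek mixture.
import numpy as np
from scipy.stats import norm

def vasicek_default_counts(n_firms: int, pd_uncond: float, rho: float,
                           n_years: int, rng: np.random.Generator) -> np.ndarray:
    z = rng.standard_normal(n_years)                       # common factor per year
    cond_pd = norm.cdf((norm.ppf(pd_uncond) - np.sqrt(rho) * z)
                       / np.sqrt(1.0 - rho))               # conditional PD given z
    return rng.binomial(n_firms, cond_pd)                  # defaults given the factor

rng = np.random.default_rng(1)
counts = vasicek_default_counts(n_firms=500, pd_uncond=0.02, rho=0.15,
                                n_years=10_000, rng=rng)
print(counts.mean(), np.quantile(counts, 0.99))            # clustering in the tail
```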
Anastasiia Zbandut, Carolina Goldstein
Apr 19, 2026·q-fin.RM·PDF We derive five tractable credit risk metrics for DeFi lending vault depositors, grounded in a formal three-level decomposition of vault risk into mechanical loss channels (Level 1), governance quality (Level 2) and smart contract code integrity (Level 3). For Level 1, we show that six structural features of onchain execution (oracle execution divergence, endogenous recovery, full-information run dynamics, timelock-constrained governance, oracle manipulation and congestion-driven liquidation failure) break canonical TradFi analogies and generate depositor loss channels absent from standard credit frameworks. Vault credit risk metrics translate these channels into measurable risk components, which are aggregated into a vault credit score. The empirical contribution is an implementable estimation architecture for credit risk metrics, including required onchain data, identification strategies for core parameters, partial identification bounds and a coherent stress scenario methodology. The results have direct implications for vault risk management and for minimum transparency standards necessary for depositor risk assessment.
Nawaf Mohammed
We introduce joint exclusivity (JE), a form of extremal negative dependence that extends the classical notion of mutual exclusivity. The JE structure is analytically tractable and is defined by the exclusion of the interior of the non-negative orthant. We establish a sharp necessary and sufficient condition for the existence of a JE random vector with prescribed marginals, namely $\sum_{i\in N} \overline{F}_i(0) \leq n - 1$. We propose a canonical construction that distributes probability mass on lower-dimensional faces of the support, while allowing flexible copula specifications within each face. The framework is further extended to a generalized class (G-JE) via marginal distortion functions. Finally, we identify a correspondence between the support structures of JE and joint mixability, revealing a structural link between the two concepts.
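The existence condition is simple to check numerically. The sketch below evaluates $\sum_{i\in N} \overline{F}_i(0) \leq n-1$ for a set of illustrative shifted-exponential marginals; the marginals are assumptions chosen only to demonstrate the check.

```python
# Sketch: checking the JE existence condition sum_i P(X_i > 0) <= n - 1 for
# illustrative marginals X_i = E_i - c_i with E_i ~ Exp(1), so P(X_i > 0) = exp(-c_i).
from scipy.stats import expon

def je_feasible(survival_at_zero: list) -> bool:
    n = len(survival_at_zero)
    return sum(survival_at_zero) <= n - 1

shifts = [0.2, 0.5, 1.0]
sf0 = [expon.sf(c) for c in shifts]       # survival probability at zero for each marginal
print(sf0, je_feasible(sf0))
```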
Satya Narayana Panda, Aishworzo Saha
This paper develops a geospatial framework for climate risk stress testing in California with applications to banking and climate-exposed sectors such as agriculture, real estate, and tourism. The study integrates physical hazard mapping, sector-specific exposure analysis, and scenario-based financial risk assessment to evaluate how wildfires, drought, flooding, extreme heat, and transition risks may affect regional economic activity and financial stability. The framework is intended to support portfolio monitoring, climate scenario analysis, and institutional readiness under emerging disclosure and risk-management standards. In addition, the paper provides a survey-based implementation guide for benchmarking current climate-risk practices and data needs across industry and academic stakeholders.
Xia Han, Bin Li
Apr 17, 2026·q-fin.RM·PDF This paper studies optimal insurance design under asymmetric information in a Stackelberg framework, where a monopolistic insurer faces uncertainty about both the insured's risk attitude, captured by a risk-aversion parameter, and the insured's risk type, characterized by the loss distribution. In particular, when the risk type is unobservable, we allow the risk-aversion parameter to depend on the risk type. We construct a menu of contracts that maximizes the mean-variance utilities of both parties under the expected-value premium principle, subject to a truth-telling constraint that ensures the truthful revelation of private information. We show that when risk attitude is private information, the optimal coverage takes the form of excess-of-loss insurance with linear pricing in terms of the risk loading (defined as the premium minus the expected loss), designed to screen risk preferences. In contrast, when risk type is unobserved, we restrict the coverage function to an excess-of-loss form and derive an ordinary differential equation that characterizes the optimal risk loading. Under mild conditions, we establish the existence and uniqueness of the solution. The results show that equilibrium contracts exhibit nonlinear pricing with decreasing risk loadings, implying that higher-risk individuals face lower risk loadings in order to induce self-selection. Finally, numerical illustrations demonstrate how parameter values and the distributions of unobserved heterogeneity affect the structure of optimal contracts and the resulting pricing schedule.
Songrun He
Apr 15, 2026·q-fin.GN·PDF In this paper, I present the first comprehensive, around-the-clock analysis of systematic jump risk by combining high-frequency market data with contemporaneous news narratives identified as the underlying causes of market jumps. These narratives are retrieved and classified using a state-of-the-art open-source reasoning LLM. Decomposing market risk into interpretable jump categories reveals significant heterogeneity in risk premia, with macroeconomic news commanding the largest and most persistent premium. Leveraging this insight, I construct an annually rebalanced real-time Fama-MacBeth factor-mimicking portfolio that isolates the most strongly priced jump risk, achieving a high out-of-sample Sharpe ratio and delivering significant alphas relative to standard factor models. The results highlight the value of around-the-clock analysis and LLM-based narrative understanding for identifying and managing priced risks in real time.
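For readers unfamiliar with the portfolio-construction step, the sketch below runs the standard Fama-MacBeth two-pass procedure on synthetic data: cross-sectional regressions of returns on exposures, month by month, with the average slope as the estimated premium. The jump-risk exposures from the LLM-classified narratives are replaced by random placeholders.

```python
# Sketch of the standard Fama-MacBeth two-pass procedure on synthetic data.
import numpy as np

rng = np.random.default_rng(2)
n_assets, n_months = 100, 120
betas = rng.normal(size=(n_assets, 1))                 # pass 1: exposures (placeholder)
prem_true = 0.4
returns = betas @ np.array([[prem_true]]) + rng.normal(scale=2.0, size=(n_assets, n_months))

# Pass 2: cross-sectional regression of returns on exposures, month by month.
X = np.column_stack([np.ones(n_assets), betas.ravel()])
lambdas = np.array([np.linalg.lstsq(X, returns[:, t], rcond=None)[0][1]
                    for t in range(n_months)])
prem_hat = lambdas.mean()
t_stat = prem_hat / (lambdas.std(ddof=1) / np.sqrt(n_months))
print(round(prem_hat, 3), round(t_stat, 2))            # estimated premium and t-stat
```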
Yiqing Wang
Apr 13, 2026·q-fin.RM·PDF The Kolmogorov-Smirnov (KS) statistic is widely used in credit risk model monitoring and validation to assess discriminatory power. In practice, a material decline in KS often triggers governance review and requires validation teams to identify the breach source and the potential business risk. However, such diagnosis is frequently conducted on an ad hoc basis, relying on the judgment of individual validators rather than a standardized analytical framework. This paper proposes a counterfactual diagnostic framework for explaining KS deterioration in credit risk model validation. The framework sequentially attributes observed KS decline to sampling variability, portfolio composition change, covariate shift, and residual deterioration consistent with model drift, with explicit gateway conditions governing escalation at each stage. Simulation experiments demonstrate that the proposed approach provides more interpretable and governance-relevant explanations than threshold-based review alone, and contributes to more consistent, transparent, and defensible performance-breach assessment in credit risk model validation.
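The statistic being monitored is the usual credit-scoring KS: the maximum gap between the score CDFs of defaulters and non-defaulters. A minimal sketch on synthetic scores is shown below; the counterfactual attribution stages proposed in the paper are not reproduced.

```python
# Sketch: the KS discriminatory-power statistic on synthetic credit scores.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(3)
scores_good = rng.normal(loc=650, scale=50, size=20_000)   # non-defaulters
scores_bad = rng.normal(loc=600, scale=50, size=1_000)     # defaulters

ks_stat, p_value = ks_2samp(scores_good, scores_bad)
print(round(ks_stat, 3))   # a material drop in this value would trigger review
```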
Michele Azzone, Carlo Bechi, Gabriele Sbaiz
Apr 13, 2026·q-fin.PM·PDF Driven by the increasing frequency and intensity of natural disasters and chronic climate threats, we investigate the impact of physical climate risk on global equity portfolios. By employing a panel regression analysis on sectoral returns, we provide statistical evidence that extreme temperature events exert a negative effect on most sectors. We introduce two novel metrics based on these temperature anomalies, Climate Risk Exposure and Climate Exposure Volatility, in order to measure the environmental vulnerability of a portfolio. Unlike available static country-level indices, these metrics incorporate the time-varying probability of extreme events and their relation to firm-specific asset intensity. We integrate these measures into a multi-objective portfolio optimization framework. This approach extends the traditional Mean-Variance paradigm, allowing investors to construct portfolios that are resilient to physical climate shocks without sacrificing diversification. Finally, we conduct a backtesting analysis to show the practical benefits of incorporating these climate risk metrics into the investment process, evaluating how climate-aware strategies perform relative to traditional benchmarks.
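One way to read the multi-objective extension is as a penalized mean-variance program in which a climate-exposure score enters as an extra linear term. The sketch below solves such a program with a generic optimizer; the climate-exposure vector is a placeholder and does not implement the paper's Climate Risk Exposure or Climate Exposure Volatility metrics.

```python
# Hedged sketch: mean-variance with an additional linear climate-exposure penalty.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(4)
n = 8
mu = rng.normal(0.06, 0.02, n)                     # expected returns
A = rng.normal(size=(n, n)); cov = A @ A.T / n     # positive semidefinite covariance
climate_exposure = rng.uniform(0, 1, n)            # placeholder exposure scores
gamma, delta = 5.0, 0.05                           # risk and climate aversion

def objective(w):
    return -(mu @ w) + gamma * w @ cov @ w + delta * climate_exposure @ w

cons = ({"type": "eq", "fun": lambda w: w.sum() - 1},)
bounds = [(0.0, 1.0)] * n                          # long-only, fully invested
res = minimize(objective, np.full(n, 1 / n), bounds=bounds, constraints=cons)
print(res.x.round(3))
```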
Zhenfeng Zou
Apr 12, 2026·q-fin.RM·PDF This paper introduces the Lambda extension of the Rényi entropic value-at-risk ($\Lambda$-EVaR), a novel family of risk measures that unifies the flexible confidence level structure of the $\Lambda$-framework with the higher-moment sensitivity of EVaR. We define $\Lambda$-EVaR, establish its foundational properties including monotonicity, cash subadditivity, and quasi-convexity, and provide a complete axiomatic characterization showing that convexity, concavity in mixtures, and cash additivity hold only when $\Lambda$ is constant. A dual representation and an extended Rockafellar-Uryasev-type formula are derived, enabling efficient computation. We further analyze the worst-case behavior of $\Lambda$-EVaR under Wasserstein and mean-variance uncertainty, obtaining closed-form expressions that reveal its robustness properties. The proposed measure bridges the gap between adaptive risk tolerance and moment-sensitive risk assessment, offering a versatile tool for modern risk management.
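The building block being extended is the classical entropic value-at-risk, which under one common convention admits the Rockafellar-Uryasev-type form $\mathrm{EVaR}_\alpha(X) = \inf_{t>0} t^{-1}\log\big(\mathbb{E}[e^{tX}]/(1-\alpha)\big)$. The sketch below evaluates this from samples by one-dimensional minimization; the Rényi-order generalization and the $\Lambda$ extension studied in the paper are not implemented.

```python
# Sketch of classical EVaR from samples:
#   EVaR_alpha(X) = inf_{t>0} (1/t) * log( E[exp(t X)] / (1 - alpha) ).
import numpy as np
from scipy.optimize import minimize_scalar

def evar(samples: np.ndarray, alpha: float) -> float:
    x = np.asarray(samples, dtype=float)

    def objective(log_t):
        t = np.exp(log_t)                     # enforce t > 0
        # log of the sample moment-generating function, computed stably
        log_mgf = np.logaddexp.reduce(t * x) - np.log(x.size)
        return (log_mgf - np.log(1.0 - alpha)) / t

    res = minimize_scalar(objective, bounds=(-10, 5), method="bounded")
    return res.fun

losses = np.random.default_rng(5).standard_normal(100_000)
print(round(evar(losses, alpha=0.95), 3))     # exceeds VaR and CVaR at the 95% level
```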
Tenghan Zhong
Apr 12, 2026·q-fin.ST·PDF Volatility forecasting becomes challenging when market conditions shift and model performance varies across market states. Motivated by this instability, we develop a risk-sensitive specialist routing framework for ETF volatility forecasting. The framework uses online risk-sensitive evaluation and state-dependent gating to combine different forecasting specialists across calm and stressed market states. Using a daily panel of six ETFs under a rolling walk-forward design, we find that the strongest forecaster is regime-dependent rather than stable across all states. Relative to the rolling-best baseline, the proposed routing framework reduces high-volatility forecast loss by about 24% and underprediction loss by about 22%. These results suggest that specialist routing provides a practical forecasting architecture that adapts to changing market conditions.
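A minimal sketch of state-dependent gating is given below: within the current market state, route to the specialist with the lowest recent risk-sensitive loss, where the loss penalizes underprediction more heavily. The loss function, state definition, and data are illustrative assumptions, not the paper's specification.

```python
# Hedged sketch of state-dependent gating with an asymmetric, underprediction-
# penalizing loss. All specifics are illustrative placeholders.
import numpy as np

def risk_sensitive_loss(pred: np.ndarray, realized: np.ndarray, kappa: float = 2.0) -> np.ndarray:
    err = realized - pred
    return np.where(err > 0, kappa * err ** 2, err ** 2)   # underprediction penalized more

def route(recent_losses: dict) -> str:
    """Pick the specialist with the smallest average recent loss in this state."""
    return min(recent_losses, key=lambda k: recent_losses[k].mean())

rng = np.random.default_rng(6)
realized = rng.gamma(2.0, 0.01, size=60)                    # recent realized volatility
preds = {"calm_model": realized * 0.9, "stress_model": realized * 1.1}
losses = {k: risk_sensitive_loss(v, realized) for k, v in preds.items()}
print(route(losses))
```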
Nolan Alexander, Frank Fabozzi
Apr 11, 2026·q-fin.RM·PDF This paper develops a decomposition of standard Risk Contribution (RC) into two economically interpretable components: inherent risk and correlation risk. Using a leave-one-out representation, each position's RC separates into a term reflecting its own volatility contribution independent of the portfolio and a term capturing its covariance with the remainder of the portfolio. The inherent component is always positive, arising from the intrinsic volatility of the position, while the correlation component may amplify or mitigate total portfolio risk depending on how the position moves relative to other holdings. Because the decomposition operates within standard RC, it preserves the property of strict additivity. This separation provides diagnostic insight not visible from aggregate risk contributions alone. It distinguishes whether a position contributes risk because it is volatile in isolation or because it is highly correlated with the rest of the portfolio, and it clarifies when a negatively correlated position functions as an effective hedge. Two approaches to time-series analysis are presented to track how inherent and correlation risk evolve across market regimes, revealing whether changes in portfolio risk during stress periods are driven by volatility shocks, correlation shifts, or both. Empirical illustrations suggest that the decomposition provides stable, transparent, and easily implementable risk diagnostics that can support portfolio risk reporting, stress testing, and performance attribution.
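The decomposition can be reconstructed directly from the standard risk contribution $RC_i = w_i(\Sigma w)_i/\sigma_p$ by splitting $(\Sigma w)_i$ into the position's own-variance term and its covariance with the rest of the portfolio. The sketch below implements that split and checks additivity; scaling conventions in the paper may differ.

```python
# Sketch of the inherent/correlation split of standard risk contributions.
import numpy as np

def rc_decomposition(w: np.ndarray, cov: np.ndarray):
    sigma_p = np.sqrt(w @ cov @ w)
    inherent = (w ** 2) * np.diag(cov) / sigma_p               # own-variance term
    correlation = w * (cov @ w - np.diag(cov) * w) / sigma_p   # covariance with the rest
    total_rc = inherent + correlation                          # equals the standard RC
    assert np.isclose(total_rc.sum(), sigma_p)                 # strict additivity preserved
    return inherent, correlation

w = np.array([0.5, 0.3, 0.2])
cov = np.array([[0.04, 0.01, -0.01],
                [0.01, 0.09, 0.02],
                [-0.01, 0.02, 0.16]])
inh, corr = rc_decomposition(w, cov)
print(inh.round(4), corr.round(4))
```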
Marco Pollanen
Apr 11, 2026·q-fin.RM·PDF A credit rating of AAA asserts near-certainty of repayment. This paper asks whether the pre-crisis information environment could have supported that assertion for structured products. Bayes' theorem implies that any reliability target requires a minimum level of statistical discrimination between instruments that will repay and those that will not. At structured-finance base rates, a four-nines reliability target demands discrimination on the order of 10,000 to 1. A three-nines target demands 1,000 to 1. Nothing in the published credit-prediction literature provides an affirmative basis for believing that discrimination of this magnitude was achievable with the data available at rating time. Retrospectively, the realized system fell short of the four-nines benchmark by roughly 90,000-fold. The framework accommodates the historical feasibility of corporate AAA ratings, where high base rates and rich information produce low required discrimination. Illustrative calibrations for contemporary collateralized loan obligations suggest that material tension between the precision target and the information environment persists. The central implication is that the AAA precision claim itself likely exceeded what the available information could support.
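The underlying arithmetic is plain Bayes in odds form: the likelihood ratio a rating signal must deliver equals the target posterior odds of repayment divided by the prior odds. The sketch below evaluates this for a few illustrative base rates; the base rates are not the paper's calibration.

```python
# Bayes-odds arithmetic behind the required discrimination. Base rates below
# are illustrative only.
def required_likelihood_ratio(target_reliability: float, base_repay_rate: float) -> float:
    posterior_odds = target_reliability / (1.0 - target_reliability)
    prior_odds = base_repay_rate / (1.0 - base_repay_rate)
    return posterior_odds / prior_odds

for p in (0.50, 0.90, 0.99):                       # illustrative base repayment rates
    print(p, round(required_likelihood_ratio(0.9999, p)))   # four-nines target
    # e.g. a 50% base rate requires ~10,000:1; a 99% base rate ~100:1
```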
Tenghan Zhong, Keyuan Wu
Daily ETF risk monitoring can become unreliable when market data quality degrades, market conditions shift, or predictive performance becomes unstable. This paper develops a reliability-aware risk monitoring service for next-day tail-risk surveillance. The proposed framework combines service-time quality checks, lower-tail prediction, uncertainty scoring, and risk-aware adjustment of the tail-risk estimate. We evaluate the system on a daily panel of multiple ETFs augmented with VIX and yield-curve information under a rolling walk-forward design. Empirically, the framework improves tail-risk monitoring, especially during stressed periods, while remaining reliable under simulated input degradation.
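Of the four components, the lower-tail prediction step is the most standard and is sketched below as a gradient-boosted quantile regressor for the 5% next-day quantile. Features, the uncertainty score, and the risk-aware adjustment are not reproduced; inputs are synthetic placeholders.

```python
# Sketch of the lower-tail prediction component only: a gradient-boosted
# quantile regressor for the 5% next-day return quantile on synthetic data.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(7)
X = rng.normal(size=(1_000, 3))                  # e.g. lagged vol, VIX, yield slope
y = 0.01 * X[:, 0] + rng.standard_t(df=4, size=1_000) * 0.01   # next-day returns

model = GradientBoostingRegressor(loss="quantile", alpha=0.05, n_estimators=200)
model.fit(X[:800], y[:800])
tail_pred = model.predict(X[800:])               # predicted 5% lower-tail level
coverage = (y[800:] < tail_pred).mean()          # should be near 0.05 out of sample
print(round(coverage, 3))
```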
Nolan Alexander, Frank Fabozzi
Systematic investment strategies are exposed to a subtle but pervasive vulnerability: the progressive erosion of their effectiveness as market regimes change. Traditional risk measures, designed to capture volatility or drawdowns, overlook this form of structural fragility. This article introduces a quantitative framework for assessing the durability of systematic strategies through minimum regime performance (MRP), defined as the lowest realized risk-adjusted return across distinct historical regimes. MRP serves as a lower bound on a strategy's robustness, capturing how performance deteriorates when underlying relationships weaken or competitive pressures compress alpha. Applied to a broad universe of established factor strategies, the measure reveals a consistent trade-off between efficiency and resilience -- strategies with higher long-term Sharpe ratios do not always exhibit higher MRPs. By translating the persistence of investment efficacy into a measurable quantity, the framework provides investors with a practical diagnostic for identifying and managing strategy-decay risk, a novel dimension of portfolio fragility that complements traditional measures of market and liquidity risk.
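MRP is straightforward to compute once regimes are labeled: the lowest annualized Sharpe ratio across regimes. The sketch below uses synthetic returns and regime labels as placeholders.

```python
# Sketch: minimum regime performance (MRP) as the lowest annualized Sharpe
# ratio across labeled historical regimes. Returns and labels are synthetic.
import numpy as np
import pandas as pd

def mrp(returns: pd.Series, regimes: pd.Series, freq: int = 252) -> float:
    def sharpe(r):
        return r.mean() / r.std(ddof=1) * np.sqrt(freq)
    return returns.groupby(regimes).apply(sharpe).min()

rng = np.random.default_rng(8)
rets = pd.Series(rng.normal(0.0004, 0.01, 2_000))
labels = pd.Series(np.repeat(["expansion", "tightening", "crisis", "recovery"], 500))
print(round(mrp(rets, labels), 2))    # the binding, worst-regime Sharpe ratio
```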
Ruichao Jiang, Long Wen
Chitra et al. (2025) claim that the Target Weight Mechanism (TWM) in Perpetual Demand Lending Pools (PDLPs) can lower the delta of the portfolio under a certain condition. We prove that their condition is self-contradictory. Furthermore, we prove an impossibility result that no TWM can lower the delta uniformly.