Long-range electrostatics in atomistic machine learning: a physical perspective
Abstract
The inclusion of long-range electrostatics in atomistic machine learning (ML) is receiving increasing attention as a route to quantum-mechanical accuracy in predicting a wide range of molecular and material properties. However, there is still no general prescription for how long-range physical effects should be incorporated into a model while preserving the well-established locality principles underlying most transferable ML representations. Here, we provide a physical perspective on the problem by discussing how distinct contributions to a system's electrostatics can be captured through different learning paradigms. Specifically, we distinguish between local charge models, which rely either on explicit charge-density decompositions or on implicit auxiliary variables, and models in which a notion of nonlocality is deliberately introduced, either via self-consistent procedures or through nonlocal descriptors and learning architectures. We further address the related question of incorporating finite-field effects through coupling to the system's polarization, which is relevant for the application of an external electric bias. We conclude by discussing the implications for the simulation of electrochemical interfaces, where long-range electrostatics are essential to capture the interplay between charge redistribution, interfacial dynamics, and ionic screening, and for ionic transport phenomena, which, although less explored, appear far less sensitive to their inclusion.