Unanticipated Adversarial Robustness of Semantic Communication
Authors
Abstract
Semantic communication, enabled by deep joint source-channel coding (DeepJSCC), is widely expected to inherit the well-known vulnerability of deep learning to adversarial perturbations. This paper challenges that prevailing belief and reveals a counterintuitive finding: semantic communication systems exhibit unanticipated adversarial robustness that can exceed that of classical separate source-channel coding systems. On the theoretical front, we establish fundamental bounds on the minimum attack power required to induce a target distortion, overcoming the analytical intractability of highly nonlinear DeepJSCC models by leveraging Lipschitz smoothness. We prove that the implicit regularization induced by noisy training forces the decoder to be smooth, a property that inherently provides built-in protection against adversarial attacks. To enable a rigorous and fair comparison, we develop two novel attack methodologies targeting previously unexplored vulnerabilities: a structure-aware vulnerable-set attack that, for the first time, exploits graph-theoretic weaknesses in LDPC codes to induce decoding failure with minimal energy, and a progressive gradient-ascent attack that leverages the differentiability of DeepJSCC to efficiently find minimum-power perturbations. Designing such attacks is nontrivial: classical systems expose no gradient information, while semantic systems require navigating high-dimensional, non-convex loss landscapes. Extensive experiments demonstrate that semantic communication requires up to $14$-$16\times$ more attack power than classical systems to reach the same distortion, empirically substantiating its superior robustness.
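To fix ideas, the following is a minimal sketch of a progressive gradient-ascent attack of the kind the abstract describes: project a perturbation onto a power sphere, ascend the reconstruction distortion by gradient steps, and grow the power budget until a target distortion is reached, reporting the smallest budget that succeeds. The `tanh` decoder, step sizes, and budget schedule are illustrative assumptions, not the paper's actual model or algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

def toy_decoder(z):
    # Stand-in for a differentiable DeepJSCC decoder (illustrative only).
    return np.tanh(z)

def distortion(z, x_ref):
    # MSE between the decoded signal and the clean reconstruction.
    return np.mean((toy_decoder(z) - x_ref) ** 2)

def grad_distortion(z, x_ref):
    # Analytic gradient of mean((tanh(z) - x_ref)^2) with respect to z.
    y = np.tanh(z)
    return 2.0 * (y - x_ref) * (1.0 - y ** 2) / z.size

def attack_at_budget(z, x_ref, budget, iters=200, lr=1.0):
    """Projected gradient ascent on the sphere of per-symbol power `budget`."""
    n = z.size
    radius = np.sqrt(budget * n)
    delta = rng.standard_normal(n)
    delta *= radius / np.linalg.norm(delta)
    for _ in range(iters):
        g = grad_distortion(z + delta, x_ref) * n  # undo the 1/n in the mean
        delta += lr * g                            # ascend the distortion
        delta *= radius / np.linalg.norm(delta)    # project back to the sphere
    return delta

def progressive_attack(z, x_ref, target_d, p_step=0.005, p_max=1.0):
    """Grow the power budget until the attack reaches the target distortion."""
    budget = p_step
    while budget <= p_max:
        delta = attack_at_budget(z, x_ref, budget)
        if distortion(z + delta, x_ref) >= target_d:
            return delta, budget  # smallest budget (on this grid) that succeeds
        budget += p_step
    raise RuntimeError("target distortion not reachable within p_max")

z = rng.standard_normal(64)   # toy channel symbols
x_ref = toy_decoder(z)        # clean reconstruction
delta, power = progressive_attack(z, x_ref, target_d=0.01)
print(f"minimum per-symbol attack power found: {power:.3f}")
```

Against a non-differentiable classical receiver, this gradient loop is unavailable, which is why the abstract's LDPC attack must instead exploit code structure.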