Distributional Statistics Restore Training Data Auditability in One-step Distilled Diffusion Models
cs.LG
/ Authors
/ Abstract
The proliferation of diffusion models trained on web-scale, provenance-uncertain image collections has made it essential, yet technically unresolved, to determine whether a model has learned from specific copyrighted data without authorization. Current methods primarily rely on the memorization effect, whereby models reconstruct their training images better than unseen ones, to detect unauthorized training data on a per-instance basis. This effect, however, vanishes under distillation, the now-dominant deployment pipeline that compresses compute-intensive teacher diffusion models into efficient student one-step generators that mimic the teacher's outputs for real-time user access. Because the students train exclusively on teacher-generated outputs and never directly see the teacher's original training data, they carry no per-instance memorization of that data, creating a model laundering loophole that severs the auditable link between a deployed model and its upstream training data. We nonetheless reveal that a distributional memory chain survives distillation: the student's output distribution remains closer to the teacher's training distribution than to any non-training reference, even if no single training instance is memorized. Exploiting this chain, we develop a distributional unauthorized training data detector, grounded in kernel-based distribution discrepancy, that tests whether a candidate dataset of unknown composition aligns more closely with the student-generated distribution than held-out non-training datasets do, thus tracing provenance back to the teacher's training data. Evaluation across benchmarks and distillation setups confirms reliable detection even when unauthorized data forms a minority of the candidate set, establishing distribution-level auditing as a countermeasure to model laundering and a paradigm for accountable generative AI ecosystems.
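
As a rough illustration of the kernel-based distribution-discrepancy test sketched in the abstract, the snippet below compares a candidate dataset against student-generated samples via the maximum mean discrepancy (MMD) with a Gaussian kernel, and flags the candidate when it lies closer to the student distribution than every held-out non-training reference. This is a minimal sketch under assumed choices (feature embeddings from some frozen extractor, a fixed bandwidth, a simple min-over-references decision rule), not the paper's exact procedure; all function names here are hypothetical.

```python
# Hypothetical sketch of a distribution-level audit for a distilled one-step generator.
# Inputs are feature embeddings (e.g., from a frozen vision encoder); the decision rule
# and bandwidth are illustrative assumptions, not the paper's calibrated procedure.
import numpy as np


def gaussian_kernel(x, y, bandwidth):
    """RBF kernel matrix between the rows of x and the rows of y."""
    sq_dists = np.sum(x**2, 1)[:, None] + np.sum(y**2, 1)[None, :] - 2 * x @ y.T
    return np.exp(-sq_dists / (2 * bandwidth**2))


def mmd2(x, y, bandwidth=1.0):
    """Biased (V-statistic) estimate of the squared MMD between samples x and y."""
    k_xx = gaussian_kernel(x, x, bandwidth)
    k_yy = gaussian_kernel(y, y, bandwidth)
    k_xy = gaussian_kernel(x, y, bandwidth)
    return k_xx.mean() + k_yy.mean() - 2 * k_xy.mean()


def audit(candidate_feats, reference_feats_list, student_feats, bandwidth=1.0):
    """Flag the candidate set if its discrepancy to the student-generated distribution
    is smaller than that of every held-out (known non-training) reference set."""
    cand_score = mmd2(candidate_feats, student_feats, bandwidth)
    ref_scores = [mmd2(r, student_feats, bandwidth) for r in reference_feats_list]
    return cand_score < min(ref_scores), cand_score, ref_scores


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Toy stand-ins for image embeddings.
    student = rng.normal(0.0, 1.0, size=(500, 16))    # student-generated samples
    candidate = rng.normal(0.1, 1.0, size=(300, 16))  # suspected (partly) training data
    references = [rng.normal(1.0, 1.0, size=(300, 16)) for _ in range(3)]  # non-training sets
    flagged, c, refs = audit(candidate, references, student)
    print(f"candidate MMD^2={c:.4f}, min reference MMD^2={min(refs):.4f}, flagged={flagged}")
```

In practice the embeddings would come from a fixed feature extractor and the decision threshold would be calibrated against the held-out references; the toy example only conveys the distribution-level comparison that replaces per-instance memorization checks.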