Understanding and Embracing Imperfection in Physical Learning Networks
/ Abstract
Performing machine learning with analog signals offers advantages in speed and energy efficiency, but sensitivity to component and measurement imperfections often foils training without a system-specific companion digital model. Here we take a different perspective, accepting and characterizing these inherent imperfections and ultimately overcoming them without digital models. We train an analog network of self-adjusting resistors -- a contrastive local learning network -- on multiple tasks, and observe limit cycles and scaling behaviors that limit precision, erase memory of previous tasks, and are absent in `perfect' systems. We develop an analytical model capturing these phenomena as a consequence of an uncontrolled learning bias that continuously modifies the underlying representation of learned tasks, reminiscent of representational drift in the brain. Finally, we introduce and demonstrate a system-agnostic training method that greatly suppresses these effects. Our work points to a new, scalable analog approach that eschews precise modeling and instead thrives in the mess of real systems.