Co-teaching for Unsupervised Domain Expansion
/ Authors
/ Abstract
Unsupervised Domain Adaptation (UDA) essentially trades a model's performance on a source domain for improved performance on a target domain. To overcome this, Unsupervised Domain Expansion (UDE) has been introduced, which adapts the model to the target domain while preserving its performance on the source domain. Both UDA and UDE assume that a model tailored to a given domain handles samples from that domain well. We question this assumption by reporting the existence of cross-domain visual ambiguity: because the boundary between the two domains is unclear, samples from one domain can be visually close to the other domain. Such samples are typically in the minority in their host domain, so they tend to be overlooked by the domain-specific model, yet they can be better handled by a model from the other domain. We exploit this finding by proposing Co-Teaching (CT), instantiated as knowledge-distillation-based CT (kdCT) plus mixup-based CT (miCT). Specifically, kdCT leverages a dual-teacher architecture to enhance the student network's ability to handle cross-domain ambiguity, while miCT further improves the generalization ability of the student. Extensive experiments on image classification and driving-scene segmentation show the viability of CT for UDE.
Venue: Conference on Multimedia Modeling (MMM)
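
The following minimal sketch (in PyTorch, assuming a classification setting) illustrates the two components named in the abstract: a dual-teacher knowledge-distillation term in the spirit of kdCT and a cross-domain mixup operation in the spirit of miCT. The function names, the per-sample teacher-weighting scheme, and the hyperparameters are illustrative assumptions, not the authors' implementation.

import torch
import torch.nn.functional as F

def kd_ct_loss(student_logits, src_teacher_logits, tgt_teacher_logits, teacher_weight, T=2.0):
    # Dual-teacher distillation: the student matches a per-sample weighted
    # mixture of the source-domain and target-domain teacher predictions.
    # teacher_weight: tensor of shape (batch,), values in [0, 1]; larger values
    # trust the source-domain teacher more (an assumed weighting scheme).
    w = teacher_weight.unsqueeze(1)
    soft_target = (w * F.softmax(src_teacher_logits / T, dim=1)
                   + (1.0 - w) * F.softmax(tgt_teacher_logits / T, dim=1))
    log_prob = F.log_softmax(student_logits / T, dim=1)
    return F.kl_div(log_prob, soft_target, reduction="batchmean") * (T * T)

def cross_domain_mixup(x_src, x_tgt, alpha=0.2):
    # Mixup across domains: a convex combination of source and target images,
    # intended to improve the student's generalization (miCT-style).
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    return lam * x_src + (1.0 - lam) * x_tgt, lam

In a training loop of this form, the mixed inputs would be fed to the student and its predictions supervised against correspondingly mixed targets; the exact combination of the two losses is left unspecified here.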