Surprisingly simple semi-supervised domain adaptation with pretraining and consistency

Venkatesh Saligrama, Kate Saenko, Samarth Mishra

Most modern unsupervised domain adaptation (UDA) approaches are rooted in domain alignment, i.e., learning to align source and target features so that a target-domain classifier can be learned from source labels. In semi-supervised domain adaptation (SSDA), where the learner also has access to a few target-domain labels, prior approaches have followed UDA theory and used domain alignment for learning. We show that the SSDA setting is different: a good target classifier can be learned without alignment. We use self-supervised pretraining (via rotation prediction) and consistency regularization to obtain well-separated target clusters, which aid in learning a low-error target classifier. With this Pretraining and Consistency (PAC) approach, we achieve state-of-the-art target accuracy on semi-supervised domain adaptation, surpassing multiple adversarial domain-alignment methods across multiple datasets. Despite using simple techniques, PAC performs remarkably well on large and challenging SSDA benchmarks like DomainNet and VisDA-17, often outperforming recent state of the art by sizeable margins. Code for our experiments can be found at https://github.com/venkatesh-saligrama/PAC.
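To make the two ingredients concrete, below is a minimal PyTorch-style sketch of rotation-prediction pretraining and FixMatch-style consistency regularization. The helper names (`rotate_batch`, `rot_head`), the generic `backbone`/`model` interfaces, and the 0.95 confidence threshold are illustrative assumptions, not the authors' exact implementation; for that, see the linked repository.

```python
import torch
import torch.nn.functional as F

def rotate_batch(x):
    """Build the 4-way rotation task: each image in the NCHW batch is
    rotated by 0/90/180/270 degrees and labeled with its rotation index."""
    rots = [torch.rot90(x, k, dims=(2, 3)) for k in range(4)]
    labels = torch.arange(4).repeat_interleave(x.size(0))  # [0]*B, [1]*B, ...
    return torch.cat(rots, dim=0), labels.to(x.device)

def rotation_pretrain_loss(backbone, rot_head, x):
    """Self-supervised pretraining loss: the network must predict which
    rotation was applied, which requires learning useful image features."""
    xr, yr = rotate_batch(x)
    logits = rot_head(backbone(xr))  # rot_head: features -> 4 logits
    return F.cross_entropy(logits, yr)

def consistency_loss(model, x_weak, x_strong, threshold=0.95):
    """Consistency regularization on unlabeled target images: a confident
    prediction on a weakly augmented view serves as a pseudo-label for a
    strongly augmented view of the same image (FixMatch-style)."""
    with torch.no_grad():
        probs = F.softmax(model(x_weak), dim=1)
        conf, pseudo = probs.max(dim=1)
        mask = (conf >= threshold).float()  # keep only confident predictions
    loss = F.cross_entropy(model(x_strong), pseudo, reduction="none")
    return (mask * loss).mean()
```

The confidence mask keeps low-confidence target predictions from acting as pseudo-labels, so noisy early predictions do not dominate training; both losses operate without any source-target feature alignment, which is the point the paper makes.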
