Abstract: Unsupervised domain adaptation (UDA) has succeeded in addressing the problem that the training set (source domain) and the test set (target domain) come from different distributions. In low-energy-consumption, open, and dynamic task environments, however, resource constraints and the emergence of public classes pose severe challenges to existing UDA methods. Source-free open-set domain adaptation (SF-ODA) aims to transfer knowledge from a source model to an unlabeled target domain in which public classes appear, thereby recognizing the common classes and detecting the public classes without access to the source data. Existing SF-ODA methods focus on designing source models that accurately detect public classes or on modifying the model structure; however, they not only require extra storage space and training overhead but are also difficult to deploy in strict privacy scenarios. This study proposes a more practical scenario, active-learning source-free open-set domain adaptation (ASF-ODA), which achieves robust transfer based on a commonly trained source model and a small number of valuable target samples labeled by experts. A local consistent active learning (LCAL) algorithm is proposed to achieve this objective. First, LCAL includes a newly proposed active selection method, local diversity selection, which selects more valuable target-domain samples and promotes the separation of threshold-ambiguous samples by exploiting the local consistency of features in the target domain. Then, based on information entropy, LCAL initially selects a candidate common-class set and a candidate public-class set, and corrects these two sets with the labeled samples obtained in the first step to obtain two corresponding reliable sets. Finally, LCAL introduces an open-set loss and an information maximization loss to further separate the common and public classes, and a cross-entropy loss to discriminate among the common classes.
Extensive experiments on three public benchmark datasets, Office-31, Office-Home, and VisDA-C, show that with the help of a small number of valuable target samples, LCAL significantly outperforms existing active learning methods and SF-ODA methods, achieving over 20% HOS improvement on some transfer tasks.