Local Consistent Active Learning for Source-Free Open-Set Domain Adaptation
    Abstract:

    Unsupervised domain adaptation (UDA) has achieved success in solving the problem that the training set (source domain) and the test set (target domain) come from different distributions. However, in low-energy-consumption and open dynamic task environments, the emergence of resource constraints and public classes poses severe challenges to existing UDA methods. Source-free open-set domain adaptation (SF-ODA) aims to transfer knowledge from the source model to an unlabeled target domain in which public classes appear, thereby recognizing the common classes and detecting the public classes without access to the source data. Existing SF-ODA methods focus on designing source models that accurately detect the public classes or on modifying the model structure. However, they not only require extra storage space and training overhead but are also difficult to implement in strict privacy scenarios. This study proposes a more practical setting, active-learning source-free open-set domain adaptation (ASF-ODA), which achieves robust transfer based on a commonly trained source model and a small number of valuable target samples labeled by experts. A local consistent active learning (LCAL) algorithm is proposed to achieve this objective. First, LCAL includes a newly proposed active selection method, local diversity selection, which exploits the local consistency of labels in the target-domain feature space to select more valuable target samples and to promote the separation of samples near the decision threshold. Second, based on information entropy, LCAL initially selects a candidate common-class set and a candidate public-class set, and then corrects the two sets with the labeled samples obtained in the first step to obtain two corresponding reliable sets. Finally, LCAL introduces an open-set loss and an information maximization loss to further promote the separation of common and public classes, and a cross-entropy loss to achieve discrimination among the common classes.
    Extensive experiments on three public benchmark datasets, Office-31, Office-Home, and VisDA-C, show that with the help of a small number of valuable target samples, LCAL significantly outperforms existing active learning methods and SF-ODA methods, with HOS improvements of over 20% on some transfer tasks.
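    The entropy-based candidate-set selection and the information maximization loss mentioned in the abstract can be sketched as follows. This is a minimal NumPy illustration under stated assumptions, not the paper's implementation: the threshold of half the maximum entropy, the function names, and the exact loss form are assumptions for the sketch.

```python
import numpy as np

def softmax(logits):
    """Row-wise softmax over class logits (numerically stabilized)."""
    z = logits - logits.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def entropy(p, eps=1e-12):
    """Shannon entropy of each row of a probability matrix."""
    return -(p * np.log(p + eps)).sum(axis=1)

def split_by_entropy(logits, num_classes):
    """Partition target samples into candidate common-class (confident,
    low-entropy) and candidate public-class (uncertain, high-entropy) sets.
    The threshold 0.5 * log(C) is an assumed heuristic, not the paper's."""
    h = entropy(softmax(logits))
    thresh = 0.5 * np.log(num_classes)
    common = np.where(h < thresh)[0]
    public = np.where(h >= thresh)[0]
    return common, public

def information_maximization_loss(logits, eps=1e-12):
    """A standard information maximization loss: encourage confident
    per-sample predictions (low conditional entropy) while keeping the
    marginal prediction diverse (high marginal entropy)."""
    p = softmax(logits)
    cond_ent = entropy(p).mean()                       # confidence term
    marginal = p.mean(axis=0)
    div = -(marginal * np.log(marginal + eps)).sum()   # diversity term
    return cond_ent - div
```

    In this sketch, minimizing the loss pushes each sample toward a one-hot prediction while spreading predictions across classes; the entropy split only yields initial candidate sets, which the paper then corrects with the expert-labeled samples.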

Citation
Wang F, Han ZY, Su W, Yin YL. Local consistent active learning for source-free open-set domain adaptation. Journal of Software, 2024, 35(4): 1651-1666 (in Chinese).
History
  • Received: May 13, 2023
  • Revised: July 07, 2023
  • Online: September 11, 2023
  • Published: April 06, 2024
Copyright: Institute of Software, Chinese Academy of Sciences Beijing ICP No. 05046678-4