Research on Fairness in Deep Learning Models
    Abstract:

    In recent years, deep neural networks have been widely deployed in real-world decision-making systems, where unfair decisions exacerbate social inequality and harm society. Researchers have therefore carried out extensive studies on the fairness of deep learning systems, but most of these focus on group fairness and cannot guarantee fairness within a group. To this end, this study defines two ways of measuring individual fairness. The first is the individual fairness rate IFRb, based on output labels: the probability that two similar samples receive the same predicted label. The second is the individual fairness rate IFRp, based on output distributions: the probability that two similar samples receive similar predicted output distributions; the latter is the stricter notion of individual fairness. In addition, this study proposes an algorithm, IIFR, to improve the individual fairness of deep models. The algorithm uses cosine similarity to measure the similarity between samples and selects similar sample pairs via a similarity threshold chosen per application. During training, the output difference of each similar pair is added to the objective function as an individual fairness loss term, which penalizes similar training samples whose model outputs differ greatly. Experimental results show that the proposed IIFR algorithm outperforms state-of-the-art methods at improving individual fairness, and that it can maintain the group fairness of a model while doing so.
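    The label-based fairness rate IFRb and the IIFR fairness penalty described in the abstract can be sketched as follows. This is a minimal pure-Python illustration based only on the abstract's description; the function names, the pairwise loop, and the absolute-difference penalty are assumptions, not the paper's actual implementation.

    ```python
    import math

    def cosine_similarity(a, b):
        """Cosine similarity between two feature vectors (lists of floats)."""
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(x * x for x in b))
        return dot / (na * nb)

    def similar_pairs(X, threshold):
        """Indices (i, j) of sample pairs whose cosine similarity
        meets the application-chosen threshold."""
        n = len(X)
        return [(i, j) for i in range(n) for j in range(i + 1, n)
                if cosine_similarity(X[i], X[j]) >= threshold]

    def ifr_b(labels, pairs):
        """Label-based individual fairness rate: the fraction of similar
        pairs that receive the same predicted label."""
        if not pairs:
            return 1.0
        return sum(labels[i] == labels[j] for i, j in pairs) / len(pairs)

    def fairness_penalty(outputs, pairs):
        """Individual fairness loss term: mean absolute output difference
        over similar pairs, penalizing dissimilar outputs for similar inputs."""
        if not pairs:
            return 0.0
        return sum(abs(outputs[i] - outputs[j]) for i, j in pairs) / len(pairs)
    ```

    In the full IIFR training loop, a term like `fairness_penalty` would be computed on the model's outputs for the current batch and added, with a weighting coefficient, to the task loss; the stricter IFRp would compare whole output distributions (e.g. a distance between softmax vectors) rather than scalar outputs.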

Citation

王昱颖, 张敏, 杨晶然, 徐晟恺, 陈仪香. Research on Fairness in Deep Learning Models. Journal of Software (软件学报), 2023, 34(9): 4037-4055

History
  • Received: August 23, 2022
  • Revised: October 13, 2022
  • Online: January 13, 2023
Copyright: Institute of Software, Chinese Academy of Sciences