Quality Attributes and Practices of Trustworthy Artificial Intelligence Systems: A Tertiary Study
Authors:

Li Gongyuan (1999-), male, master's student, CCF student member; his research interests include software engineering for artificial intelligence. Liu Bohan (1991-), male, PhD, assistant researcher, CCF professional member; his research interests include software process, process simulation modeling, machine learning, software repositories, and empirical software engineering. Yang Yuhao (1999-), male, master's student; his research interests include software engineering for artificial intelligence. Shao Dong (1976-), male, associate professor, CCF professional member; his research interests include software development effectiveness, software process, agile software development, DevOps, high-tech market theory, software engineering education, blockchain, and big data.

Corresponding author:

Liu Bohan, E-mail: bohanliu@nju.edu.cn

Funding:

National Natural Science Foundation of China (62072227, 62202219); National Key Research and Development Program of China (2019YFE0105500); Key Research and Development Program of Jiangsu Province (BE2021002-2); Innovation Project of the State Key Laboratory for Novel Software Technology, Nanjing University (ZZKT2022A25); Overseas Open Research Project (KFKT2022A09)


    Abstract:

    Artificial intelligence (AI) systems are being used to solve real-world challenges on an unprecedented scale, and they have become a core driving force of human social development. With the rapid adoption of AI systems in all walks of life, concerns about their trustworthiness are growing, mainly because the trustworthiness of traditional software systems is insufficient to fully characterize that of AI systems. Research on the trustworthiness of AI systems is therefore urgently needed. A large number of relevant studies already exist, each with its own focus, but a holistic and systematic understanding is still lacking. This study is a tertiary study that takes existing secondary studies as its research object. It aims to reveal the state of research on the quality attributes and practices related to the trustworthiness of AI systems and to establish a more comprehensive quality attribute framework for trustworthy AI systems. The study collects, organizes, and analyzes 34 secondary studies published before March 2022 and identifies 21 trustworthiness-related quality attributes, as well as measurement methods and assurance practices for trustworthiness. It finds that existing research focuses mainly on security and privacy, while other quality attributes lack extensive and in-depth investigation. Two research directions that require interdisciplinary collaboration deserve more attention in future work. On the one hand, an AI system is essentially a software system, and its trustworthiness as a software system merits collaborative research by artificial intelligence and software engineering experts. On the other hand, artificial intelligence is humanity's exploration of machine anthropomorphism, and how to ensure the trustworthiness of machines in a social environment at the system level, such as how to satisfy human-centered values, merits collaborative research by artificial intelligence and social science experts.
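The finding that security and privacy dominate reflects a simple attribute-frequency analysis over the included secondary studies. As an illustrative sketch of that kind of tally, the snippet below counts attribute coverage across a small set of studies; the study IDs and attribute assignments are hypothetical placeholders, not the paper's actual extraction data.

```python
from collections import Counter

# Hypothetical mapping from secondary studies to the trustworthiness-related
# quality attributes they cover (placeholder data for illustration only).
study_attributes = {
    "S01": ["security", "privacy"],
    "S02": ["privacy"],
    "S03": ["security", "robustness"],
    "S04": ["explainability"],
    "S05": ["security", "privacy", "fairness"],
}

# Tally how often each attribute appears across the secondary studies,
# mirroring the frequency analysis a tertiary study performs.
coverage = Counter(attr for attrs in study_attributes.values() for attr in attrs)

for attribute, count in coverage.most_common():
    print(f"{attribute}: {count}")
```

With such a tally, attributes like security and privacy surface at the top, while the long tail of attributes with a single covering study marks the gaps the paper points to.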

Cite this article:

Li GY, Liu BH, Yang YH, Shao D. Quality attributes and practices of trustworthy artificial intelligence systems: A tertiary study. Ruan Jian Xue Bao/Journal of Software, 2023, 34(9): 3941-3965 (in Chinese).

Article history:
  • Received: 2022-09-04
  • Revised: 2022-10-13
  • Published online: 2023-01-13
  • Published: 2023-09-06