Survey on Machine Unlearning (机器遗忘综述)

Authors: 李梓童, 孟小峰, 王雷霞, 郝新丽

Funding: National Natural Science Foundation of China (61941121, 91846204, 6217242)

Abstract:

Machine learning has become increasingly prevalent in daily life: models trained on historical data are used to predict future behavior, making people's lives more convenient. However, machine learning also carries a privacy risk. When a user no longer wants their personal data to be used, merely deleting that data from the training set is not enough, because the trained model may still retain the user's information and leak it. The conventional remedy is to retrain the model on a training set that excludes the user's data, which guarantees that the new model contains no information about it; retraining, however, is often costly. This raises the key question of machine unlearning: can a model as close as possible to the retrained one be obtained at a much lower cost? This survey reviews the literature on this question, grouping existing unlearning methods into three categories: training-based, editing-based, and generation-based methods. It then introduces metrics for assessing unlearning, evaluates existing unlearning methods in deep learning, and concludes with future research directions.
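
To make the retraining baseline in the abstract concrete, the following Python sketch shows exact unlearning by retraining from scratch on the retained data; this is the reference model that approximate unlearning methods try to match at lower cost. The sketch is illustrative only and not taken from the surveyed methods: the dataset, the choice of logistic regression, and names such as forget_idx are assumptions.

    # Illustrative sketch of the exact-unlearning baseline: retrain on D \ D_f.
    # Dataset, model, and variable names (e.g., forget_idx) are hypothetical.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression

    X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

    # Original model trained on the full dataset D.
    original = LogisticRegression(max_iter=1000).fit(X, y)

    # Samples a user has asked to be forgotten (the forget set D_f).
    forget_idx = np.arange(50)
    keep_idx = np.setdiff1d(np.arange(len(X)), forget_idx)

    # Exact unlearning: retrain from scratch on the retained data D \ D_f.
    # The result provably carries no information about D_f, at the cost of a full retraining.
    retrained = LogisticRegression(max_iter=1000).fit(X[keep_idx], y[keep_idx])

    # An approximate unlearning method would instead edit `original` cheaply,
    # aiming for behavior close to that of `retrained`.
    agreement = (original.predict(X[keep_idx]) == retrained.predict(X[keep_idx])).mean()
    print(f"prediction agreement with the retrained model: {agreement:.3f}")

The cost of the second fit call is exactly what unlearning methods seek to avoid; training-based, editing-based, and generation-based approaches differ in how they approximate the retrained model without paying it.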

Cite this article:

李梓童, 孟小峰, 王雷霞, 郝新丽. 机器遗忘综述 (Survey on machine unlearning). 软件学报 (Journal of Software), 2025, 36(4): 1637–1664.

History:
  • Received: 2023-03-17
  • Revised: 2024-04-29
  • Published online: 2024-11-18