Research on Comment Quality Evaluation for Code Comment Generation Tasks

CLC Number: TP311

    Abstract:

    Code comment generation is an important research task in software engineering. Mainstream methods train deep learning models to generate comments and rely on metrics such as BLEU to evaluate comment quality on open code comment datasets. These evaluations mainly reflect the similarity between generated comments and the manual reference comments in the datasets. However, the quality of those reference comments varies widely, which has raised growing doubts about the effectiveness of such metrics. For code comment generation tasks, there is therefore an urgent need for direct and effective methods of evaluating code comment quality; such methods can both improve the quality of open comment datasets and strengthen the evaluation of generated comments. This study surveys and analyzes existing quantifiable methods for code comment quality evaluation and applies a set of multi-dimensional metrics to directly evaluate the quality of code comments in mainstream open datasets, comments generated by traditional methods, and comments generated by ChatGPT. The study reveals the following findings. 1) The quality of code comments in mainstream open datasets needs improvement; common problems include inaccuracy, poor readability, excessive simplicity, and a lack of useful information. 2) Comments generated by traditional methods are more lexically and semantically similar to the code but lack the information most useful to developers, such as the high-level intent of the code. 3) One important reason for the low BLEU scores of generated comments is the large number of poor-quality reference comments in the datasets, which are irrelevant to the code or exhibit poor naturalness; such reference comments should be filtered out or improved. 4) Comments generated by LLMs such as ChatGPT are rich in content but tend to be lengthy; their quality evaluation needs to be tailored to developer intentions and specific scenarios.
    Based on these findings, this study offers several suggestions for future research on code comment generation and comment quality evaluation.
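
    The abstract notes that mainstream evaluation relies on BLEU-style similarity between generated comments and reference comments. As a minimal illustration of what such a metric actually measures, here is a self-contained sketch of sentence-level BLEU-4 with clipped n-gram precisions and the standard brevity penalty; the two example comments below are invented for illustration, not drawn from the datasets studied in the paper.

    ```python
    import math
    from collections import Counter

    def ngram_counts(tokens, n):
        """Count all n-grams of length n in a token list."""
        return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

    def sentence_bleu(reference, candidate, max_n=4):
        """Sentence-level BLEU: geometric mean of clipped n-gram precisions
        (n = 1..max_n), scaled by the brevity penalty."""
        ref, cand = reference.split(), candidate.split()
        log_prec_sum = 0.0
        for n in range(1, max_n + 1):
            cand_ngrams = ngram_counts(cand, n)
            ref_ngrams = ngram_counts(ref, n)
            total = sum(cand_ngrams.values())
            # Clip each candidate n-gram count by its count in the reference.
            matched = sum(min(c, ref_ngrams[g]) for g, c in cand_ngrams.items())
            if total == 0 or matched == 0:
                return 0.0  # unsmoothed BLEU is zero if any precision is zero
            log_prec_sum += math.log(matched / total)
        # Brevity penalty: penalize candidates shorter than the reference.
        bp = 1.0 if len(cand) >= len(ref) else math.exp(1 - len(ref) / len(cand))
        return bp * math.exp(log_prec_sum / max_n)

    # Hypothetical reference comment vs. a generated comment.
    ref = "returns the maximum value in the given list"
    gen = "returns the maximum value of the list"
    print(round(sentence_bleu(ref, gen), 3))  # → 0.394
    ```

    Even this toy case shows the distortion the study attributes to poor-quality references: an accurate generated comment that merely rephrases the reference scores well below 1, while a generated comment matched against an irrelevant reference scores near 0 regardless of its actual quality.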

Citation:

Zhao XL, Pan XL, Zou YZ, Liu CX, Xie B. Research on comment quality evaluation for code comment generation tasks. Ruan Jian Xue Bao/Journal of Software, ():1–25 (in Chinese with English abstract).
History
  • Received: November 07, 2023
  • Revised: April 01, 2024
  • Online: December 04, 2024
Copyright: Institute of Software, Chinese Academy of Sciences