知识图谱补全技术及应用
Knowledge Graph Completion: Techniques and Applications
作者: 郑修林, 周鹏, 李培培, 张赞, 黄艳香, 吴信东
中图分类号: TP182
基金项目: 国家自然科学基金 (62120106008); 中央高校基本科研业务费专项资金 (JZ2023HGTB0270, PA2022GDSK0038)
    摘要:

    知识图谱以其独特的知识管理方式和表示能力被广泛运用于知识问答等知识计算领域. 但是, 现实中的知识图谱或多或少存在信息不完整的问题, 影响知识图谱的质量, 限制了下游任务的效果, 例如, 不完整的知识图谱无法给出准确的知识问答结果. 因此, 知识图谱补全技术应运而生, 旨在通过不同的策略对知识图谱事实三元组中缺失的内容进行预测, 以改善知识图谱的质量. 近年来, 人们对知识图谱补全进行了大量的研究. 本文根据构建模型所需样本的数量将现有的知识图谱补全技术分为3大类, 即零样本知识图谱补全、少样本知识图谱补全和多样本知识图谱补全. 为了给研究人员提供掌握知识图谱补全研究核心思想和研究现状的第一手材料, 本文从理论研究、实验分析以及具体应用 (如华谱系统) 等方面对已有的知识图谱补全技术进行全面回顾, 总结当前知识图谱补全技术所面临的问题与挑战, 并对未来可能的研究方向进行探讨.

    Abstract:

    Knowledge graphs (KGs), with their unique approach to knowledge management and representation, have been widely applied in knowledge computing fields such as question answering. However, real-world KGs are often incomplete to some degree, which undermines their quality and limits the performance of downstream tasks; for example, an incomplete KG cannot produce accurate answers to questions. As a result, knowledge graph completion (KGC) has emerged, aiming to improve the quality of KGs by predicting the missing elements of fact triples with various strategies. In recent years, extensive research has been conducted on KGC. This study classifies existing KGC techniques into three categories according to the number of samples required to build a model: zero-shot KGC, few-shot KGC, and multi-shot KGC. To serve as a first-hand reference for researchers on the core ideas and current status of KGC research, this study offers a comprehensive review of existing KGC techniques from the perspectives of theoretical research, experimental analysis, and practical applications such as the Huapu system. It then summarizes the problems and challenges faced by current KGC technologies and discusses potential directions for future research.
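To make the completion task described in the abstract concrete, the sketch below predicts the missing tail of a fact triple (h, r, ?) by ranking candidate entities with a translation-based score in the style of TransE (h + r ≈ t, scored by negative distance). This is a minimal illustration only: the toy embeddings, entity names, and the `predict_tail` helper are assumptions made for this example, not taken from the survey or any system it reviews.

```python
# Toy illustration of knowledge graph completion as tail prediction: given
# (head, relation, ?), rank candidate entities by a TransE-style score.
# Embeddings are hand-set 2-D vectors chosen purely for demonstration.

def score(h, r, t):
    # TransE assumes h + r ≈ t; score is the negative L1 distance,
    # so a higher score means a more plausible triple.
    return -sum(abs(hi + ri - ti) for hi, ri, ti in zip(h, r, t))

entity_emb = {
    "Beijing": [0.9, 0.1],
    "China":   [1.0, 1.0],
    "Paris":   [0.1, 0.9],
}
relation_emb = {"capital_of": [0.1, 0.9]}

def predict_tail(head, relation, candidates):
    # Complete the triple (head, relation, ?) by scoring every candidate tail.
    return max(
        candidates,
        key=lambda c: score(entity_emb[head], relation_emb[relation], entity_emb[c]),
    )

best = predict_tail("Beijing", "capital_of", ["China", "Paris"])
print(best)  # China
```

Real KGC models learn such embeddings from the training triples rather than setting them by hand, and evaluate the resulting rankings over all entities with metrics such as MRR and Hits@k.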

引用本文

郑修林, 周鹏, 李培培, 张赞, 黄艳香, 吴信东. 知识图谱补全技术及应用. 软件学报: 1–30.

历史
  • 收稿日期:2023-11-15
  • 最后修改日期:2024-10-02
  • 在线发布日期: 2025-06-25