Graph Neural Networks for Table-based Fact Verification

Fund Project:

Key Research and Development Program of China (2018AAA0101900, 2018AAA0101902); National Natural Science Foundation of China (91646202, 61772039)

    Abstract:

    In the study of natural language understanding and semantic representation, fact verification, the task of checking whether a textual statement is supported by given factual evidence, is of great importance. Existing research mainly addresses verification against textual evidence, while verification against structured evidence, such as tables, remains largely unexplored. TabFact is the most recent table-based fact verification dataset, but its baseline methods make poor use of the structural characteristics of tables. This study exploits table structure and designs two models, Row-GVM (row-level GNN-based verification model) and Cell-GVM (cell-level GNN-based verification model), which outperform the baseline model by 2.62% and 2.77%, respectively. These results demonstrate that the two methods exploiting table features are indeed effective.
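    The page does not reproduce the models themselves, but the row-level idea the abstract describes can be sketched: represent each table row and the statement as graph nodes and propagate information with a graph convolution layer [13]. The star-shaped topology, dimensions, and all names below are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def gcn_layer(A, H, W):
    """One GCN propagation step: ReLU(D^-1/2 (A+I) D^-1/2 H W)."""
    A_hat = A + np.eye(A.shape[0])            # add self-loops
    d = A_hat.sum(axis=1)                     # node degrees
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))    # symmetric normalization
    return np.maximum(0.0, D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W)

# Hypothetical row-level graph: one node per table row plus a statement
# node, with every row connected to the statement (a star graph).
rng = np.random.default_rng(0)
n_rows, dim = 4, 8
A = np.zeros((n_rows + 1, n_rows + 1))
A[:n_rows, n_rows] = A[n_rows, :n_rows] = 1.0   # rows <-> statement edges
H = rng.standard_normal((n_rows + 1, dim))      # node embeddings (e.g. from an encoder)
W = rng.standard_normal((dim, dim))             # learnable weight matrix
H_out = gcn_layer(A, H, W)                      # updated node representations
```

    After one or more such layers, the statement node's representation aggregates evidence from all rows and could be fed to a classifier; the cell-level variant would instead build one node per table cell.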

    Reference
    [1] Dagan I, Glickman O, Magnini B. The PASCAL recognising textual entailment challenge. In: Proc. of the Machine Learning Challenges Workshop. Berlin, Heidelberg: Springer-Verlag, 2005. 177-190.
    [2] Bowman S, Angeli G, Potts C, et al. A large annotated corpus for learning natural language inference. In: Proc. of the 2015 Conf. on Empirical Methods in Natural Language Processing. 2015. 632-642.
    [3] Thorne J, Vlachos A, Christodoulopoulos C, et al. FEVER: A large-scale dataset for fact extraction and verification. In: Proc. of the 2018 Conf. of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Vol. 1: Long Papers. 2018. 809-819.
    [4] Suhr A, Lewis M, Yeh J, et al. A corpus of natural language for visual reasoning. In: Proc. of the 55th Annual Meeting of the Association for Computational Linguistics, Vol. 2: Short Papers. 2017. 217-223.
    [5] Suhr A, Zhou S, Zhang A, et al. A corpus for reasoning about natural language grounded in photographs. In: Proc. of the 57th Annual Meeting of the Association for Computational Linguistics. 2019. 6418-6428.
    [6] Chen W, Wang H, Chen J, et al. TabFact: A large-scale dataset for table-based fact verification. arXiv preprint arXiv:1909.02164, 2019.
    [7] Pasupat P, Liang P. Inferring logical forms from denotations. In: Proc. of the 54th Annual Meeting of the Association for Computational Linguistics, Vol. 1: Long Papers. 2016. 23-32.
    [8] Scarselli F, Gori M, Tsoi AC, et al. The graph neural network model. IEEE Trans. on Neural Networks, 2008, 20(1): 61-80.
    [9] Zhou J, Han X, Yang C, et al. GEAR: Graph-based evidence aggregating and reasoning for fact verification. In: Proc. of the 57th Annual Meeting of the Association for Computational Linguistics. 2019. 892-901.
    [10] Veličković P, Cucurull G, Casanova A, et al. Graph attention networks. arXiv preprint arXiv:1710.10903, 2017.
    [11] Liu Z, Xiong C, Sun M. Fine-grained fact verification with kernel graph attention network. arXiv preprint arXiv:1910.09796, 2019.
    [12] Zhong W, Xu J, Tang D, et al. Reasoning over semantic-level graph for fact checking. arXiv preprint arXiv:1909.03745, 2019.
    [13] Kipf TN, Welling M. Semi-supervised classification with graph convolutional networks. arXiv preprint arXiv:1609.02907, 2016.
    [14] De Cao N, Aziz W, Titov I. Question answering by reasoning across documents with graph convolutional networks. In: Proc. of the 2019 Conf. of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Vol. 1: Long and Short Papers. 2019. 2306-2317.
    [15] Parikh A, Täckström O, Das D, et al. A decomposable attention model for natural language inference. In: Proc. of the 2016 Conf. on Empirical Methods in Natural Language Processing. 2016. 2249-2255.
    [16] Chen Q, Zhu X, Ling ZH, et al. Enhanced LSTM for natural language inference. In: Proc. of the 55th Annual Meeting of the Association for Computational Linguistics, Vol. 1: Long Papers. 2017. 1657-1668.
    [17] Williams A, Nangia N, Bowman SR. A broad-coverage challenge corpus for sentence understanding through inference. arXiv preprint arXiv:1704.05426, 2017.
    [18] Devlin J, Chang MW, Lee K, et al. BERT: Pre-training of deep bidirectional transformers for language understanding. In: Proc. of the 2019 Conf. of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Vol. 1: Long and Short Papers. 2019. 4171-4186.
    [19] Peters M, Neumann M, Iyyer M, et al. Deep contextualized word representations. In: Proc. of the 2018 Conf. of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Vol. 1: Long Papers. 2018. 2227-2237.
    [20] Radford A, Narasimhan K, Salimans T, et al. Improving language understanding by generative pre-training. 2018. https://s3-us-west-2.amazonaws.com/openai-assets/research-covers/language-unsupervised/languageunderstandingpaper.pdf
    [21] Maas AL, Hannun AY, Ng AY. Rectifier nonlinearities improve neural network acoustic models. Proc. of the ICML, 2013, 30(1): 3.
Get Citation

Deng ZY, Zhang M. Graph neural networks for table-based fact verification. Ruan Jian Xue Bao/Journal of Software, 2021, 32(3): 753-762 (in Chinese).
Article Metrics
  • Abstract: 2091
  • PDF: 5511
  • HTML: 3513
  • Cited by: 0
History
  • Received: August 23, 2020
  • Revised: September 03, 2020
  • Online: January 21, 2021
  • Published: March 06, 2021
Copyright: Institute of Software, Chinese Academy of Sciences. Beijing ICP No. 05046678-4
Address: 4# South Fourth Street, Zhong Guan Cun, Beijing, Postal Code: 100190
Phone: 010-62562563  Fax: 010-62562533  Email: jos@iscas.ac.cn
Technical Support: Beijing Qinyun Technology Development Co., Ltd.

Beijing Public Network Security No. 11040202500063