Co-Training Framework for Feature Weight Optimization of Statistical Machine Translation
Authors: Liu SJ, Li ZH, Li M, Zhou M

    Abstract:

    This paper analyzes the domain adaptation problem for feature weights in statistical machine translation and proposes a co-training method to address it. The method uses translation results from a heterogeneous decoder as pseudo references and adds them to the development set for minimum error rate training, biasing the feature weights toward the domain of the test set. Furthermore, a minimum Bayes-risk system combination is proposed for pseudo reference selection, which picks proper translations from the candidates of both decoders and smooths the training process, further improving co-training performance. Experimental results show that co-training with minimum Bayes-risk combination can alleviate the domain adaptation problem of feature weights and yields significant quality improvements on the target domain.
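    As a rough illustration of the pseudo-reference selection step described above, the sketch below pools translation candidates from both decoders and picks the hypothesis with the highest expected similarity to the rest of the pool (i.e., minimum Bayes risk). The similarity function used here, an average of clipped n-gram precisions, is a stand-in assumption; the abstract does not specify the paper's actual gain function or decoder interfaces.

```python
from collections import Counter

def ngrams(tokens, n):
    """All n-grams of a token list, as a Counter."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def ngram_overlap(hyp, ref, max_n=4):
    """BLEU-like similarity: average of clipped n-gram precisions for
    n = 1..max_n. A stand-in for the true sentence-level gain function."""
    hyp_t, ref_t = hyp.split(), ref.split()
    scores = []
    for n in range(1, max_n + 1):
        h, r = ngrams(hyp_t, n), ngrams(ref_t, n)
        total = sum(h.values())
        if total == 0:
            scores.append(0.0)
            continue
        clipped = sum(min(count, r[g]) for g, count in h.items())
        scores.append(clipped / total)
    return sum(scores) / len(scores)

def mbr_select(candidates):
    """Minimum Bayes-risk selection: return the candidate with maximum
    expected similarity to all other candidates in the pooled list."""
    def expected_gain(hyp):
        others = [c for c in candidates if c is not hyp]
        return sum(ngram_overlap(hyp, ref) for ref in others) / len(others)
    return max(candidates, key=expected_gain)

# Hypothetical pooled candidates from two decoders for one source sentence:
pool = [
    "the weights adapt to the new domain",
    "weights adapt to the new domain",
    "the weight adapts a new domain",
]
pseudo_reference = mbr_select(pool)
```

    The selected hypothesis would then be paired with its source sentence and appended to the development set before the next round of minimum error rate training, so that weight tuning is pulled toward the test-set domain.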

Cite this article:

Liu SJ, Li ZH, Li M, Zhou M. Co-training framework for feature weight optimization of statistical machine translation. Journal of Software, 2012,23(12):3101-3114 (in Chinese with English abstract).

History
  • Received: 2011-09-01
  • Revised: 2012-03-15
  • Published online: 2012-12-05