Chinese Chunking with Large Margin Method
Fund Project:

Supported by the National Natural Science Foundation of China under Grant Nos. 60673043, 60773173; the National High-Tech Research and Development Plan of China (863 Program) under Grant No. 2006AA01Z143; the Natural Science Foundation of Jiangsu Province of China under Grant No. BK2006117; and the Natural Science Foundation of Jiangsu Higher Education Institutions of China under Grant No. 07KJB520057



    Abstract (in Chinese):

    Chinese chunking is an important subtask in Chinese information processing. Based on a new structural SVMs (support vector machines) model, this paper proposes a large margin method for Chinese chunking. First, a sequence labeling model is designed for the Chinese chunking problem; then, following the large margin principle, the optimization objective of the discriminative sequence labeling function is formulated, and the cutting plane algorithm is applied to approximately optimize the feature parameters. An improved F1 loss function is designed for chunk recognition, so that the F1 loss value is adjusted according to the actual length of each sentence, which introduces more effective constraint inequalities. Experiments on the Penn Chinese Treebank CTB4 dataset show that the recognition results produced with the improved F1 loss function are better than those produced with the Hamming loss function; the overall F1 score over all chunk types is 91.61%, outperforming the CRFs (conditional random fields) and SVMs methods.
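The modification highlighted above is an F1 loss that is rescaled by the actual length of each sentence. As a rough illustration of that idea only, the following Python sketch computes a chunk-level F1 loss over BIO-encoded tag sequences and multiplies it by the sentence length; the BIO encoding, the helper names, and the linear length scaling are assumptions made for illustration, not the paper's exact formulation.

def extract_chunks(tags):
    """Collect (start, end, type) spans from a list of BIO tags."""
    chunks, start, ctype = [], None, None
    for i, tag in enumerate(tags + ["O"]):          # sentinel flushes the last chunk
        if tag.startswith("B-") or tag == "O":
            if start is not None:
                chunks.append((start, i, ctype))
                start, ctype = None, None
            if tag.startswith("B-"):
                start, ctype = i, tag[2:]
    return set(chunks)

def f1_loss(gold_tags, pred_tags):
    """1 - F1 over chunk spans; 0 when the prediction is perfect."""
    gold, pred = extract_chunks(gold_tags), extract_chunks(pred_tags)
    if not gold and not pred:
        return 0.0
    tp = len(gold & pred)
    prec = tp / len(pred) if pred else 0.0
    rec = tp / len(gold) if gold else 0.0
    f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
    return 1.0 - f1

def length_scaled_f1_loss(gold_tags, pred_tags):
    """Rescale the F1 loss by sentence length so longer sentences induce
    proportionally larger margins (linear scaling is an assumption here)."""
    return len(gold_tags) * f1_loss(gold_tags, pred_tags)

For example, length_scaled_f1_loss(["B-NP", "I-NP", "O"], ["B-NP", "O", "O"]) returns 3.0 for this three-token sentence, since the predicted chunk span does not match the gold span and the unit F1 loss is multiplied by the sentence length.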

    Abstract:

    Chinese chunking plays an important role in natural language processing. This paper presents a large margin method for Chinese chunking based on structural SVMs (support vector machines). First, a sequence labeling model and the formulation of the learning problem are introduced for the Chinese chunking problem, and the cutting plane algorithm is then applied to efficiently approximate the optimal solution of the optimization problem. Finally, an improved F1 loss function is proposed to tackle Chinese chunking. This loss function scales the F1 loss value to the length of the sentence and adjusts the margin accordingly, leading to more effective constraint inequalities. Experiments are conducted on the UPENN Chinese Treebank-4 (CTB4), comparing the Hamming loss function with the improved F1 loss function. The experimental results show that training with the improved F1 loss function achieves higher performance than training with the Hamming loss function. The overall F1 score of Chinese chunking obtained with this approach is 91.61%, which is higher than the performance of state-of-the-art machine learning models such as CRFs (conditional random fields) and SVMs.
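The cutting plane algorithm referred to above repeatedly searches for the most violated constraint, which amounts to a loss-augmented decoding step of the form argmax_y [ w·Φ(x, y) + Δ(y_gold, y) ]. The sketch below illustrates this step for a linear-chain labeler using the Hamming loss, the baseline the paper compares against, because the Hamming loss decomposes over positions and folds directly into Viterbi decoding; the paper's length-scaled F1 loss is not token-decomposable and requires a different, approximate search that is not shown here. The array names and the toy example are illustrative assumptions, not the authors' implementation.

import numpy as np

def loss_augmented_viterbi(emission_scores, transition_scores, gold_tags):
    """Return the tag sequence maximizing model score + Hamming loss.
    emission_scores:   (n_tokens, n_tags) array of per-position tag scores (w·Φ).
    transition_scores: (n_tags, n_tags) array of tag-to-tag transition scores.
    gold_tags:         length-n_tokens array of gold tag indices."""
    n_tokens, n_tags = emission_scores.shape
    # Margin rescaling with Hamming loss: add 1 to every tag that disagrees
    # with the gold tag at that position.
    augmented = emission_scores + (np.arange(n_tags)[None, :] != gold_tags[:, None])
    # Standard Viterbi dynamic program over the augmented scores.
    delta = np.zeros((n_tokens, n_tags))
    backptr = np.zeros((n_tokens, n_tags), dtype=int)
    delta[0] = augmented[0]
    for t in range(1, n_tokens):
        scores = delta[t - 1][:, None] + transition_scores + augmented[t][None, :]
        backptr[t] = scores.argmax(axis=0)
        delta[t] = scores.max(axis=0)
    # Follow the back-pointers to recover the most violated labeling.
    best = [int(delta[-1].argmax())]
    for t in range(n_tokens - 1, 0, -1):
        best.append(int(backptr[t][best[-1]]))
    return best[::-1]

# Toy usage: 4 tokens, 3 tags (e.g. B/I/O), random scores.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    emission = rng.normal(size=(4, 3))
    transition = rng.normal(size=(3, 3))
    gold = np.array([0, 1, 2, 0])
    print(loss_augmented_viterbi(emission, transition, gold))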

Cite this article:

周俊生, 戴新宇, 陈家骏, 曲维光. Chinese Chunking with Large Margin Method. Journal of Software, 2009, 20(4): 870-877

History
  • Received: 2007-03-13
  • Revised: 2007-11-05