[Keywords]
[Abstract]
Weak label learning is an important branch of multi-label learning. In recent years it has been widely studied and applied to problems such as completing the missing labels of multi-label instances and predicting labels for new instances. However, for high-dimensional data, which have large feature sets and are more likely to carry multiple semantic labels and to suffer from missing labels, existing weak label learning methods are generally susceptible to interference from the noisy and redundant features contained in such data. To accurately classify high-dimensional multi-label data, this paper proposes EnWL, an ensemble weak label classification method based on maximizing the dependency between labels and features. EnWL first applies affinity propagation clustering repeatedly in the feature space of the high-dimensional data, each time taking the cluster centers as a representative feature subset to reduce the interference of noisy and redundant features; it then trains, on each feature subset, a semi-supervised multi-label classifier based on maximizing the dependency between labels and features; finally, it combines these classifiers by voting to perform multi-label classification. Experimental results on a variety of high-dimensional datasets show that the predictive performance of EnWL is superior to that of existing related methods across multiple evaluation metrics.
[Keywords]
[Abstract]
Weak label learning is an important branch of multi-label learning that has been widely studied and applied to tasks such as completing the missing labels of partially labeled instances and classifying new instances. However, existing weak label learning methods are generally vulnerable to the noisy and redundant features of high-dimensional data, in which multiple semantic labels and missing labels are more likely to be present. To accurately classify high-dimensional multi-label instances, this paper proposes an ensemble weak label classification method that maximizes the dependency between labels and features (EnWL for short). EnWL first applies affinity propagation clustering repeatedly in the feature space of the high-dimensional data to find cluster centers. Next, it takes the obtained cluster centers as representative feature subsets, reducing the impact of noisy and redundant features. Then, on each feature subset, EnWL trains a semi-supervised multi-label classifier by maximizing the dependency between labels and features. Finally, it combines these base classifiers into an ensemble via majority voting. Experimental results on several high-dimensional datasets show that EnWL significantly outperforms other related methods across various evaluation metrics.
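The pipeline summarized in the abstract (repeated affinity propagation over features, one base learner per representative feature subset, and majority voting) can be illustrated with a minimal sketch. The code below is an assumption-laden illustration rather than the authors' implementation: it uses scikit-learn's AffinityPropagation on feature vectors, a plain supervised one-vs-rest logistic regression stands in for the semi-supervised, dependency-maximizing base classifier, the random column subsampling used to diversify the subsets is an added assumption, and the helper names select_representative_features, train_ensemble, and predict_vote are hypothetical.

import numpy as np
from sklearn.cluster import AffinityPropagation
from sklearn.linear_model import LogisticRegression
from sklearn.multioutput import MultiOutputClassifier

def select_representative_features(X, rng):
    """Cluster the columns (features) of X with affinity propagation and keep
    the exemplar (cluster-center) features as one representative subset."""
    n_features = X.shape[1]
    # Randomly subsample features so that repeated runs yield diverse subsets
    # (an assumption added for illustration, not stated in the abstract).
    cols = rng.choice(n_features, size=max(2, n_features // 2), replace=False)
    ap = AffinityPropagation(random_state=0).fit(X[:, cols].T)  # one row per feature
    return cols[ap.cluster_centers_indices_]

def train_ensemble(X, Y, n_members=5, seed=0):
    """Train one multi-label base classifier per representative feature subset.
    A supervised one-vs-rest model is used here as a stand-in for the
    semi-supervised, dependency-maximizing base learner described above."""
    rng = np.random.default_rng(seed)
    members = []
    for _ in range(n_members):
        feats = select_representative_features(X, rng)
        clf = MultiOutputClassifier(LogisticRegression(max_iter=1000))
        clf.fit(X[:, feats], Y)
        members.append((feats, clf))
    return members

def predict_vote(members, X):
    """Combine the base classifiers' binary label predictions by majority voting."""
    votes = np.stack([clf.predict(X[:, feats]) for feats, clf in members])
    return (votes.mean(axis=0) >= 0.5).astype(int)

Under these assumptions, a call such as members = train_ensemble(X_train, Y_train) followed by predict_vote(members, X_test) would reproduce the three stages of the described pipeline: representative feature selection, per-subset training, and voting-based combination.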
[CLC number]
[Foundation item]
National Natural Science Foundation of China (61402378, 61571163, 61532014, 61671189); Chongqing Basic and Frontier Research Project (cstc2014jcyjA40031, cstc2016jcyjA0351)