Isospectral Manifold Learning Algorithm
Authors: Huang Yun-Juan, Li Fan-Zhang
Funding: National Natural Science Foundation of China (60970067, 61033013, 60775045); Soochow Scholar Program (14317360); Soochow University National Pre-research Fund (SDY2011A25)

    Abstract:

    Manifold learning based on spectral methods has been widely used in recent years for discovering a low-dimensional representation embedded in a high-dimensional vector space. Isospectral manifold learning is one of the main topics within spectral methods. It stems from the conclusion that if the spectra of two manifolds are the same, then so are their internal structures. However, the difficult problems in computing the spectrum are how to select the optimal neighborhood size and how to construct reasonable neighboring weights. This paper proposes a supervised technique called the isospectral manifold learning algorithm (IMLA). By directly modifying the sparse reconstruction weight matrix, IMLA takes both within-neighborhood and between-neighborhood discriminant information into account. It thus not only preserves the sparse reconstructive relationships among the data but also fully exploits the discriminant information, giving it a clear advantage over PCA and similar algorithms. Experimental results on three face databases (Yale, ORL, and Extended Yale B) demonstrate the effectiveness of IMLA.
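The pipeline the abstract describes — sparse reconstruction weights, a label-driven adjustment of those weights, then a spectral embedding of the resulting graph — can be sketched roughly as follows. This is only a minimal illustration, not the authors' implementation: the ISTA solver for the ℓ1 problem, the ±α within-/between-class adjustment rule, and all parameter values are assumptions made for the sketch.

```python
import numpy as np

def sparse_weights(X, lam=0.1, n_iter=200):
    """Approximate l1-regularized reconstruction weights via ISTA:
    each sample x_i is sparsely reconstructed from all other samples."""
    n, _ = X.shape
    W = np.zeros((n, n))
    for i in range(n):
        A = np.delete(X, i, axis=0).T               # dictionary of the other samples
        x = X[i]
        step = 1.0 / (np.linalg.norm(A, 2) ** 2 + 1e-12)  # 1 / Lipschitz constant
        w = np.zeros(n - 1)
        for _ in range(n_iter):
            g = A.T @ (A @ w - x)                   # gradient of 0.5*||A w - x||^2
            w = w - step * g
            w = np.sign(w) * np.maximum(np.abs(w) - step * lam, 0.0)  # soft threshold
        W[i, np.arange(n) != i] = w
    return W

def supervised_adjust(W, labels, alpha=0.5):
    """Hypothetical supervised modification: strengthen within-class weights
    and penalize between-class weights (IMLA's exact rule is in the paper;
    this rule is an assumption for illustration only)."""
    same = labels[:, None] == labels[None, :]
    return np.where(same, W + alpha * np.abs(W), W - alpha * np.abs(W))

def spectral_embedding(W, dim=2):
    """Embed via the Laplacian of the symmetrized adjacency graph."""
    S = (np.abs(W) + np.abs(W).T) / 2
    L = np.diag(S.sum(axis=1)) - S
    _, vecs = np.linalg.eigh(L)                     # eigenvalues in ascending order
    return vecs[:, 1:dim + 1]                       # skip the constant eigenvector

# Toy data: two well-separated classes in 5 dimensions.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (10, 5)), rng.normal(3, 1, (10, 5))])
y = np.array([0] * 10 + [1] * 10)
Y = spectral_embedding(supervised_adjust(sparse_weights(X), y), dim=2)
print(Y.shape)  # (20, 2)
```

The three stages mirror the abstract's description: unsupervised sparse reconstruction first, then the supervised correction folded directly into the weight matrix, then an ordinary graph-spectral embedding, so the supervision enters only through the adjacency graph.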

    References
    [1] Seung HS, Lee DD. The manifold ways of perception. Science, 2000,290(5500):2268-2269.[doi: 10.1126/science.290.5500.2268]
    [2] Tenenbaum JB, de Silva V, Langford JC. A global geometric framework for nonlinear dimensionality reduction. Science, 2000, 290(5500):2319-2323.[doi: 10.1126/science.290.5500.2319]
    [3] Roweis S, Saul L. Nonlinear dimensionality reduction by locally linear embedding. Science, 2000,290(5500):2323-2326.[doi: 10.1126/science.290.5500.2323]
    [4] Belkin M, Niyogi P. Laplacian eigenmaps for dimensionality reduction and data representation. Neural Computation, 2003,15(6): 1373-1396.[doi: 10.1162/089976603321780317]
    [5] Zhang Z, Zha H. Principal manifolds and nonlinear dimensionality reduction via tangent space alignment. SIAM Journal of Scientific Computing, 2004,26(1):313-338.[doi: 10.1137/S1064827502419154]
    [6] Bengio Y, Paiement JF, Vincent P, Delalleau O, Le Roux N, Ouimet M. Out-of-Sample extensions for LLE, isomap, MDS, eigenmaps, and spectral clustering. In: Proc. of the Advances in Neural Information Processing Systems 16 (NIPS 2003). 2004. 177-184.
    [7] He XF, Niyogi P. Locality preserving projections. In: Proc. of the Neural Information Processing Systems (NIPS). 2003.
    [8] He XF, Cai D, Yan SC, Zhang HJ. Neighborhood preserving embedding. In: Proc. of the IEEE Int'l Conf. on Computer Vision (ICCV). 2005.[doi: 10.1109/ICCV.2005.167]
    [9] Fu Y, Huang TS. Graph embedded analysis for head pose estimation. In: Proc. of the IEEE Int'l Conf. on Automatic Face and Gesture Recognition. 2006.[doi: 10.1109/FGR.2006.60]
    [10] Li FZ, Zhang L, Yang JW, Qian XP, Wang BJ, He SP. Lie Group Machine Learning. Hefei: University of Science and Technology of China Press, 2013 (in Chinese).
    [11] Zhang ZY, Wang J, Zha HY. Adaptive manifold learning. IEEE Trans. on Pattern Analysis and Machine Intelligence, 2012,34(2): 253-265.[doi: 10.1109/TPAMI.2011.115]
    [12] Timofte R, Van Gool L. Iterative nearest neighbors for classification and dimensionality reduction. In: Proc. of the IEEE Conf. on Computer Vision and Pattern Recognition (CVPR). 2012.[doi: 10.1109/CVPR.2012.6247960]
    [13] Zhang LM, Chen SC, Qiao LS. Graph optimization for dimensionality reduction with sparsity constraints. Pattern Recognition, 2012,45(3):1205-1210.[doi: 10.1016/j.patcog.2011.08.015]
    [14] Zhu XF, Huang Z, Yang Y, Shen HT, Xu CS, Luo JB. Self-Taught dimensionality reduction on the high-dimensional small-sized data. Pattern Recognition, 2013,46:215-229.[doi: 10.1016/j.patcog.2012.07.018]
    [15] Qiao LS, Chen SC, Tan XY. Sparsity preserving projections with applications to face recognition. Pattern Recognition, 2010,43(1): 331-341.[doi: 10.1016/j.patcog.2009.05.005]
    [16] Chen SSB, Donoho DL, Saunders MA. Atomic decomposition by basis pursuit. SIAM Review, 2001,43(1):129-159.[doi: 10.1137/S003614450037906X]
    [17] Wright J, Yang A, Sastry S, Ma Y. Robust face recognition via sparse representation. IEEE Trans. on Pattern Analysis and Machine Intelligence, 2009,31(2):210-227.[doi: 10.1109/TPAMI.2008.79]
    [18] Sergios T, Konstantinos K. Pattern Recognition. Beijing: Electronic Industrial Publishing House, 2006 (in Chinese).
    [19] Gui J, Sun ZA, Jia W, Hu RX, Lei YK, Ji SW. Discriminant sparse neighborhood preserving embedding for face recognition. Pattern Recognition, 2012,45(8):2884-2893.[doi: 10.1016/j.patcog.2012.02.005]
    [20] Li HF, Jiang T, Zhang KS. Efficient and robust feature extraction by maximum margin criterion. IEEE Trans. on Neural Networks, 2006,17(1):157-165.[doi: 10.1109/TNN.2005.860852]
    [21] Xu SL, Xue CH, Hu ZS, Jin YD. Modern Differential Geometry—Spectral Theory and Isospectral Problem, Curvature and Topological Invariants. Hefei: University of Science and Technology of China Press, 2009 (in Chinese).
    [22] Ng AY, Jordan MI, Weiss Y. On spectral clustering: Analysis and an algorithm. In: Proc. of the Advances in Neural Information Processing Systems 14 (NIPS). Cambridge: MIT Press, 2001. 849-856.
    [23] Cortes C, Mohri M. On transductive regression. In: Proc. of the Advances in Neural Information Processing Systems. 2007. 305-312.
    [24] Berger M, Gauduchon P, Mazet E. Le spectre d'une variété riemannienne. Lecture Notes in Mathematics, 1971,194:204-216.
    [25] Huang K, Aviyente S. Sparse representation for signal classification. In: Proc. of the Advances in Neural Information Processing Systems (NIPS). 2006.
    [26] Donoho DL, Tsaig Y. Fast solution of l1-norm minimization problems when the solution may be sparse. IEEE Trans. on Information Theory, 2008,54(11):4789-4812.[doi: 10.1109/TIT.2008.929958]
    [27] Jebara T, Wang J, Chang S. Graph construction and b-matching for semi-supervised learning. In: Proc. of the Int'l Conf. on Machine Learning (ICML). 2009.
    [28] Mardia KV, Kent JT, Bibby JM. Multivariate Analysis. New York: Academic Press, 1979.
    [29] Belhumeur PN, Hespanha JP, Kriegman DJ. Eigenfaces vs. fisherfaces: Recognition using class specific linear projection. IEEE Trans. on Pattern Analysis and Machine Intelligence, 1997,19(7):711-720.[doi: 10.1109/34.598228]
    [30] Samaria FS, Harter AC. Parameterisation of a stochastic model for human face identification. In: Proc. of the Workshop on Applications of Computer Vision. 1994.
    [31] Dogandzic A, Qiu K. Automatic hard thresholding for sparse signal reconstruction from NDE measurement. Review of Progress in Quantitative Nondestructive Evaluation, 2010,29:806-813.[doi: 10.1063/1.3362486]
    [32] Liu ZG, Pan Q, Dezert J. A new belief-based K-nearest neighbor classification method. Pattern Recognition, 2013,46(3):834-844.[doi: 10.1016/j.patcog.2012.10.001]
Cite this article:

Huang YJ, Li FZ. Isospectral manifold learning algorithm. Journal of Software, 2013,24(11):2656-2666 (in Chinese)

History
  • Received: 2013-01-29
  • Revised: 2013-08-02
  • Published online: 2013-11-01