A Local Learning Approach for Natural Image Matting
(一种基于局部学习的自然图像景物提取方法)

Authors: Peng Hongjing, Chen Songcan, Zhang Daoqiang

Fund: Supported by the National Natural Science Foundation of China under Grant Nos. 60505004, 60773061 (国家自然科学基金)

    Abstract:

    A scheme for categorizing Laplacians is introduced in this paper, based on the number of times the similarity weight is computed for each pair of adjacent data points. It is also theoretically proven that a Laplacian constructed with multiple computations of the similarity weight for each pair of adjacent points captures the local intrinsic structure of the data more finely than those constructed with only one or two such computations. A novel Laplacian construction method is then proposed that is better suited to the natural image matting task. In this method, the different similarity weights for each pair of adjacent pixels are reconstructed by a local linear model in each of the neighborhoods the pair falls into. Combined with user-provided constraints that mark some pixels as foreground or background, this yields a quadratic objective function for matting based on semi-supervised learning. When the colors of unknown pixels are estimated by sampling foreground and background colors, the optimization problem can be reformulated and solved iteratively. Moreover, this iterative scheme can be generalized to other previously constructed Laplacians, allowing them to be applied to matting tasks in which only sparse scribbles are provided. Both theoretical analysis and experimental results confirm that the proposed Laplacian construction captures the intrinsic structure among image pixels more fully, and propagates the finer foreground and background ingredients of an image rather than just their labels, thus producing mattes of higher quality.
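    The abstract describes a general pipeline: reconstruct similarity weights with a local linear model, assemble a Laplacian from the reconstruction residual, and solve a semi-supervised quadratic objective under scribble constraints. The sketch below illustrates that pipeline in its simplest LLE-style form (cf. refs. [11,14,15]); it is not the authors' exact construction, and the function names, the feature matrix `X`, and the penalty `lam` are illustrative assumptions.

```python
import numpy as np

def local_linear_weights(X, k=4, reg=1e-3):
    # For each point, reconstruct it from its k nearest neighbors with
    # weights that sum to one (LLE-style local linear model).
    n = X.shape[0]
    W = np.zeros((n, n))
    for i in range(n):
        dist = np.linalg.norm(X - X[i], axis=1)
        nbrs = np.argsort(dist)[1:k + 1]      # skip the point itself
        Z = X[nbrs] - X[i]                    # neighbors centered on x_i
        G = Z @ Z.T                           # local Gram matrix (k x k)
        tr = np.trace(G)
        G += reg * (tr if tr > 0 else 1.0) * np.eye(k)  # regularize
        w = np.linalg.solve(G, np.ones(k))
        W[i, nbrs] = w / w.sum()              # enforce sum-to-one
    return W

def reconstruction_laplacian(W):
    # L = (I - W)^T (I - W): penalizes alpha values that deviate from
    # their own local linear reconstruction.
    M = np.eye(W.shape[0]) - W
    return M.T @ M

def solve_alpha(L, labels, lam=100.0):
    # Minimize a^T L a + lam * sum over labeled i of (a_i - y_i)^2,
    # i.e. solve the linear system (L + lam*S) a = lam*y.
    n = L.shape[0]
    S = np.zeros(n)
    b = np.zeros(n)
    for i, y in labels.items():
        S[i] = lam
        b[i] = lam * y
    alpha = np.linalg.solve(L + np.diag(S), b)
    return np.clip(alpha, 0.0, 1.0)
```

For an image, `X` would stack per-pixel colors (optionally with coordinates) and `labels` would come from the user scribbles (1 = foreground, 0 = background); a practical implementation would use sparse matrices, since the dense `n x n` Laplacian here is only feasible for tiny inputs.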

Cite this article:

Peng HJ, Chen SC, Zhang DQ. A local learning approach for natural image matting. Journal of Software, 2009,20(4):834-844 (in Chinese with English abstract).
History
  • Received: 2008-01-18
  • Revised: 2008-08-13
Copyright: Institute of Software, Chinese Academy of Sciences