An Adaptive Object Model for Visual Tracking Robust to Background Changes
Authors: 王建宇, 陈熙霖, 高文, 赵德斌
Funding:

Supported by the National Natural Science Foundation of China under Grant No.60332010 (国家自然科学基金); the "100 Talents Program" of the Chinese Academy of Sciences (中国科学院百人计划); the Shanghai Municipal Science and Technology Committee under Grant No.03DZ15013 (上海市科委项目); and ISVISION Technologies Co., Ltd. (上海银晨智能识别科技有限公司)

    Abstract:

    A method of dynamic object modeling for visual tracking is presented. A Haar transform is first applied to the current frame of the video, yielding an over-complete feature description of the image. The Fisher criterion is then used to rank each Haar feature by how well it discriminates the tracked object from the current background, and the object model is built from the top-ranked subset of features. During tracking, a Kalman filter predicts the object's next location, and the features are re-ranked by their discrimination between the object and the background observed around the predicted location; the model is then updated so that strongly discriminative features are retained and weak ones are discarded, which keeps the model both discriminative and computationally cheap. The guiding strategy is to reduce computational cost while preserving discriminability as far as possible. Experiments on long real-world video sequences containing many sources of uncertainty show that the method handles complex tracking tasks in real time.
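The feature-selection step described above can be made concrete with a short sketch. The Python code below is not the authors' implementation; the function names, the sample layout (rows are samples, columns are Haar feature responses), and the small regularizing constant are assumptions made purely for illustration. It scores each feature with a Fisher discriminant ratio computed from object versus background samples and keeps the top-ranked subset as the object model.

import numpy as np

def fisher_score(obj_responses, bg_responses):
    # Fisher ratio (m1 - m2)^2 / (v1 + v2) for a single feature; the small
    # epsilon only guards against a zero denominator.
    m1, m2 = obj_responses.mean(), bg_responses.mean()
    v1, v2 = obj_responses.var(), bg_responses.var()
    return (m1 - m2) ** 2 / (v1 + v2 + 1e-12)

def select_discriminative_features(obj_feats, bg_feats, k):
    # obj_feats: (n_obj_samples, n_features) Haar responses from the object region.
    # bg_feats:  (n_bg_samples, n_features) Haar responses from nearby background.
    # Returns the indices of the k highest-scoring features and all scores.
    n_features = obj_feats.shape[1]
    scores = np.array([fisher_score(obj_feats[:, j], bg_feats[:, j])
                       for j in range(n_features)])
    top_k = np.argsort(scores)[::-1][:k]
    return top_k, scores

During tracking, the same scoring would be repeated against background samples taken around the predicted object location, so that features whose discrimination drops can be discarded from the model and replaced by better-ranked ones.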

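The prediction step can be sketched in the same way. The abstract only states that a Kalman filter predicts the object's next position; the constant-velocity state model, noise levels, and class name below are assumptions for illustration, not the paper's actual parameters.

import numpy as np

class ConstantVelocityKF:
    # State is [x, y, vx, vy]; the measurement is the observed object centre [x, y].
    def __init__(self, x0, y0, q=1.0, r=4.0):
        self.x = np.array([x0, y0, 0.0, 0.0])
        self.P = np.eye(4) * 10.0
        self.F = np.array([[1., 0., 1., 0.],
                           [0., 1., 0., 1.],
                           [0., 0., 1., 0.],
                           [0., 0., 0., 1.]])
        self.H = np.array([[1., 0., 0., 0.],
                           [0., 1., 0., 0.]])
        self.Q = np.eye(4) * q   # process noise
        self.R = np.eye(2) * r   # measurement noise

    def predict(self):
        # Predicted (x, y): background patches for feature re-ranking would be
        # sampled around this location.
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x[:2]

    def update(self, zx, zy):
        # Correct the state with the position actually found by the tracker.
        z = np.array([zx, zy])
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ (z - self.H @ self.x)
        self.P = (np.eye(4) - K @ self.H) @ self.P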
Cite this article:

王建宇, 陈熙霖, 高文, 赵德斌. 背景变化鲁棒的自适应视觉跟踪目标模型 (An adaptive object model for visual tracking robust to background changes). 软件学报 (Journal of Software), 2006, 17(5): 1001-1008.

History
  • Received: 2004-11-23
  • Revised: 2005-05-20