Optical Image Based Multi-Granularity Follow-Up Environment Perception Algorithm
Author:
Fund Project:

National Natural Science Foundation of China (61170223, 61502434, 61502432)

Abstract:

An optical image based multi-granularity follow-up environment perception algorithm is proposed to address the indoor and outdoor follow-up environment perception problem in rapid 3D modeling. The algorithm generates multi-granularity 3D point cloud models that fit the ground-truth environment from several types of optical images, and a probabilistic octree representation is proposed to compress and uniformly express the generated multi-granularity models. At each time step along the camera trajectory, the probabilistic octree representations of the multi-granularity point cloud models are dynamically fused by Kalman filtering, finally yielding a single temporally fused probabilistic octree model (TFPOM) that dynamically fits the real environment at any granularity with little noise. Combined with pruning and merging strategies, the algorithm meets the environment modeling requirements of multi-granularity fusion and multi-granularity representation, effectively compresses the storage space of the environment model, and achieves robust follow-up environment perception, which benefits environment-model-based applications such as visual navigation and augmented reality. Experimental results show that the algorithm obtains, in real time, a multi-granularity TFPOM that adequately fits the real dynamic environment on platforms typified by wearable devices, which carry multiple heterogeneous optical image sensors and have low computing capability; visual navigation based on this model exhibits small trajectory error.
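The abstract describes two core mechanisms: a probabilistic octree whose nodes store occupancy estimates, and Kalman-filter fusion of those estimates along the camera trajectory. The sketch below illustrates both under stated assumptions: OctoMap-style log-odds occupancy updates with clamping (in the spirit of refs [25,26]) and a scalar Kalman update for fusing two granularities. The constants, the `OctreeNode` class, and `kalman_fuse` are illustrative assumptions, not the authors' TFPOM implementation.

```python
import math

# Illustrative log-odds increments and clamping bounds (typical OctoMap-style
# values, NOT taken from the paper). Clamping bounds the per-node information,
# which is what makes pruning/merging of converged subtrees possible.
L_HIT, L_MISS = 0.85, -0.4
L_MIN, L_MAX = -2.0, 3.5


class OctreeNode:
    """One node of a probabilistic octree, storing occupancy as log-odds."""

    def __init__(self):
        self.log_odds = 0.0      # prior P(occupied) = 0.5
        self.children = None     # None => leaf; else a list of 8 child nodes

    def probability(self):
        """Convert log-odds back to an occupancy probability."""
        return 1.0 - 1.0 / (1.0 + math.exp(self.log_odds))

    def update(self, hit):
        """Fuse one observation into the node (Bayesian log-odds update)."""
        self.log_odds += L_HIT if hit else L_MISS
        self.log_odds = max(L_MIN, min(L_MAX, self.log_odds))


def kalman_fuse(mu_a, var_a, mu_b, var_b):
    """Fuse two noisy occupancy estimates (e.g. from models of different
    granularity) with a scalar Kalman update; the lower-variance input
    dominates the fused result."""
    k = var_a / (var_a + var_b)        # Kalman gain
    mu = mu_a + k * (mu_b - mu_a)      # fused mean
    var = (1.0 - k) * var_a            # fused (reduced) variance
    return mu, var
```

For example, three consecutive "hit" observations raise a node's occupancy probability from 0.5 to roughly 0.93, and fusing a coarse estimate (mean 0.9, variance 0.04) with a finer one (mean 0.6, variance 0.01) yields a result pulled strongly toward the finer model.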

    References
    [1] Davison AJ,Reid ID,Molton ND,Stasse O.MonoSLAM:Real-Time single camera SLAM.IEEE Trans.on Pattern Analysis&Machine Intelligence,2007,29(6):1052-1067.[doi:10.1109/TPAMI.2007.1049]
    [2] Klein G,Murray D.Parallel tracking and mapping for small AR workspaces.IEEE and ACM Int'l Symp.on Mixed and Augmented Reality.2007.225-234.[doi:10.1109/ISMAR.2007.4538852]
    [3] Castle RO,Klein G,Murray DW.Wide-Area augmented reality using camera tracking and mapping in multiple regions.Computer Vision and Image Understanding,2011,115(6):854-867.[doi:10.1016/j.cviu.2011.02.007]
    [4] Newcombe RA,Lovegrove SJ,Davison AJ.DTAM:Dense tracking and mapping in real-time.In:Proc.of the Int'l Conf.on Computer Vision.IEEE Computer Society,2011.2320-2327.[doi:10.1109/ICCV.2011.6126513]
    [5] Engel J,Schöps T,Cremers D.LSD-SLAM:Large-Scale direct monocular SLAM.In:Proc.of the Computer Vision (ECCV 2014).Springer Int'l Publishing,2014.834-849.[doi:10.1007/978-3-319-10605-2_54]
    [6] Caruso D,Engel J,Cremers D.Large-Scale direct SLAM for omnidirectional cameras.In:Proc.of the 2015 IEEE/RSJ Int'l Conf.on Intelligent Robots and Systems (IROS).IEEE,2015.141-148.[doi:10.1109/IROS.2015.7353366]
    [7] Engel J,Stuckler J,Cremers D.Large-Scale direct slam with stereo cameras.In:Proc.of the 2015 IEEE/RSJ Int'l Conf.on Intelligent Robots and Systems (IROS).IEEE,2015.1935-1942.[doi:10.1109/IROS.2015.7353631]
    [8] Mur-Artal R,Montiel JMM,Tardos JD.ORB-SLAM:A versatile and accurate monocular SLAM system.IEEE Trans.on Robotics,2015,31(5):1147-1163.[doi:10.1109/TRO.2015.2463671]
    [9] Rublee E,Rabaud V,Konolige K,Bradski G.ORB:An efficient alternative to SIFT or SURF.In:Proc.of the 2011 IEEE Int'l Conf.on Computer Vision (ICCV).IEEE,2011.2564-2571.[doi:10.1109/ICCV.2011.6126544]
    [10] Mur-Artal R,Tardos J.Probabilistic semi-dense mapping from highly accurate feature-based monocular SLAM.In:Proc.of the Robotics:Science and Systems.Rome,2015.[doi:10.15607/RSS.2015.XI.041]
    [11] Newcombe RA,Izadi S,Hilliges O,Molyneaux D,Kim D,Davison AJ,Fitzgibbon A.KinectFusion:Real-Time dense surface mapping and tracking.In:Proc.of the 10th IEEE Int'l Symp.on Mixed and Augmented Reality (ISMAR).IEEE,2011.127-136.[doi:10.1109/ISMAR.2011.6092378]
    [12] Salas-Moreno R,Newcombe R,Strasdat H,Kelly P,Davison A.Slam++:Simultaneous localisation and mapping at the level of objects.In:Proc.of the IEEE Conf.on Computer Vision and Pattern Recognition.2013.1352-1359.[doi:10.1109/CVPR.2013.178]
    [13] Newcombe RA,Fox D,Seitz SM.DynamicFusion:Reconstruction and tracking of non-rigid scenes in real-time.In:Proc.of the IEEE Conf.on Computer Vision and Pattern Recognition.2015.343-352.[doi:10.1109/CVPR.2015.7298631]
    [14] Kahler O,Prisacariu VA,Ren CY,Sun X,Torr P,Murray D.Very high frame rate volumetric integration of depth images on mobile devices.IEEE Trans.on Visualization and Computer Graphics,2015,21(11):1241-1250.[doi:10.1109/TVCG.2015.2459891]
    [15] Moravec H.Robot spatial perception by stereoscopic vision and 3D evidence grids.Perception,1996.
    [16] Roth-Tabak Y,Jain R.Building an environment model using depth information.Computer,1989,22(6):85-90.[doi:10.1109/2.30724]
    [17] Cole DM,Newman PM.Using laser range data for 3D SLAM in outdoor environments.In:Proc.of the 2006 IEEE Int'l Conf.on Robotics and Automation.IEEE,2006.1556-1563.[doi:10.1109/ROBOT.2006.1641929]
    [18] Surmann H,Nüchter A,Lingemann K,Hertzberg J.6D SLAM-Mapping outdoor environments.Journal of Field Robotics,2007,24:699-722.[doi:10.1002/rob.20209]
    [19] Meagher D.Geometric modeling using octree encoding.Computer Graphics and Image Processing,1982,19(2):129-147.[doi:10.1016/0146-664X(82)90104-6]
    [20] Wilhelms J,Van Gelder A.Octrees for faster isosurface generation.ACM Trans.on Graphics (TOG),1992,11(3):201-227.[doi:10.1145/130881.130882]
    [21] Dai Z,Cha JZ,Ni ZL.A fast decomposition algorithm of octree node in 3D-packing.Ruan Jian Xue Bao/Journal of Software,1995,6(11):679-685(in Chinese with English abstract).http://www.jos.org.cn/1000-9825/19951106.htm
    [22] Payeur P,Hébert P,Laurendeau D,Gosselin CM.Probabilistic octree modeling of a 3D dynamic environment.In:Proc.of the IEEE Int'l Conf.on Robotics and Automation.IEEE,1997,2:1289-1296.[doi:10.1109/ROBOT.1997.614315]
    [23] Fournier J,Ricard B,Laurendeau D.Mapping and exploration of complex environments using persistent 3D model.In:Proc.of the 4th Canadian Conf.on Computer and Robot Vision (CRV 2007).IEEE,2007.403-410.[doi:10.1109/CRV.2007.45]
    [24] Pathak K,Birk A,Poppinga J,Schwertfeger S.3D forward sensor modeling and application to occupancy grid based sensor fusion.In:Proc.of the 2007 IEEE/RSJ Int'l Conf.on Intelligent Robots and System (IROS 2007).IEEE,2007.2059-2064.[doi:10.1109/IROS.2007.4399406]
    [25] Wurm KM,Hornung A,Bennewitz M,Stachniss C,Burgard W.OctoMap:A probabilistic,flexible,and compact 3D map representation for robotic systems.In:Proc.of the ICRA 2010 Workshop on Best Practice in 3D Perception and Modeling for Mobile Manipulation.2010,2.
    [26] Hornung A,Wurm KM,Bennewitz M,Stachniss C,Burgard W.OctoMap:An efficient probabilistic 3D mapping framework based on octrees.Autonomous Robots,2013,34(3):189-206.[doi:10.1007/s10514-012-9321-0]
    [27] Benson D,Davis J.Octree textures.ACM Trans.on Graphics (TOG),2002,21(3):785-790.[doi:10.1145/566654.566652]
    [28] Sturm J,Engelhard N,Endres F,Burgard W,Cremers D.A benchmark for the evaluation of RGB-D SLAM systems.In:Proc.of the 2012 IEEE/RSJ Int'l Conf.on Intelligent Robots and Systems (IROS).IEEE,2012.573-580.[doi:10.1109/IROS.2012.6385773]
    [29] Geiger A,Lenz P,Stiller C,Urtasun R.Vision meets robotics:The KITTI dataset.The Int'l Journal of Robotics Research,2013,32(11):1231-1237.[doi:10.1177/0278364913491297]
    [30] Endres F,Hess J,Sturm J,Cremers D,Burgard W.3-d mapping with an RGB-d camera.IEEE Trans.on Robotics,2014,30(1):177-187.[doi:10.1109/TRO.2013.2279412]
Cite this article:

Chen HS,Zhang G,Ye YD.Optical image based multi-granularity follow-up environment perception algorithm.Ruan Jian Xue Bao/Journal of Software,2016,27(10):2661-2675 (in Chinese with English abstract).
History
  • Received: 2016-01-20
  • Revised: 2016-03-25
  • Published online: 2016-08-11