Optical Image Based Multi-Granularity Follow-Up Environment Perception Algorithm
Author:
Affiliation:

Fund Project: National Natural Science Foundation of China (61170223, 61202207, 61502432)

    Abstract:

    An optical image based multi-granularity follow-up environment perception algorithm is proposed to address the indoor-to-outdoor follow-up environment perception problem in rapid 3D modeling. The algorithm generates multi-granularity 3D point cloud models that closely fit the ground truth from different types of optical images, and a probabilistic octree representation is proposed to express these point cloud models uniformly. Along the camera trajectory, the probabilistic octree representations of the multi-granularity point cloud models are dynamically fused with a Kalman filter, and the expected TFPOM, fitted to the ground truth, is generated at any granularity. Aided by pruning and merging strategies, the algorithm supports both multi-granularity fusion and multi-granularity representation, which effectively compresses the storage space of the environment model and achieves robust follow-up environment perception; both are essential for environment model based visual navigation and augmented reality. Experimental results show that the algorithm can generate in real time a multi-granularity TFPOM that closely fits the ground truth, with fewer errors in model based navigation, on platforms such as wearable devices that carry multiple optical sensors but have limited computing capability.
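    The probabilistic octree described in the abstract can be sketched as follows. This is a minimal illustration in the spirit of OctoMap-style occupancy mapping (cf. references [25,26]), not the paper's actual TFPOM implementation; the node layout, log-odds parameters, and pruning threshold are assumptions made for the sketch.

```python
import math

class OctreeNode:
    """Minimal probabilistic octree node storing occupancy as log-odds."""

    def __init__(self, log_odds=0.0):
        self.log_odds = log_odds   # 0.0 corresponds to p = 0.5 (unknown)
        self.children = None       # None for a leaf; otherwise a list of 8 nodes

    def probability(self):
        """Convert log-odds back to an occupancy probability."""
        return 1.0 - 1.0 / (1.0 + math.exp(self.log_odds))

    def update(self, hit, l_hit=0.85, l_miss=-0.4, l_min=-2.0, l_max=3.5):
        """Fuse one observation additively; clamping keeps the map adaptable."""
        self.log_odds += l_hit if hit else l_miss
        self.log_odds = max(l_min, min(l_max, self.log_odds))

    def expand(self):
        """Split into 8 children that inherit this node's occupancy estimate."""
        self.children = [OctreeNode(self.log_odds) for _ in range(8)]

    def prune(self):
        """Merge children back into this node when they agree (compresses storage)."""
        if self.children is None:
            return
        for child in self.children:
            child.prune()
        if all(c.children is None for c in self.children):
            vals = [c.log_odds for c in self.children]
            if max(vals) - min(vals) < 1e-6:   # all children effectively identical
                self.log_odds = vals[0]
                self.children = None

# Usage sketch: observe the same occupied region through all children,
# then prune the now-uniform subtree into a single coarser leaf.
root = OctreeNode()
root.expand()
for child in root.children:
    child.update(hit=True)
root.prune()
print(root.children is None, round(root.probability(), 2))
```

    The clamped log-odds update is what allows fusing observations of different granularities over time, and pruning is what lets the fused model be stored and queried at a coarser granularity without keeping all eight identical children.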

    Reference
    [1] Davison AJ,Reid ID,Molton ND,Stasse O.MonoSLAM:Real-Time single camera SLAM.IEEE Trans.on Pattern Analysis&Machine Intelligence,2007,29(6):1052-1067.[doi:10.1109/TPAMI.2007.1049]
    [2] Klein G,Murray D.Parallel tracking and mapping for small AR workspaces.IEEE and ACM Int'l Symp.on Mixed and Augmented Reality.2007.225-234.[doi:10.1109/ISMAR.2007.4538852]
    [3] Castle RO,Klein G,Murray DW.Wide-Area augmented reality using camera tracking and mapping in multiple regions.Computer Vision and Image Understanding,2011,115(6):854-867.[doi:10.1016/j.cviu.2011.02.007]
    [4] Newcombe RA,Lovegrove SJ,Davison AJ.DTAM:Dense tracking and mapping in real-time.In:Proc.of the Int'l Conf.on Computer Vision.IEEE Computer Society,2011.2320-2327.[doi:10.1109/ICCV.2011.6126513]
    [5] Engel J,Schöps T,Cremers D.LSD-SLAM:Large-Scale direct monocular SLAM.In:Proc.of the Computer Vision (ECCV 2014).Springer Int'l Publishing,2014.834-849.[doi:10.1007/978-3-319-10605-2_54]
    [6] Caruso D,Engel J,Cremers D.Large-Scale direct SLAM for omnidirectional cameras.In:Proc.of the 2015 IEEE/RSJ Int'l Conf.on Intelligent Robots and Systems (IROS).IEEE,2015.141-148.[doi:10.1109/IROS.2015.7353366]
    [7] Engel J,Stuckler J,Cremers D.Large-Scale direct slam with stereo cameras.In:Proc.of the 2015 IEEE/RSJ Int'l Conf.on Intelligent Robots and Systems (IROS).IEEE,2015.1935-1942.[doi:10.1109/IROS.2015.7353631]
    [8] Mur-Artal R,Montiel JMM,Tardos JD.ORB-SLAM:A versatile and accurate monocular SLAM system.IEEE Trans.on Robotics,2015,31(5):1147-1163.[doi:10.1109/TRO.2015.2463671]
    [9] Rublee E,Rabaud V,Konolige K,Bradski G.ORB:An efficient alternative to SIFT or SURF.In:Proc.of the 2011 IEEE Int'l Conf.on Computer Vision (ICCV).IEEE,2011.2564-2571.[doi:10.1109/ICCV.2011.6126544]
    [10] Mur-Artal R,Tardos J.Probabilistic semi-dense mapping from highly accurate feature-based monocular SLAM.In:Proc.of the Robotics:Science and Systems.Rome,2015.[doi:10.15607/RSS.2015.XI.041]
    [11] Newcombe RA,Izadi S,Hilliges O,Molyneaux D,Kim D,Davison AJ,Fitzgibbon A.KinectFusion:Real-Time dense surface mapping and tracking.In:Proc.of the 10th IEEE Int'l Symp.on Mixed and Augmented Reality (ISMAR).IEEE,2011.127-136.[doi:10.1109/ISMAR.2011.6092378]
    [12] Salas-Moreno R,Newcombe R,Strasdat H,Kelly P,Davison A.Slam++:Simultaneous localisation and mapping at the level of objects.In:Proc.of the IEEE Conf.on Computer Vision and Pattern Recognition.2013.1352-1359.[doi:10.1109/CVPR.2013.178]
    [13] Newcombe RA,Fox D,Seitz SM.DynamicFusion:Reconstruction and tracking of non-rigid scenes in real-time.In:Proc.of the IEEE Conf.on Computer Vision and Pattern Recognition.2015.343-352.[doi:10.1109/CVPR.2015.7298631]
    [14] Kahler O,Prisacariu VA,Ren CY,Sun X,Torr P,Murray D.Very high frame rate volumetric integration of depth images on mobile devices.IEEE Trans.on Visualization and Computer Graphics,2015,21(11):1241-1250.[doi:10.1109/TVCG.2015.2459891]
    [15] Moravec H.Robot spatial perception by stereoscopic vision and 3D evidence grids.Technical Report,CMU-RI-TR-96-34,Pittsburgh:Carnegie Mellon University,1996.
    [16] Roth-Tabak Y,Jain R.Building an environment model using depth information.Computer,1989,22(6):85-90.[doi:10.1109/2.30724]
    [17] Cole DM,Newman PM.Using laser range data for 3D SLAM in outdoor environments.In:Proc.of the 2006 IEEE Int'l Conf.on Robotics and Automation.IEEE,2006.1556-1563.[doi:10.1109/ROBOT.2006.1641929]
    [18] Surmann H,Nüchter A,Lingemann K,Hertzberg J.6D SLAM-3D mapping outdoor environments.Journal of Field Robotics,2007,24:699-722.[doi:10.1002/rob.20209]
    [19] Meagher D.Geometric modeling using octree encoding.Computer Graphics and Image Processing,1982,19(2):129-147.[doi:10.1016/0146-664X(82)90104-6]
    [20] Wilhelms J,Van Gelder A.Octrees for faster isosurface generation.ACM Trans.on Graphics (TOG),1992,11(3):201-227.[doi:10.1145/130881.130882]
    [21] Dai Z,Cha JZ,Ni ZL.A fast decomposition algorithm of octree node in 3D-packing.Ruan Jian Xue Bao/Journal of Software,1995,6(11):679-685(in Chinese with English abstract).http://www.jos.org.cn/1000-9825/19951106.htm
    [22] Payeur P,Hébert P,Laurendeau D,Gosselin CM.Probabilistic octree modeling of a 3D dynamic environment.In:Proc.of the IEEE Int'l Conf.on Robotics and Automation.IEEE,1997,2:1289-1296.[doi:10.1109/ROBOT.1997.614315]
    [23] Fournier J,Ricard B,Laurendeau D.Mapping and exploration of complex environments using persistent 3D model.In:Proc.of the 4th Canadian Conf.on Computer and Robot Vision (CRV 2007).IEEE,2007.403-410.[doi:10.1109/CRV.2007.45]
    [24] Pathak K,Birk A,Poppinga J,Schwertfeger S.3D forward sensor modeling and application to occupancy grid based sensor fusion.In:Proc.of the 2007 IEEE/RSJ Int'l Conf.on Intelligent Robots and System (IROS 2007).IEEE,2007.2059-2064.[doi:10.1109/IROS.2007.4399406]
    [25] Wurm KM,Hornung A,Bennewitz M,Stachniss C,Burgard W.OctoMap:A probabilistic,flexible,and compact 3D map representation for robotic systems.In:Proc.of the ICRA 2010 Workshop on Best Practice in 3D Perception and Modeling for Mobile Manipulation.2010,2.
    [26] Hornung A,Wurm KM,Bennewitz M,Stachniss C,Burgard W.OctoMap:An efficient probabilistic 3D mapping framework based on octrees.Autonomous Robots,2013,34(3):189-206.[doi:10.1007/s10514-012-9321-0]
    [27] Benson D,Davis J.Octree textures.ACM Trans.on Graphics (TOG),2002,21(3):785-790.[doi:10.1145/566654.566652]
    [28] Sturm J,Engelhard N,Endres F,Burgard W,Cremers D.A benchmark for the evaluation of RGB-D SLAM systems.In:Proc.of the 2012 IEEE/RSJ Int'l Conf.on Intelligent Robots and Systems (IROS).IEEE,2012.573-580.[doi:10.1109/IROS.2012.6385773]
    [29] Geiger A,Lenz P,Stiller C,Urtasun R.Vision meets robotics:The KITTI dataset.The Int'l Journal of Robotics Research,2013,32(11):1231-1237.[doi:10.1177/0278364913491297]
    [30] Endres F,Hess J,Sturm J,Cremers D,Burgard W.3-D mapping with an RGB-D camera.IEEE Trans.on Robotics,2014,30(1):177-187.[doi:10.1109/TRO.2013.2279412]
Get Citation

Chen HS,Zhang G,Ye YD.Optical image based multi-granularity follow-up environment perception algorithm.Ruan Jian Xue Bao/Journal of Software,2016,27(10):2661-2675 (in Chinese).
Article Metrics
  • Abstract:4255
  • PDF: 6628
  • HTML: 4081
  • Cited by: 0
History
  • Received: January 20, 2016
  • Revised: March 25, 2016
  • Online: August 11, 2016
Copyright: Institute of Software, Chinese Academy of Sciences Beijing ICP No. 05046678-4