A Survey on Data Association Methods in VSLAM
Citation
S. SriVidhya, Dr. C. B. Akki, Dr. Prakash S., "A Survey on Data Association Methods in VSLAM", International Journal of Engineering Trends and Technology (IJETT), V30(2), 83-88, December 2015. ISSN: 2231-5381. www.ijettjournal.org. Published by Seventh Sense Research Group.
Abstract
In robotics, Simultaneous Localization and Mapping (SLAM) is the problem in which an autonomous robot acquires a map of the surrounding environment while at the same time localizing itself within that map. One of the most challenging fields of research in SLAM is the so-called Visual SLAM problem, in which various types of cameras are used as sensors for navigation. Cameras are inexpensive sensors that can provide rich information about the surrounding environment; on the other hand, the complexity of the computer vision tasks and the strong dependence of current approaches on the characteristics of the environment mean that Visual SLAM is far from being a solved problem. Visual SLAM refers to the problem of using images, as the only source of external information, to establish the position of a robot, a vehicle, or a moving camera in an environment and, at the same time, construct a representation of the explored zone. Nowadays, the SLAM problem is considered solved when range sensors such as lasers or sonar are used to build 2D maps of small static environments. However, SLAM for dynamic, complex, and large-scale environments, using vision as the sole external sensor, is an active area of research. The computer vision techniques employed in visual SLAM, such as the detection, description, and matching of salient features, and image recognition and retrieval, among others, are still open to improvement. The objective of this article is to provide new researchers in the field of visual SLAM with a brief and comprehensible review of data association categories in VSLAM.
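The feature-matching step mentioned above, associating descriptors of salient features between frames, can be sketched as a nearest-neighbour search combined with a ratio test that rejects ambiguous associations. The sketch below is purely illustrative (toy 3-D descriptors, a hypothetical `match_descriptors` helper) and is not taken from any specific VSLAM system surveyed here:

```python
import math

def match_descriptors(desc_a, desc_b, ratio=0.8):
    """Associate features between two frames by nearest-neighbour
    descriptor matching with a ratio test (illustrative sketch of
    the data-association step; not a specific system's method)."""
    matches = []
    for i, da in enumerate(desc_a):
        # Distance from descriptor da to every descriptor in frame B.
        dists = sorted(
            (math.dist(da, db), j) for j, db in enumerate(desc_b)
        )
        if len(dists) < 2:
            continue
        best, second = dists[0], dists[1]
        # Accept only if the best match is clearly better than the
        # runner-up; this rejects ambiguous associations.
        if best[0] < ratio * second[0]:
            matches.append((i, best[1]))
    return matches

# Toy 3-D "descriptors" for two frames.
frame_a = [(0.0, 0.0, 1.0), (5.0, 5.0, 5.0)]
frame_b = [(0.1, 0.0, 1.0), (9.0, 9.0, 9.0), (5.1, 5.0, 5.0)]
print(match_descriptors(frame_a, frame_b))  # -> [(0, 0), (1, 2)]
```

In practice the toy tuples would be replaced by high-dimensional SIFT- or SURF-style descriptor vectors, but the association logic is the same: each feature is matched to its nearest neighbour only when that neighbour is unambiguously closer than the second-best candidate.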
Keywords
Visual SLAM, Detectors, Descriptors, Data association.