
Giubilato, Riccardo (2019) Stereo and Monocular Vision Guidance for Autonomous Aerial and Ground Vehicles. [Ph.D. thesis]

Full text available as:

PDF Document (26 MB)
Thesis not accessible until 03 December 2022 for intellectual-property-related reasons.
Visible to: nobody

Abstract (Italian or English)

Robotic agents vastly increase the return of planetary exploration missions thanks to their ability to perform in-situ measurements. To date, unmanned exploration has been carried out by individual robots such as the MER rovers Spirit and Opportunity and, later, MSL Curiosity.
A fundamental asset for robotic autonomy is the ability to perceive the surroundings through vision systems such as stereo cameras. Since global localization using GPS-like approaches is unavailable in extra-terrestrial environments, rovers need to measure their own motion in order to understand where they are heading. This allows high-level control loops to be closed so that planned routes toward goals of scientific interest can be followed. Visual SLAM (Simultaneous Localization and Mapping) is an effective strategy to fulfill these needs: stereo cameras both reconstruct the environment structure through triangulation and use that information to localize the cameras while moving. While performing Visual SLAM on constrained resources is still challenging, many state-of-the-art solutions exist for single exploration sessions.
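To make the triangulation step concrete, the following is a minimal sketch (not code from the thesis) of the standard rectified-stereo depth relation, where depth follows from the pixel disparity between the two views; all calibration values in the example are hypothetical placeholders:

```python
import numpy as np

def triangulate_rectified(u_left, u_right, v, fx, fy, cx, cy, baseline):
    """Recover a 3D point from a pixel correspondence in a rectified stereo pair.

    Pinhole relation: depth Z = fx * baseline / disparity.
    Intrinsics (fx, fy, cx, cy) and baseline are illustrative, not thesis values.
    """
    disparity = u_left - u_right
    if disparity <= 0:
        raise ValueError("non-positive disparity: point at or beyond infinity")
    z = fx * baseline / disparity   # metric depth along the optical axis
    x = (u_left - cx) * z / fx      # back-project pixel column to metric X
    y = (v - cy) * z / fy           # back-project pixel row to metric Y
    return np.array([x, y, z])

# Hypothetical calibration: 700 px focal length, 12 cm baseline.
point = triangulate_rectified(u_left=352.0, u_right=340.0, v=240.0,
                              fx=700.0, fy=700.0, cx=320.0, cy=240.0,
                              baseline=0.12)
```

Repeating this over many feature correspondences yields the local structure that the SLAM front-end then uses to localize the moving cameras.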
The future of planetary exploration, however, strongly involves cooperation among teams of heterogeneous robotic agents. While the SLAM problem is efficiently solved for single sessions and agents, robust solutions for collaborative map merging and re-localization are still topics of active research and constitute the first major objective of this thesis. A robust re-localization pipeline is proposed and validated, targeted at planetary vehicles equipped with stereo vision systems, allowing them to localize themselves in previously built maps. Instead of relying exclusively on visual features, as common Visual SLAM approaches do, the algorithm exploits the invariant nature of 3D point clouds by using compact 3D binary descriptors in conjunction with texture cues. Maps are discretized into submaps, which are represented in a lightweight form using the Bag of Binary Words paradigm. The algorithm is tested and validated both in the laboratories of the DLR Robotics and Mechatronics Center and on Mount Etna, Sicily, an outdoor planetary analogue environment.
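The Bag of Binary Words idea underlying the lightweight submap representation can be sketched as follows: binary descriptors are quantized to the nearest visual word under Hamming distance, each submap becomes a normalized word histogram, and candidate matches are scored by histogram similarity. This is a minimal illustration of the paradigm, not the thesis pipeline; the vocabulary and descriptors below are random stand-ins:

```python
import numpy as np

def quantize(descriptors, vocabulary):
    """Assign each bit-packed binary descriptor to its nearest word (Hamming)."""
    # XOR then popcount via unpackbits gives pairwise Hamming distances.
    dists = np.unpackbits(descriptors[:, None, :] ^ vocabulary[None, :, :],
                          axis=2).sum(axis=2)
    return dists.argmin(axis=1)

def bow_vector(descriptors, vocabulary):
    """L1-normalized histogram of visual-word occurrences for one submap."""
    words = quantize(descriptors, vocabulary)
    hist = np.bincount(words, minlength=len(vocabulary)).astype(float)
    return hist / max(hist.sum(), 1.0)

def score(v1, v2):
    """L1 similarity in [0, 1]; higher means a likelier submap match."""
    return 1.0 - 0.5 * np.abs(v1 - v2).sum()

# Hypothetical 256-bit descriptors and a tiny 64-word vocabulary.
rng = np.random.default_rng(0)
vocab = rng.integers(0, 256, size=(64, 32), dtype=np.uint8)
query = bow_vector(rng.integers(0, 256, size=(500, 32), dtype=np.uint8), vocab)
stored = bow_vector(rng.integers(0, 256, size=(480, 32), dtype=np.uint8), vocab)
print(score(query, stored))
```

In practice the top-scoring submaps are then verified geometrically before the maps are merged.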
The second major research objective involves monocular vision for UAVs. Stereo depth perception is often infeasible for UAVs, as small-baseline systems degenerate to monocular once the vehicle takes off. 3D structure can be obtained using Structure-from-Motion approaches, which are however unable to recover a global metric scale. Scale is traditionally recovered by integrating accelerations from IMUs. However, visual-inertial sensing is delicate, being very sensitive to errors in the extrinsic calibration; in addition, initialization of the visual-inertial pipeline is challenging and can diverge. These issues hinder the implementation of unsupervised autonomous behaviors on UAVs. To address them, this thesis proposes a sensor fusion approach between cameras and low-resolution range sensors, exploiting direct range measurements to enforce scale constraints in monocular Visual Odometry. The objective is accomplished in two stages. First, a monocular Visual Odometry pipeline is developed without strict performance constraints and is used in conjunction with a low-resolution Time-of-Flight camera, a lightweight sensor measuring 64 ranges within a narrow field of view. The algorithm is tested against both a state-of-the-art stereo visual SLAM system and a more accurate, though heavier, 2D LiDAR. Finally, a real-time monocular Visual Odometry is developed, exploiting a multi-threaded architecture to track the camera pose while concurrently optimizing the scale in the background. This algorithm is tested with a 1D LiDAR altimeter, a minimal range-sensing configuration of just one point per measurement, demonstrating the ability to recover and maintain a correct scale along the trajectory with very light and inexpensive off-the-shelf range sensors.
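The core of the scale-recovery idea can be illustrated with a least-squares fit between the scale-ambiguous VO geometry and the metric range measurements. The sketch below is a simplified, hypothetical illustration of that constraint (assuming the altimeter ray intersects terrain that the VO map also reconstructs), not the thesis implementation; the data are synthetic:

```python
import numpy as np

def estimate_scale(vo_ranges, lidar_ranges):
    """Least-squares scale factor s minimizing || s * vo - lidar ||^2.

    vo_ranges:    ranges to terrain predicted along the altimeter ray by the
                  scale-ambiguous monocular VO map (arbitrary units).
    lidar_ranges: metric ranges measured by the 1D LiDAR altimeter.
    Closed form: s = (vo . lidar) / (vo . vo).
    """
    vo = np.asarray(vo_ranges, dtype=float)
    li = np.asarray(lidar_ranges, dtype=float)
    return float(vo @ li) / float(vo @ vo)

# Synthetic example: true scale 2.5, slightly noisy altimeter readings.
rng = np.random.default_rng(1)
vo = rng.uniform(0.5, 2.0, size=50)               # unscaled VO-predicted ranges
lidar = 2.5 * vo + rng.normal(0, 0.02, size=50)   # metric measurements
s = estimate_scale(vo, lidar)
# Applying s to the VO translations and map points makes the trajectory metric.
```

In a multi-threaded architecture, a fit of this kind can run in a background thread while the tracking thread keeps estimating the camera pose.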


EPrint type: Ph.D. thesis
Tutor: Debei, Stefano
Ph.D. course: Ciclo 32 > Corsi 32 > SCIENZE TECNOLOGIE E MISURE SPAZIALI > MISURE MECCANICHE PER L'INGEGNERIA E LO SPAZIO
Thesis deposit date: 29 November 2019
Publication date: 02 December 2019
Key Words: Visual SLAM, Visual Systems, Space Robotics, Navigation, Mapping
MIUR scientific-disciplinary sectors: Area 09 - Ingegneria industriale e dell'informazione > ING-IND/12 Misure meccaniche e termiche
Reference structure: Centri > Centro Interdipartimentale di ricerca di Studi e attività spaziali "G. Colombo" (CISAS)
ID code: 12148
Deposited on: 26 Jan 2021 16:26
