Document Type
Article
Keywords
Mobile Robots, Navigation, GPS-denied, visual homing
Disciplines
Computer Engineering | Robotics
Abstract
Visual homing is a lightweight approach to visual navigation that does not require GPS, making it very attractive for robot platforms with low computational capacity. However, a limitation is that the stored home location must initially be within the robot's field of view. Motivated by the increasing ubiquity of camera information, we propose to address this line-of-sight limitation by leveraging camera information from other robots and fixed cameras. To home to a location that is not initially within view, a robot must be able to identify, together with another robot, a common visual landmark that can be used as an 'intermediate' home location. We call this intermediate location identification step the "Do you see what I see" (DYSWIS) task. We evaluate three approaches to this problem: SIFT-based, CNN appearance-based, and a semantic approach.
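To make the DYSWIS task concrete, the sketch below illustrates the SIFT-based variant named in the abstract: matching local features between two camera views to test whether they share a landmark that could serve as an intermediate home location. This is a minimal illustrative example using OpenCV, not the authors' implementation; the image filenames and the match threshold are assumptions.

```python
# Hedged sketch of SIFT-based "Do you see what I see" (DYSWIS) matching.
# Illustrative only; filenames and the threshold of 20 matches are assumptions.
import cv2

img_a = cv2.imread("robot_a_view.png", cv2.IMREAD_GRAYSCALE)  # homing robot's view
img_b = cv2.imread("robot_b_view.png", cv2.IMREAD_GRAYSCALE)  # other robot / fixed camera

# Detect SIFT keypoints and descriptors in both views.
sift = cv2.SIFT_create()
kp_a, des_a = sift.detectAndCompute(img_a, None)
kp_b, des_b = sift.detectAndCompute(img_b, None)

# Match descriptors and keep those passing Lowe's ratio test.
matcher = cv2.BFMatcher()
matches = matcher.knnMatch(des_a, des_b, k=2)
good = [m for m, n in matches if m.distance < 0.75 * n.distance]

# Enough shared features suggests a common landmark visible to both cameras,
# which could then serve as an 'intermediate' home location.
if len(good) > 20:
    print(f"Common landmark candidate: {len(good)} shared SIFT features")
else:
    print("No common landmark: views do not overlap enough")
```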
Publication Title
SPIE Unmanned Systems Technology
Volume
XXIV
Article Number
1081
Publication Date
Spring 2022
Language
English
Peer Reviewed
Yes
Recommended Citation
Lyons, Damian and Petzinger, Noah, "Visual Homing for Robot Teams: Do you see what I see?" (2022). Faculty Publications. 72.
https://research.library.fordham.edu/frcv_facultypubs/72
Version
Published
Creative Commons License
This work is licensed under a Creative Commons Attribution 4.0 International License.