Document Type
Conference Proceeding
Keywords
Mobile Robots, Navigation, GPS-Denied, Visual Homing
Disciplines
Artificial Intelligence and Robotics | Computer Engineering | Robotics
Abstract
Visual homing is a lightweight approach to visual navigation that does not require GPS, making it attractive for robot platforms with low computational capacity. However, a limitation is that the stored home location must initially be within the robot's field of view. Motivated by the increasing ubiquity of camera information, we propose to address this line-of-sight limitation by leveraging camera information from other robots and from fixed cameras. To home to a location that is not initially within view, a robot must be able to identify, with another robot, a common visual landmark that can be used as an 'intermediate' home location. We call this intermediate location identification step the "Do you see what I see" (DYSWIS) task. We evaluate three approaches to this problem: SIFT-based, CNN appearance-based, and semantic.
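As a rough illustration of the SIFT-based variant of the DYSWIS task described in the abstract, the sketch below checks whether two robots' camera views share a common visual landmark by matching SIFT features with a ratio test. This is not the authors' implementation; the function name, image file names, and thresholds are assumptions for illustration, and it assumes OpenCV (cv2) with SIFT support.

import cv2

def dyswis_sift(img_a, img_b, ratio=0.75, min_matches=20):
    """Return True if the two views appear to share a common visual landmark."""
    sift = cv2.SIFT_create()
    _, des_a = sift.detectAndCompute(img_a, None)
    _, des_b = sift.detectAndCompute(img_b, None)
    if des_a is None or des_b is None:
        return False
    # Lowe's ratio test on 2-nearest-neighbour matches keeps only distinctive matches.
    knn = cv2.BFMatcher(cv2.NORM_L2).knnMatch(des_a, des_b, k=2)
    good = [p[0] for p in knn if len(p) == 2 and p[0].distance < ratio * p[1].distance]
    return len(good) >= min_matches

if __name__ == "__main__":
    # Hypothetical image files captured by two different robots.
    a = cv2.imread("robot_a_view.png", cv2.IMREAD_GRAYSCALE)
    b = cv2.imread("robot_b_view.png", cv2.IMREAD_GRAYSCALE)
    print("Shared landmark candidate:", dyswis_sift(a, b))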
Publication Title
SPIE Unmanned Systems Technology 2022
Article Number
1085
Publication Date
Spring 2022
Publisher
SPIE
Language
English
Peer Reviewed
Yes
Recommended Citation
Damian Lyons and Noah Petzinger. Visual homing for robot teams: Do you see what I see? SPIE Conference on Unmanned Systems Technology (UST), April 2022.
Version
Published
Creative Commons License
This work is licensed under a Creative Commons Attribution 4.0 International License.