Document Type
Conference Proceeding
Keywords
cognitive robotics, robot simulation, synthetic video, motion detection, computer vision, robot localization
Disciplines
Computer Engineering | Robotics
Abstract
A mobile robot moving in an environment in which there are other moving objects and active agents, some of which may represent threats and some of which may represent collaborators, needs to be able to reason about the potential future behaviors of those objects and agents. In previous work, we presented an approach to tracking targets with complex behavior, leveraging a 3D simulation engine to generate predicted imagery and comparing that against real imagery. We introduced an approach to compare real and simulated imagery using an affine image transformation that maps the real scene to the synthetic scene in a robust fashion.
In this paper, we present an approach to continually synchronizing the real and synthetic video by mapping the affine transformation yielded by the real/synthetic image comparison to a new pose for the synthetic camera. We show a series of results for pairs of real and synthetic scenes containing objects, covering both similar and dissimilar scenes.
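The core step the abstract describes, recovering an affine transformation that maps one image's content onto the other's, can be sketched with a least-squares fit over matched feature points. This is an illustrative sketch only, not the authors' implementation; the point correspondences and the function name `estimate_affine` are assumptions for the example.

```python
import numpy as np

def estimate_affine(src, dst):
    """Least-squares estimate of a 2D affine transform mapping src -> dst.

    src, dst: (N, 2) arrays of corresponding feature points, N >= 3.
    Returns a 2x3 matrix [A | t] such that dst ~= src @ A.T + t.
    """
    n = src.shape[0]
    # Design matrix with a homogeneous column for the translation part.
    X = np.hstack([src, np.ones((n, 1))])             # (N, 3)
    params, *_ = np.linalg.lstsq(X, dst, rcond=None)  # (3, 2)
    return params.T                                   # (2, 3): [A | t]

# Example: correspondences related by a pure translation of (5, -2).
src = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
dst = src + np.array([5.0, -2.0])
M = estimate_affine(src, dst)   # A ~= identity, t ~= (5, -2)
```

In the paper's setting, a transform like `M` would then be interpreted as a correction to the synthetic camera's pose so that the simulated view re-aligns with the real one.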
Article Number
1009
Publication Date
1-2010
Recommended Citation
Lyons, Damian M.; Chaudhry, Sirhan; and Benjamin, D. Paul, "Synchronizing Real and Predicted Synthetic Video Imagery for Localization of a Robot to a 3D Environment" (2010). Faculty Publications. 6.
https://research.library.fordham.edu/frcv_facultypubs/6
Comments
SPIE Conference on Intelligent Robots and Computer Vision XXVII: Algorithms and Techniques, San Jose, CA, January 2010
This research was conducted at the Fordham University Robotics and Computer Vision Lab. For more information about graduate programs in Computer Science, see http://www.cis.fordham.edu/graduate.html; for the Fordham University Graduate School of Arts and Sciences, see http://www.fordham.edu/gsas.