Document Type

Conference Proceeding


Keywords

cognitive robotics, robot simulation, synthetic video, motion detection, computer vision, robot localization


Disciplines

Computer Engineering | Robotics


Abstract

A mobile robot moving in an environment in which there are other moving objects and active agents, some of which may represent threats and some of which may represent collaborators, needs to be able to reason about the potential future behaviors of those objects and agents. In previous work, we presented an approach to tracking targets with complex behavior, leveraging a 3D simulation engine to generate predicted imagery and comparing that against real imagery. We introduced an approach to compare real and synthetic imagery using an affine image transformation that maps the real scene to the synthetic scene in a robust fashion.

In this paper, we present an approach to continually synchronize the real and synthetic video by mapping the affine transformation yielded by the real/synthetic image comparison to a new pose for the synthetic camera. We present results for a series of real and synthetic scene pairs containing objects, covering both similar and differing scenes.
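The core pipeline the abstract describes — estimate a 2D affine transform between the real and synthetic frames, then convert that transform into a pose correction for the synthetic camera — can be sketched as follows. This is an illustrative reconstruction, not the paper's implementation: the least-squares affine fit is standard, but the `pose_correction` mapping (pan/tilt from image translation, dolly from scale) is a simplified assumption about how such an update could be made, and all function names are hypothetical.

```python
import numpy as np

def estimate_affine(src_pts, dst_pts):
    """Least-squares 2x3 affine transform mapping src_pts -> dst_pts.

    Needs at least 3 non-collinear correspondences (6 unknowns).
    """
    n = len(src_pts)
    M = np.zeros((2 * n, 6))   # design matrix for the 6 affine parameters
    b = np.zeros(2 * n)
    for i, ((x, y), (u, v)) in enumerate(zip(src_pts, dst_pts)):
        M[2 * i]     = [x, y, 1, 0, 0, 0]
        M[2 * i + 1] = [0, 0, 0, x, y, 1]
        b[2 * i], b[2 * i + 1] = u, v
    params, *_ = np.linalg.lstsq(M, b, rcond=None)
    return params.reshape(2, 3)

def pose_correction(affine, focal_px):
    """Map an affine image transform to a hypothetical camera pose update.

    Translation in the image is read as pan/tilt; the scale change of the
    linear part is read as a dolly (forward/backward) adjustment.
    """
    tx, ty = affine[0, 2], affine[1, 2]
    scale = np.sqrt(abs(np.linalg.det(affine[:, :2])))
    pan = np.arctan2(tx, focal_px)    # rotation about the vertical axis (rad)
    tilt = np.arctan2(ty, focal_px)   # rotation about the horizontal axis (rad)
    dolly = scale - 1.0               # > 0: synthetic camera should move closer
    return pan, tilt, dolly
```

In a full system the point correspondences would come from feature matching between the real frame and the rendered synthetic frame, and the pose update would be fed back to the simulation engine before rendering the next predicted frame.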

Article Number


Publication Date



SPIE Conference on Intelligent Robots and Computer Vision XXVII: Algorithms and Techniques, San Jose, CA, January 2010

This research was conducted at the Fordham University Robotics and Computer Vision Lab. For more information, see the Fordham University graduate programs in Computer Science and the Fordham University Graduate School of Arts and Sciences.

Included in

Robotics Commons