cognitive robotics, navigation, sensory fusion
Computer Engineering | Robotics
We address the problem of fusing laser and RGB-D data from multiple robots operating in close proximity to one another. With a team of robots working together, a large area can be scanned quickly, or a smaller area scanned in greater detail. A key aspect of this problem, however, is eliminating the spurious readings that arise because the robots operate close to each other. While there is an extensive literature on the mapping and localization aspects of this problem, our problem differs from the dynamic-map problem in that it involves only one kind of transient map feature, robots viewing other robots, and we know that we wish to eliminate all such mutual views completely. In prior work, we investigated the problem of fusing laser data from multiple robots in such a way as to reject this spurious data from other robots. That work showed that a combination of local robot-based direction filtering and global map-based visibility filtering at a central map server removed 91% of the spurious data and resulted in a 98% quality improvement. In this paper we additionally consider the problem of fusing RGB-D data generated by a stereo-camera sensor. An approach based on a model of human visual attention is presented and compared with our prior work and with other related work. This approach is an order of magnitude faster than the prior work yet rejects 73% of the spurious data, producing a 55% quality improvement. Results are shown for this approach in two experiments with a two-robot team operating in a confined indoor environment (4m x 4m).
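The local direction-filtering idea mentioned in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation; it simply assumes each robot knows its own pose and its teammates' positions (e.g., from the central map server), and drops any laser return whose world-frame endpoint falls within an assumed bounding radius of a teammate:

```python
import math

def filter_mutual_views(pose, angles, ranges, teammates, robot_radius=0.3):
    """Illustrative direction filter (hypothetical sketch, not the paper's code).

    pose:      (x, y, theta) of the sensing robot in the world frame
    angles:    beam angles in radians, in the sensor frame
    ranges:    measured ranges in meters, one per beam
    teammates: list of (x, y) teammate positions in the world frame
    robot_radius: assumed bounding radius of a teammate robot (m)

    Returns the (angle, range) pairs whose endpoints do NOT land on a
    teammate; those that do are treated as spurious mutual views.
    """
    x, y, th = pose
    kept = []
    for a, r in zip(angles, ranges):
        # Project the beam endpoint into the world frame.
        ex = x + r * math.cos(th + a)
        ey = y + r * math.sin(th + a)
        # Reject the reading if it falls on any known teammate.
        spurious = any(math.hypot(ex - tx, ey - ty) <= robot_radius
                       for tx, ty in teammates)
        if not spurious:
            kept.append((a, r))
    return kept
```

A reading at range 1.0 m straight toward a teammate at (1, 0) would be rejected, while a beam pointing elsewhere is kept. The paper's actual pipeline additionally applies global map-based visibility filtering at the central server, which this local sketch does not capture.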
Lyons, Damian M. and Shrestha, Karma, "Eliminating Mutual Views in Fusion of Ranging and RGB-D Data From Robot Teams Operating in Confined Areas" (2014). Faculty Publications. 41.