John Leonard’s group in the MIT Department of Mechanical Engineering specializes in SLAM, or simultaneous localization and mapping, the technique by which mobile autonomous robots map their environments and determine their own locations. Last week, at the Robotics: Science and Systems conference, members of Leonard’s group presented a new paper demonstrating how SLAM can be used to improve object-recognition systems, which will be a vital component of future robots that must manipulate the objects around them in arbitrary ways.
The system uses SLAM information to augment existing object-recognition algorithms. Its performance should thus continue to improve as computer-vision researchers develop better recognition software and roboticists develop better SLAM software. “Considering object recognition as a black box, and considering SLAM as a black box, how do you integrate them in a nice manner?” asks Sudeep Pillai, a graduate student in computer science and engineering and first author on the new paper. “How do you incorporate probabilities from each viewpoint over time? That’s really what we wanted to achieve.”

Read the full article at MIT News.
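To illustrate the idea of combining recognition probabilities from multiple viewpoints, here is a minimal sketch of one common approach: a naive-Bayes fusion that sums log-probabilities across views of the same object (as identified by SLAM) and renormalizes. This is an assumption-laden illustration of multi-view evidence accumulation in general, not the method described in the paper; the function name, class labels, and probability values are all hypothetical.

```python
import math

def fuse_viewpoints(per_view_probs):
    """Fuse per-viewpoint class probabilities for one tracked object.

    per_view_probs: list of dicts mapping class label -> probability,
    one dict per viewpoint in which the same object was observed.
    Combines views with a naive-Bayes rule (sum of log-probabilities,
    uniform prior), then renormalizes into a distribution.
    """
    labels = set().union(*per_view_probs)
    log_scores = {c: 0.0 for c in labels}
    for view in per_view_probs:
        for c in labels:
            # Small floor avoids log(0) when a view omits a class.
            log_scores[c] += math.log(max(view.get(c, 0.0), 1e-9))
    # Subtract the max before exponentiating for numerical stability.
    m = max(log_scores.values())
    unnorm = {c: math.exp(s - m) for c, s in log_scores.items()}
    z = sum(unnorm.values())
    return {c: v / z for c, v in unnorm.items()}

# Hypothetical example: three viewpoints of the same object.
# Two views favor "mug"; one ambiguous angle favors "bowl".
views = [
    {"mug": 0.6, "bowl": 0.4},
    {"mug": 0.7, "bowl": 0.3},
    {"mug": 0.4, "bowl": 0.6},
]
fused = fuse_viewpoints(views)
```

The benefit of accumulating evidence this way is that a single ambiguous or misleading viewpoint is outweighed by the majority of consistent observations, which is exactly the kind of robustness SLAM-tracked object identities make possible.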