Julian Straub orienteers robots


May 7, 2014

Suppose you’re trying to navigate an unfamiliar section of a big city, and you’re using a particular cluster of skyscrapers as a reference point. Traffic and one-way streets force you to take some odd turns, and for a while you lose sight of your landmarks. When they reappear, in order to use them for navigation you have to be able to recognize them as the same buildings you were tracking before, and to determine your orientation relative to them.

That type of re-identification is second nature for humans, but it’s difficult for computers. At the IEEE Conference on Computer Vision and Pattern Recognition in June, MIT researchers will present a new algorithm that could make it much easier, by identifying the major orientations in 3-D scenes. The same algorithm could also simplify the problem of scene understanding, one of the central challenges in computer vision research.

Julian Straub, a graduate student in electrical engineering and computer science at MIT, is lead author on the paper. He’s joined by his advisors, John Fisher, a senior research scientist in MIT’s Computer Science and Artificial Intelligence Laboratory, and John Leonard, a professor of mechanical and ocean engineering, as well as Oren Freifeld and Guy Rosman, both postdocs in Fisher’s Sensing, Learning, and Inference Group.

Read the article on MIT News.
