
3 Questions: Honing robot perception and mapping

Walking to a friend’s house or browsing the aisles of a grocery store might feel like simple tasks, but they in fact require sophisticated capabilities. That’s because humans are able to effortlessly understand their surroundings and detect complex information about patterns, objects, and their own location in the environment.

What if robots could perceive their environment in a similar way? That question is on the minds of MIT Laboratory for Information and Decision Systems (LIDS) researchers Luca Carlone and Jonathan How. In 2020, a team led by Carlone released the first iteration of Kimera, an open-source library that enables a single robot to construct a three-dimensional map of its environment in real time while labeling the objects in view. Last year, Carlone’s and How’s research groups (SPARK Lab and Aerospace Controls Lab) introduced Kimera-Multi, an updated system in which multiple robots communicate with one another to create a unified map. A 2022 paper associated with the project recently received the IEEE Transactions on Robotics King-Sun Fu Memorial Best Paper Award, given to the best paper published in the journal in 2022.

Carlone, who is the Leonardo Career Development Associate Professor of Aeronautics and Astronautics, and How, the Richard Cockburn Maclaurin Professor in Aeronautics and Astronautics, spoke to LIDS about Kimera-Multi and the future of how robots might perceive and interact with their environment.

Q: Currently your labs are focused on increasing the number of robots that can work together in order to generate 3D maps of the environment. What are some potential advantages to scaling this system?

How: The key benefit hinges on consistency. Each robot can create its own map, and that map is self-consistent, but it is not globally consistent with the maps the other robots build. We’re aiming for the team to have a single consistent map of the world; that’s the key difference between forming a consensus among robots and mapping independently.

Carlone: In many scenarios it’s also good to have a bit of redundancy. For example, if we deploy a single robot on a search-and-rescue mission and something happens to that robot, it will fail to find the survivors. If multiple robots are doing the exploring, there’s a much better chance of success. Scaling up the team of robots also means that any given task may be completed in a shorter amount of time.

Q: What are some of the lessons you’ve learned from recent experiments, and challenges you’ve had to overcome while designing these systems?

Carlone: Recently we did a big mapping experiment on the MIT campus, in which eight robots traversed up to 8 kilometers in total. The robots had no prior knowledge of the campus, and no GPS. Their main tasks were to estimate their own trajectories and build a map around them. You want the robots to understand the environment as humans do; humans not only understand the shape of obstacles, so they can get around them without hitting them, but also understand that an object is a chair, a desk, and so on. That’s the semantics part.

The interesting thing is that when the robots meet each other, they exchange information to improve their maps of the environment. For instance, if two robots connect, they can leverage each other’s information to correct their own trajectories. The challenge is that if you want to reach a consensus between robots, you don’t have the bandwidth to exchange very much data. One of the key contributions of our 2022 paper is a distributed protocol in which robots exchange limited information but can still agree on how the map looks. They don’t send camera images back and forth; they only exchange specific 3D coordinates and clues extracted from the sensor data. As they continue to exchange such data, they can form a consensus.
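To make the bandwidth-limited exchange concrete, here is a minimal, hypothetical Python sketch (not Kimera-Multi’s actual protocol): two robots hold noisy estimates of some shared landmarks and, instead of sending images, exchange only landmark IDs and 3D coordinates, averaging the shared entries so their maps converge toward a common estimate. All names and numbers are illustrative.

# Hypothetical sketch, not Kimera-Multi's protocol: robots exchange only
# compact landmark coordinates, never raw camera images.
import numpy as np

def consensus_step(map_a, map_b, weight=0.5):
    """Average the two robots' estimates of every landmark they share."""
    for landmark_id in map_a.keys() & map_b.keys():
        merged = weight * map_a[landmark_id] + (1 - weight) * map_b[landmark_id]
        map_a[landmark_id] = map_b[landmark_id] = merged
    return map_a, map_b

# Each robot's map: landmark ID -> estimated 3D position in meters.
robot_a = {1: np.array([2.0, 0.1, 0.0]), 2: np.array([5.2, 1.0, 0.3])}
robot_b = {1: np.array([2.3, 0.0, 0.1]), 3: np.array([9.0, 4.0, 0.0])}

robot_a, robot_b = consensus_step(robot_a, robot_b)
print(robot_a[1])  # both robots now hold the same estimate for landmark 1

Repeating such a step whenever two robots meet is one simple way a team could drift toward a shared map without ever shipping raw sensor data over the network.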

Right now we are building color-coded 3D meshes or maps, in which the color contains some semantic information; for example, “green” corresponds to grass, and “magenta” to a building. But as humans, we have a much more sophisticated understanding of reality, and we have a lot of prior knowledge about relationships between objects. For instance, if I were looking for a bed, I would go to the bedroom instead of exploring the entire house. If you start to understand the complex relationships between things, you can be much smarter about what the robot can do in the environment. We’re trying to move from capturing just one layer of semantics to a more hierarchical representation in which the robots understand rooms, buildings, and other concepts.
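As a toy illustration of the difference between the two representations (hypothetical names and labels, not Kimera’s actual data structures): a flat label-to-color map like the one described above only says what each surface is, while even a small object–room–building hierarchy lets a robot answer “where might a bed be?” without searching the whole map.

# Hypothetical example, not Kimera's data structures.
# Flat semantics: each label is just a color on the mesh.
SEMANTIC_COLORS = {"grass": "green", "building": "magenta", "road": "gray"}

# Hierarchical semantics: objects belong to rooms, and rooms to a building.
SCENE_HIERARCHY = {
    "building_1": {
        "bedroom": ["bed", "nightstand"],
        "kitchen": ["table", "fridge"],
    }
}

def rooms_containing(object_class, hierarchy=SCENE_HIERARCHY):
    """Return the rooms whose object list includes the requested class."""
    return [room
            for building in hierarchy.values()
            for room, objects in building.items()
            if object_class in objects]

print(rooms_containing("bed"))  # ['bedroom'] -- search only promising rooms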

Q: What kinds of applications might Kimera and similar technologies lead to in the future?

How: Autonomous vehicle companies are doing a lot of mapping of the world and learning from the environments they’re in. The holy grail would be for these vehicles to communicate with each other and share information; then they could improve their models and maps that much more quickly. The current solutions out there are individualized: if a truck pulls up next to you, you can’t see in a certain direction. Could another vehicle provide a field of view that your vehicle otherwise doesn’t have? This is a futuristic idea, because it requires vehicles to communicate in new ways, and there are privacy issues to overcome. But if we could resolve those issues, you could imagine a significantly improved safety situation, where you have access to data from multiple perspectives, not only your own field of view.

Carlone: These technologies will have a lot of applications. Earlier I mentioned search and rescue. Imagine that you want to explore a forest and look for survivors, or map buildings after an earthquake in a way that can help first responders access people who are trapped. Another setting where these technologies could be applied is in factories. Currently, robots that are deployed in factories are very rigid. They follow patterns on the floor, and are not really able to understand their surroundings. But if you’re thinking about much more flexible factories in the future, robots will have to cooperate with humans and exist in a much less structured environment.


Source: news.mit.edu

