Robots engage in autonomous high-level planning

Algorithm connects sensory data with higher-level processing to improve robots’ planning abilities

A new study led by George Konidaris, assistant professor of computer science, describes an algorithm that helps robots plan intelligently by carrying out high-level tasks using low-level sensory data. The study was supported by an award from the Defense Advanced Research Projects Agency.


Artificial intelligence researchers have long struggled with how robots could plan with and understand abstract concepts. “The study is about connecting abstract cognitive states with physical states of the world,” explained Michael Littman, professor of computer science. “When we (humans) think about problem solving, we think about it in an abstract, conceptual space, and if we want robots to process ideas at a more complex level, they must be able to do the same.”


“Right now, robots are making decisions based on sensorimotor data, which is limited to low-level processing tasks,” Konidaris said. For example, if a human were to put on a jacket using only low-level processes, they would have to consciously plan every movement needed to reach for the jacket, every muscle that must contract and how to position the jacket. In these cases, programmers must dictate how robots think about the world, which keeps robots from acting and learning autonomously.


“The problem is that … low-level sensory data are confusing and hard to process, which makes long-term goal-based plans hard to achieve,” said Stefanie Tellex, assistant professor of computer science. Konidaris’ approach links the low-level data to high-level concepts, she added.


High-level processing would involve a robot learning on its own the methods necessary to complete a task, without being explicitly instructed about every detail, and would require a fine-grained understanding of abstract concepts, Konidaris said. “What our work tries to do is to bridge the gap between these two types of processing and investigate how a robot can learn (these concepts) by simply interacting with objects in the world, just as we as humans do,” he added.


The study was published in the Journal of Artificial Intelligence Research and co-authored by Leslie Pack Kaelbling and Tomás Lozano-Pérez, professors of computer science at the Massachusetts Institute of Technology. Together with Konidaris, they developed an algorithm that connects low-level sensory data with higher-level processing. Now, when robots are provided with only motor skills, they will be able to identify and process images and tasks in the real world, Konidaris said.
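The published algorithm is far more involved than a news summary can capture, but the core idea of grounding abstract symbols in low-level sensory data can be illustrated with a minimal, hypothetical Python sketch. The symbol names, thresholds and sensor readings below are invented for illustration and are not taken from the study:

```python
# A minimal, hypothetical sketch (not the authors' algorithm) of the idea the
# study describes: turning continuous, low-level sensor readings into Boolean
# symbols that a high-level planner can reason over.

from dataclasses import dataclass
from typing import Callable, Dict, List

# Continuous, low-level state as the robot perceives it (all values invented).
SensorState = Dict[str, float]


@dataclass
class Symbol:
    """An abstract proposition grounded in a test over raw sensor data."""
    name: str
    test: Callable[[SensorState], bool]  # maps low-level data to True/False


# Hypothetical grounded symbols: each compresses raw readings into one fact.
SYMBOLS: List[Symbol] = [
    Symbol("gripper_open", lambda s: s["gripper_width"] > 0.05),
    Symbol("near_object", lambda s: s["distance_to_object"] < 0.10),
    Symbol("holding_object", lambda s: s["grip_force"] > 1.0),
]


def abstract(state: SensorState) -> Dict[str, bool]:
    """Project a continuous sensor state into the symbolic space."""
    return {sym.name: sym.test(state) for sym in SYMBOLS}


if __name__ == "__main__":
    raw = {"gripper_width": 0.08, "distance_to_object": 0.04, "grip_force": 0.0}
    print(abstract(raw))
    # {'gripper_open': True, 'near_object': True, 'holding_object': False}
    # A classical planner could now chain motor skills using facts like these
    # as preconditions and effects, without reasoning over raw sensor values.
```

In the study's approach, the robot is not handed such tests by a programmer; as Konidaris notes above, it learns which abstractions matter by interacting with objects in the world.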


The experimentation largely involved simulations and “lots of math” in the early stages, Konidaris added. Eventually, they were able to apply these algorithms to robots in a lab setting.


Being able to make robots engage in long-term, goal-based planning is “very exciting,” Tellex said. “It points robots towards a way of functioning that’s more autonomous and self-sufficient.”
