Monday, December 23rd, 2024

Lawrence Livermore National Laboratory developing “collaborative autonomy” network


Researchers at Lawrence Livermore National Laboratory (LLNL) are exploring the potential use of “collaborative autonomy” to create a network of autonomous machines and humans that could be used by first responders.

The research team is developing a coordinated, distributed network of nodes with artificial intelligence capabilities that can be affixed to things like autonomous vehicles, drones, and robots. The team is exploring both a decentralized network, in which intelligence and sensor data are shared among machines, and a belief network, in which nodes use observations to calculate probabilities.
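The article does not describe the belief network’s underlying math, but the basic mechanism, in which each node folds a local observation into a shared probability estimate and fuses in what its neighbors believe, can be sketched with a toy Bayesian update. Everything below (the hypotheses, the numbers, the fusion rule) is illustrative, not LLNL’s implementation:

```python
import numpy as np

# Hypothetical sketch: one node in a decentralized belief network updates its
# belief over a discrete hypothesis (e.g., "target is in zone A/B/C") from a
# local observation, then fuses in a neighbor's shared belief.

HYPOTHESES = ["zone_A", "zone_B", "zone_C"]

def bayes_update(prior: np.ndarray, likelihood: np.ndarray) -> np.ndarray:
    """Combine a prior belief with an observation likelihood (Bayes' rule)."""
    posterior = prior * likelihood
    return posterior / posterior.sum()

def fuse_beliefs(own: np.ndarray, neighbor: np.ndarray) -> np.ndarray:
    """Fuse two beliefs by normalized product (naive Bayes fusion)."""
    fused = own * neighbor
    return fused / fused.sum()

# Uniform prior: the node starts with no idea where the target is.
belief = np.ones(len(HYPOTHESES)) / len(HYPOTHESES)

# Local sensor reading is most consistent with zone_B.
likelihood = np.array([0.1, 0.7, 0.2])
belief = bayes_update(belief, likelihood)

# A neighboring node shares its compact belief; fuse it rather than raw data.
neighbor_belief = np.array([0.2, 0.6, 0.2])
belief = fuse_beliefs(belief, neighbor_belief)

print(dict(zip(HYPOTHESES, belief.round(3))))
```

Sharing compact beliefs rather than raw sensor streams is part of what keeps such a network decentralized: each node forwards a handful of probabilities instead of its full data.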

“The idea with collaborative autonomy is not the human flying the drone, it’s the human in control in the sense of guiding the mission or the task,” Reg Beer, the LLNL engineer heading the Lab’s collaborative autonomy effort, said. “The goal is to employ robotic partners with the ability to direct an autonomous squadmate and have that squadmate go achieve something without having to be teleoperated or with intense oversight.”

Machine-based systems that rely on unexplainable artificial intelligence, Beer said, will not be successful because society won’t trust something it cannot understand.

“Humans have to see a reason and a logic to things, and if the machine seems unpredictable to us, if we don’t understand why it’s making its decisions, it won’t be adopted,” Beer said. “We want to trust that it’s going to perform and function optimally and if it makes a mistake, we’re going to have a basis to explain the mistake.”

Anantha Krishnan, the associate director for engineering at LLNL, said the ultimate objective is to develop algorithms and computing capabilities that allow an adaptable network of mobile, autonomous platforms to collaborate in real time and create an “actionable picture of the operating environment.”

One of the biggest challenges is processing data across large networks of sensors so that AI-equipped machines can communicate across the network and determine what needs to be sensed.
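The article does not say how a node decides what “needs to be sensed.” A common approach in the sensor-network literature, offered here only as a hypothetical sketch, is to score candidate sensing actions by expected information gain: how much each action is expected to shrink the network’s uncertainty about its hypotheses.

```python
import numpy as np

def entropy(p: np.ndarray) -> float:
    """Shannon entropy in bits of a discrete belief."""
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def expected_info_gain(belief: np.ndarray, sensing_models: dict) -> dict:
    """Expected entropy reduction for each candidate sensing action.

    sensing_models[action][z][h] = P(observation z | hypothesis h, action).
    """
    gains = {}
    h0 = entropy(belief)
    for action, model in sensing_models.items():
        gain = h0
        for like in model:                  # one likelihood row per observation z
            p_z = float(like @ belief)      # P(z | action)
            if p_z == 0:
                continue
            post = like * belief / p_z      # P(h | z, action)
            gain -= p_z * entropy(post)     # subtract expected posterior entropy
        gains[action] = gain
    return gains

belief = np.array([0.5, 0.5])               # two hypotheses, maximal uncertainty
sensing_models = {
    "look_north": np.array([[0.9, 0.2],     # P(detect | h) per hypothesis
                            [0.1, 0.8]]),   # P(no detect | h)
    "look_south": np.array([[0.55, 0.5],    # nearly uninformative sensor
                            [0.45, 0.5]]),
}
print(expected_info_gain(belief, sensing_models))  # "look_north" scores higher
```

Under this kind of scoring, a node would spend its limited sensing and bandwidth on the measurement most likely to be decisive, rather than streaming everything back to a central hub.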

“We don’t want nodes to be ‘dumb’ sensors that go out and collect data, and send their data to a fusion center that interprets all the data, and directs the network over the next timestep,” Ryan Goldhahn, an LLNL researcher, said. “The idea is for the individual nodes themselves to judiciously choose what data to sense and what they should send to other nodes — you don’t want your network to rely on one node, which is a potential vulnerability. If the intelligence is pushed out to the individual nodes, and each of those nodes is making a local decision, then if one node gets destroyed, the others can compensate, and the performance of the entire network degrades gracefully.”
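Goldhahn’s “degrades gracefully” point can be made concrete with a toy consensus scheme. This is again a hypothetical illustration rather than LLNL’s algorithm: nodes repeatedly average their estimates with surviving neighbors, so losing one node costs its data but not the network’s ability to agree.

```python
import numpy as np

# Hypothetical sketch of graceful degradation: each live node repeatedly
# averages its estimate with its surviving neighbors (a standard consensus
# scheme); destroying one node removes its data but doesn't break the network.

def consensus_round(estimates, adjacency, alive):
    new = estimates.copy()
    for i in range(len(estimates)):
        if not alive[i]:
            continue                        # dead nodes neither send nor update
        neighbors = [j for j in range(len(estimates))
                     if adjacency[i][j] and alive[j]]
        new[i] = np.mean([estimates[j] for j in neighbors + [i]])
    return new

# Ring of 4 nodes, each holding a noisy local measurement of the same quantity.
adjacency = np.array([[0, 1, 0, 1],
                      [1, 0, 1, 0],
                      [0, 1, 0, 1],
                      [1, 0, 1, 0]], dtype=bool)
estimates = np.array([9.8, 10.3, 10.1, 9.9])
alive = np.array([True, True, True, True])

alive[2] = False                            # node 2 is destroyed mid-mission
for _ in range(25):
    estimates = consensus_round(estimates, adjacency, alive)

# Survivors converge to a shared fused estimate; node 2's stale value is ignored.
print(estimates.round(3))
```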