Raytheon researchers, working with the Defense Advanced Research Projects Agency's (DARPA) Explainable Artificial Intelligence (XAI) program, are developing a network designed to expand the spectrum of artificial intelligence (AI) capabilities.
Raytheon officials said the technology firm is developing a groundbreaking neural network that explains itself. The Explainable Question Answering System (EQUAS) lets AI programs show the evidence behind their answers, increasing the human user's confidence in the machine's suggestions.
“Our goal is to give the user enough information about how the machine’s answer was derived and show that the system considered relevant information so users feel comfortable acting on the system’s recommendation,” Bill Ferguson, the project’s lead scientist and EQUAS principal investigator at Raytheon BBN, said.
Researchers said EQUAS demonstrates to users which data mattered most in the AI decision-making process: through a graphical interface, users can explore the system's recommendations and see why it chose one answer over another.
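Raytheon has not published how EQUAS computes its explanations, but the idea of showing "which data mattered most" is commonly illustrated with attribution techniques such as occlusion saliency: mask part of the input, rerun the model, and see how much the score drops. The sketch below is a minimal, generic example of that idea using a toy scoring function, not Raytheon's actual system.

```python
import numpy as np

def occlusion_saliency(image, score_fn, patch=4):
    """Estimate which regions of an image mattered most to a model's
    score by zeroing out patches and measuring the resulting score drop.
    (A generic attribution technique; EQUAS's internals are not public.)"""
    base = score_fn(image)
    heat = np.zeros_like(image, dtype=float)
    h, w = image.shape
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            masked = image.copy()
            masked[i:i + patch, j:j + patch] = 0.0
            # A large drop means this patch strongly supported the answer.
            heat[i:i + patch, j:j + patch] = base - score_fn(masked)
    return heat

# Toy "model": the score is the mean intensity of the upper-left quadrant,
# so only that region should light up in the saliency map.
def toy_score(img):
    return img[:8, :8].mean()

img = np.ones((16, 16))
heat = occlusion_saliency(img, toy_score)
```

A real explainable system would overlay a map like `heat` on the original image, which is one plausible way an interface could highlight the "suspicious shadows" described below.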
Officials acknowledge the technology is still in its early phases of development but could potentially be used for a wide range of applications.
“Say a doctor has an X-ray image of a lung and her AI system says that it’s cancer,” Ferguson said. “She asks why, and the system highlights what it thinks are suspicious shadows, which she had previously disregarded as artifacts of the X-ray process. Now the doctor can make the call to diagnose, investigate further, or if she still thinks the system is in error, let it go.”