Thursday, March 28th, 2024

DARPA initiative addresses machine learning attacks


The Defense Advanced Research Projects Agency (DARPA) has launched an initiative to address potential attacks on machine learning (ML) models.

The Guaranteeing AI Robustness against Deception (GARD) program seeks to work ahead of safety challenges by developing a new generation of defenses against adversarial attacks on ML models.

“Other technical communities – like cryptography – have embraced transparency and found that if you are open to letting people take a run at things, the technology will improve,” Bruce Draper, the program manager leading GARD, said. “With GARD, we are taking a page from cryptography and are striving to create a community to facilitate the open exchange of ideas, tools, and technologies that can help researchers test and evaluate their ML defenses. Our goal is to raise the bar on existing evaluation efforts, bringing more sophistication and maturation to the field.”

Under the program, researchers from Two Six Technologies, IBM, MITRE, the University of Chicago, and Google Research are generating a toolbox, a benchmarking dataset, and training materials, and making these assets available to the broader research community through a public repository.
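
The article does not name the specific tools, but an open-source toolbox of this kind is often built around a library such as IBM's Adversarial Robustness Toolbox (ART). The sketch below is a hedged illustration, assuming ART's `PyTorchClassifier` and `ProjectedGradientDescent` APIs and a stand-in model and dataset, of the basic evaluation such a repository is meant to support: measuring how far a classifier's accuracy drops under an evasion attack.

```python
# Hedged sketch (not GARD code): gauging adversarial robustness with IBM's
# open-source Adversarial Robustness Toolbox (ART). The model and data are
# stand-ins; assumes `pip install adversarial-robustness-toolbox torch`.
import numpy as np
import torch.nn as nn

from art.estimators.classification import PyTorchClassifier
from art.attacks.evasion import ProjectedGradientDescent

# Tiny placeholder classifier; a real evaluation would load a trained model.
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))

classifier = PyTorchClassifier(
    model=model,
    loss=nn.CrossEntropyLoss(),
    input_shape=(1, 28, 28),
    nb_classes=10,
    clip_values=(0.0, 1.0),
)

# Placeholder images and labels standing in for a benchmarking dataset.
x = np.random.rand(32, 1, 28, 28).astype(np.float32)
y = np.random.randint(0, 10, size=32)

# Craft adversarial examples with a bounded iterative evasion attack.
attack = ProjectedGradientDescent(estimator=classifier, eps=0.1, eps_step=0.01, max_iter=10)
x_adv = attack.generate(x=x)

# Compare clean vs. adversarial accuracy to quantify robustness.
clean_acc = (classifier.predict(x).argmax(axis=1) == y).mean()
adv_acc = (classifier.predict(x_adv).argmax(axis=1) == y).mean()
print(f"clean accuracy: {clean_acc:.2f}, adversarial accuracy: {adv_acc:.2f}")
```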

Attacks on artificial intelligence (AI) algorithms could have impacts ranging from altering the output of a content recommendation engine to disrupting the operation of a self-driving vehicle.
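
As a concrete illustration of what such an attack looks like at the model level (a generic example, not something drawn from the GARD program), the fast gradient sign method adds a small, carefully chosen perturbation that can flip a classifier's prediction while leaving the input nearly unchanged to a human observer. A minimal PyTorch sketch with a stand-in model:

```python
# Illustrative sketch: the fast gradient sign method (FGSM), a classic
# evasion attack, nudges each pixel slightly in the direction that most
# increases the model's loss on the true label.
import torch
import torch.nn as nn

def fgsm_attack(model: nn.Module, x: torch.Tensor, y: torch.Tensor, eps: float = 0.03) -> torch.Tensor:
    """Return an adversarially perturbed copy of x."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x_adv), y)
    loss.backward()
    # Step in the sign of the gradient, then clamp to the valid pixel range.
    return (x_adv + eps * x_adv.grad.sign()).clamp(0.0, 1.0).detach()

# Toy usage with a stand-in classifier and random "images".
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
x = torch.rand(4, 1, 28, 28)
y = torch.randint(0, 10, (4,))
x_adv = fgsm_attack(model, x, y)
print("max pixel change:", (x_adv - x).abs().max().item())
```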

“The goal is to help the GARD community improve their system evaluation skills by understanding how their ideas really work and how to avoid common mistakes that detract from their defense’s robustness,” Draper said. “With the Self-Study repository, researchers are provided hands-on understanding. This project is designed to give them in-the-field experience to help improve their evaluation skills.”