Mere days after the Department of Homeland Security (DHS) formed a new Artificial Intelligence (AI) Safety and Security Board, it released new guidelines for protecting critical infrastructure and countering weapons-of-mass-destruction threats posed by AI.
The report arrived six months after President Joe Biden’s executive order on the “Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence,” and addressed AI misuse in the development and production of chemical, biological, radiological, and nuclear (CBRN) threats.
To start, it offered guidelines for mitigating AI risks to critical infrastructure, considering three categories of system-level risk: attacks using AI, attacks targeting AI systems, and failures in AI design and implementation. This marked a first-of-its-kind cross-sector analysis of AI-specific risks to critical infrastructure, and led to a four-part mitigation strategy.
To address each organization’s unique AI risk context, the department urged critical infrastructure owners and operators to establish an organizational culture of AI risk management; understand their individual AI use contexts and risk profiles; develop systems to assess, analyze, and track AI risks; and prioritize and implement controls to manage AI risks to safety and security. Respectively, it dubbed these approaches governing, mapping, measuring, and managing.
All were developed by DHS in coordination with its Cybersecurity and Infrastructure Security Agency (CISA).
“AI can present transformative solutions for U.S. critical infrastructure, and it also carries the risk of making those systems vulnerable in new ways to critical failures, physical attacks, and cyber attacks,” Secretary of Homeland Security Alejandro Mayorkas said. “Our Department is taking steps to identify and mitigate those threats.”
The threat-countering portion of the report was undertaken by DHS and its Countering Weapons of Mass Destruction Office (CWMD). Together, they analyzed the risk of AI being misused in the development or production of CBRN threats, drawing on insights from governmental collaborators as well as academic and industry experts. While the report noted that AI has already changed how research is conducted in the physical and life sciences, it added that AI-enabled enhancements to research could have both positive and negative impacts, many of which will be difficult to anticipate.
More specifically, AI should, in their view, be integrated into international collaboration and into CBRN prevention, detection, response, and mitigation efforts. Yet as AI technologies advance, they warned, the barriers to use will lower, potentially creating novel risks from bad actors.
“The responsible use of AI holds great promise for advancing science, solving urgent and future challenges, and improving our national security, but AI also requires that we be prepared to rapidly mitigate the misuse of AI in the development of chemical and biological threats,” Assistant Secretary for CWMD Mary Ellen Callahan said. “This report highlights the emerging nature of AI technologies, their interplay with chemical and biological research and the associated risks, and provides longer-term objectives around how to ensure safe, secure, and trustworthy development and use of AI.”