Artificial intelligence (AI) has drawn considerable attention recently, particularly in tech circles, and this week the federal government turned its attention to the technology as well, as the Senate Homeland Security and Governmental Affairs Committee held a hearing on its potential risks and opportunities.
Committee Chairman Gary Peters (D-MI) called the March 8, 2023 hearing to examine AI and its effects on U.S. economic competitiveness worldwide. A major focus was the current lack of transparency surrounding these technologies, which could erode public trust in their use, as well as efforts to guarantee they do not infringe on civil rights or liberties. At the same time, the committee recognized that the U.S. does not face these questions alone – other governments, such as China under the Chinese Communist Party, are also eyeing AI's potential for economic advantage.
“From the development of lifesaving drugs and advanced manufacturing, to helping businesses and governments better serve the public, to self-driving vehicles that will improve mobility and make our roads safer, artificial intelligence holds great promise,” U.S. Sen. Peters said. “But this rapidly-evolving technology also presents potential risks that could impact our safety, privacy, and our economic and national security. We must ensure that, as use of this technology becomes more widespread, we have the right safeguards in place to ensure it is being used appropriately.”
The hearing probed various aspects of the AI question with insights from three expert witnesses: RAND Corp. President and CEO Jason Matheny; computer and data science professor Suresh Venkatasubramanian of Brown University; and Alexandra Reeve Givens, president and CEO of the Center for Democracy and Technology. Together, these witnesses delved into ways lawmakers could support AI development that benefits society, industry, and government while limiting its possible harms.
“The truth is AI systems are not magic: AI is technology,” Venkatasubramanian said at the hearing. “Like any other piece of technology that has benefited us – drugs, cars, planes – AI needs guardrails so we can be protected from the worst failures while still benefiting from the progress AI offers.”
For Matheny, there was no ignoring that AI development poses national security challenges, and he called for intelligence agencies to expand their collection and analysis of information on national adversaries' AI efforts. The concern goes deeper still: he confirmed that these developing models could one day allow even non-state actors to gain offensive cyber capabilities.
Another thread that emerged from the hearing was that developers do not always understand how AI algorithms reach their decisions, creating an inherent lack of transparency. Further, while these systems can process far more data than people, they tend to be narrowly specialized and often reflect the biases of their creators and contributors.
The witnesses pointed to existing laws and regulatory authorities and outlined the reforms they saw as necessary to ensure responsible development of these systems. Peters also pressed for Congress to guarantee that the federal government has the workforce and technology needed to lead on the issue.