
AI training program for federal workforce leadership proposed in Senate

A day ahead of testimony by OpenAI CEO Sam Altman before Congress on the concerns and possibilities of artificial intelligence (AI), U.S. Sens. Gary Peters (D-MI) and Mike Braun (R-IN) introduced legislation to create an AI training program for federal managers.

While Altman went on this week to call for proactive government regulation and to stress the seriousness of AI's growing role in the world, the AI Leadership Training Act calls for a new program to improve the federal workforce's understanding of AI applications and to ensure that leadership understands AI's potential benefits and risks.

“As the federal government continues to invest in and use artificial intelligence tools, decision-makers in the federal government must have the appropriate training to ensure this technology is used responsibly and ethically,” Peters, chairman of the Homeland Security and Governmental Affairs Committee, said. “With AI training, federal agency leaders will have the expertise needed to ensure this technology benefits the American people and to mitigate potential harms such as bias or discrimination.”

The move was backed by organizations like the National Security Commission on Artificial Intelligence (NSCAI) and the National AI Advisory Committee (NAIAC), which have previously recommended more AI training for federal workers. Without such training, proponents argue, the risks and rewards of the technology may not be properly weighed, at a potential cost to agencies and the country at large.

“In the past couple of years, we have seen unprecedented development and adoption of AI across industries. We must ensure that government leaders are trained to keep up with the advancements in AI and recognize the benefits and risks of this tool,” Braun said.

Specifically, the legislation would require the Director of the Office of Personnel Management (OPM) to create and routinely update an AI training program for federal supervisors and management officials to help them understand the capabilities, risks and ethical implications of AI. Armed with that understanding, they could then determine whether AI is appropriate for meeting their mission requirements.

Chris Galford
