The White House has released an AI action plan chock-full of cybersecurity provisions among its 90 desired AI-related “policy actions.”
Informed by 10,000 pages of comments solicited by the White House Office of Science and Technology Policy (OSTP), the plan, titled “Winning the AI Race: America’s AI Action Plan,” is structured around three pillars: Accelerate AI Innovation, Build American AI Infrastructure, and Lead in International AI Diplomacy and Security. It “charts a decisive course to cement US dominance in artificial intelligence,” OSTP Director Michael Kratsios said in a press release.
“To win the AI race, the US must lead in innovation, infrastructure, and global partnerships,” AI and Crypto Czar David Sacks said in the release. “At the same time, we must center American workers and avoid Orwellian uses of AI. This Action Plan provides a roadmap for doing that.”
The plan outlines a host of general principles aimed at shaping how the government formulates its AI actions. “The AI plan that the Trump administration has put out is brilliant and long overdue,” Arnie Bellini, chairman of ConnectSecure, tells CSO. “It’s overarching and it says that we need to implement AI anywhere and everywhere that we can, and we need to do it right away.”
The document’s numerous cybersecurity provisions reflect an understanding that the meteoric rise of AI must be accompanied by security measures, as well as a recognition that AI can spur better cybersecurity protections against adversaries. Experts worry, however, that the lack of implementation requirements, deadlines, or directions makes the action plan more of a wish list that Congress will have to fortify to make it concrete and enforceable.
They also worry that the plan ignores the reality that most organizations, including government agencies, struggle with elementary cybersecurity duties, and that layering AI security responsibilities on top of those obligations won’t work until the basics are in place.
Finally, they are concerned that the federal government and state and local governments might not be able to support the goals outlined in the plan at a time when the Trump administration is slashing federal agency budgets.
Key cyber elements of the plan
The more prominent cybersecurity elements included in the plan appear under the second pillar, Build American AI Infrastructure, including the following:
Secure data centers: The plan spells out policy actions for creating the data centers necessary for competing in the AI arena. When it comes to security, the plan emphasizes that data centers should “not be built with any adversarial technology that could undermine US AI dominance.”
It also specifies that “security guardrails” should be maintained to prevent adversaries from inserting sensitive inputs into this infrastructure, and that AI-related energy and telecommunications infrastructure must be “free from foreign adversary information and communications technology and services (ICTS) — including software and relevant hardware.”
Daniel Bardenstein, CTO and co-founder of Manifest Cyber and former chief of technology strategy and delivery at the Cybersecurity and Infrastructure Security Agency (CISA), is concerned that the plan omits provisions for the local electric and water utilities that will have to power and cool the growing number of data centers and will likely be unable to handle the strain of the new AI era.
“Most electricity and water systems in the US are smaller municipal systems,” he tells CSO. “Those small power and water utilities don’t have the people, personnel, and money today to do best practices cybersecurity. And now you are throwing on AI security on top of that?”
Establish an AI Information Sharing and Analysis Center (AI-ISAC): The plan calls for the creation of an AI-ISAC, led by the Department of Homeland Security (DHS) in collaboration with NIST’s Center for AI Standards and Innovation (CAISI) and the Office of the National Cyber Director (ONCD), to promote the sharing of AI-security threat information and intelligence across US critical infrastructure sectors.
Maintain remediation guidance to private sector entities: DHS is also directed by the plan to lead an effort to issue and maintain guidance to private sector entities on remediating and responding to AI-specific vulnerabilities and threats.
Ensure collaborative and consolidated sharing of known AI vulnerabilities: The plan asks that federal agencies share AI vulnerability information with the private sector as appropriate. “There’s a lot in there about government and private sector working together to identify threats along the lines of how it has worked for cybersecurity, but recognizing that AI is an extension,” Jenny Marron, director of policy and engagement for the Institute for AI Policy and Strategy (IAPS), tells CSO. “That’s something we’re pleased to see.”
Promote secure-by-design AI technologies and applications: The plan says the US government “has a responsibility to ensure the AI systems it relies on — particularly for national security applications — are protected against spurious or malicious inputs” and that “promoting resilient and secure AI development and deployment should be a core activity of the US government.” It recommends that the Department of Defense (DoD), in collaboration with NIST and the Office of the Director of National Intelligence (ODNI), continue to refine DoD’s responsible AI and generative AI frameworks, roadmaps, and toolkits. It also asks ODNI, in consultation with DoD and CAISI, to publish a standard on AI assurance.
Promote mature federal capacity for AI incident response: The plan asks NIST, including CAISI, to partner with the AI and cybersecurity industries to ensure AI is included in the establishment of standards, response frameworks, best practices, and technical capabilities of incident response teams. It further asks CISA to modify its cybersecurity incident and vulnerability response playbooks to incorporate considerations for AI systems and to include requirements for CISOs to consult with chief AI officers, senior agency officials for privacy, CAISI, and other officials as appropriate.
Assess national security risks: Another key provision calls for partnering with “American AI developers to enable the private sector to actively protect AI innovations from security risks, including malicious cyber actors, insider threats, and others.” It further asks CAISI, in collaboration with national security agencies, to “evaluate and assess potential security vulnerabilities and malign foreign influence arising from the use of adversaries’ AI systems in critical infrastructure and elsewhere in the American economy, including the possibility of backdoors and other malicious behavior.”
It’s a ‘north star’ strategy and not an executive order
Unlike strategy documents or executive orders issued by presidential administrations in the past, this action plan contains no implementation requirements, deadlines, or specifics on when many of its actions need to be completed or how. It is a “north star strategy for all of these agencies,” Bellini says.
“This tells agencies where they should be leaning in, and then the work is going to be in the next month actually to figure out how this gets implemented,” IAPS’s Marron says.
“It is what the administration is presenting as a to-do list, and it does not have the force of an executive order,” Heather West, senior fellow at the Center for Cybersecurity Policy and Law, tells CSO. Despite a lack of specific directions, West believes federal agencies will get the message. “I do expect every agency will need to prioritize, but given that they did bounce all of the elements of this plan off of the relevant agencies, the agencies hopefully believe that they can pull it off,” she says.
Congress could make things more definitive
The action plan was released just as the US House and Senate are moving to reconcile their versions of the FY2026 National Defense Authorization Act (NDAA). Both bills contain extensive provisions on what the military expects of AI technologies, with heavy emphasis on cybersecurity.
“We’re enthused to see that Congress is taking action and setting the high bar when it comes to AI and cybersecurity,” Manifest Cyber’s Bardenstein says.
The Center for Cybersecurity Policy and Law’s West agrees. “It would not surprise me at all if pieces of the action plan get picked up by Congress and expanded on,” she says. “The plan is much more abstract, and Congress will necessarily have to be more specific if they do pick it up.”
Budget cuts could put a crimp in the plan
The AI plan has also been released amid budget cuts proposed by the Trump administration, which might significantly constrain how quickly agencies can begin implementing some of its concepts.
“The administration is saying that it cares a lot about security at the same time it’s cutting critical security programs that federal agencies and critical infrastructure depend upon,” Bardenstein says.
On the other hand, ConnectSecure’s Bellini thinks improved efficiency and the automation that AI delivers will make up for the reduced funds. “Because budget cuts have been incorporated along with this request, you’re thinking, how does that make sense?” he says. “A lot of the activities that we’ve been doing in those agencies do not need to be done anymore or can be done in a more efficient or completely different way.”
Regardless of how the AI plan gets implemented, one key element moving forward on an AI strategy is trust, Bardenstein says. “I was one of the cybersecurity leads for the COVID-19 vaccines for Operation Warp Speed at DoD. That too was a race. It was a race to see which country globally could come out with the first COVID-19 vaccine. At the same time, it was also understood by the administration that trust was a critical factor in winning that race.”
He adds, “When we think about winning technological races, crossing the finish line on adoption and technological development is part of the success criteria. But so is establishing trust and safety in the adoption of those technologies. And if you don’t have either of them, you haven’t won.”