When ChatGPT first came out, I asked a panel of CISOs what it meant for their cybersecurity programs. They recognized impending changes, but reflected on past disruptive technologies, like iPods, Wi-Fi access points, and SaaS applications entering the enterprise. The consensus was that security AI would be a similar disrupter, so they agreed that 80% (or more) of AI security requirements were already in place. Security fundamentals such as strong asset inventory, data security, identity governance, vulnerability management, and so on, would serve as an AI cybersecurity foundation.

Fast-forward to 2025, and my CISO friends were right — sort of. It’s true that a robust and comprehensive enterprise security program acts as an AI security anchor, but the other 20% is more challenging than first imagined. AI applications are rapidly expanding the attack surface while also extending the attack surface to third-party partners, as well as deep within the software supply chain. This means limited visibility and blind spots. AI is often rooted in open source and API connectivity, so there’s likely shadow AI activity everywhere. Finally, AI innovation is moving rapidly, making it hard for overburdened security teams to keep up.

Aside from the technical aspects of AI, it’s also worth noting that many AI projects end in failure. According to research from S&P Global Market Intelligence, 42% of businesses shut down most of their AI initiatives in 2025 (compared to 17% in 2024). Furthermore, nearly half (46%) of firms are halting AI proof-of-concepts (PoCs) before they even reach production.

Why do so many AI projects fail? Industry research points to cost, poor data quality, lack of governance, talent gaps, and scaling issues, among others.

With projects failing and a potpourri of security challenges, organizations have a long and growing to-do list when it comes to ensuring a robust AI strategy for innovation and security. When I meet my CISO amigos these days, they often stress the following five priorities:

1. Start everything with a strong governance model

To be clear, I’m not talking about technology or security alone. In fact, the AI governance model must begin with alignment between business and technology teams on how and where AI can be used to support the organizational mission.

To accomplish this, CISOs should work with CIO counterparts to educate business leaders, as well as business functions such as legal teams, finance, etc., to establish an AI framework that supports business needs and technical capabilities. Frameworks should follow a lifecycle from conception to production, and include ethical considerations, acceptable use policies, transparency, regulatory compliance, and (most importantly) success metrics.

In this effort, CISOs should review existing frameworks such as the NIST AI Risk Management Framework, ISO/IEC 42001:2023, UNESCO recommendations on the ethics of artificial intelligence, and the RISE (research, implement, sustain, evaluate) and CARE (create, adopt, run, evolve) frameworks from RockCyber. Enterprises may need to create a “best of” framework that fits their specific needs.

2. Develop a comprehensive and continuous view of AI risks

Getting a handle on organizational AI risks starts with the basics, such as an AI asset inventory, software bills of material, vulnerability and exposure management best practices, and an AI risk register. Beyond basic hygiene, CISOs and security professionals must understand the fine points of AI-specific threats such as model poisoning, data inference, prompt injection, etc. Threat analysts will need to keep up with emerging tactics, techniques, and procedures (TTPs) used for AI attacks. MITRE ATLAS is a good resource here.

As AI applications extend to third parties, CISOs will need tailored audits of third-party data, AI security controls, supply chain security, and so on. Security leaders must also pay attention to emerging and often changing AI regulations. The EU AI Act is the most comprehensive to date, emphasizing safety, transparency, non-discrimination, and environmental friendliness. Others, such as the Colorado Artificial Intelligence Act (CAIA), may change rapidly as consumer reaction, enterprise experience, and legal case law evolves. CISOs should anticipate other state, federal, regional, and industry regulations.

3. Pay attention to an evolving definition of data integrity

You’d think this would be obvious, as confidentiality, integrity, and availability make up the cybersecurity CIA triad. But in the infosec world, data integrity has focused on issues such as unauthorized data modifications and data consistency. Those protections are still needed, but CISOs should expand their purview to include the data integrity and veracity of the AI models themselves.

To illustrate this point, here are some infamous examples of data model issues. Amazon created an AI recruiting tool to help it better sort through resumes and choose the most qualified candidates. Unfortunately, the model was mostly trained with male-oriented data, so it discriminated against women applicants. Similarly, when the UK created a passport photo checking application, its model was trained using people with white skin, so it discriminated against darker skinned individuals.

AI model veracity isn’t something you’ll cover as part of a CISSP certification, but CISOs must be on top of this as part of their AI governance responsibilities.

4. Strive for AI literacy at all levels

Every employee, partner, and customer will be working with AI at some level, so AI literacy is a high priority. CISOs should start in their own department with AI fundamentals training for the entire security team.

Established secure software development lifecycles should be amended to cover things such as AI threat modeling, data handling, API security, etc. Developers should also receive training on AI development best practices, including the OWASP Top 10 for LLMs, Google’s Secure AI Framework (SAIF), and Cloud Security Alliance (CSA) Guidance.

End user training should include acceptable use, data handling, misinformation, and deepfake training. Human risk management (HRM) solutions from vendors such as Mimecast may be necessary to keep up with AI threats and customize training to different individuals and roles.

5. Remain cautiously optimistic about AI technology for cybersecurity

I’d categorize today’s AI security technology as more “driver assist,” like cruise control, than autonomous driving. Nevertheless, things are advancing quickly.

CISOs should ask their staff to identify discrete tasks, such as alert triage, threat hunting, risk scoring, and creating reports, where they could use some help, and then start to research emerging security innovations in these areas.

Simultaneously, security leaders should schedule roadmap meetings with leading security technology partners. Come to these meetings prepared to discuss specific needs rather than sit through pie-in-the-sky PowerPoint presentations. CISOs should also ask vendors directly about how AI will be used for existing technology tuning and optimization. There’s a lot of innovation going on, so I believe it’s worth casting a wide net across existing partners, competitors, and startups.

A word of caution however, many AI “products” are really product features, and AI applications are resource intensive and expensive to develop and operate. Some startups will be acquired but many may burn out quickly. Caveat emptor!

Opportunities ahead

I’ll end this article with a prediction. About 70% of CISOs report to CIOs today. I believe that as AI proliferates, CISOs reporting structures will change rapidly, with more reporting directly to the CEO. Those that take a leadership role in AI business and technology governance will likely be the first ones promoted.

By

Leave a Reply

Your email address will not be published. Required fields are marked *