1. Align strategies for compliance and resilience: Treating regulatory compliance as the sole, or even the most important, outcome of your cybersecurity program is a good strategy for helping the bad guys win. Compliance is essential and non-negotiable, but in most industries compliance-driven efforts focus on historical threats, not on emerging activity, not on threatening behavior we’re seeing in the wild, and not on the very real consequences that follow a successful cyberattack.
Organizations should align their compliance work with a larger operational resilience strategy: the effort that satisfies international laws and regulations should be the same effort that builds resilient systems. Systems designed to stay up, running, and secure in the face of known and unknown threats demonstrate compliance as a natural by-product.
Given the technical and regulatory progress in AI, we know operational resilience is going to require alignment and coordination of these efforts to keep up with the speed at which these systems are evolving — and how they’re being used in our own companies.
2. Securing the AI supply chain: Determining why a model is right (or wrong) will be one of the key challenges of the AI era, especially when an error occurs because of interference by a threat actor. CISOs should prioritize gaining and maintaining continuous visibility into their entire AI supply chain, and securing it end to end.
Nothing short of an end-to-end view of every AI system component and its origins — models, data sources, applications, and infrastructure services — will do. Google’s overall approach to supply-chain security is to support Supply-chain Levels for Software Artifacts (SLSA), supplemented by software bills of materials (SBOMs), and we carry that approach forward to AI.
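One lightweight way to make that visibility concrete is to inventory the AI components an SBOM declares and flag any that lack provenance metadata. The sketch below assumes a CycloneDX-style SBOM already parsed into a Python dict; the field names follow CycloneDX conventions, but treat the exact schema, and the component data shown, as illustrative assumptions rather than a definitive implementation.

```python
# Sketch: flag SBOM components that lack provenance metadata.
# Assumes a CycloneDX-style SBOM dict; field names are illustrative.

def components_missing_provenance(sbom: dict) -> list[str]:
    """Return names of components with no content hashes or no supplier."""
    flagged = []
    for comp in sbom.get("components", []):
        has_hashes = bool(comp.get("hashes"))
        has_supplier = bool(comp.get("supplier"))
        if not (has_hashes and has_supplier):
            flagged.append(comp.get("name", "<unnamed>"))
    return flagged

# Hypothetical SBOM covering a model and its training data.
sbom = {
    "components": [
        {"name": "sentiment-model", "type": "machine-learning-model",
         "hashes": [{"alg": "SHA-256", "content": "ab12..."}],
         "supplier": {"name": "internal-ml-team"}},
        {"name": "scraped-training-set", "type": "data"},  # no provenance
    ]
}
print(components_missing_provenance(sbom))  # only the unattested dataset
```

A check like this can run as a pipeline gate, failing the build when any model or dataset arrives without attested origins.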
Securing the AI supply chain is vital to the success of any AI initiative, as we discussed in the previous Cloud CISO Perspectives, and we’ll be talking more about it throughout the year as we continue to work on the Secure AI Framework (SAIF).
3. Master identity: Identity management has always been important, but it is absolutely critical to have a robust plan for identities in the agentic world we now live in. Managing human and non-human identities and their access privileges is essential to mitigating new categories of risks that appear when we use non-deterministic systems to perform actions in the real world.
While incidents are inevitable, the blast radius can be contained when we have crisp, well-defined ways to identify and log the behavior of agentic actors. The goal is to control their access and to retain a record of what they did in case we need to investigate later.
Identities are the central piece of digital evidence that ties everything together. Organizations need to know who’s using AI models, and be able to differentiate the identities involved — the model’s identity, the identity of the code driving the interaction, and the user’s identity — especially with AI agents.
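One way to make those distinctions operational is to attach all of the relevant identities to every agentic action in the audit trail. The sketch below shows an illustrative record shape under that assumption; the field names and identifier formats are hypothetical, not any particular product’s API.

```python
# Sketch: an audit record tying human, agent, and model identities to a
# single action. Field names and identifier formats are illustrative.
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone

@dataclass(frozen=True)
class AgentActionRecord:
    user_id: str    # the human on whose behalf the action runs
    agent_id: str   # the non-human (agentic) identity
    model_id: str   # the model that produced the decision
    action: str     # what was done
    resource: str   # what it was done to
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

def log_action(record: AgentActionRecord, sink: list) -> None:
    """Append the record so investigators can later reconstruct who
    (or what) did what, and under which model."""
    sink.append(asdict(record))

audit_log: list[dict] = []
log_action(AgentActionRecord(
    user_id="alice@example.com",
    agent_id="agent://billing-assistant/v3",
    model_id="model://example-llm/v1",
    action="issue_refund",
    resource="invoice/12345"), audit_log)
```

Because each record carries all three identities, access decisions and forensic queries can filter on any of them independently.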
Some industries, such as financial services, have so far done a good job of focusing on identity management, but others are lagging. For example, the healthcare industry is struggling to understand how far agentic AI can help in the treatment, diagnosis, and delivery of care. Some developers seem satisfied that the use of AI and agents will always be constrained because humans must remain in the loop for consequential medical decisions, but we’re seeing evidence that the market is heading in a very different direction.
Whether it’s machine identities, model identities, data identities, application identities, agent identities, or people identities, we all need to get really, really strong on identity to build tomorrow’s resilient systems today.
4. Defend (and fix, rebuild, and deploy) at machine speed: Weaponized AI systems can attack at lightning speed, so defense must accelerate to match. To survive modern threats, organizations should automate their ability to detect, respond, and apply preventive controls in seconds or milliseconds, not hours.
Strategically, we encourage Google Cloud customers to evaluate their architecture, systems, and applications, and to measure how quickly they can deploy fixes, how well they could operate while services are degraded by high-likelihood attack types, and how quickly they could rebuild and redeploy a system from scratch if they had to. Similarly, we advise customers to set baselines for how quickly they can generate and deploy new detections and mitigations from threat intelligence.
Shrinking attacker dwell times tell us that organizations should continuously drive down how long these defensive activities take: from hours to minutes, and from minutes to seconds. We recommend prioritizing any approach that helps deploy corrections, and detect and respond to issues, as fast as possible, including ephemeral architectures, automated deployment tools, vulnerability management tools with ever-greater scope, and pipelines that capture and cull system audit logs efficiently.
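Setting those baselines starts with measuring them. A minimal sketch, assuming your logging pipeline can supply per-incident timestamps for first malicious activity, detection, and containment (the sample data below is invented for illustration):

```python
# Sketch: compute mean time-to-detect (MTTD) and mean time-to-respond
# (MTTR) baselines from per-incident timestamps. The incident tuples
# are illustrative; real values would come from your own logs.
from datetime import datetime

incidents = [
    # (first_malicious_activity, detected_at, contained_at)
    ("2026-01-03T10:00:00", "2026-01-03T10:04:00", "2026-01-03T10:30:00"),
    ("2026-01-07T22:15:00", "2026-01-07T22:17:00", "2026-01-07T22:25:00"),
]

def mean_minutes(pairs: list[tuple[str, str]]) -> float:
    """Average gap between paired timestamps, in minutes."""
    gaps = [(datetime.fromisoformat(end) - datetime.fromisoformat(start))
            .total_seconds() / 60 for start, end in pairs]
    return sum(gaps) / len(gaps)

mttd = mean_minutes([(a, d) for a, d, _ in incidents])  # detection lag
mttr = mean_minutes([(d, c) for _, d, c in incidents])  # containment lag
print(f"MTTD: {mttd:.1f} min, MTTR: {mttr:.1f} min")
```

Tracked over time, these two numbers show whether defensive automation is actually moving the organization from hours toward seconds.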
5. Uplevel AI governance through context, advanced testing, and evaluation: The topic our customers ask about most often is how to effectively govern the use of AI systems, agents, and cloud. AI governance is crucial because it provides a holistic approach to shoring up AI defenses. Proper AI governance requires a combination of technical skill, an understanding of how AI works and is built, and the regulatory and business context needed to determine which risks matter — and then doing something about them.
In 2026, we see our customer conversations bringing more business context to governance, with questions like, “How should we govern agentic AI systems involved in the dosing of [prescription] for [diagnosis] when the patient is in the emergency room?”
Driving more business context into increasingly deep AI governance activities will unlock more sophisticated and valuable uses of AI systems and agents. Organizations are recognizing the need for more advanced testing and evaluation, and activities such as AI red teaming are becoming more common because they provide crucial feedback.
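One lightweight way to encode that business context is a policy gate that decides when an agent’s action requires human review. The sketch below is a hypothetical rule set under assumed field names and thresholds, not a prescribed governance policy; real tiers would come from your own regulatory and clinical context.

```python
# Sketch: a policy gate using business context to decide whether an
# agentic action may proceed autonomously. Domains, fields, and
# thresholds are illustrative assumptions.
HIGH_RISK_DOMAINS = {"medication_dosing", "funds_transfer"}

def requires_human_review(action: dict) -> bool:
    """Return True when business context marks the action consequential."""
    if action.get("domain") in HIGH_RISK_DOMAINS:
        return True
    if action.get("patient_setting") == "emergency_room":
        return True
    return action.get("monetary_value", 0) > 10_000

# A dosing decision is always escalated; routine scheduling is not.
assert requires_human_review({"domain": "medication_dosing"})
assert not requires_human_review({"domain": "meeting_scheduling",
                                  "monetary_value": 50})
```

Expressing the rules as code makes them testable and auditable, so governance reviews can exercise the gate the same way red teams exercise the model.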
How to get started
While organizations with years of experience building Secure by Design into their culture will be able to act on these priorities faster, the best place to start is wherever CISOs know they have a deficit that aligns with them.
For more guidance, please check out our CISO Insights Hub.
Source Credit: https://cloud.google.com/blog/products/identity-security/cloud-ciso-perspectives-5-top-ciso-priorities-in-2026/
