Taylor Lehmann, director, health care and life sciences
A year from now, we’re going to have an awesome security opportunity to secure a new persona in our organizations: Knowledge workers who produce truly useful, mission-critical applications and software using ideas and words — but not necessarily well-written, vetted, and tested code.
We’re going to need better and more fine-grained paths to help these new “idea-native developers” who use powerful AI tools and agents to build, test, submit, manage and blast secure code into secure production as safely and as fast as they can. In 2026 and 2027, we’re going to see how big this opportunity is. We should prepare to align our organizations, operations, and technology (OOT) to take advantage of it.
A corollary to this comes from our DORA reports: Just as AI has amplified productivity and begun optimizing work, it amplifies organizational dysfunctions — especially those that lead to inefficiently and ineffectively secured data.
Marina Kaganovich, executive trust lead
The heightened capability of agentic AI to take actions and execute tasks autonomously elevates the importance of cybersecurity basics. Organizations will need to create discrete boundary definitions for the authorization, authentication, and monitoring of each agent.
Beyond technical controls, organizational defense will depend on fostering an AI-literate workforce through training and awareness, as staff shift from performing tasks to architecting and overseeing agents. To be successful, organizations will require a fundamental shift in risk-informed culture.
Bill Reid, security advisor
Aggressive adoption of agentic AI will drive a renewed interest in threat modeling practices. Security teams will be asked to deeply understand what teams are trying to build, and will need to think about the data flows, the trust boundaries, and the guardrails needed.
Agentic AI will also demand that the supply chain be considered within that threat model, beyond the software bill of materials (SBOM), to look at how those services will control autonomous actions. It will also force a renewed look at identity and entitlements, as agents are asked to act on behalf of or as an extension of employees in the enterprise.
What may have been acceptable wide scopes covered by detective controls may no longer be sufficient, given the speed of action that comes with automation and the chaining of models together in goal seeking behavior.
Vesselin Tzvetkov, senior cybersecurity advisor
As Francis noted, agentic security operations are set to become the standard for modern SOCs, dramatically enhancing the speed and capabilities of security organizations. The agentic SOC in 2026 will feature multiple small, dedicated agents for tasks like summarization, alert grouping, similarity detection, and predictive remediation.
This shift will transform modern SOC roles and processes, moving away from tiered models in favor of CI/CD-like automation. AI capabilities and relevant know-how are essential for security personnel.
As AI drives new AI threat hunting capabilities to gain insight from data lakes in previously underexplored areas, such as OT protocols for manufacturing and industry-specific protocols like SS7 for telecommunications, the overall SOC coverage and overall industry security will improve.
Vinod D’Souza, director, manufacturing and industry
In 2026, agentic AI will help the manufacturing and industrial sector cross the critical threshold from static automation to true autonomy. Machines will self-correct and self-optimize with a speed and precision that exceeds human capacity.
The engine powering this transformation is the strategic integration of cloud-native SCADA and AI-native architectures. Security leaders should redefine their mandate from protecting a perimeter to enabling a trusted ecosystem anchored in cyber-physical identity.
Every sensor, service, autonomous agent, and digital twin should be treated as a verified entity. By rooting security strategies in data-centered Zero Trust, organizations stop treating security as a gatekeeper and transform it into the architectural foundation. More than just securing infrastructure, the goal is to secure the decision-making integrity of autonomous systems.
AI threats
We anticipate threat actors will move decisively from using AI as an exception to using it as the norm. They will use AI to enhance the speed, scope, and effectiveness of their operations, streamlining and scaling attacks.
A critical and growing threat is prompt injection, an attack that manipulates AI to bypass its security protocols and follow an attacker’s hidden command. Expect a significant rise in targeted attacks on enterprise AI systems.
Threat actors will accelerate the use of highly manipulative AI-enabled social engineering. This includes vishing (voice phishing) with AI-driven voice cloning to create hyperrealistic impersonations of executives or IT staff, making attacks harder to detect and defend against.
Source Credit: https://cloud.google.com/blog/products/identity-security/cloud-ciso-perspectives-our-2026-cybersecurity-forecast-report/
