
Today’s real-world impact
To cut through the noise so we can understand where we should actually be focusing our AI efforts, we need better data – specifically in two buckets: AI in the threat landscape, and AI for defense.
With so many different potential adversarial use cases related to AI, we need to prioritize the most prominent AI-driven attack vectors so we can properly manage the risks they present.
At the same time, CISOs need AI to deliver for defense. What is AI’s real value proposition? How does it meaningfully help deliver savings and improve security outcomes over the next 6 to 12 months?
Today, I’m going to share data-driven analyses that can eliminate the guesswork and help you prioritize the practical applications of AI that we’re seeing have a tangible impact.
How attackers are using AI
As part of our work countering threats to Google and our users, Google Threat Intelligence Group analysts track known threat actors, and we investigate how these threat actors are currently attempting to use generative AI, specifically Gemini. We’ve identified Advanced Persistent Threat (APT) groups from more than 20 countries that have accessed our public Gemini AI services.
Threat actors have used Gemini to support several phases of the attack lifecycle, including researching potential infrastructure and free hosting providers, performing reconnaissance on target organizations, researching vulnerabilities, payload development, and seeking assistance with malicious scripting and evasion techniques.
Crucially, we see that these are existing attack phases being made more efficient, not fundamentally new AI-driven attacks. We’ve observed threat actors experimenting with AI and finding productivity gains, but not yet developing novel capabilities.
Much of the current discourse can feel overly alarmist. Our analysis shows that while AI is a useful tool for common tasks, we haven’t seen indications of adversaries developing fundamentally new attack vectors using these models.
Attackers are using Gemini the way many of us are using AI: It’s a productivity tool to help them brainstorm and refine their work. Instead of inventing brand new attack methods using AI, they are enhancing traditional tactics. We did not observe unique AI-enabled attacks or novel prompt attacks.
The good news is that Gemini’s safety measures continue to restrict adversarial operational capabilities. While Gemini provided assistance with common, neutral tasks like content creation, summarization, and simple coding, it generated safety responses when prompted with more elaborate or explicitly malicious requests. We even observed unsuccessful attempts by threat actors to use Gemini to research techniques for abusing Google products such as Gmail, stealing data, and bypassing account verification.
How defenders are using AI
Thankfully, the same AI capabilities that attackers are using for productivity gains can have a different impact when defenders seize them: They have the power to make defenders even more resilient. There are use cases we recommend CISOs lean into right now to harness the potential of AI.
The growing volume of cyber threats has increased workloads for defenders and created a need for improved automation and innovative approaches. AI has enabled increased efficiency, supporting malware analysis, vulnerability research and analyst workflows.
- The true test of any malware analysis tool lies in its ability to identify never-before-seen techniques that are not detected by traditional methods. Gemini can understand how code behaves in a deep way to spot new threats, even threats never seen before, and can make this kind of advanced analysis more widely accessible.
- Our current results using large language models (LLMs) to create new fuzzing harnesses are showing real promise. We’ve achieved coverage increases of up to 7,000% across 272 C and C++ projects in OSS-Fuzz.
- Google Project Zero and Google DeepMind collaborated on a project called Big Sleep, which has already uncovered its first real-world vulnerability using an LLM.
- At Google, we’re using LLMs to speed up our security and privacy incident workflows. Gemini helps us write incident summaries 51% faster while also measurably improving their quality in blind evaluations by human reviewers.
- We’re also using AI to reduce toil in our own analyst workflows. GTIG uses an internal AI tool that reviews thousands of event logs collected from an investigation and summarizes them in minutes as a bite-sized overview that can be easily understood across the intelligence team – a process that previously took hours of effort.
- Another internal AI tool also helps us provide crucial information to customers on the hacktivist threats they face, and reduce toil, in a way that would not be feasible without AI. Our analysts will onboard a hacktivist group’s main social channel (such as Telegram) into the AI tool, and when we have collected enough data from that channel, it creates a comprehensive report on the group’s behavior – including TTPs, preferred targets, and attacks that they’ve claimed credit for. That report is then reviewed, validated, and edited by a GTIG analyst.
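To make the fuzzing point above concrete, here’s a minimal sketch of the kind of libFuzzer-style harness that LLM tooling generates for OSS-Fuzz projects. The `parse_record` function is a hypothetical stand-in for a project’s real parsing API (not from the post); the only contract a harness must honor is that `LLVMFuzzerTestOneInput` exercises the target on arbitrary bytes without crashing:

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Hypothetical parser under test -- a stand-in for a real project API.
 * Returns the payload length on success, -1 on malformed input. */
static int parse_record(const uint8_t *buf, size_t len) {
    if (len < 4) return -1;
    if (memcmp(buf, "REC:", 4) != 0) return -1;
    return (int)(len - 4);
}

/* libFuzzer entry point: OSS-Fuzz links this against the fuzzing engine,
 * which repeatedly calls it with mutated inputs to maximize coverage. */
int LLVMFuzzerTestOneInput(const uint8_t *data, size_t size) {
    parse_record(data, size);  /* must tolerate any input without crashing */
    return 0;  /* non-crashing return is all the engine requires */
}
```

In OSS-Fuzz this file would be compiled with `clang -fsanitize=fuzzer,address` and run by the engine; the coverage gains cited above come from generating many such harnesses to reach code paths that existing harnesses never touched.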
We’ve only scratched the surface today of how AI is actively shaping the cybersecurity landscape right now. If you’re reading this from the RSA Conference, please come visit the Google Cloud Security Hub and speak to our experts about the tangible value we’re already gaining from integrated and agentic AI, and how to make Google part of your security team to benefit as well.
You can check out all our RSA Conference announcements here, and of course visit us anytime at our CISO Insights Hub.
Source Credit: https://cloud.google.com/blog/products/identity-security/cloud-ciso-perspectives-data-driven-insights-ai-cybersecurity/