
AI vs AI: Is This Cybersecurity’s Moment of Reckoning?

April 1, 2025


While "AI in cybersecurity" might conjure images straight from a tech-thriller — deepfakes perfectly mimicking your CEO or sophisticated social engineering attacks — it might actually be the dawn of something far more interesting: AI-versus-AI warfare.

In 2020, Palo Alto Networks used an AI-backed solution to detect and block the infamous SolarWinds attack targeting its systems, signaling a new paradigm in cybersecurity.

AI innovation is empowering companies to mount stronger defenses against AI-led cyber threats. This is especially crucial for sectors most vulnerable to attacks — healthcare institutions safeguarding patient data, financial firms handling assets, and retailers enabling millions of daily transactions. 

But did humanity’s most promising recent innovation fuel a virtual arms race in the cybersecurity space between attackers and defenders? Both sides are battling it out to stay ahead of the AI innovation curve and best the other.

What led to this imminent clash?

Until recently, humans fought humans in the cyber world with some software assistance. But this dynamic has now irrevocably changed. Attackers are beginning to rely on AI to launch mass attacks and breach vulnerabilities. And the rapid evolution of cyber threats shows that traditional defenses are falling behind.

“Humans are not going to stand a chance fighting AI,” Greylock partner Asheem Chandna declared in a panel discussion with The Wall Street Journal. “Where this game has to go next is that AI has to fight against AI.”

Let’s shed some light on why traditional defenses are struggling to keep up.

The evolving nature of threats

Underscoring the growing threat of AI-backed attacks, a survey of Canadian CEOs found that 93% worry that the emergence of generative AI will make their organizations even more vulnerable to breaches.

The threat to cybersecurity, fueled by AI, is unlike any before. Cybercriminals can now use large language models (LLMs) in a range of attack scenarios, from spear phishing to zero-day vulnerability exploits.

However, the most concerning is the emergence of autonomous attack frameworks. The NotPetya malware that crippled corporations in 2017 required human orchestration. Modern variants, by contrast, can independently execute multi-stage kill chains, exploit vulnerabilities, and establish persistence, all without human prompts.

Another emerging threat is polymorphic malware with generative capabilities. This malware can rewrite its own code to evade security measures. Security stacks that lack the adaptive intelligence to mitigate such evolving threats are being rendered toothless.

According to Palo Alto Networks, polymorphic malware mutates its encrypted form and obfuscates its true functionality through dead-code insertion or instruction substitution, all while retaining its original malicious behavior.
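A toy sketch illustrates why this defeats signature-based scanning: inserting dead code changes a sample's hash without changing its behavior. The payload string and variable names here are purely hypothetical stand-ins for real malware samples.

```python
import hashlib

def signature(payload: str) -> str:
    """Hash-based signature of a sample, as a static scanner might compute it."""
    return hashlib.sha256(payload.encode()).hexdigest()

# Hypothetical payload string, standing in for a malicious binary.
MALICIOUS_PAYLOAD = "steal_credentials(); exfiltrate();"

# A polymorphic variant keeps the same behavior but inserts dead code,
# so its hash no longer matches any known signature.
variant = "x = 0  # dead code, never executed\n" + MALICIOUS_PAYLOAD

print(signature(MALICIOUS_PAYLOAD) == signature(variant))  # False
```

Every mutation produces a fresh signature, which is why defenders are shifting to the behavioral approaches described below.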

Hyper-personalized social engineering has shattered our assumptions about recognizable threats. AI is helping attackers make phishing more contextually aware. For instance, attackers are now scraping data to impersonate colleagues more accurately, including mimicking writing styles or sharing common references. 

“Cybersecurity has always had AI for many years,” says CIO and G2 Icon Justin Penchina. “The real shift is in the availability of AI tools for threat actors. Even simple changes like using ChatGPT to write phishing emails with perfect spelling and grammar have changed the game,” he explains. 

So yes, AI is making cyber threats more dangerous than ever. But it’s also becoming our best line of defense.

AI defenses boom

Human intervention backed by conventional static defenses is not enough to battle these novel threats. It's like bringing a knife to a gunfight.

That's why vendors are eagerly integrating AI capabilities into their security solutions, aiming to turn static defenses into adaptive systems that learn, predict, and respond autonomously.

The market has responded with extraordinary force. Companies are taking notice, including Google, which recently acquired AI cloud security provider Wiz for $32 billion.

Even investor confidence in the AI cybersecurity market is unmistakable. According to MarketsandMarkets, the market is poised to grow at a CAGR of 21.9% from 2023 to 2028. AI-backed data security startup Cyera recently closed a $300 million Series D round. Crunchbase called it the biggest raise by a startup playing at the intersection of two of VC's "favorite industries" — AI and cybersecurity.

Riding the positive investor climate, the number of AI-enabled cybersecurity products has also grown steadily. Nearly 8,000 new products were added to G2's Security category in the last 12 months, and 2,000 new products were added to the Governance, Risk, and Compliance category.

The surge in investments is driving innovation across multiple fronts. Thirty-five percent of companies use AI for security hygiene and posture management analysis and prioritization. 

Here are the other emerging applications of AI in cybersecurity to combat AI-created threats:

1. Behavioral analytics

To offset the threat posed by AI, companies such as Darktrace and CrowdStrike have moved beyond traditional signature-based detection in favor of behavioral analysis. CrowdStrike uses indicators of attack to identify signs of malicious behavior.
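The idea behind behavioral detection can be sketched in a few lines: instead of matching signatures, score how far an observation deviates from a user's learned baseline. The baseline numbers and the z-score threshold here are illustrative assumptions, not any vendor's actual model.

```python
from statistics import mean, stdev

def anomaly_score(baseline: list[float], observed: float) -> float:
    """Z-score of an observation against a behavioral baseline.

    Higher scores mean the behavior deviates more from the norm.
    """
    mu, sigma = mean(baseline), stdev(baseline)
    return abs(observed - mu) / sigma

# Hypothetical baseline: megabytes downloaded per day by one user.
baseline = [12.0, 9.5, 11.2, 10.8, 13.1, 9.9, 12.4]

print(anomaly_score(baseline, 11.0))   # close to baseline: low score
print(anomaly_score(baseline, 480.0))  # sudden bulk download: high score
```

Production systems model many signals at once, but the principle is the same: the detector flags unusual behavior even when the attack has no known signature.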

2. Threat preparedness through simulations

Taking it a step further, companies like SentinelOne and Sophos now use machine learning (ML) to simulate attacks and harden defenses. Through synthetic attacks, these companies train their defenses in threat detection and mitigation. This fundamentally shifts security from a defensive posture to anticipatory protection.

3. Better contextual awareness

Traditional security tools operate in a binary world. They either flag an event as suspicious or permit it based on predefined rules. However, modern AI tools boast contextual awareness and draw relations among systems, data, and users. 

“Solutions like Vanta are helping companies build smarter logical connections in their security apparatuses,” Iccha Sethi, VP of Engineering, Vanta, told G2. 

“AI is increasingly critical for rapidly identifying risks, suggesting preventive actions, and helping companies maintain a robust security posture against evolving AI-generated threats,” she adds.

4. Automating first-line defense

AI tools are also letting companies conduct a first-pass analysis of alerts, assess severity, and summarize alert data in context.

“This is all while recommending the ideal course of action for responding to risks, allowing teams to operate more effectively,” explains Matt Hillary, VP of Security and CISO, Drata.

5. Zero Trust reimagined

The Zero Trust approach to security — never trust, always verify — is also gaining a sharper edge with AI implementation.

“AI helps operationalize Zero Trust by automating threat detection, enhancing access controls, and continuously analyzing user behavior.”

John Kindervag
Chief Evangelist at Illumio

John, also the creator of the Zero Trust model, believes AI’s real untapped opportunity in Zero Trust lies in proactively securing AI systems themselves. 

“Organizations must recognize AI models as protect surfaces and apply strict segmentation to prevent poisoning attacks or unauthorized access,” he says. This recursive form of protection, where AI secures AI, will define the next security frontier.

6. Better SOAR

AI is especially helping organizations hone their security orchestration, automation, and response (SOAR) capabilities. Rather than relying on predefined rules and playbooks, AI can now analyze threats in their specific forms and tailor responses to them. This is a crucial capability, as AI-driven attacks can bypass and evade traditional response mechanisms.

Platforms like Swimlane boast AI capabilities that build on SOAR. With automation capabilities, the platform provides an agentic AI companion that helps manage cases and report on them. 
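For contrast, the predefined-playbook model that AI-driven SOAR moves beyond can be sketched as a static rule lookup. The alert fields, actions, and thresholds below are hypothetical, chosen only to show how rigid such fixed mappings are compared with a response tailored per threat.

```python
# Minimal sketch of rule-based alert triage: a fixed mapping from alert
# attributes to playbook actions. All field names and thresholds are
# illustrative assumptions, not any platform's real schema.

def triage(alert: dict) -> str:
    severity = alert.get("severity", 0)
    if alert.get("kind") == "ransomware" or severity >= 9:
        return "isolate_host"         # contain immediately
    if severity >= 5:
        return "escalate_to_analyst"  # needs human judgment
    return "log_and_monitor"          # low risk, keep watching

print(triage({"kind": "phishing", "severity": 6}))    # escalate_to_analyst
print(triage({"kind": "ransomware", "severity": 8}))  # isolate_host
```

A fixed decision tree like this can be studied and sidestepped by an adaptive attacker, which is exactly the gap AI-tailored responses aim to close.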

Though it may seem that AI is generating security risks faster than companies can keep up, vendors are steadily closing the gap with constant innovation and model training.

While there are promising avenues to use AI in cybersecurity, its effective implementation hinges on the quality and diversity of training data. The key to successful security lies in integrating multiple AI-powered capabilities with human expertise into a holistic security strategy. 

AI can't do it alone

Companies continue to build AI sentinels across use cases. Essentially, AI is helping meet AI threats on their own terms: with speed, at scale, and with a capacity to process information beyond human abilities.

For instance, FinSec deployed a mix of AI and accelerated computing to detect a simulated ransomware attack in less than 12 seconds and recover 80% of data on infected servers. AI is facilitating such real-time responses at unprecedented speeds. 

Yet this very reliance on AI technologies creates a novel vulnerability — overreliance on black-box algorithms whose logic remains opaque. 

So far, AI in cybersecurity excels at the mundane: pattern recognition, anomaly detection, and the grunt work of sifting through heaps of data. There have been experiments with agentic solutions, which work without human prompts, but it’s too early to call them truly autonomous. Human intervention remains critical in certain areas.

Humans capture context better

Despite AI's growing capabilities to tackle threats, it still struggles with contextual understanding, something humans naturally excel at. Humans are still needed to build guardrails and orchestrate security responses. One research paper emphasizing the human role names augmented intelligence (human-AI interfaces), alongside explainable AI and multiple data source analysis, as one of the three crucial AI techniques shaping cybersecurity development.

“Social engineering defense is inherently human-driven,” says John. He says that AI in its present form fails to fully grasp the nuances of human interactions, making phishing detection, insider threats, and business email compromise (BEC) attacks areas where security awareness and human oversight are crucial.

In addition, human insight is still needed to decipher the intent behind an attack. AI responses remain rooted in neural networks, producing results that are statistical probabilities rather than definitive judgments of malicious intent.

Insider attacks are also more common than we'd like to believe. Here, AI may lack the ability to differentiate between legitimate actions performed by an authorized user and malicious activities disguised as normal behavior.

For example, an authorized employee may access sensitive files at 2 a.m. AI could flag this as a breach, but the access could simply be preparation for an early morning meeting. Context determines the answer.
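The 2 a.m. example can be made concrete with a toy rule: a pure time-based check flags the access, while one extra contextual signal flips the verdict. The `has_meeting_soon` field is a hypothetical context signal, standing in for the calendar, HR, or ticketing data a human analyst would consult.

```python
# Toy sketch of why context matters: the same off-hours event is benign
# or suspicious depending on signals a time-only rule cannot see.

def flag_access(event: dict) -> bool:
    """Return True if a file access should be flagged for review."""
    off_hours = event["hour"] < 6 or event["hour"] > 22
    if not off_hours:
        return False
    # Context: an early meeting can legitimize off-hours access.
    return not event.get("has_meeting_soon", False)

print(flag_access({"hour": 2, "has_meeting_soon": True}))   # False: benign
print(flag_access({"hour": 2, "has_meeting_soon": False}))  # True: flag it
```

Real systems weigh far richer context, but the point stands: without that context, the rule cannot tell diligence from exfiltration.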


AI tools are trained on historical data. This means attacks employing novel evasion techniques may bypass the protective layers of an AI solution. 

Realizing the importance of keeping humans in the loop, vendors of AI-powered solutions claim to have woven human-centricity into their products. Let's consider the case of Vanta.

According to Iccha, every AI-generated result is surfaced directly in Vanta’s easy-to-use web UI, so users can review, confirm, or tweak the information before finalizing. 

There’s also a way to make the user company’s response system stronger with every iteration. 

“Whenever users make adjustments, we capture these changes and feed them back into their company knowledge base, making future responses even smarter and safer.”

Iccha Sethi
VP of Engineering, Vanta

Increased human involvement in AI use also evolves the role of security professionals. Their responsibility is moving beyond mere pattern recognition to threat anticipation. This shifts the focus from incident response to proactive threat hunting, requiring professionals to upskill in AI model interpretations and advanced data analytics.

Sharpen security intuition

The notion that humans must merely oversee AI decisions misunderstands human-AI synergy. AI solutions aim to augment security intuition across organizations.

These interfaces let human intuition and AI pattern recognition work together to defuse threats. AI systems present information the way professionals naturally process it, empowering humans; in turn, those systems become more capable with human insight. This human-machine synergy can make your security infrastructure more resilient to advancing AI-led threats.

Moving towards autonomous security 

As AI battles AI on the digital frontlines, we’re rapidly approaching the emergence of quasi-autonomous security ecosystems. These self-evolving defense networks will operate beyond human capabilities and combat AI-created threats at scale. 

By 2028, we predict agentic AI solutions will start making a significant dent in cybersecurity. Initial applications will be in risk assessments, compliance, and training, the three areas with the most common AI applications currently. 

Further, the NIST Cybersecurity Framework and ISO standards, which are predicated on human-scale threat assessments and response, are slated to undergo radical transformations to account for both AI-driven solutions and threats. 

We’ll witness the death of checklist-based security standards and the birth of metrics capturing an organization’s capability to adapt to emerging threats. 

The message for organizations navigating this transformation is clear: build transparency into AI systems through explainable AI and empower professionals to orchestrate and govern AI solutions for ethical use. 

Organizations must reimagine their relationship with risk. What separates good use of AI for cybersecurity from bad use is ethical clarity and strategic intent. These foundational principles can help security experts always stay ahead of the game.

AI can be a cybersecurity force multiplier for organizations, outsmarting criminals sooner rather than later. Learn more about the dual nature of AI for cybersecurity.

