by Sidharth Yadav / April 1, 2025
While "AI in cybersecurity" might conjure images straight from a tech-thriller — deepfakes perfectly mimicking your CEO or sophisticated social engineering attacks — it might actually be the dawn of something far more interesting: AI-versus-AI warfare.
In 2020, Palo Alto Networks detected and blocked the infamous SolarWinds attack targeting their systems using an AI-backed solution, signaling a new paradigm in cybersecurity.
AI innovation is empowering companies to mount stronger defenses against AI-led cyber threats. This is especially crucial for sectors most vulnerable to attacks — healthcare institutions safeguarding patient data, financial firms handling assets, and retailers enabling millions of daily transactions.
But has humanity’s most promising recent innovation fueled a virtual arms race in the cybersecurity space between attackers and defenders? Both sides are battling to stay ahead of the AI innovation curve and best the other.
Until recently, humans fought humans in the cyber world with some software assistance. But this dynamic has now irrevocably changed. Attackers are beginning to rely on AI to launch mass attacks and breach vulnerabilities. And the rapid evolution of cyber threats shows that traditional defenses are falling behind.
“Humans are not going to stand a chance fighting AI,” Greylock partner Asheem Chandna declared in a panel discussion with The Wall Street Journal. “Where this game has to go next is that AI has to fight against AI.”
Let’s shed some light on why traditional defenses are struggling to keep up.
Underscoring the growing threat of AI-backed attacks, a Canadian survey found that 93% of CEOs there worry that the emergence of generative AI will make them even more vulnerable to breaches.
The threat to cybersecurity, fueled by AI, is unlike any before. Cybercriminals can now use large language models (LLMs) in a range of attack scenarios, from spear phishing to zero-day vulnerability exploits.
The most concerning development, however, is the emergence of autonomous attack frameworks. The NotPetya malware that crippled corporations in 2017 required human orchestration; modern variants can independently execute multi-stage kill chains, exploit vulnerabilities, and establish persistence, all without human prompts.
Another emerging threat is polymorphic malware with generative capabilities. This malware can rewrite its own code to evade security measures, rendering security stacks that lack adaptive intelligence toothless.
According to Palo Alto Networks, polymorphic malware mutates its encrypted form, obfuscating its true functionality through dead-code insertion or instruction substitution while retaining its original malicious payload.
Hyper-personalized social engineering has shattered our assumptions about recognizable threats. AI is helping attackers make phishing more contextually aware. For instance, attackers are now scraping data to impersonate colleagues more accurately, including mimicking writing styles or sharing common references.
“Cybersecurity has always had AI for many years,” says CIO and G2 Icon Justin Penchina. “The real shift is in the availability of AI tools for threat actors. Even simple changes like using ChatGPT to write phishing emails with perfect spelling and grammar have changed the game,” he explains.
So yes, AI is making cyber threats more dangerous than ever. But it’s also becoming our best line of defense.
Human interventions with conventional static defenses are not enough to battle these novel threats. It’s like bringing a knife to a gunfight.
That’s why vendors eagerly integrate AI capabilities into their security solutions in a bid to turn static defenses into adaptive systems that learn, predict, and respond autonomously.
The market has responded with extraordinary force, and major players are taking notice: Google recently acquired cloud security provider Wiz for $32 billion.
Even investor confidence in the AI cybersecurity market is unmistakable. According to Markets and Markets, it is poised to grow at a CAGR of 21.9% from 2023 to 2028. AI-backed data security startup Cyera recently closed a $300 million Series D round. Crunchbase called it the biggest raise by a startup playing at the intersection of two of VC’s “favorite industries” — AI and cybersecurity.
Riding the positive investor climate, the number of AI-powered cybersecurity products has also grown steadily. Nearly 8,000 new products were added to G2's Security category in the last 12 months, along with 2,000 new products in the Governance, Risk, and Compliance category.
The surge in investments is driving innovation across multiple fronts. Thirty-five percent of companies use AI for security hygiene and posture management analysis and prioritization.
Here are the other emerging applications of AI in cybersecurity to combat AI-created threats:
To offset the threat posed by AI, companies such as Darktrace and CrowdStrike have moved beyond traditional signature-based detection in favor of behavioral analysis. CrowdStrike uses indicators of attack to identify signs of malicious behavior.
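The core idea behind behavioral analysis is simple: instead of matching known malicious signatures, the system learns a baseline of normal activity and flags deviations from it. The sketch below illustrates that principle with a basic z-score check; the data and threshold are hypothetical, and production systems use far richer models than this.

```python
from statistics import mean, stdev

def is_anomalous(baseline, observation, threshold=3.0):
    """Return True if `observation` deviates from the behavioral
    baseline by more than `threshold` standard deviations (z-score)."""
    mu, sigma = mean(baseline), stdev(baseline)
    return abs(observation - mu) / sigma > threshold

# Hypothetical daily file-access counts establishing a user's baseline.
history = [12, 9, 14, 11, 10, 13, 12, 10]
print(is_anomalous(history, 11))   # typical day -> False
print(is_anomalous(history, 240))  # sudden spike -> True
```

The same logic generalizes to any measurable behavior, such as login times, data-transfer volumes, or process launches, which is why signature-free detection can catch attacks no one has seen before.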
Using AI-backed security software, IT Manager Jeremy Hines, a G2 Icon, reduced false positives by nearly 60%. Investigating security incidents now takes him less than five minutes a week, down from almost an hour a day.
Going a step further, companies like SentinelOne and Sophos now use machine learning (ML) to simulate attacks and harden defenses. By training their systems on synthetic attacks, they sharpen threat detection and mitigation. This fundamentally shifts security from a defensive posture to an anticipatory one.
Traditional security tools operate in a binary world. They either flag an event as suspicious or permit it based on predefined rules. However, modern AI tools boast contextual awareness and draw relations among systems, data, and users.
“Solutions like Vanta are helping companies build smarter logical connections in their security apparatuses,” Iccha Sethi, VP of Engineering, Vanta, told G2.
“AI is increasingly critical for rapidly identifying risks, suggesting preventive actions, and helping companies maintain a robust security posture against evolving AI-generated threats,” she adds.
AI tools are also letting companies conduct a first-pass analysis of alerts, assess severity, and summarize alert data in context.
“This is all while recommending the ideal course of action for responding to risks, allowing teams to operate more effectively,” explains Matt Hillary, VP of Security and CISO, Drata.
The Zero Trust approach to security — never trust, always verify — is also gaining a sharper edge with AI implementation.
“AI helps operationalize Zero Trust by automating threat detection, enhancing access controls, and continuously analyzing user behavior.”
John Kindervag
Chief Evangelist at Illumio
John, also the creator of the Zero Trust model, believes AI’s real untapped opportunity in Zero Trust lies in proactively securing AI systems themselves.
“Organizations must recognize AI models as protect surfaces and apply strict segmentation to prevent poisoning attacks or unauthorized access,” he says. This recursive form of protection, where AI secures AI, will define the next security frontier.
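Kindervag's "never trust, always verify" principle can be reduced to a deny-by-default access decision. The sketch below is purely illustrative, with hypothetical identities and an AI model treated as a protect surface; real Zero Trust enforcement spans network segmentation, identity providers, and continuous posture checks.

```python
# Minimal Zero Trust access check (illustrative): deny by default and
# verify every request against an explicit allow-list, treating AI models
# themselves as protected resources ("protect surfaces").
ALLOWED = {
    ("svc-training", "model:fraud-detector"),
    ("analyst-1", "model:fraud-detector"),
}

def authorize(identity: str, resource: str, verified: bool) -> bool:
    # "Never trust, always verify": unverified sessions are denied outright,
    # and anything not explicitly allowed is rejected.
    return verified and (identity, resource) in ALLOWED

print(authorize("analyst-1", "model:fraud-detector", verified=True))   # True
print(authorize("attacker", "model:fraud-detector", verified=True))    # False
```

Note that the default path is denial: segmentation around the model means even a verified identity gets nothing unless a policy explicitly grants it.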
AI is especially helping organizations hone their adaptive security orchestration, automation, and response (SOAR) capabilities. Rather than relying on predefined rules and playbooks, AI can now analyze each threat in its specific form and tailor a response to it. This is a crucial capability, as AI-driven attacks can bypass and evade traditional response mechanisms.
Platforms like Swimlane boast AI capabilities that build on SOAR. With automation capabilities, the platform provides an agentic AI companion that helps manage cases and report on them.
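To make the triage step concrete, here is a minimal sketch of how an automated SOAR-style pipeline might score an alert from risk signals and route it to a response playbook. The signal names, weights, and playbook actions are all hypothetical, not taken from any particular product.

```python
# Hypothetical alert-triage step in a SOAR-style pipeline: score an alert
# from weighted risk signals, then route it to a response playbook.
RISK_WEIGHTS = {
    "external_ip": 30,
    "privileged_account": 40,
    "off_hours": 15,
    "known_bad_hash": 50,
}

def triage(alert: dict) -> tuple:
    score = sum(w for signal, w in RISK_WEIGHTS.items() if alert.get(signal))
    if score >= 70:
        action = "isolate_host"        # contain first, investigate after
    elif score >= 30:
        action = "escalate_to_analyst"
    else:
        action = "log_and_monitor"
    return score, action

alert = {"external_ip": True, "privileged_account": True, "off_hours": False}
print(triage(alert))  # (70, 'isolate_host')
```

In practice, the static weights above are exactly what AI replaces: a model can re-score signals in context rather than applying the same fixed playbook to every alert.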
While AI may seem to generate security risks faster than companies can keep up, vendors are steadily closing the gap through constant innovation and model training.
While there are promising avenues to use AI in cybersecurity, its effective implementation hinges on the quality and diversity of training data. The key to successful security lies in integrating multiple AI-powered capabilities with human expertise into a holistic security strategy.
Companies continue to build AI sentinels across use cases. Essentially, AI is helping meet AI threats on their own terms: with speed, at scale, and the capacity to process information beyond human abilities.
For instance, FinSec deployed a mix of AI and accelerated computing to detect a simulated ransomware attack in less than 12 seconds and recover 80% of data on infected servers. AI is facilitating such real-time responses at unprecedented speeds.
Yet this very reliance on AI technologies creates a novel vulnerability — overreliance on black-box algorithms whose logic remains opaque.
So far, AI in cybersecurity excels at the mundane: pattern recognition, anomaly detection, and the grunt work of sifting through heaps of data. There have been experiments with agentic solutions, which work without human prompts, but it’s too early to call them truly autonomous. Human intervention remains critical in certain areas.
Despite AI’s growing capability to tackle threats, it still struggles with contextual understanding, something humans naturally excel at. Humans are still needed to build guardrails and orchestrate security responses. One research paper emphasizing the human role names augmented intelligence (human-AI interfaces), alongside explainable AI and multiple-data-source analysis, as the three crucial AI techniques shaping cybersecurity development.
“Social engineering defense is inherently human-driven,” says John. He says that AI in its present form fails to fully grasp the nuances of human interactions, making phishing detection, insider threats, and business email compromise (BEC) attacks areas where security awareness and human oversight are crucial.
In addition, human insight is still needed to decipher the intent behind an attack. AI responses are rooted in neural networks whose outputs are statistical probabilities, not definitive judgments of malicious intent.
Insider attacks are also more common than we’d like to believe. Here, AI may lack the ability to differentiate between legitimate actions performed by an authorized user and malicious activity disguised as normal behavior.
For example, an authorized employee may access sensitive files at 2 a.m. AI might flag this as a breach when the employee is simply preparing for an early morning meeting. Context determines the answer here.
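The 2 a.m. scenario can be sketched as a contextual check: an off-hours access alone is a weak signal, and corroborating context, such as a meeting on the calendar, can downgrade the alert instead of auto-flagging a breach. The function and its inputs are hypothetical simplifications.

```python
from datetime import datetime

def assess_access(access_time: datetime, has_early_meeting: bool) -> str:
    """Classify a file access using time-of-day plus calendar context."""
    off_hours = access_time.hour < 6 or access_time.hour >= 22
    if not off_hours:
        return "normal"
    # Off-hours access with a plausible explanation is deprioritized;
    # without one, it goes to a human analyst for review.
    return "explained" if has_early_meeting else "review"

print(assess_access(datetime(2025, 4, 1, 2, 0), has_early_meeting=True))   # explained
print(assess_access(datetime(2025, 4, 1, 2, 0), has_early_meeting=False))  # review
```

The point is that the same raw event yields different verdicts depending on context, which is precisely the judgment call that still benefits from human oversight.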
AI tools are trained on historical data. This means attacks employing novel evasion techniques may bypass the protective layers of an AI solution.
Recognizing the importance of keeping humans in the loop, vendors of AI-powered solutions claim to have woven human-centricity into their products. Consider the case of Vanta.
According to Iccha, every AI-generated result is surfaced directly in Vanta’s easy-to-use web UI, so users can review, confirm, or tweak the information before finalizing.
There’s also a way to make the user company’s response system stronger with every iteration.
“Whenever users make adjustments, we capture these changes and feed them back into their company knowledge base, making future responses even smarter and safer.”
Iccha Sethi
VP of Engineering, Vanta
Increased human involvement in AI use also evolves the role of security professionals. Their responsibility is moving beyond mere pattern recognition to threat anticipation. This shifts the focus from incident response to proactive threat hunting, requiring professionals to upskill in AI model interpretations and advanced data analytics.
The notion that humans must merely oversee AI decisions misunderstands human-AI synergy. AI solutions aim to augment security intuition across organizations.
These interfaces pair human intuition with AI pattern recognition to defuse threats. AI systems present information the way professionals naturally process it, empowering humans, while human insight makes the systems more capable in turn. This human-machine synergy can make your security infrastructure more resilient to evolving AI-led threats.
While human action remains critical to security, Brandon Summers-Miller, Senior Research Analyst at G2, argues that human actors, with their current capabilities, remain the weakest link in a cybersecurity strategy.
The solution, he adds, is to use AI to train workforces through simulated attacks and phishing attempts, boosting employees' resilience against genuine AI-driven threats targeting their organizations.
Professionals must also become adept at setting guardrails for AI systems, especially those with SOAR capabilities. This means defining workflows, reviewing responses, and integrating AI tools with existing security infrastructure.
As AI battles AI on the digital frontlines, we’re rapidly approaching the emergence of quasi-autonomous security ecosystems. These self-evolving defense networks will operate beyond human capabilities and combat AI-created threats at scale.
By 2028, we predict agentic AI solutions will start making a significant dent in cybersecurity. Initial applications will be in risk assessments, compliance, and training, the three areas with the most common AI applications currently.
Further, the NIST Cybersecurity Framework and ISO standards, which are predicated on human-scale threat assessments and response, are slated to undergo radical transformations to account for both AI-driven solutions and threats.
We’ll witness the death of checklist-based security standards and the birth of metrics capturing an organization’s capability to adapt to emerging threats.
The message for organizations navigating this transformation is clear: build transparency into AI systems through explainable AI and empower professionals to orchestrate and govern AI solutions for ethical use.
Organizations must reimagine their relationship with risk. What separates good use of AI for cybersecurity from bad use is ethical clarity and strategic intent. These foundational principles can help security experts always stay ahead of the game.
AI can be a cybersecurity force multiplier for organizations, outsmarting criminals sooner rather than later. Learn more about the dual nature of AI for cybersecurity.
Sidharth Yadav is a senior editorial content specialist at G2, where he covers marketing technology and interviews industry leaders. Drawing from his experience as a journalist reporting on conflicts and the environment, he attempts to simplify complex topics and tell compelling stories. Outside work, he enjoys reading literature, particularly Russian fiction, and is passionate about fitness and long-distance running. He also likes to doodle and write about employee experience.