by Jasmine Kharbanda / September 20, 2024
The contest between cyber adversaries and defenders is becoming lopsided.
The advent of artificial intelligence (AI) was nothing less than revolutionary. It promised efficiency, accuracy, speed, and agility, making businesses keen on using the technology to build their competitive edge.
However, the same technology is now being used by cybercriminals to cause widespread disruption, threatening us all.
At the risk of stating the obvious, AI is changing everything.
Despite its proven ability to be helpful in many areas, in matters of cyber risk, AI is being exploited to generate malicious code, craft sophisticated social engineering attacks, produce synthetic media such as deepfakes, and even leverage leaked credentials from platforms like ChatGPT.
Compromised ChatGPT accounts were discovered on dark web marketplaces in 2023.
Source: Group-IB
“These credentials can not only be used to launch secondary attacks against individuals, but they can also expose private chats and communications on the OpenAI platform, which could be exploited for ransom and blackmail," said Group-IB’s CEO, Dmitry Volkov.
Alarmingly, most businesses are unaware of the creeping dangers they are now facing with cybercriminals armed with AI. Even those who recognize the severity often lack knowledge about available defense upgrades or options to protect themselves from widespread exploitation.
Ironically, however, the offending technology can also act as your ultimate defender. Many cybersecurity leaders and veterans are taking center stage to discuss where the industry lags in its use of AI and what upgraded capabilities are required to outpace adversaries.
While having a strong institutional knowledge of cybersecurity developed over the years as a technical or business professional is important, AI in cybersecurity presents an entirely new set of truths. It represents a clash and a collaboration, but if utilized correctly, it can be a powerful tool to combat constantly evolving cybersecurity threats.
AI has long been a curiosity, examined in boutique research labs on university campuses or in sandbox projects of major corporations’ R&D centers.
Expert systems, as AI was familiarly called in the late 20th century, handled basic levels of inference, rule-based reasoning, and entry-level domain knowledge. Scientists envisioned expert systems useful in cases such as first-generation credit scoring and music genre preferences.
Today, those relatively crude and limited-function precursors to what is now known as generative AI (GenAI) have become a powerful force reshaping knowledge, content, and decision-making in every industry.
In fact, research indicates billions of dollars are spent annually on AI-based systems in dozens of different industries. Five industries—banking and financial services, retail, professional services, discrete manufacturing, and process manufacturing—spend more than $10 billion annually on AI solutions.
Source: Statista
However, numerous other forms of AI have burst onto the scene with similar levels of impact and importance, each with its own unique influence on cybersecurity.
For instance, predictive AI, as the name implies, is well suited for predicting how, where, and when cyberattacks will threaten an organization. It is also good at helping users spot and analyze patterns, making it a great fit for organizations looking to predict behavior that may indicate threats or actual attacks.
Causal AI is also rapidly gaining adoption because it helps organizations understand and create models for cause-and-effect patterns—not only for possible attacks but for the most appropriate responses.
Explainable AI (XAI) is crucial for teams and organizations to comprehend the logic or rationale behind AI-generated decisions, such as alerts and recommendations. By providing transparency, XAI enables prompt, effective, and well-calculated decisions, minimizing potential biases that can arise in manual decision-making processes.
Businesses have placed high bets on AI to enhance their operations and reduce toil and the mounting resource pressure, but they have somehow overlooked the consequences of the technology.
83% of companies claim that AI is a top priority in their business plans. Yet, if asked about the safe use of AI—ensuring it doesn't introduce additional vulnerabilities, privacy threats, or regulatory challenges—teams have unresolved questions rather than a definitive answer.
In contrast, adversaries seem to have clear goals when using AI technology to achieve their nefarious objectives.
Group-IB’s Hi-Tech Crime Trends Report 2023-24 shows AI weaponization as one of the top challenges in the global cyberthreat landscape.
AI has accelerated cybercrime, effectively becoming an open-source technology that lets low-skilled actors launch automated attacks with little effort on their end.
Therefore, more attackers will undoubtedly move toward AI models for capabilities such as technical consultation, scam creation, intelligence gathering, and maintaining their anonymity. Cybercriminals are integrating AI into their workflows to scale their threats' impact, innovate their threat methodologies, and create new revenue streams.
This has been made much easier for them due to the wider availability of inexpensive (and free) AI tools. They also utilize AI to execute hacking toolkits and build malicious tools for exploits and digital espionage while brainstorming attack techniques, tactics, and procedures (TTPs).
Focusing specifically on GenAI, currently the center of everyone's attention, many threats have already been observed. Phishing remains a primary cyberthreat, with AI being used to craft convincing phishing emails.
Take ChatGPT, for example. The release of the GPT-4 model marked a turning point, gaining global popularity even as it was put to both beneficial and harmful uses.
Users have tried to circumvent ChatGPT's safety measures through techniques such as reframing requests as hypotheticals with real details, breaking up sensitive terms, and text continuation. In one practical case, GPT-4 was observed to be capable of exploiting 87% of a dataset of 15 one-day vulnerabilities, based solely on their CVE descriptions.
Source: Group-IB
The obvious question: while businesses contend with unforeseen threats from this accelerating technology, often with limited cybersecurity resources, how can they protect themselves robustly?
Opinions are divided about whether AI favors cybercriminals or security experts. However, industry trends and many experts suggest that AI can be a cybersecurity force multiplier for organizations, capable of outsmarting criminals sooner rather than later.
Even though attackers often gain the initial advantage in using new tools such as GenAI, defenders can more than make up the difference if they understand how to leverage the technology in key areas such as threat intelligence, analytics, and anomaly detection.
Let’s take a look at the areas where you can leverage AI against attacks.
In high-risk industries, especially financial services and retail, AI and ML significantly enhance the security of digital and mobile applications by analyzing user behavior and biometrics. These technologies use ML algorithms to monitor real-time data for suspicious activities that may be missed by security professionals.
For example, they can detect unusual keyboard and cursor patterns that indicate a potential fraud attempt.
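As a minimal sketch of the idea, a behavioral-biometric check can be modeled as a statistical profile of a user's typing rhythm. The dwell-time feature and the z-score threshold below are illustrative assumptions for this sketch, not a description of any vendor's product:

```python
from statistics import mean, stdev

def build_profile(dwell_times_ms):
    """Build a per-user typing profile from historical key-hold (dwell) times."""
    return {"mean": mean(dwell_times_ms), "stdev": stdev(dwell_times_ms)}

def is_anomalous(profile, session_dwell_times_ms, z_threshold=3.0):
    """Flag a session whose average dwell time deviates sharply from the profile."""
    session_mean = mean(session_dwell_times_ms)
    z = abs(session_mean - profile["mean"]) / (profile["stdev"] or 1.0)
    return z > z_threshold

# Historical typing rhythm of a legitimate user (milliseconds per keystroke)
history = [95, 102, 98, 110, 105, 99, 101, 97, 104, 100]
profile = build_profile(history)

print(is_anomalous(profile, [98, 103, 100, 99]))  # consistent rhythm -> False
print(is_anomalous(profile, [40, 35, 38, 42]))    # uniform, bot-like speed -> True
```

Production systems combine many such features (flight time between keys, cursor velocity, touch pressure) and learn thresholds per user rather than hard-coding them.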
With AI-powered threat intelligence, identifying, analyzing, and extrapolating threats relevant to your business and industry becomes a systematic, repeatable activity.
AI tools can analyze historical logs, records, and data to deduce which attacker may strike which region using what tools next. They can also sift through massive data sets from diverse sources, including social media, forums, and the dark web, to identify threat patterns. These capabilities are essential for businesses preparing for potential threats and building preemptive defenses.
Handling massive traffic on your digital channels is difficult: tracking network activity, assessing traffic quality (including bad bot activity), and identifying deviations from normal behavior. With AI, businesses can quickly sift through massive network traffic to spot anomalies, optimizing monitoring and detection resources.
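The simplest form of this kind of deviation-from-baseline detection can be sketched with a rolling statistical baseline; the window size and threshold here are illustrative choices, and real systems typically use learned models over many traffic features rather than a single request count:

```python
from collections import deque
from statistics import mean, stdev

def detect_spikes(requests_per_minute, window=5, z_threshold=3.0):
    """Return the minutes whose request volume deviates sharply from a rolling baseline."""
    baseline = deque(maxlen=window)
    anomalies = []
    for minute, count in enumerate(requests_per_minute):
        if len(baseline) == window:
            mu, sigma = mean(baseline), stdev(baseline)
            # Flag the minute if it sits far outside the recent normal range
            if sigma and abs(count - mu) / sigma > z_threshold:
                anomalies.append(minute)
        baseline.append(count)
    return anomalies

traffic = [120, 118, 125, 122, 119, 121, 950, 123]  # sudden spike at minute 6
print(detect_spikes(traffic))  # [6]
```

The point of applying AI here is scale: the same deviation logic, generalized across thousands of signals, is what lets teams monitor traffic volumes no human analyst could review.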
Automation is key to maximizing AI's benefits in cybersecurity.
While technologies like endpoint detection and response (EDR), managed detection and response (MDR), and extended detection and response (XDR) integrate AI to accelerate actions, full automation, driven by advanced AI tools, takes it a step further. This speeds up detection and response times, reduces the likelihood of false positives, and streamlines alert management.
Cybercriminals' illicit networks and operations span geographies and network nodes, making it difficult to grasp the full extent of their crimes. However, with AI-infused graph interpretation, one can visualize these hidden and disparate connections and sources and turn them into actionable, real-time insights.
With AI, teams can detect suspicious indicators and activities within their infrastructure, recognize patterns and correlate events, and automate insights and responses, enhancing cybersecurity operations and timely responses to potential risks.
AI can identify all of an attacker’s accounts far more reliably and quickly than manual methods. AI tools can crawl the dark web, analyzing forum posts, marketplaces, and other sources to gather intelligence on potential threats, stolen data, or emerging attack techniques. This proactive approach allows organizations to better prepare for and mitigate potential attacks.
AI-powered text and image analysis can detect phishing content, reducing the risk of successful phishing attacks. Advanced AI algorithms can identify subtle indicators of phishing, such as language inconsistencies, abnormal URLs, and visual clues, that might slip past users. AI can also learn from existing phishing techniques to improve its detection abilities.
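As a toy illustration of indicator-based URL screening, the rules, keyword list, and TLD set below are illustrative assumptions, not a real detection ruleset; production detectors learn such signals from labeled data rather than hard-coding them:

```python
import re
from urllib.parse import urlparse

SUSPICIOUS_TLDS = {"zip", "xyz", "top"}           # illustrative, not exhaustive
KEYWORDS = {"login", "verify", "secure", "update"}  # common lure words

def phishing_score(url):
    """Score a URL by counting simple phishing indicators (0 = no indicators)."""
    parsed = urlparse(url)
    host = parsed.hostname or ""
    score = 0
    if re.fullmatch(r"\d{1,3}(\.\d{1,3}){3}", host):   # raw IP instead of a domain
        score += 2
    if host.count("-") >= 2 or host.count(".") >= 4:   # lookalike subdomain chains
        score += 1
    if host.rsplit(".", 1)[-1] in SUSPICIOUS_TLDS:
        score += 1
    if any(k in url.lower() for k in KEYWORDS):
        score += 1
    return score

print(phishing_score("https://example.com/docs"))                             # 0
print(phishing_score("http://paypal-secure-login.account.example.xyz/verify"))  # 3
```

ML-based detectors extend this idea by weighting hundreds of such features, plus language and visual cues, instead of a handful of fixed rules.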
AI models can be trained to identify patterns of malicious behavior or anomalous activities in network traffic, aiding in the detection of malware, including polymorphic malware that constantly changes code.
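One classic feature such models rely on is byte entropy: packed or encrypted payloads, common in polymorphic malware, look statistically close to random data. A minimal sketch of that single feature (the 5.0-bit cutoff in the comments is an illustrative figure, not an established threshold):

```python
import math
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Bits of entropy per byte; packed or encrypted payloads approach 8.0."""
    if not data:
        return 0.0
    counts = Counter(data)
    total = len(data)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

plain = b"GET /index.html HTTP/1.1\r\nHost: example.com\r\n\r\n" * 20
random_like = bytes(range(256)) * 4  # stand-in for an encrypted payload

print(shannon_entropy(plain) < 5.0)         # structured text: low entropy -> True
print(shannon_entropy(random_like) == 8.0)  # uniform bytes: maximum entropy -> True
```

Entropy alone is a weak signal (compressed archives are also high-entropy), which is why trained models combine it with behavioral and structural features.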
AI is significant in identifying the kill chain—the sequential actions taken by cybercriminals to infiltrate a network and launch attacks. Its other use cases are building defenses and supporting intrusive cybersecurity engagements such as red teaming, where cyberattack simulations are conducted in a controlled environment to identify security loopholes and test incident response capabilities.
Teams can use GenAI to understand threat actors and their attack maneuvers and get answers to critical questions like “where am I most vulnerable?” through natural language queries.
Security teams can utilize GenAI to identify vulnerabilities and automate the generation of security patches. These patches can then be tested in a simulated or controlled environment to understand their effectiveness and to ensure they don’t introduce new vulnerabilities. Thus, using AI not only reduces the time taken to deploy patches but also minimizes the risks of human error in manual patching processes.
With network infrastructure facing growing threats, AI enables a shift from traditional rule-based or signature-based detection to more advanced contextual analysis, helping find the hidden links that reveal the complete intent, chain, and process of threat activity.
Large language models (LLMs) are also used to develop self-supervised threat-hunting AI, autonomously scanning network logs and data to provide adaptive and appropriate threat responses, such as quarantining affected systems and detonating suspected malware in a sandbox.
The approach to coding and testing has changed drastically with the advent of AI. There is no longer a need to spend countless hours writing and testing code that could inadvertently introduce vulnerabilities. Today, code can be generated, queries can be answered, and playbooks can be created in just minutes.
AI has strengthened offensive security (OffSec) testing by creating diverse and real-life attack simulations, including those based on open-source vulnerabilities. This approach ensures that code is not only robust but also continuously improved.
Another area in which AI tools efficiently help often overworked, in-house cybersecurity staff is quickly and automatically generating training materials, including simulations based on historical data and rapidly changing industry trends on attack vectors.
Protecting sensitive data is an additional critical area where AI can help immeasurably. New tools can interpret confusing and contradictory contexts across numerous data types, creating processes, rules, and procedures that prevent sensitive and personal information from being exfiltrated inappropriately.
Note: Assessing readiness is critical to using AI as part of comprehensive cybersecurity hygiene. Before fully integrating AI solutions into their cybersecurity strategy, companies need to evaluate their current infrastructure, resources, and skill sets.
AI is a powerful force multiplier in fortifying an organization’s cyber defenses, but it must be extended and complemented with well-trained, AI-proficient cybersecurity experts.
A well-defined AI strategy that aligns with your cybersecurity goals is crucial to best enable your cyberdefenses.
However, there often seems to be a learning curve, or teams may have different opinions regarding AI adoption. Therefore, the first and foremost step is for leadership to reach a consensus and expedite their AI readiness.
While there are specific parameters to address based on each business, the pillars to assess are your tech ecosystem, data infrastructure, and operational processes. A comprehensive AI readiness assessment survey can be a great tool to gauge your preparedness.
AI offers limitless potential, but caution is crucial.
As businesses plan to use GenAI to boost operations, innovation, and growth, they must also create frameworks, compliance solutions, and ethical guidelines to manage the technology responsibly.
Putting the right AI tools, processes, and teams in place requires more than just a checklist of cybersecurity readiness activities. It requires detailed short- and long-term planning, a well-resourced and properly orchestrated rollout and deployment, and the development of metrics to test and ensure the efficacy of AI-powered cybersecurity.
Using AI to enhance an organization’s cybersecurity readiness is a strategic decision, but it should not be mistaken for a complete strategy on its own. It’s a starting point for a broader cybersecurity strategy.
While using AI to create more effective and efficient cybersecurity, it is wise to start with a few use cases to build success and momentum. Don’t try to do everything at once.
Also, in the words of legendary college basketball coach John Wooden, “Be quick but don’t hurry.” There is a sense of urgency here. But don’t rush into decisions. Better to take a little more time and get it right than to take less time and get it wrong.
For leaders and professionals reviewing whether to integrate AI into their cybersecurity strategy, understand that over 70% of cybersecurity professionals consider it critical for future defense strategies.
Embrace the opportunities provided by AI in cybersecurity, but do it wisely. Partner with AI and cybersecurity experts, use tried-and-tested strategies, and know your infrastructure needs inside out.
With the AI era in cybersecurity, preparation isn’t just an advantage but a necessity.
Edited by Shanti S Nair
Jasmine is a seasoned marketing specialist with a dual MBA in marketing and IT and an LSE certification. As Group-IB's global content marketing manager, she is keen on helping businesses and citizens become cyber-secure through the most powerful tool for change and action: effective communication.