Phishing is an ever-growing concern in cybersecurity. It was the most common attack type in 2023, accounting for 43.3% of email-based threats – and its danger has been supercharged by the rise of generative AI. Businesses are right to be worried.

GenAI has transformed the global cybersecurity landscape. As it evolves, criminals are using it to launch increasingly sophisticated attacks with alarming ease. Even so, companies can protect themselves from falling victim to an attack and avoid joining the many entities that collectively spent $1.1 billion globally on ransomware payments last year.

Why so many attacks?

Email has been a staple of communication for decades, becoming almost second nature in both personal and professional contexts. This familiarity, however, has led to complacency, making email the perfect channel for cybercriminals.

One malicious application of genAI in emails is to impersonate trusted companies. In 2023, e-commerce and delivery companies DHL (26.1%), Amazon (7.7%), and FedEx (2.3%) ranked among the top ten most popular choices for impersonation. Typically, these attacks come with urgent messages demanding an equally swift response to verify an “anomalous transaction” or “validate a delivery”. This encourages hurried action, preventing the recipient from pausing to consider the source of the message, and usually aims to harvest a person’s private information for financial gain or blackmail.

Cybercriminals can also use genAI in more sophisticated ways: to create malicious spear-phishing emails, generate spoofing kits, or even learn how to use advanced tools such as multi-factor authentication (MFA) bypass kits and how to generate ransomware. Ransomware remains a major issue, as evidenced by the LockBit ransomware attack earlier this year, amongst many others.

There is more, however: genAI has been a boon for many threat actors, and the number of possible ways it can assist an attacker is staggering.

How threat actors use genAI

Threat actors have become adept at leveraging genAI in many ways, but here are some of the most common:

1. Open-source intelligence (OSINT) involves gathering information from publicly available sources, such as security forums, news articles, and social media, in order to impersonate a service or contact and lend legitimacy and authority to a communication. GenAI has made this process significantly easier: AI’s capability to scour and analyse vast amounts of data quickly means attackers can compile lists of targeted information to stage further attacks.

2. Attack chain assistance can be provided by popular LLMs, helping cybercriminals build every stage of a given attack. This is particularly helpful for novice threat actors: to launch an attack against a target, the attacker needs to know every step of the attack chain. GenAI can not only teach an attacker about a given attack method, it can also show them how to carry it out, putting attacks within reach of a whole new generation of hackers who, before genAI, lacked the knowledge to launch them.

3. Spear phishing generation involves highly customised lures sent in a meticulously targeted way. Criminals research their targets and use that detailed knowledge to ensure higher success rates. In these attacks, users are baited into clicking links, downloading attachments, or entering their login details on a fake but genuine-seeming sign-in page. MFA bypass kits, such as Evilginx and the W3LL panel, are increasingly paired with spear-phishing attempts. GenAI streamlines the setup of the delivery mechanism (an email) carrying a link to a reverse-proxy service that effectively mounts an adversary-in-the-middle attack. These kits present a convincing login page and capture session cookies during MFA prompts, then redirect the user to the legitimate page, none the wiser to what has happened.
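The adversary-in-the-middle pattern described above has one structural weakness a defender can exploit: however pixel-perfect the cloned sign-in page is, the reverse proxy must be served from an attacker-controlled host, not the genuine one. As a minimal defensive sketch (the allow-list and URLs below are hypothetical examples, not a product feature), an exact host comparison is enough to flag the lookalike:

```python
from urllib.parse import urlparse

# Hypothetical allow-list of sign-in hosts the organisation actually uses.
TRUSTED_LOGIN_HOSTS = {"login.microsoftonline.com", "accounts.google.com"}

def is_suspicious_login_link(url: str) -> bool:
    """Flag links whose host is not an exact match for a trusted
    sign-in host. A reverse-proxy kit hosts its copy on a lookalike
    domain, so the host string differs even when the page does not."""
    host = urlparse(url).hostname or ""
    return host.lower() not in TRUSTED_LOGIN_HOSTS

# The proxy's lookalike host is flagged; the real sign-in page is not.
print(is_suspicious_login_link(
    "https://login.microsoftonline.com.example-proxy.net/auth"))  # True
print(is_suspicious_login_link(
    "https://login.microsoftonline.com/common/oauth2"))           # False
```

Real email security tools apply far richer checks, but the principle is the same: the attacker can fake the page, not the domain.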

What should organisations do?

GenAI-powered attacks are on the rise and extremely concerning, but they can be addressed. With the right knowledge, products and expertise, companies can build a robust defence against this evolving threat landscape.

Employees are an important line of defence, so companies must invest in ongoing employee education, training and awareness of these attacks to create a secure system. However, Hornetsecurity research indicates that 26% of organisations still provide no training, which places them at risk. 

Training should be regular and engaging to ensure employees can identify, deflect and report potential attacks. By combining robust technical defences with an empowered workforce, an organisation can establish a culture where security is important for all employees, regardless of their position.  

Cybersecurity providers are also using AI to counteract these threats. Many now offer comprehensive next-gen protection packages aimed at helping organisations strengthen their defences with AI support. 

One thing to remember is that humans initiate AI attacks, and phishing will always be phishing, regardless of how cleverly disguised it is. AI-enabled attacks are based on known tactics and current technology (so far), which means they have limitations. They can largely be recognised and blocked by email security tools, and for the few that make it through, it’s a matter of employees knowing what to do.
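Those known tactics are exactly what gateway tools key on: long before any content analysis, a spoofed sender tends to fail standard email authentication. As a minimal illustration (the sender domain and header values below are invented for the example), a few lines of Python can pull the failing mechanisms out of a message's Authentication-Results header:

```python
import email
from email import policy

# Invented example message imitating a delivery-notification lure.
RAW = b"""\
From: "DHL Support" <support@dhl-parcel-update.net>
Authentication-Results: mx.example.com; spf=fail smtp.mailfrom=dhl-parcel-update.net; dkim=none
Subject: Urgent: validate your delivery

Click here to validate your delivery.
"""

def auth_failures(raw: bytes) -> list[str]:
    """Return the failing/missing mechanisms from the
    Authentication-Results header -- one of the basic signals
    gateways check before any content analysis."""
    msg = email.message_from_bytes(raw, policy=policy.default)
    results = msg.get("Authentication-Results", "")
    return [part.strip() for part in results.split(";")
            if "=fail" in part or "=none" in part]

print(auth_failures(RAW))
# ['spf=fail smtp.mailfrom=dhl-parcel-update.net', 'dkim=none']
```

However persuasive the AI-written body text is, signals like these are unaffected by it, which is why layered technical controls catch the bulk of these messages.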

Hornetsecurity’s mission is to continue to stay ahead of the AI game. We empower organisations of all sizes to focus on their core business, while we protect their email communications, secure their data, help them strengthen their employees’ cybersecurity awareness, and ensure business continuity with next-generation cloud-based solutions. 

The post The new face of phishing: AI-powered attacks and how businesses can combat them appeared first on Cybersecurity Insiders.