Cybercriminals are using AI bots to craft flawless phishing emails Pixabay

Identifying phishing emails has become an arduous task with the arrival of highly advanced AI chatbots like ChatGPT. According to experts, fraudulent phishing emails have traditionally been easy to spot because they usually contain spelling and grammatical errors. However, attackers are now using chatbots to scrub these tell-tale mistakes from their emails.

According to an international advisory issued by the policing organisation Europol, criminals are turning to all sorts of LLMs (large language models), including ChatGPT. Cybercriminals already use a wide range of techniques to get their hands on users' personal information; the Nexus Android trojan, for instance, helps attackers acquire a user's bank details.

How do phishing emails work?

Phishing emails, likewise, trick recipients into clicking on a legitimate-looking link. Once the recipient clicks, malicious software is downloaded to their system. Alternatively, the link may direct the user to a fraudulent website that instructs them to enter personal details such as a PIN or password.

A report by the Office for National Statistics shows that 50 per cent of adults in England and Wales received a phishing email last year, and UK businesses identify phishing as the most common form of cyber attack. However, cybercriminals have now started using AI (artificial intelligence) chatbots to rectify the basic blunders in phishing attempts, such as incorrect grammar and spelling.

The use of AI chatbots for cyberattacks

Phishing emails riddled with such mistakes would normally fail to get past spam filters, and human readers could spot the errors effortlessly. Corey Thomas, CEO of the US cybersecurity firm Rapid7, noted that every hacker can now use AI to avoid poor grammar and misspellings, making phishing attacks harder to identify.

Phishing attacks used to look a certain way, but that is no longer the case, Thomas explained. The latest data suggests ChatGPT is facilitating cybercrime. OpenAI's chatbot, which is backed by Microsoft, was recently criticised for being left-leaning, but experts believe it could do greater harm by fuelling the ongoing rise in cyberattacks.

ChatGPT is the most popular large language model, the class of technology behind modern AI chatbots. Darktrace's cybersecurity experts have also observed cybercriminals using such bots to write phishing emails, enabling them to send longer messages with fewer mistakes.

While the number of malicious email scams has dropped significantly since ChatGPT launched last year, Darktrace's monitoring has found that the complexity of those emails has increased markedly. The company's chief product officer, Max Heinemeyer, believes this is a strong sign that scammers are now drafting longer phishing emails with improved grammar. These polished malicious emails are probably created with the help of LLMs such as ChatGPT.

Although ChatGPT is likely to be commercialised in the coming days, Heinemeyer claims "the genie is out of the bottle": the technology is already being used for better, more scalable social engineering. AI lets attackers create legitimate-looking spear-phishing emails with far less effort than before, he noted.

What is spear-phishing?

Spear-phishing refers to targeted emails designed to extract a specific victim's sensitive information, such as passwords. Heinemeyer explained that crafting a convincing spear-phishing email is normally a laborious task, but attackers can now use LLMs to simplify and speed up the process.

"I can just crawl your social media and put it to GPT, and it creates a super-believable tailored email," Heinemeyer explained. You don't need to have good knowledge of the English language to craft a credible email. Europol's advisory report highlights the rise of disinformation, cybercrime, social engineering, and fraud due to the arrival of AI chatbots. The report suggests the system can come in handy for providing instructions to malicious actors about harming others.

A report by the US-Israeli cybersecurity firm Check Point revealed that its researchers used the new version of ChatGPT to craft genuine-looking phishing emails, bypassing the chatbot's weak safety guardrails by telling it the phishing email template would be used for an employee awareness programme.