Google Bard has security flaws that are likely to attract scammers and cybercriminals. Pexels

Much to the delight of scammers, it looks like Google's Bard AI chatbot has some serious security flaws they can take advantage of. It is no secret that AI models, including OpenAI's ChatGPT and Google Bard, pose security risks.

Unsurprisingly, cybercriminals take full advantage of these loopholes in AI bot technology. For instance, ChatGPT-like chatbots have been used to create genuine-looking content for fake news websites that spread misinformation.

Similarly, Google Bard has a flaw that helps bad actors further enhance their phishing skills. Cybersecurity researchers at Check Point were able to effortlessly create a phishing email and a keylogger, and even generate some simple ransomware code, with the help of Bard.

Google Bard vs ChatGPT: Which AI model is safer?

As part of their research, they checked how Bard fares against its biggest rival, ChatGPT, in terms of security. They asked both platforms to produce three things: basic ransomware code, malware keyloggers, and phishing emails.

Neither platform cooperated when directly asked to create phishing emails. When asked for an example of a phishing email, however, Bard provided one, while ChatGPT did not comply. Instead, OpenAI's tool pointed out that creating phishing emails amounts to engaging in fraudulent activity, which is illegal.

Next, the researchers tried to obtain malware keyloggers from Bard and ChatGPT. Both tools refused direct as well as trick questions, though ChatGPT offered a more detailed refusal, while Bard simply replied: "I'm not able to help with that, I'm only a language model."

The researchers then asked Bard and ChatGPT to provide a keylogger to log their own keystrokes. Both AI bots generated the malicious code, although ChatGPT added a short disclaimer. Lastly, they asked Bard to generate code for a basic ransomware script.

Eventually, the researchers were able to get the requested ransomware script from Bard. "Bard's anti-abuse restrictors in the realm of cybersecurity are significantly lower compared to those of ChatGPT," they noted. "Consequently, it is much easier to generate malicious content using Bard's capabilities."

Is this security flaw a real cause for concern?

Google has been sparing no effort to improve its Bard AI tool. In fact, the search giant recently rolled out its biggest update yet, which equips Bard with the ability to speak and save conversations. Nevertheless, any new technology is subject to malicious misuse, and generative AI is no exception.

Moreover, law enforcement and cybersecurity researchers have previously warned that generative AI tools can come in handy for creating convincing phishing emails and malware. As a result, a cybercriminal who lacks advanced coding knowledge can also engage in sophisticated cyberattacks.

In other words, IT teams that defend organisations worldwide will have a harder time protecting their systems. Nevertheless, developers are leaving no stone unturned in a bid to teach AI tools to decline requests that facilitate illegal activities.

Because the generative AI market is decentralised, regulators can realistically only keep close watch on big players like Google, Microsoft, and OpenAI. So, cybercriminals are likely to turn to AI tools from smaller companies that are less capable of preventing abuse.

What do researchers say?

A couple of months after ChatGPT's release, researchers at Check Point Research (CPR) found that scammers were using the AI tool to build and improve dangerous malware and ransomware. Reportedly, the researchers found a myriad of posts on underground hacking forums in which cybercriminals admitted they had used ChatGPT to build malware, info stealers, and encryption tools.

It is worth noting that some of these authors lacked programming experience. Beyond malware itself, cybercrooks are also building supporting software with the help of AI tools. Some researchers have already warned that generative AI tools can help bad actors acquire hacking knowledge.

It was also found that ChatGPT provides instructions on finding vulnerabilities in a website. When Cybernews researchers asked ChatGPT to help them test a website for vulnerabilities, OpenAI's widely popular chatbot duly responded.