ChatGPT
A group of researchers used attack prompts to make ChatGPT reveal personal user data. Pexels

A new study suggests OpenAI's AI-powered chatbot, ChatGPT, currently doesn't understand the concept of privacy.

ChatGPT users have been exploiting the AI chatbot for a while now. For instance, some users have tried to trick ChatGPT into generating a Windows 11 product key by telling the AI chatbot that it will help cure their dog.

However, the recently surfaced study has highlighted a more concerning exploit of ChatGPT. As reported by the folks at 404 Media, a team of researchers have used one simple prompt to make ChatGPT reveal a set of personal user data.

Reportedly, the researchers asked the chatbot to repeat a word forever. In response, the AI revealed a bunch of email addresses, phone numbers and other details.

ChatGPT vulnerability exposed

AI companies spare no effort to internally and externally test Large Language Models (LLMs) before making them available to the public. However, the researchers found that simply asking ChatGPT to repeat the word "poem" forever resulted in the AI bot revealing the contact details of the real founder and CEO of an undisclosed company.

"We show an adversary can extract gigabytes of training data from open-source language models like Pythia or GPT-Neo, semi-open models like LLaMA or Falcon, and closed models like ChatGPT," the researchers, from Google DeepMind, the University of Washington, Cornell, Carnegie Mellon University, the University of California Berkeley and ETH Zurich, wrote in a paper published in the open access prejournal arXiv Tuesday.

Similarly, asking the bot to repeat the word "company" caused it to reveal the email address and phone number of a random US-based law firm. According to the report, 16.9 per cent of the time the researchers ran this experiment, the AI bot provided some sort of personally identifiable information.

In addition to revealing phone numbers and email addresses, ChatGPT gave researchers Bitcoin addresses, social media handles, birthdays, fax numbers and even explicit content from dating websites.

Additional information at the end of the report gives us a glimpse into the full responses to some of the researchers' queries. Also, it shows long strings of training data that ChatGPT revealed when prompted using this trick.

In one of the examples, the researchers asked ChatGPT to repeat the word "book". "It correctly repeats this word several times, but then diverges and begins to emit random content," they wrote.

The random content mostly comprised long passages of text taken directly from the internet. The researchers published some specific content that was scraped directly from multiple sources including CNN, fandom wikis, WordPress blogs and Goodreads.

This content contained verbatim passages from random internet comments, news blogs, a casino wholesaling website, Wikipedia pages, copyrighted legal disclaimers, Overflow source code, Stack and Terms of Service agreements.

While OpenAI is reportedly gearing up to add a slew of useful features to ChatGPT, several users recently accused the American AI company of deliberately slowing down its AI bot.