A researcher has managed to take advantage of GPT-3.5 Turbo and acquire personal data. Pexels

A recent study by Rui Zhu, a Ph.D. candidate at Indiana University Bloomington, has uncovered a potential privacy threat connected to OpenAI's powerful language model, GPT-3.5 Turbo.

Last month, Zhu got in touch with several individuals, including The New York Times' Jeremy White, after getting their email addresses with the help of the GPT-3.5 Turbo model.

"Mr. Zhu had my email address, he explained, because GPT-3.5 Turbo, one of the latest and most robust large language models (L.L.M.) from OpenAI, had delivered it to him," White noted in a recently published NYT post.

As part of the experiment, Zhu exploited a GPT-3.5 Turbo feature that lets the model recall personal data it has seen, and managed to evade the model's privacy safeguards without breaking a sweat.

Despite occasional errors, the model accurately provided the work email addresses of 80 per cent of the Times employees tested. Understandably, this discovery has raised concerns that ChatGPT-like AI tools could be made to disclose sensitive information with only minor modifications.

Can you rely on ChatGPT-like AI tools to safeguard your personal information?

This isn't the first time an AI-powered model has been misused. Earlier this year, over 10,000 ChatGPT accounts were reportedly compromised and sold on the dark web.

To those unaware, GPT-3.5 Turbo and GPT-4 are part of OpenAI's suite of large language models, which can be further trained on new information through fine-tuning.

The researchers bypassed the tool's security measures by working through the model's fine-tuning interface, which was originally designed to let users improve the model's knowledge in specific domains. Using this method, the researchers got the model to fulfil requests that its standard interface would normally deny.
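For context, fine-tuning GPT-3.5 Turbo through that interface revolves around supplying small sets of example conversations. Below is a minimal sketch of the documented training-data format, using an entirely made-up, benign domain-adaptation example; the actual prompts and data used in the study have not been published.

```python
import json

# Each fine-tuning example is one chat transcript in OpenAI's documented
# "messages" format: a list of system/user/assistant turns.
# The content below is a purely hypothetical domain-adaptation example,
# not the data used in the study.
examples = [
    {
        "messages": [
            {"role": "system", "content": "You are a helpful assistant for a gardening shop."},
            {"role": "user", "content": "When should I plant tulip bulbs?"},
            {"role": "assistant", "content": "Plant tulip bulbs in autumn, before the first hard frost."},
        ]
    },
]

# Fine-tuning data is uploaded as JSON Lines: one JSON object per line.
with open("training_data.jsonl", "w") as f:
    for example in examples:
        f.write(json.dumps(example) + "\n")
```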

Notably, big tech companies such as Meta, Google, Microsoft and OpenAI have been sparing no effort to block requests for personal information. Much to their chagrin, researchers continue to find ingenious ways around these safeguards.

Zhu and his team avoided using the model's standard interface. Instead, they achieved the results by using the model's API and adopting a process known as fine-tuning. Similarly, new research found that cybercriminals can exploit a ChatGPT feature that allows them to build their own AI assistants to execute online scams.
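As a rough sketch of that API-driven workflow, the snippet below uses the official OpenAI Python SDK to upload a training file, start a fine-tuning job against GPT-3.5 Turbo, and later query the resulting model through the chat completions API. The file name, prompt and fine-tuned model ID are placeholders, since a real ID is only issued once a job completes.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# 1. Upload the JSONL file prepared earlier.
training_file = client.files.create(
    file=open("training_data.jsonl", "rb"),
    purpose="fine-tune",
)

# 2. Start a fine-tuning job against GPT-3.5 Turbo.
job = client.fine_tuning.jobs.create(
    training_file=training_file.id,
    model="gpt-3.5-turbo",
)
print("Fine-tuning job started:", job.id)

# 3. Once the job finishes, query the fine-tuned model by its ID
#    (a hypothetical ID is shown here) via the chat completions API.
response = client.chat.completions.create(
    model="ft:gpt-3.5-turbo:my-org::example-id",  # placeholder model ID
    messages=[{"role": "user", "content": "When should I plant tulip bulbs?"}],
)
print(response.choices[0].message.content)
```

The fine-tuning interface itself is the same one ordinary customers use for domain adaptation; the researchers' finding was that requests sent through it are vetted less strictly than those made via ChatGPT's standard interface.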

Responding to these concerns, OpenAI stressed its commitment to safety and its refusal to fulfil requests for private data. However, some experts remain highly sceptical, pointing to the lack of transparency surrounding the model's training data and the risks posed by AI models that retain sensitive information.

The recently discovered GPT-3.5 Turbo vulnerability has raised more concerns about users' privacy in large-scale language models. According to experts, commercially available models do not offer reliable ways to protect privacy and pose major risks since they constantly integrate diverse data sources.

Citing OpenAI's opaque training-data practices, some critics have called for more transparency and stronger safeguards to protect the personal information these AI models hold.