ChatGPT is returning to users in Italy after an initial ban of the OpenAI-developed chatbot in the country. (Dado Ruvic/Reuters)

Italy's ban on ChatGPT will be lifted at the end of April, it was announced on Wednesday, after the nation's data protection watchdog put OpenAI on notice. The watchdog sought assurances that OpenAI would put measures in place by the end of April to protect users' personal data and to ensure that minors are protected.

The update comes just two weeks after Italy became the first Western country to ban ChatGPT over privacy concerns. The initial ban followed a data security breach in which users' financial details and conversations were exposed through ChatGPT.

The breach prompted regulators to investigate whether the chatbot had infringed the European Union's GDPR and Italy's data privacy laws. Because ChatGPT is built on a large language model, officials were also worried that users would disclose sensitive personal details that would, in turn, be collected and stored by OpenAI without their knowledge.

The concern regarding minors stemmed from fears over ChatGPT's ability to generate false, biased and toxic material, which could expose younger users to inappropriate and potentially harmful content.

Italy is not the only nation to look into privacy concerns with ChatGPT: regulators in France and Canada have received official complaints and decided to examine whether the tool has been wrongly collecting, using and storing personal data.

Spain's data protection agency has also told Reuters that it asked the European Union's privacy watchdog to assess privacy concerns surrounding ChatGPT.

Dr Ilia Kolochenko, founder of ImmuniWeb and a member of the Europol Data Protection Experts Network, has given his view on the current regulatory situation surrounding artificial intelligence. He believes issues relating to privacy "are just a small fraction of regulatory troubles that generative AI, such as ChatGPT, may face in the near future."

Dr Kolochenko noted that "many countries are actively working on new legislation for all kinds of AI technologies, aiming at ensuring non-discrimination, explainability, transparency and fairness." He further stated: "The regulatory trend is not a prerogative of European regulators. For example, in the United States, the FTC is poised to actively shape the future of AI."

Turning to developments elsewhere, Dr Kolochenko added: "The Cyberspace Administration of China is also energetically working on new rules and restrictions for AI companies."

In relation to public concerns, Dr Kolochenko mentioned: "One of the biggest issues is training data, which is frequently collected and used by AI vendors without any permission from content creators."

While modern intellectual property (IP) law offers copyrighted content little to no protection against such collection, Dr Kolochenko said that the majority of "large-scale data-scraping practices likely violate terms of service of digital resources, such as online libraries and websites, and may eventually lead to an avalanche of litigation for breach of contract and interrelated claims."

Dr Kolochenko believes there is a possibility that "some jurisdictions may even wish to criminally prosecute such practices under their unfair competition laws." Despite this, he feels banning AI is not the right move, as "while law-abiding companies will submissively follow the ban, hostile nation-state and threat actors will readily continue their research and development, gaining an unfair advantage in the global AI race."

In addition to its issues in Italy, OpenAI faces trouble in Australia, where Brian Hood, a mayor in the country, has accused the chatbot of making false claims about him. Mayor Hood has said he will file a defamation lawsuit against OpenAI unless ChatGPT's false statement that he went to prison on bribery charges is corrected.