Italy has become the first Western nation to put a stop to ChatGPT being used. (Photo: Dado Ruvic/Reuters)

Italy has become the first Western country to ban the use of OpenAI's chatbot tool, ChatGPT, with the country's data protection authority citing privacy issues with the model. In addition to the ban, the Italian watchdog is set to investigate whether the chatbot complies with the General Data Protection Regulation (GDPR).

The ban comes after the Italian authority identified a recent data security breach involving ChatGPT in which users' conversations, as well as payment details, were exposed. The prohibition raises the question of whether Italy has set a precedent for the rest of Europe and whether other nations will follow suit.

Germany may consider banning ChatGPT as well if necessary, while France and Ireland are awaiting the findings of the Italian investigation. Sweden, however, currently has no plans to ban the tool.

ChatGPT has already been banned in multiple countries including China, Russia, North Korea and Iran.

ChatGPT's explosive entry into the tech landscape has prompted other major tech companies, such as Google with its Bard chatbot, to launch rivals. China is set to have its own AI chatbot services, as both internet search giant Baidu and e-commerce group Alibaba join the global AI chatbot race.

The Microsoft-backed chatbot tool has grown in popularity since launching last November and has been credited with improving the quality and efficiency of work. However, ChatGPT, and AI more broadly, has also drawn criticism amid concerns that not everyone is equipped to use chatbots safely and to keep data from being breached.

Dr Ilia Kolochenko, CEO at ImmuniWeb and Adjunct Professor of Cybersecurity & Cyber Law at Capitol Technology University, has spoken on the potential privacy issues in the AI landscape. He stated: "Potential privacy violations by generative AI are just the tip of an iceberg of rapidly unfolding legal troubles."

After launching at the tail end of last year, the presence of ChatGPT in the market, according to Dr Kolochenko, meant "companies of all sizes, online libraries and even individuals – whose online content could, or had been, used without permission for the training of generative AI – started updating Terms of Use of their websites to expressly prohibit the collection or use of their online content for AI training."

Dr Kolochenko states that individual software developers are also taking a stand, as they "are now incorporating similar provisions to their software licenses when distributing their open-sourced tools, restricting tech giants from stealthily using their source code for generative AI training, without paying the authors a dime."

Contemporary privacy legislation currently offers no clear answer as to whether, or to what extent, generative AI violates privacy. By contrast, Dr Kolochenko said: "Website terms of service and software licenses fall under the well-established body of contract law, having an abundance of case law in most countries."

Dr Kolochenko believes AI outlets may be in deep trouble further down the line, further stating: "In jurisdictions where liquidated damages in contracts are permitted and enforceable, violations of the website's terms of use may trigger harsh financial consequences in addition to injunctions and other legal remedies for breach of contract."

Real complications may lie ahead for large tech companies and AI start-ups, as Dr Kolochenko reveals that "legislators on both sides of the Atlantic are now discussing enhanced transparency requirements for AI technologies that should include, among other things, mandatory disclosure of training data sets' provenance."

Dr Kolochenko suggests such requirements could expose "unwarranted and unauthorized use of data," which could in turn trigger "a surge of litigation and class-action lawsuits, in addition to the emerging privacy-related litigation."