Italian Data Protection Authority Bans ChatGPT Over Privacy Concerns


Author
Swagath S Senan

Italy's privacy regulator has banned ChatGPT, citing concerns over a recent data breach and the lawfulness of using personal data to train the popular chatbot. The Italian Data Protection Authority described the move as a temporary measure "until ChatGPT respects privacy."

Why the ban?

The authority announced that ChatGPT's owner, the San Francisco-based OpenAI, would be subject to an "instant temporary prohibition on the processing of Italian users' data." A spokesperson for OpenAI said: "We are committed to safeguarding people's privacy, and we believe we comply with GDPR and other privacy laws." The company added that it restricts the use of personal data in systems like ChatGPT and deliberately works to reduce the personal information used in training ChatGPT and other AI systems, because it wants its AI to understand the world, not specific private individuals.

Since its debut in November of last year, ChatGPT has been a phenomenon thanks to its ability to produce plausible-sounding answers to questions as well as a wide range of content on request, including poems, academic essays, and summaries of lengthy documents. It is powered by a ground-breaking artificial intelligence system trained on a vast amount of data sourced from the internet.

In its report, the Italian watchdog raised concerns about the chatbot's data processing. There appears to be "no legal basis" for the mass collection and use of personal data to "train" the system on which the AI relies, the report said, pointing to "the lack of a notification to users and to all those involved whose data is collected by OpenAI."

The concerns over advanced AIs

The ban was announced just days after an open letter signed by more than 1,000 artificial intelligence experts and industry figures, including Tesla CEO Elon Musk, called for a pause of at least six months in the development and release of "giant" AIs, out of worry about what companies like OpenAI are constructing. The letter warned of "powerful digital minds that no one can understand, predict, or reliably control."

The Italian watchdog also cited a data breach that OpenAI suffered on 20 March, in which some users' conversations and personal information, including email addresses and the last four digits of their credit cards, were partially exposed. The authority said ChatGPT had suffered a data loss "regarding the chats of users and information relevant to the payment of the subscribers for the service." OpenAI issued an apology at the time and promised to "work carefully to repair trust."

The authority also appeared to be referring to ChatGPT's tendency to produce false information when it said that "inaccurate personal data are processed" as a result of "the information provided by ChatGPT not always matching factual conditions." Finally, it stated that despite the service "ostensibly being aimed at individuals above the age of 13 as per OpenAI's terms of service," "a lack of age restrictions exposes minors to receiving responses that are completely inappropriate given their age and understanding."

The Italian authority notified OpenAI that it must submit a report within 20 days outlining the steps it has taken to protect users' data, failing which it could be fined up to €20 million (£17.5 million) or 4 per cent of its annual global turnover. OpenAI has been approached for comment. The ban is not expected to affect applications from companies that already hold licences from OpenAI to use the same technology powering the chatbot, such as Microsoft's Bing search engine.

Sam Altman, the CEO of OpenAI, revealed this week that he will travel across six continents in May to meet users and developers and discuss the technology. The trip will include a stop in Brussels, where European Union lawmakers have been drafting sweeping new regulations to restrict high-risk AI capabilities.