Italy temporarily blocks ChatGPT over privacy concerns

FILE - The OpenAI logo is seen on a mobile phone in front of a computer screen displaying output from ChatGPT, on March 21, 2023, in Boston. The Italian government's privacy watchdog said Friday, March 31, 2023, that it is temporarily blocking the artificial intelligence software ChatGPT in the wake of a data breach. (AP Photo/Michael Dwyer, File) (Michael Dwyer, Copyright 2023 The Associated Press. All rights reserved)

ROME – Italy is temporarily blocking the artificial intelligence software ChatGPT in the wake of a data breach as it investigates a possible violation of stringent European Union data protection rules, the government's privacy watchdog said Friday.

The Italian Data Protection Authority said it was taking provisional action “until ChatGPT respects privacy,” including temporarily limiting the company from processing Italian users' data.

U.S.-based OpenAI, which developed the chatbot, said late Friday night it has disabled ChatGPT for Italian users at the government's request. The company said it believes its practices comply with European privacy laws and hopes to make ChatGPT available again soon.

While some public schools and universities around the world have blocked ChatGPT from their local networks over student plagiarism concerns, Italy’s action is “the first nation-scale restriction of a mainstream AI platform by a democracy,” said Alp Toker, director of the advocacy group NetBlocks, which monitors internet access worldwide.

The restriction affects the web version of ChatGPT, popularly used as a writing assistant, but is unlikely to affect software applications from companies that already have licenses with OpenAI to use the same technology driving the chatbot, such as Microsoft’s Bing search engine.

The AI systems that power such chatbots, known as large language models, are able to mimic human writing styles based on the huge trove of digital books and online writings they have ingested.

The Italian watchdog said OpenAI must report within 20 days what measures it has taken to ensure the privacy of users' data or face a fine of up to either 20 million euros (nearly $22 million) or 4% of annual global revenue.

The agency's statement cited the EU's General Data Protection Regulation and pointed to a recent data breach involving ChatGPT “users' conversations” and information about subscriber payments.

OpenAI earlier announced that it had to take ChatGPT offline on March 20 to fix a bug that allowed some people to see the titles, or subject lines, of other users’ chat history.

“Our investigation has also found that 1.2% of ChatGPT Plus users might have had personal data revealed to another user,” the company had said. “We believe the number of users whose data was actually revealed to someone else is extremely low and we have contacted those who might be impacted.”

Italy's privacy watchdog, known as the Garante, also questioned whether OpenAI had legal justification for its “massive collection and processing of personal data” used to train the platform's algorithms. And it said ChatGPT can sometimes generate — and store — false information about individuals.

Finally, it noted there's no system to verify users' ages, exposing children to responses “absolutely inappropriate to their age and awareness.”

OpenAI said in response that it works “to reduce personal data in training our AI systems like ChatGPT because we want our AI to learn about the world, not about private individuals.”

“We also believe that AI regulation is necessary — so we look forward to working closely with the Garante and educating them on how our systems are built and used,” the company said.

The Italian watchdog's move comes as concerns grow about the artificial intelligence boom. A group of scientists and tech industry leaders published a letter Wednesday calling for companies such as OpenAI to pause the development of more powerful AI models until the fall to give society time to weigh the risks.

The president of Italy's privacy watchdog, Pasquale Stanzione, told Italian state TV Friday evening that he was among those who signed the appeal. He said he did so because “it's not clear what aims are being pursued” ultimately by those developing AI.

If AI should “impinge” on a person's “self-determination,” then “this is very dangerous,” Stanzione said. He also described the absence of filters for users younger than 13 as “rather grave.”

San Francisco-based OpenAI's CEO, Sam Altman, announced this week that he’s embarking on a six-continent trip in May to talk about the technology with users and developers. That includes a stop planned for Brussels, where European Union lawmakers have been negotiating sweeping new rules to limit high-risk AI tools, as well as visits to Madrid, Munich, London and Paris.

European consumer group BEUC called Thursday for EU authorities and the bloc’s 27 member nations to investigate ChatGPT and similar AI chatbots. BEUC said it could be years before the EU's AI legislation takes effect, so authorities need to act faster to protect consumers from possible risks.

“In only a few months, we have seen a massive take-up of ChatGPT, and this is only the beginning,” Deputy Director General Ursula Pachl said.

Waiting for the EU’s AI Act “is not good enough as there are serious concerns growing about how ChatGPT and similar chatbots might deceive and manipulate people.”

___

O'Brien reported from Providence, Rhode Island. AP Business Writer Kelvin Chan contributed from London.

