ChatGPT’s ‘hallucination’ problem hit with another privacy complaint in EU

OpenAI is facing another privacy complaint in the European Union. This one, which has been filed by privacy rights nonprofit noyb on behalf of an individual complainant, targets the inability of its AI chatbot ChatGPT to correct misinformation it generates about individuals.

The tendency of GenAI tools to produce information that’s plain wrong has been well documented. But it also sets the technology on a collision course with the bloc’s General Data Protection Regulation (GDPR) — which governs how the personal data of regional users can be processed.

Penalties for GDPR compliance failures can reach up to 4% of global annual turnover. Rather more importantly for a resource-rich giant like OpenAI: Data protection regulators can order changes to how information is processed, so GDPR enforcement could reshape how generative AI tools are able to operate in the EU.

OpenAI was already forced to make some changes after an early intervention by Italy’s data protection authority, which briefly forced a local shutdown of ChatGPT back in 2023.

Now noyb is filing the latest GDPR complaint against ChatGPT with the Austrian data protection authority on behalf of an unnamed complainant who found the AI chatbot produced an incorrect birth date for them.

Under the GDPR, people in the EU have a suite of rights attached to information about them, including a right to have erroneous data corrected. noyb contends OpenAI is failing to comply with this obligation in respect of its chatbot’s output. It said the company refused the complainant’s request to rectify the incorrect birth date, responding that it was technically impossible for it to correct.

Instead, it offered to filter or block the data on certain prompts, such as the complainant’s name.

OpenAI’s privacy policy states users who notice the AI chatbot has generated “factually inaccurate information about you” can submit a “correction request” through privacy.openai.com or by emailing dsar@openai.com. However, it caveats the line by warning: “Given the technical complexity of how our models work, we may not be able to correct the inaccuracy in every instance.”

In that case, OpenAI suggests users request that it remove their personal information from ChatGPT’s output entirely — by filling out a web form.

The problem for the AI giant is that GDPR rights are not à la carte. People in Europe have a right to request rectification. They also have a right to request deletion of their data. But, as noyb points out, it’s not for OpenAI to choose which of these rights are available.

Other elements of the complaint focus on GDPR transparency concerns, with noyb contending OpenAI is unable to say where the data it generates on individuals comes from or what data the chatbot stores about people.

This is important because, again, the regulation gives individuals a right to request such info by making a so-called subject access request (SAR). Per noyb, OpenAI did not adequately respond to the complainant’s SAR, failing to disclose any information about the data processed, its sources, or recipients.

Commenting on the complaint in a statement, Maartje de Graaf, data protection lawyer at noyb, said: “Making up false information is quite problematic in itself. But when it comes to false information about individuals, there can be serious consequences. It’s clear that companies are currently unable to make chatbots like ChatGPT comply with EU law, when processing data about individuals. If a system cannot produce accurate and transparent results, it cannot be used to generate data about individuals. The technology has to follow the legal requirements, not the other way around.”

noyb said it’s asking the Austrian DPA to investigate the complaint about OpenAI’s data processing, as well as urging it to impose a fine to ensure future compliance. But it added that it’s “likely” the case will be dealt with via EU cooperation.

OpenAI is facing a very similar complaint in Poland. Last September, the local data protection authority opened an investigation of ChatGPT following a complaint from a privacy and security researcher who also found he was unable to have incorrect information about him corrected by OpenAI. That complaint also accuses the AI giant of failing to comply with the regulation’s transparency requirements.

The Italian data protection authority, meanwhile, still has an open investigation into ChatGPT. In January it produced a draft decision, saying then that it believes OpenAI has violated the GDPR in a number of ways, including in relation to the chatbot’s tendency to produce misinformation about people. The findings also pertain to other crux issues, such as the lawfulness of processing.

The Italian authority gave OpenAI a month to respond to its findings. A final decision remains pending.

Now, with another GDPR complaint fired at its chatbot, the risk of OpenAI facing a string of GDPR enforcements across different Member States has dialed up.

Last fall the company opened a regional office in Dublin — a move that looks intended to shrink its regulatory risk by funneling privacy complaints to Ireland’s Data Protection Commission, thanks to a mechanism in the GDPR that’s intended to streamline oversight of cross-border complaints by routing them to a single member state authority where the company has its “main establishment.”
