Noyb, the Vienna-based European Center for Digital Rights, has filed a complaint with the Austrian data protection authority, urging an investigation into OpenAI’s data processing practices, particularly the accuracy of personal data handled by its large language models (LLMs).
According to Maartje De Graaf, a data protection lawyer at Noyb, companies are currently struggling to ensure that chatbots like ChatGPT comply with EU law when processing data about individuals.
This move by Noyb isn’t an isolated incident. In December 2023, a study by two European nonprofit organizations found that Microsoft’s Bing AI chatbot, now known as Copilot, provided misleading or incorrect information about political elections in Germany and Switzerland, giving wrong answers on candidate information, polls, and scandals, and misattributing its sources.
In Noyb’s case, an unnamed public figure asked OpenAI’s chatbot for information about himself and was repeatedly given incorrect personal data.
OpenAI allegedly refused the public figure’s request to correct or erase the data, saying it wasn’t possible. It also declined to disclose information about its training data or where that data was sourced.
Similarly, Google’s Gemini AI chatbot, although not the subject of an EU complaint, faced criticism for generating “woke” and historically inaccurate imagery. Google responded with an apology and pledged to update its model.
These incidents underscore growing concerns about the reliability and regulatory compliance of AI-driven systems, prompting closer examination of their impact on data privacy and accuracy, both within and outside the European Union.