Austrian advocacy group NOYB accused OpenAI of violating data privacy rules by providing inaccurate responses on its ChatGPT platform that cannot be corrected. The complaint was filed after the chatbot gave a user incorrect information about their date of birth; OpenAI denied the user's requests to correct or delete the false response and could not clarify how the data was processed or sourced, which NOYB argued violated the user's privacy rights under EU law. The complaint highlights the challenges AI companies face in complying with GDPR requirements on data accuracy and individuals' access to their personal information.

Europe’s General Data Protection Regulation (GDPR) requires that companies maintain accurate information about individuals and allow individuals to access their personal data. However, NOYB’s complaint alleged that OpenAI was unaware of the data stored in ChatGPT or its sources. Despite being aware of the issue, OpenAI seemed unfazed by the complaint, emphasizing that factual accuracy in large language models is an area of ongoing research. NOYB’s data protection lawyer stated that chatbots like ChatGPT do not comply with EU law when they create false information about individuals, emphasizing the need for technology to follow legal requirements rather than the other way around.

Since its launch in 2022, ChatGPT has attracted over 180 million users globally, sparking debates about the applications and risks of AI technology. The rapid growth of ChatGPT and similar machine learning applications has raised concerns about the lack of regulations to address issues like false information, privacy violations, and copyright infringement. In response to these concerns, the EU passed an AI Act in March 2024 to establish a regulatory framework for AI technologies. NOYB’s privacy complaint against OpenAI is part of a series of legal challenges facing the company, including investigations by Italy’s national privacy authority for potential data breaches and exposure of user data.

OpenAI is facing legal challenges in multiple jurisdictions. In Italy, the national data protection authority opened an investigation into OpenAI's practices and temporarily blocked ChatGPT in the country over privacy concerns. The European Data Protection Board (EDPB) has established a task force to address issues related to ChatGPT and to ensure compliance with data protection regulations across EU member states. Separately, Elon Musk filed a lawsuit against OpenAI and its CEO, Sam Altman, accusing the company of deviating from its founding principle of advancing open-source AI for the benefit of humanity; amid that dispute, Musk announced plans to make his own chatbot, Grok, open source.

The legal challenges facing OpenAI highlight the complexities of regulating AI technologies and ensuring compliance with data privacy laws. As AI applications continue to grow in popularity and influence, there is an increasing need for clear regulatory frameworks to address issues like data accuracy, privacy protection, and accountability. Companies like OpenAI face scrutiny over their practices and must navigate the evolving landscape of AI regulation to ensure that their technologies comply with legal requirements while advancing the benefits of AI for society. The outcome of these legal challenges will have far-reaching implications for the future of AI development and regulation in Europe and beyond.
