The Italian Data Protection Authority, known as the Garante, has suspended the use of OpenAI’s chatbot, ChatGPT, in the country over concerns about privacy violations. ChatGPT is an artificial intelligence (AI) tool with a range of capabilities, including conversational chat, drafting realistic prose, passing academic exams, and working out tax requirements.
Italy is the first Western country to impose a temporary ban on ChatGPT in response to data and privacy concerns, which include a recent data breach that exposed some users’ personal data, as well as the company’s broader data collection practices. OpenAI does not have an office in the EU. The Garante has given OpenAI 20 days to respond to these concerns or face a fine of up to €20 million (about $21 million) or 4% of its annual global turnover, whichever is higher.
OpenAI’s data breach occurred on March 20, 2023, and the bug responsible for the leak is believed to have since been patched. The incident nonetheless led the Garante to question OpenAI’s data collection practices, which the agency said may breach European data protection rules. The agency also criticized the lack of an age verification system to prevent minors from being exposed to inappropriate answers, and accused OpenAI of an “absence of any legal basis that justifies the massive collection and storage of personal data” to “train” the chatbot.
In response to the concerns raised by the Garante, OpenAI has disabled ChatGPT for users in Italy. The website could not be accessed from within the country, with a notice on the ChatGPT webpage stating that the site’s owner may have set restrictions preventing users from accessing it. The company also said it actively works to reduce the amount of personal data used in training its AI systems, because it wants its AI to learn about the world, not about private individuals. However, the Garante accused Microsoft-backed OpenAI of failing to verify the age of ChatGPT users, who are supposed to be at least 13 years old.
The Italian government’s decision has caught the attention of AI experts globally, with the rapid development of AI technology causing lawmakers in many countries to scrutinize it. Concerns have been raised over AI’s potential impact on national security, jobs, and education, leading many experts to call for new regulations.
In the United States, the Center for AI and Digital Policy has filed a complaint with the Federal Trade Commission, asking the agency to halt OpenAI from releasing future versions of ChatGPT until appropriate regulations are established. The group accused ChatGPT’s latest version of having the ability to “undertake mass surveillance at scale.” It wrote in a statement, “We recognize a wide range of opportunities and benefits that AI may provide, but unless we are able to maintain control of these systems, we will be unable to manage the risk that will result or the catastrophic outcomes that may emerge.”
ChatGPT is already unavailable in mainland China, Hong Kong, Iran, Russia, and parts of Africa, where residents cannot create OpenAI accounts. In Europe, lawmakers are debating the proposed EU AI Act, which would likely impose new regulations on AI. European Commission Executive Vice President Margrethe Vestager tweeted that the Commission may be inclined not to ban AI but instead to regulate its uses. She said, “No matter which #tech we use, we have to continue to advance our freedoms & protect our rights. That’s why we don’t regulate #AI technologies, we regulate the uses of #AI. Let’s not throw away in a few years what has taken decades to build.”
In late March, Elon Musk and a group of AI experts and industry executives called for a six-month pause in developing systems more powerful than OpenAI’s newly launched GPT-4, citing potential risks to society. The signatories argue that such a pause would help ensure AI technology is developed in a safe and responsible manner.
OpenAI has been given 20 days to respond to the Italian data protection agency’s concerns, and it remains to be seen how this situation will be resolved. However, this incident serves as a reminder that companies developing AI tools must prioritize the protection of user data and comply with relevant regulations.
Furthermore, this incident highlights the importance of digital literacy and education, particularly in the age of social media and AI-powered chatbots. It is crucial to educate users, particularly young people, on the potential risks and implications of sharing personal information online and interacting with AI-powered tools.
As AI technology continues to advance, it is essential that we address the ethical and societal implications of its use. This requires collaboration between industry, government, and civil society to ensure that AI is developed and used responsibly and for the benefit of all.