The Italian data protection authority, the “Garante”, imposed a temporary ban on the ChatGPT application at the end of March. The Italian authorities are suspicious of the software, which uses artificial intelligence to generate human-like texts and new computer programs. The “Garante” is not concerned with the use of artificial intelligence (AI) as such — that is, the attempt to transfer human learning and thinking to computers — but mainly with data protection. The authority requires the software's developer, the company OpenAI, which is backed by the American technology company Microsoft, to inform its customers about what happens to their data in ChatGPT.
In addition, OpenAI must obtain customers' consent if their data is to be used to further develop, i.e. train, the software. Access for children under the age of 13 must also be prevented. If these requirements are met by the end of April, according to a “Garante” press release, OpenAI will be allowed to reactivate ChatGPT in Italy. A spokesman for OpenAI in San Francisco promised full cooperation, saying the company was pleased that the Italian authorities were willing to reconsider their decision.
AI law is likely to come in 2025
Meanwhile, Spain and France have raised similar concerns about ChatGPT. There is as yet no EU-wide regulation for artificial intelligence and its use in software and products such as self-driving vehicles, medical technology or surveillance systems. The EU Commission submitted a corresponding legislative proposal two years ago, which is still being discussed in the European Parliament. After that, the EU member states still have to agree.
The AI Act is unlikely to actually come into force before the beginning of 2025. AI applications like ChatGPT were not yet on the market two years ago and will have developed further by the time the regulation takes effect in the EU, says MEP Axel Voss (CDU). “The development is – in fact – so fast that a lot of it will no longer fit by the time the law actually takes effect,” Voss told DW. He has been working on artificial intelligence for the Christian Democratic group for years and is one of the lead authors of the EU's “Artificial Intelligence Act”.
Risk classes for AI
It is questionable whether an application like the text generator ChatGPT would even be covered by the EU rules. The draft law divides AI programs into several risk categories, from “unacceptable” to “harmless”. Only applications associated with high and medium risks would be subject to special rules on documenting algorithms, transparency and the disclosure of data use. Certain applications are to be banned outright: systems that record people's social behavior in order to predict and evaluate their actions, the classification and rating of people (social scoring), and certain forms of facial recognition. The extent to which AI should be allowed to detect or simulate human emotions is still disputed. What exactly these risk classes should look like is currently being discussed in the EU Parliament.
“Actually, for competitive reasons and because we are already lagging behind, we need more optimism in order to deal more intensively with AI. But what the majority in the European Parliament is saying is that they are being guided by fears and worries and are trying to exclude everything,” says Axel Voss, the Christian Democrats’ AI expert. The data protection officers of the EU member states are calling for independent supervision of the AI applications and further data protection adjustments.
Is the ChatGPT application good or bad?
Natasha Crampton, the top AI officer at Microsoft, which backs ChatGPT maker OpenAI, has argued in interviews that regulation should concentrate on “high-risk applications”. General-purpose algorithms that can write complex texts and new software would not fall into that category. The think tank “Future of Life Institute”, which deals with AI, disagrees. Its policy director, Mark Brakel, told DW that general AI applications built on GPT also need to be regulated, since they could be built into hundreds of thousands of applications. A chatbot that imitates human communication is only a small part of the possible risky uses. “What concerns us as an organization are the much more complex risks. GPT technology has the ability to show people how to build a biological weapon in a short period of time. There is a great risk that it shakes our understanding of truth, because AI can generate large amounts of false information that looks as if it came from Deutsche Welle, for example,” said Brakel.
Seal of approval for AI: The EU wants uniform rules that do not deter companies and protect fundamental rights
Just because a technology can also be used for bad things does not mean that the entire technology should be banned. “People fail to recognize the ambiguity of digital developments,” says Axel Voss in an interview with DW. “You just have to train this algorithm further.”
The safety standards that cars have today did not exist at the beginning either, he argues. “The misuse of cars, for example as a weapon to kill someone, was never intended. But we have to reckon with the fact that many people will also use AI for negative purposes, and we'll get into an area where we have to be very careful,” says Voss.
Companies could migrate
The EU Commission and Parliament are trying to strike a balance between consumer protection, regulation, and the free development of business and research. Artificial intelligence certainly offers enormous opportunities as the driving force behind a digitized society and economy, says EU Commissioner Thierry Breton, who is responsible for industrial policy. The EU does not want to drive AI developers and providers out of Europe; on the contrary, it wants to promote them and encourage them to settle in the EU, Breton said two years ago when the AI law was presented. “We must ensure that the EU is not dependent on foreign or individual providers,” Breton demanded. The industrial data required for AI should be collected, stored and processed in the EU.
The EU's AI laws should hold companies responsible. Risk classes for certain applications are not enough, says Mark Brakel of the think tank “Future of Life Institute”: developers would have to check each individual application for its risks. “This risk management must be mandatory and the results must be published,” Brakel suggests. Sometimes companies do not even know today what their AI can produce and are themselves surprised by the results. According to Brakel, a chatbot drove a man in Belgium to suicide, and chatbots have encouraged minors to have sex with adults.
“If we are too complicated here, then companies will go somewhere else, develop their algorithms and systems there, and then come back and use us more or less just as a consumer market,” warns MEP Axel Voss, who is in charge of the future AI rules in the European Parliament.
What is striking about ChatGPT, which is causing heated debates in Europe, is that it was developed in the US for global use. OpenAI will soon face competition from other American companies such as Google and Elon Musk's Twitter, which are said to be working on chatbots of their own. The Chinese government has stipulated that Chinese companies launch AI applications with similar functions; the Chinese bot from Baidu is said to be called “Ernie”. And in Europe? According to the industry portal “futurezone”, research is planned, but no European product of its own.