The public letter signed by tycoon Elon Musk and hundreds of experts calling for a pause in the development of Artificial Intelligence (AI) sparked a debate among academics on social networks this Thursday.
The rapid advances in AI led Musk, founder of the electric car giant Tesla and current owner of Twitter, to sign the letter, in the face of what he calls the “dramatic economic and political disruption (especially for democracy) that AI will cause”.
Although the letter, published on the website futureoflife.org, was signed by independent thinkers such as historian Yuval Noah Harari and Apple co-founder Steve Wozniak, some academics have objected to what they consider a misrepresentation of the debate.
Timnit Gebru, a researcher specializing in the ethics of Artificial Intelligence, co-authored a scholarly article cited in the letter and expressed dissatisfaction with how it was used.
“They basically say the opposite of what we said and quote our article,” she criticized. The article’s co-author, Emily Bender, described the open letter as a “mess”.
“Recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one, not even their creators, can understand, predict, or reliably control,” write Musk and the other signatories.
Sam Altman, the head of OpenAI, which developed ChatGPT, has admitted that he is “a little afraid” that his algorithm could be used for “wide-scale disinformation, or for cyber-attacks”.
Academics Gebru and Bender state, however, that the danger of AI is “the concentration of power in the hands of people, the reproduction of systems of oppression, the damage to the information ecosystem”.
One of the signatories of the open letter, Emad Mostaque, founder of the British company Stability AI, considered a six-month moratorium inappropriate. “I don’t think a six month hiatus is the best idea,” he said on Twitter.
No contradictions
Gary Marcus, a psychology professor and signatory of the letter, countered that “skeptics should sound an alarm; there is no contradiction in that regard”.
While industry giants like Google, Meta and Microsoft have spent years researching AI-powered programs to speed up tasks like translation or targeted advertising, it’s algorithms from companies like OpenAI that have caused controversy.
Its conversational bot ChatGPT, capable of holding complex conversations with humans, has just been updated with an even more powerful version, GPT-4.
“Should we let machines flood our information channels with propaganda and lies? Should we automate away all jobs? (…) Should we risk losing control of our civilization? Such decisions should not be delegated to unelected tech leaders,” say the signatories of the letter.
Source: JN