Artificial intelligence: between absent controls and accelerated development
With the rapid development of artificial intelligence, warnings are mounting about its potential impact on the US elections. In a heated election season, fake photos of candidates and fabricated statements attributed to politicians have begun to appear, adding to fears of misinformation. OpenAI CEO Sam Altman addressed this in an open hearing before the US Congress, saying: “This is one of the things that worries me the most. These models are improving at manipulation, persuasion, and the interactive delivery of false information. We are on the verge of next year’s elections, and these models are constantly getting better.”
The program «Washington Report», a joint production of “the Middle East” and “the East”, examines how artificial intelligence can mislead American voters, the efforts to impose controls on it, and the ease with which fake news spreads through this technology.
Misleading and forgery
Hodan Omar, senior analyst at the Information Technology and Innovation Foundation, points out that AI-enabled misinformation is a real problem that needs to be addressed. She presents the problem in two parts: “On the one hand, it will lend legitimacy to things that are not true; that is, it will push people to believe things that did not happen. On the other hand, it can strip legitimacy from true things and make people disbelieve things that actually happened.” As an example from election campaigns, she notes that AI-generated media can alter candidates’ statements in a convincing way.
Ravit Totan, an AI ethics consultant, agrees with Hodan Omar, warning of the damage false information can do in an election season. Totan cited the example of fake photos showing former US President Donald Trump being arrested in the streets of New York: “If I had seen these photos five years ago, I would have believed them for sure. Many people do not realize the capability and power of this technology, and when they see these images (and I am sure similar images will be generated), it will affect their judgment. This will allow our political institutions to be destabilized from within, regardless of what external agents may do.” Totan stressed the importance of quickly imposing controls to contain the negative repercussions of this technology, adding: “We cannot sit on the sidelines, hope for the best with the technology, and then impose controls later.”
Beyond manipulated images and statements of politicians, another problem arose last week in Washington: the circulation of a fake picture of an explosion at the Pentagon spread panic and briefly sent US stock markets lower. Ryan Tarinelli, legal affairs reporter at Roll Call, spoke about the problems associated with editing programs enhanced by artificial intelligence, such as Photoshop, and about the danger of false news circulating quickly on social media without any check on its credibility. Referring to the warnings OpenAI CEO Sam Altman raised at the hearing, Tarinelli said: “We saw Altman refer to these issues, where he mentioned Photoshop. There was concern that people would not be able to distinguish truth from falsehood with programs like Photoshop, but he seemed optimistic about the public’s ability to distinguish what is real from what is not. Lawmakers, however, are skeptical about this and seek to impose controls.”
Controls and solutions
The call for government controls came from Altman himself, who asked lawmakers for help at the Senate Judiciary Subcommittee hearing. Tarinelli noted that what was remarkable about the hearing was the legislators’ effort to find solutions. Among these is the creation of an independent government agency to regulate and oversee artificial intelligence, and lawmakers are deliberating a bill that would require clear graphic labels on political advertisements produced with artificial intelligence.
Totan points to a suggestion that might solve part of the problem: marking content in a way that helps establish the authenticity of a text or image. She explains: “We are used to seeing advertisements from politicians during the election period, and at the end of the advertisement the candidate declares: I approve this message. I think we need something like that for text and images, and it has to be integrated into the technology itself, i.e. in cameras, for example. What if the metadata of the photo included, for example, a tag that says: this photo was taken by my phone? I would, in a sense, be adding my signature to it. Then we have information about the source of the image.”
Totan believes the most appropriate way to confront counterfeit content is to mark the real content rather than the fake content, noting that once the source of an image or news item is known, it becomes easier to judge its credibility.
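The provenance idea described above, a capture device attaching a verifiable signature to an image's metadata, can be sketched in a few lines. This is a minimal illustration, not any real standard: the key name, device ID, and functions are all hypothetical, and a production scheme (such as industry content-provenance standards) would use public-key signatures held in secure hardware rather than a shared secret in application code.

```python
import hmac
import hashlib

# Hypothetical device secret; in a real camera this would live in
# secure hardware, never in application code.
DEVICE_KEY = b"example-device-secret"

def sign_capture(image_bytes: bytes, device_id: str) -> dict:
    """Produce a provenance record to embed in the image's metadata."""
    digest = hmac.new(DEVICE_KEY, image_bytes, hashlib.sha256).hexdigest()
    return {"device_id": device_id, "signature": digest}

def verify_capture(image_bytes: bytes, record: dict) -> bool:
    """Check that the image still matches the signature in its metadata."""
    expected = hmac.new(DEVICE_KEY, image_bytes, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, record["signature"])

photo = b"\x89PNG...raw image data..."
record = sign_capture(photo, device_id="phone-1234")

print(verify_capture(photo, record))            # unaltered image -> True
print(verify_capture(photo + b"edit", record))  # tampered image -> False
```

The design choice mirrors Totan's point: the burden of proof sits on authentic content, so anything lacking a valid signature is simply unverified, whatever tools produced it.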
While lawmakers consider solutions and proposals to impose controls, including a proposal by Senate Majority Leader Chuck Schumer that is still being drafted, some fear that the speed of the technology’s development exceeds legislators’ ability to act. Tarinelli, who covers Congress regularly, said: “There is a realization that the Senate is a slow institution, especially when it comes to technology issues. This puts it at a disadvantage against a fast-moving technology.” Tarinelli recalled the broad congressional consensus on the need to reform social media, on which no decision has yet been taken.
Among the proposals presented, Elon Musk called for a six-month freeze on artificial intelligence programs to allow controls to be enacted and imposed. Hodan Omar rejected this proposal as impractical: “What are we going to do in those six months?” she asked. “It is true that policy moves at a slow pace, but what is the goal of the six months? Why this time frame? From a practical point of view, I do not see that this brings us closer to the result we want to see, which is the establishment of targeted controls that deal with the real damage.”
Perhaps the most prominent problem created by the manipulation of news through artificial intelligence is that it casts suspicion on real news. Totan speaks from personal experience: “This is really starting to happen. As an individual and as an informed consumer who knows how far these technologies have developed, I am now skeptical of every picture I see and every text I read. And I think we should be skeptical, as responsible people, and this is what changes the game for the election system.”
Loss of jobs
OpenAI, the maker of ChatGPT, estimates that about 80% of the US workforce could have at least 10% of their work tasks affected by this technology, while around 19% could see at least half of their tasks affected. Members of Congress have expressed concern about the impact of the technology on jobs. Tarinelli points out that this concern was evident in the hearing that hosted Altman: “It is a natural concern in any kind of industrial revolution and its potential impact on jobs.”
Tarinelli discussed Altman’s statements acknowledging that jobs will be lost, while also pointing to the new jobs the technology will create: “Technology-sector leaders emphasize the positive side of creating new job opportunities, but there is no doubt that this issue will be a major factor in the future, especially in shaping regulatory policies.”
Amid the debate and questions about the negative impact of artificial intelligence, Hodan Omar notes its positive aspects in research and daily life: “Of course, there are many positive aspects of artificial intelligence, and I think that we as consumers may not know that there are interesting things in which artificial intelligence can be used. For example, as we walk down the street, we don’t think about the transportation department using artificial intelligence to identify potholes and repair infrastructure. I think one of the reasons the debate about the negatives of artificial intelligence is so heated is that consumer-facing applications such as ChatGPT are things we can see and interact with, in contrast to the positive uses we cannot see.”