Three experts in deepfake-detection software testified before Congress on the rise of malicious deepfakes: misleading audio, video, or images created or edited with AI. Deepfakes can be used for harms including political disinformation and sexual abuse. The Senate Judiciary Subcommittee on Privacy, Technology, and the Law focused on how Congress can regulate AI to combat the spread of deepfakes, and Sen. Richard Blumenthal emphasized the need for independent testing of AI systems before release, along with potential penalties for misuse.

The hearing discussed examples of deepfakes, such as the voice-cloned audio impersonating President Joe Biden that was sent to voters in a robocall ahead of the New Hampshire primary. Other deepfakes “undress” real photos of women and girls, superimposing fake nudity onto images of their real faces. Witnesses stressed that political deepfakes interfering with elections are a real and current problem, not a hypothetical concern for the future. Zohaib Ahmed, Ben Colman, and Rijul Gupta, CEOs of deepfake-detection software companies, testified about the challenges of detecting and combating deepfakes.

Despite some states passing laws to ban deepfake election interference or to give victims of deepfake sexual abuse a path to civil litigation, federal legislation on these issues has stalled in Congress. Sen. Josh Hawley urged bipartisan action to address the dangers of AI technology released without proper regulation or safety features. The use of deepfakes in political campaigns, as in the New Hampshire Biden robocall, raises concerns that elections could be manipulated through misinformation.

NBC News reported that the voice-cloned audio used in the Biden robocall was created by a street magician with ties to a rival Democratic campaign. The audio was made with software from ElevenLabs, a voice-cloning tool accessible to anyone, raising concerns about how easily misleading content can be created and disseminated. Colman emphasized that people creating deepfakes often do not follow rules or guidelines, making their spread difficult to prevent. Watermarking, which embeds a verifiable signal in audio and video when it is generated, has been proposed as one way to establish the authenticity of content.

The experts’ testimony underscored the urgent need for regulation and oversight of AI technology to address the proliferation of deepfakes. The damage malicious deepfakes can do to political campaigns, elections, and individual reputations is a pressing concern that requires bipartisan action from Congress. As the technology advances, deepfakes that manipulate public perception and influence decision-making pose a significant risk to democracy and society. By closing the regulatory gaps around AI and implementing safeguards against deepfake misuse, policymakers can help protect the integrity of information and sustain trust in the digital landscape.
