Meta, the owner of popular social media platforms including Facebook, Instagram, WhatsApp, and Threads, announced plans to expand its efforts to label content that has been manipulated or generated by artificial intelligence. The decision responds to growing concerns about the pervasiveness of AI-generated content and its potential to mislead the public. The company will now label video, audio, and images as “Made with AI” when its systems detect AI involvement or when creators disclose it during upload. It may also add a more prominent label to content that poses a high risk of deceiving the public on important matters.

The tech industry as a whole is grappling with the challenges posed by AI-generated content. Tools like OpenAI’s Sora have made it possible to create lifelike videos with AI. The potential for misuse has already been demonstrated: a political consultant used AI to create robocalls mimicking President Joe Biden’s voice that discouraged people from voting. With the 2024 presidential election approaching, experts predict an increase in AI disinformation campaigns. In response, companies like TikTok and YouTube are also working on tools to identify and label manipulated content.

In a survey conducted by Meta, 82% of over 23,000 respondents in 13 countries expressed support for labels on AI-generated content that depicts people saying things they did not actually say. Meta has committed to enforcing its rules against AI-generated content that violates its Community Standards, including policies on voter interference, bullying, harassment, violence, and incitement. The company emphasized the importance of giving users more information about the content they see online to help them assess its credibility and context.

The decision to label AI-generated content reflects Meta’s efforts to balance transparency with the need to protect freedom of expression online. The company aims to provide users with more context about the content they encounter on their platforms, especially with the increasing sophistication of AI technologies. By identifying and labeling AI-powered content, Meta hopes to reduce the potential for confusion and deception among its users, particularly in the lead-up to important events like elections. This move aligns with similar initiatives by other social media companies to address the challenges posed by AI-generated content and deepfakes.

As part of its efforts to combat harmful AI-generated content, Meta will remove any content that violates its policies, regardless of whether it was created by AI or a person. This includes content that promotes voter interference, bullying, harassment, violence, or incitement. The company’s decision to enforce these rules reflects a broader industry trend toward increased transparency and accountability in handling AI technologies. It also underscores the ongoing need for vigilance in addressing the risks posed by the proliferation of AI-generated content across digital platforms.