TikTok announced that, to combat misinformation, it will begin labeling content made with artificial intelligence when that content is uploaded from outside its platform. The company says AI opens up incredible creative opportunities but can be confusing or misleading if viewers don’t know the content was AI-generated. The policy change is part of a broader push across the technology industry to put safeguards around AI usage, with companies such as Meta and Google implementing similar measures. Digital watermarking and labeling of AI-generated content were also called for in an executive order signed by U.S. President Joe Biden last year.

TikTok has partnered with the Coalition for Content Provenance and Authenticity (C2PA) to use its Content Credentials technology, which attaches metadata to content that can be used to recognize and label AI-generated material. The technology has already been deployed for images and videos and will soon extend to audio-only content, helping people understand when, how, and where a piece of content was made or edited. By reading the Content Credentials attached to uploads, TikTok aims to identify AI-generated material automatically and give users that context; other platforms that adopt Content Credentials will be able to apply the same automatic labels.
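
For readers curious about what this metadata looks like in practice, the sketch below illustrates the general idea in Python: a manifest of "assertions" records how a piece of content was produced, and a platform can inspect it to decide whether to show an AI-generated label. The field names loosely follow the public C2PA specification, but this is a simplified, hypothetical illustration; real Content Credentials are cryptographically signed and embedded in the media file rather than passed around as plain JSON.

```python
import json

# Simplified, hypothetical stand-in for a Content Credentials manifest.
# Real C2PA manifests are signed and embedded in the media file itself;
# here we only model the "assertions" that describe how content was made.
EXAMPLE_MANIFEST = json.dumps({
    "claim_generator": "SomeImageGenerator/1.0",  # hypothetical tool name
    "assertions": [
        {
            "label": "c2pa.actions",
            "data": {
                "actions": [
                    {
                        "action": "c2pa.created",
                        # IPTC term used to mark media produced by a
                        # generative model.
                        "digitalSourceType": "trainedAlgorithmicMedia",
                    }
                ]
            },
        }
    ],
})


def was_ai_generated(manifest_json: str) -> bool:
    """Return True if any recorded action marks the content as AI-generated."""
    manifest = json.loads(manifest_json)
    for assertion in manifest.get("assertions", []):
        for action in assertion.get("data", {}).get("actions", []):
            if action.get("digitalSourceType") == "trainedAlgorithmicMedia":
                return True
    return False


if __name__ == "__main__":
    label = "AI-generated" if was_ai_generated(EXAMPLE_MANIFEST) else "no label"
    print(label)  # -> AI-generated
```

In a pipeline like the one TikTok describes, a check along these lines would run at upload time, and a positive result would trigger the visible label shown to viewers.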

Using Content Credentials to identify synthetic media and convey that information directly to audiences is a meaningful step toward AI transparency, according to Claire Leibowicz, head of the AI and Media Integrity Program at the Partnership on AI. TikTok is the first video-sharing platform to implement Content Credentials and has joined the Adobe-led Content Authenticity Initiative to push for wider adoption of the credentials across the industry. The move aims to increase transparency online and help users navigate an increasingly AI-augmented world.

TikTok already encouraged users to label content that had been generated or significantly edited by AI, and it required labels on AI-generated content containing realistic images, audio, or video. The platform says it wants users to be able to distinguish fact from fiction while still harnessing the creative opportunities AI offers. The new policy was announced on ABC’s “Good Morning America” and comes amid an ongoing legal battle over TikTok’s future in the United States. Just two days earlier, TikTok and its Chinese parent company, ByteDance, filed a lawsuit challenging a new American law that would ban the platform unless it is sold to an approved buyer. TikTok argues that the law unfairly singles out the platform and amounts to an attack on free speech; if the challenge fails, the app could face a shutdown next year.

Adam Presser, TikTok’s head of operations and trust and safety, highlighted the excitement among users and creators about AI’s capabilities while emphasizing the need for transparency about when content is AI-generated. The company’s partnership with the Coalition for Content Provenance and Authenticity and its adoption of Content Credentials are meant to give users information about where content came from and whether AI was used. With these measures, TikTok hopes to build a more transparent environment that strengthens trust and helps users navigate an increasingly AI-driven digital landscape.
