Meta announced that it will begin labeling artificial intelligence (AI)-generated images across all of its platforms, including Facebook, Instagram, and Threads. The announcement, made on February 6, came just a day after the company's Oversight Board said Meta should update its policy on AI-generated content to focus on preventing the harm it can cause. The board's recommendation came in response to a complaint about a digitally altered video of US President Joe Biden that surfaced online. Meta said that while it already labels photorealistic images created by its own AI models, it will now work with other companies to label all AI-generated images shared on its platforms.
In a newsroom post on Tuesday, Nick Clegg, Meta's president of global affairs, outlined the need to label AI-generated content to protect users and curb disinformation, and said the company has begun working with industry players on a solution. “We are working with industry partners to align on common technical standards that indicate when a piece of content is created using AI,” he said. The social media giant also revealed that it can currently label images from Google, OpenAI, Microsoft, Adobe, Midjourney, and Shutterstock, and that it labels images created by its own AI models as “Imagined with AI.”
To correctly identify AI-generated images, detection tools need a common identifier across all such images. Many companies working with AI have begun adding invisible watermarks and embedding information into image metadata to make clear that the images were not created or captured by humans. Meta said it is able to detect AI images from the companies listed above because they follow industry-approved technical standards.
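As a rough illustration of how a metadata-based check might work, the sketch below looks for the IPTC `DigitalSourceType` property, a real industry convention for declaring AI-generated imagery. The dict stands in for metadata parsed from an actual image file, and the exact fields a platform like Meta inspects are an assumption here, not something the announcement specifies.

```python
# Illustrative sketch only: checking parsed image metadata for an
# AI-provenance marker. The IPTC DigitalSourceType vocabulary includes
# "trainedAlgorithmicMedia" for fully AI-generated images.
AI_SOURCE_TYPES = {
    "http://cv.iptc.org/newscodes/digitalsourcetype/trainedAlgorithmicMedia",
}

def is_ai_generated(metadata: dict) -> bool:
    """Return True if the metadata declares the image as AI-generated."""
    source_type = metadata.get("Iptc4xmpExt:DigitalSourceType", "")
    return source_type in AI_SOURCE_TYPES

# Metadata as an AI image generator might write it (hypothetical sample).
sample = {
    "Iptc4xmpExt:DigitalSourceType":
        "http://cv.iptc.org/newscodes/digitalsourcetype/trainedAlgorithmicMedia",
}
print(is_ai_generated(sample))  # True
print(is_ai_generated({}))      # False
```

The weakness Meta points to is visible even in this toy version: the check only works if the generator wrote the marker in the first place, and stripping the metadata field defeats it entirely.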
This approach has limitations, however. First, not every AI image generator uses such tools to make it obvious that its images are not real. Second, Meta has found that invisible watermarks can be removed in many ways. To address this, the company said it is working with industry partners on watermarking technology that cannot be easily stripped out. Last year, Meta's AI research wing, Fundamental AI Research (FAIR), announced that it was developing a watermarking mechanism called Stable Signature, which embeds markers directly into the image generation process. Google's DeepMind has also released a similar tool called SynthID.
This only covers images, however. AI-generated audio and video have also become common. Addressing this, Meta acknowledged that comparable detection technology for audio and video does not yet exist, although development is underway. Until such content can be automatically detected and identified, the tech giant has added a feature on its platforms for users to disclose when they share AI-generated video or audio. Once disclosed, the platform will add a label to the content.
Clegg also highlighted that if people fail to disclose such content and Meta later finds that it was digitally created or altered, the company may apply penalties to the user. Additionally, if the shared content is high-risk in nature and could deceive the public on important matters, Meta may add an even more prominent label to give users more context.