Meta Will No Longer Make Visible Warning Labels For AI-Edited Images

The social network said it worked with companies across the industry to improve its labeling process and that it's making these changes to "better reflect the extent of AI used in content."


Starting next week, Meta will implement a significant change in how it labels AI-edited images on Facebook.

The company has decided to remove the easily accessible labels that indicated whether an image had been modified using AI tools. This decision raises concerns about the transparency of content shared on the platform.

While Meta will continue to inform users about potential AI modifications, the new process requires additional steps. Users will now have to tap the three-dot menu in the upper right corner of a post, then scroll down to find “AI info” among a sea of other options. This extra friction could obscure crucial information about whether an image has been modified.

This shift in labeling comes at a time when the prevalence of doctored images is increasing, particularly as election seasons approach.

Although images generated entirely with AI tools will still prominently display the “AI info” label, it remains unclear how reducing the visibility of labels on AI-edited images will affect the public’s ability to judge the authenticity of what they see shared.

The potential for confusion and misinformation has prompted widespread concern among industry professionals and users alike.

