Meta Rolling Out AI Content Label in April 2024


A few years ago, the big trend was “authenticity.”

People using phone while standing. Photo by camilo jimenez

Instagram was seen as “too artificial,” so a wave of more “authentic” experiences cropped up.

Now that whole premise is being challenged by the advent of AI-generated content, though this isn’t the first challenge to social media’s notions of what is real and genuine when it comes to media.

Some people didn’t like filters and overlays, and that was a whole other debate. But now that whatever we are looking at could come entirely from some AI process, filters seem like less of a concern. After all, deepfakes and other synthetic content could be used to spread disinformation.

Perhaps that is why Meta, the parent company of Facebook, Instagram, Threads, and WhatsApp, decided it was time to label AI-generated content on its platforms. After all, we do want to interact with other “humans” on these services to some extent, which is really what the whole authenticity debate was about in the first place.

In a blog post about its policy moving forward, Meta writes:

“We agree that providing transparency and additional context is now the better way to address this content. The labels will cover a broader range of content in addition to the manipulated content that the Oversight Board recommended labeling. If we determine that digitally-created or altered images, video or audio create a particularly high risk of materially deceiving the public on a matter of importance, we may add a more prominent label so people have more information and context. This overall approach gives people more information about the content so they can better assess it and so they will have context if they see the same content elsewhere.”

The post then goes on to explain why Meta will keep this kind of content on its platforms as long as it doesn’t violate the company’s policies around what can be posted. This is interesting given the other tack the company could take – an outright ban on all AI-generated content – but it’s probably born more out of pragmatism than any deep belief in this kind of media being the future. After all, if AI editing is extensive enough, a real photograph could be mistaken for AI-generated content, which we imagine would make that kind of verification process a real headache for Meta.

Do you think AI content should come with a label identifying it as such? Let us know your thoughts in the comments.

We have some other stories for you to read in our photography news section.

About Author

Kehl has been our staff photography news writer since 2017 and has over a decade of experience in online media and publishing. You can get to know him better here and follow him on Insta.
