Artificial intelligence isn’t going anywhere; in fact, it’s about to be absolutely everywhere.
And some people want us to know when we’re viewing something born of a silicon imagination.
That’s why the folks over at Meta are promising to label AI-generated imagery on the company’s platforms moving forward.
Going hand-in-hand with the broader drive towards “authenticity” – and with preserving the central conceit of platforms like Instagram, that of “capturing” reality – Meta’s policy isn’t really that shocking and, for photographers, is probably refreshing. After all, real photographs can hardly compete, in terms of sheer drama, with AI-generated vistas and horizons that don’t exist, many featuring lighting and weather from another world.
But wait, you might ask, isn’t Meta itself developing a bunch of AI tools up to and including image generators?
Yes, and part of that work includes identifying so-called “markers” of AI-generated imagery.
“We’re building industry-leading tools that can identify invisible markers at scale – specifically, the ‘AI generated’ information in the C2PA and IPTC technical standards – so we can label images from Google, OpenAI, Microsoft, Adobe, Midjourney, and Shutterstock as they implement their plans for adding metadata to images created by their tools.”
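To make the idea of “invisible markers” concrete, here is a minimal sketch of what marker-based detection can look like. It assumes the generator embedded the IPTC `trainedAlgorithmicMedia` digital-source-type term in the image’s XMP metadata, or a C2PA manifest in a JUMBF box labeled `c2pa`; a production checker (like the ones Meta describes) would parse the metadata structures properly, for example with a C2PA SDK, rather than scanning raw bytes.

```python
# Hypothetical sketch: scan an image's raw bytes for AI-provenance markers.
# IPTC_AI_TERM is the IPTC DigitalSourceType vocabulary term for media
# "created by an algorithm"; C2PA_BOX_LABEL is the label C2PA uses for its
# manifest JUMBF boxes. A real tool would parse XMP/JUMBF, not substring-match.

IPTC_AI_TERM = b"trainedAlgorithmicMedia"
C2PA_BOX_LABEL = b"c2pa"

def find_ai_markers(image_bytes: bytes) -> list[str]:
    """Return the names of any AI-provenance markers found in the bytes."""
    found = []
    if IPTC_AI_TERM in image_bytes:
        found.append("IPTC trainedAlgorithmicMedia")
    if C2PA_BOX_LABEL in image_bytes:
        found.append("C2PA manifest box")
    return found
```

The obvious weakness, and the reason platforms treat this as a labeling aid rather than proof, is that metadata like this is trivially stripped when an image is re-encoded or screenshotted.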
Meta acknowledges that much of this work comes at a time when “deep fake” content is weaponized towards one end or another. Expecting this kind of thing to increase in the future, Meta casts these developments as part pragmatic preservation of its platforms’ authenticity, part heading off a potential credibility problem for those same platforms at the pass. What will be curious to see is how robust these detection tools are when it comes to photographs that are real yet edited via AI tools.
Any thoughts you might have on AI-generated images or even AI-assisted workflows are welcome in the comments below.
We have some more headlines for you at this link.