The question of where content is coming from these days is not a trivial one.
With sophisticated AI able to generate text, images, and videos from simple prompts, it is no wonder that people are even more skeptical than before when encountering things on the Internet.
Aside from the obvious implications for people in our industry who make a living from commercial photography, consumers also have a right to know whether or not what they are looking at is real.
This is where provenance credentials come into the picture, such as Content Credentials built on the C2PA standard, developed by the Coalition for Content Provenance and Authenticity alongside Adobe’s Content Authenticity Initiative (CAI).
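For a rough sense of how these credentials travel with a file: C2PA embeds a signed manifest inside the image itself (for JPEGs, in JUMBF boxes labeled `c2pa`). The sketch below is only a crude byte-level check for that label, not a verifier; real validation requires parsing the manifest and checking its cryptographic signatures, and the exact embedding details here are an assumption based on the C2PA spec rather than anything from this article.

```python
def appears_to_have_c2pa(path: str) -> bool:
    """Crude heuristic: scan a file's raw bytes for the 'c2pa' JUMBF
    label that C2PA manifests use when embedded in an image.

    NOTE: This is an illustrative sketch, not verification. It cannot
    tell a valid signed manifest from a stray byte sequence, and it
    says nothing about whether the credential is authentic.
    """
    with open(path, "rb") as f:
        data = f.read()
    return b"c2pa" in data


if __name__ == "__main__":
    # Hypothetical usage: point it at any local image file.
    print(appears_to_have_c2pa("example.jpg"))
```

A real implementation would use dedicated C2PA tooling rather than a byte scan, since manifests can be detached, redacted, or tampered with in ways a heuristic like this cannot see.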
Google reports that it will start making these credentials part of the verification process for images used in ads on its network, saying that this “kind of information helps our users make more informed decisions about the content they’re engaging with — including photos, videos and audio — and builds media literacy and trust.”
Of course, it might not go as far as some people would like. Some have even suggested that AI content should carry a bold label, like the warning on a cigarette package, but that seems unlikely at this point. The argument for such a strong system is that AI can render strikingly realistic images that blur the line between showcasing an actual product and presenting a pure fantasy. These platforms are also improving every single day, which makes automatic detection of unlabeled content ever more difficult. In short, laws and norms are not keeping pace with the rate of change in this area, and that is worth worrying about whether you are pro-AI or not.
Any thoughts on AI-generated content in the advertising space are welcome in the comments.