A major concern for photographers and videographers, among others, is that artificial intelligence is going to steal their work first and then put them out of work.
Depending on where you stand in the debate, this is either a moment of damaging upheaval or one of enormous potential. But while people disagree over whether AI is a good or bad thing for creators, there is some consensus that content generated this way needs clear labeling to distinguish it from the creative works of humans.
Meta, the parent company of Facebook and Instagram, among others, has heard this loud and clear and is reportedly developing a system for detecting and labeling AI-generated content on Instagram.
That makes sense given that the platform’s raison d’être is ostensibly rooted in original photos and videos, something AI would undermine on many levels. But it also raises the question of what kind of future platforms like Instagram and its rival TikTok face when anything can be generated from a text description.
Beyond questions about authenticity, there is also the modern epidemic of misinformation and “deep fakes.” Giving AI content a watermark or distinguishing label to separate it from organic media would help tame this unruly aspect of artificial intelligence as things currently stand. Whether any of that actually helps people discern what is real is still up for debate, but at minimum an effort probably should be made one way or the other.
Do you think identifying and labeling AI content will help differentiate it from “real” photos and videos, or will the distinction ultimately be meaningless in the eyes of most viewers? Let us know in the comments.
[Engadget]