We brought you word of Meta’s new artificial intelligence features, which the company plans to integrate across its platforms, most notably Instagram.
But we didn’t really say how Meta developed all of this, largely because the company hadn’t detailed it. Now we know, at least to some extent.
You see, it’s a point of debate in the community (or not, depending on where you stand), but many of these so-called AI technologies, the ones that generate images from text and the like, are typically trained on images from professional photographers of one sort or another. Naturally, there’s an ongoing lawsuit alleging that copyrighted material was used to train sophisticated AI tools. What makes Meta’s case somewhat different is that the company already has a massive user base that generates tons of media every single day.
What better way to train your future AI tools than that?
It certainly calls to mind the old adage that if you get something for free, you are the product, although there are arguments to be made about whether ecosystems like Instagram could be sustained without some give and take. To be clear, there’s nothing inherently controversial about Instagram training its AI on user images, but it certainly adds a new wrinkle to the debate around this emergent and sure-to-be-dominant trend.
Meta representative Nick Clegg told Reuters that the company will also evaluate how people make use of these AI features in order to develop future improvements. He also said the company excluded images of an overly personal or private nature, underlining that only publicly available Instagram posts, not private ones, were used for training.
What are your thoughts on AI being trained on other people’s images? Do you think this is pretty much implied when you sign up for a “free” platform like Instagram? Or should a creator’s rights be respected? Let us know your thoughts on AI and its development in the comments.
We have some other headlines for you at this link.
[Reuters]