New AI platforms are popping up every single day, yet the question of how they gained their abilities remains, in some instances, unanswered.
While some companies are upfront about what their data policies are, others are a little more opaque.
Either way, that hasn’t stopped some pretty massive lawsuits from being filed.
If you want to protect your content, you may want to "request" to be excluded from this training, according to a recent article in Wired.
The link for the Facebook page to do just that is located here.
Wired points out that this doesn’t remove any information Meta already has on you from your use of the company’s apps, but it does stop the company from using outside, third-party data about you:
“It’s important to point out that this form does not pertain to the gobs of personal information Meta has already collected from you on its platforms; it only applies to outside data the company may bring in to beef up its generative AI. This outside data could include stuff that’s elsewhere on the internet as well as data purchased from third-party data brokers.”
The use of data, whether consented to implicitly through end-user agreements or not, is one of the major controversies of our time, and it is an issue not only in digital media but in other creative avenues as well. We’ve covered some of the ins and outs of the issue and tried to posit what AI means for the “art” side of media. What’s particularly important to note is that we are at the very onset of this revolution, and the level of sophistication we are already seeing is somewhat mind-blowing. Imagine where it will be in ten years, or even twenty.
Any thoughts on AI training on others’ content are welcome in the comments.
We have some more headlines for you to read at this link right here.