A few years back, deepfakes seemed to be the concern du jour.

Now they’re back with a vengeance although, this time around, they’re part of a broader generative AI wave that’s taking multiple industries by storm.
The sophistication of these models is impressive across the board. What’s striking in this particular case is the model’s ability to generate so much from so little.
ByteDance, the company behind TikTok and other popular social media platforms, unveiled a generative AI that needs little more than a single photograph and a brief sound clip to make a realistic video from both.
Some of the examples offered by the company include a video of Einstein giving a speech he never gave and popstar Taylor Swift doing the same. The model, called OmniHuman-1, was trained on some 19,000 hours of video, TechCrunch reports, and has the ability to edit an existing video and change it quite significantly in addition to making things from whole cloth.
Naturally, concerns about the provenance of training material and the end uses this will all be put toward remain at the top of everyone’s list of grievances when it comes to generative AI. The key problem is that, as these models get better, it gets harder for us to detect the fakes. In other words, the better generative AI becomes, the worse things become for those of us who appreciate reality. There are further questions about the kinds of content generative AI will be able to create and which forums are appropriate for it. For example, generative AI media on social platforms would seem to defeat the entire purpose of the app. We’ve joked for a long time that everything on social media is fantasy – it actually being fantasy is a whole other thing, though.
Any thoughts that you might have on deepfakes are welcome in the comments.
Don’t forget to check out some other articles at this link.