Social media platforms are under a lot of pressure to police their own content – or else have the government do it for them.
Whether it is the EU’s “meme ban” law or the various accusations of fake news on Twitter and Facebook, social media’s job of connecting people with one another is being complicated by an increased need to “police” – whatever that means in this context – what their users are doing.
Of course, these companies are in the business of making money first and probably aren’t eager to step into the realm of banning and censoring content.
Not only is it a bad look from a marketing standpoint, but it also tends to deter people from participating.
Nonetheless, this reasoning is little comfort to governments and the perpetually offended who are now upset at Google division YouTube for apparently ignoring “toxic videos” that were uploaded to its platform.
And the knowledge of these videos apparently went all the way to the top, as did the call to ignore them.
Engadget reports, “YouTube leaders ignored proposals to alter recommendations to stamp out toxic videos and to tackle conspiracy theories, several former and current employees told Bloomberg. Executives were more concerned with keeping viewers engaged, according to the report.”
Not only is YouTube one of the biggest communities for video out there, it is second only to Google itself in terms of search. Analysts estimate that the division is worth some US$16 billion per year to parent company Alphabet.
YouTube, for its part, claims to have done all it can to prevent these types of videos from being uploaded to the service.
But Bloomberg reports that, “before YouTube pledged to stop recommending conspiracy videos, a privacy engineer suggested that videos skirting the edges of the site's content policies shouldn't be included in recommendations.”
As of press time, “white nationalist” and “neo-Nazi” videos are apparently still up for viewing on YouTube, but they do not receive monetization options for either the uploader or YouTube as a service.