Girl from the Black Mirror episode 'Arkangel', receiving a visual safety implant


YouTube & The End of History

YouTube and sites like Facebook, Twitter and Reddit are the public squares of our time. They provide the platform for so much discourse today that exclusion from them is suffocating enough to silence. As private companies, unburdened by the forced liberalism of legally protected free speech, their content policies wield great power and dictate much of what we do and don't see online.

There is lively debate over the question of free speech – is Libertarian free-speech absolutism or Progressive protective censorship the more desirable policy? This is essentially a debate framed around one's interpretation of the word freedom – is it freedom from, or freedom to?

This problem becomes pronounced when we introduce advertising. Advertisers don't want to be associated with what they see as divisive, shocking or unpopular content. When advertisers are given the chance to exclude their ads from appearing alongside these subjects, almost all take it – after all, who wants some wisecrack on Twitter @ing you to ask why your product was just advertised before an objectionable video when others' were not? This in turn encourages these platforms to discourage content that hurts their bottom line, and that in turn develops a tendency towards the seemingly mundane yet nefarious 'Advertiser Friendliness'.

YouTube has the particular problem of being both ad-revenue supported and a platform that shares some of that revenue with its content creators. This complicates YouTube's relationship with both parties. On the one hand, they rely entirely on the content these creators generate to have a platform at all; on the other, they rely on advertisers being willing to spend considerable amounts of money running ads to keep the ship afloat. Sharing ad cash with creators is to do business with them, and once you begin to exercise discretion, all future relationships are fair game for scrutiny.

I can just tell you categorically that there is no list of words or keywords or terms or anything like that, that is going to go into our classifiers making an a-priori decision about whether a video is monetised or not.

And, while we may have made mistakes in the past, the nature of our algorithms is that they continue to learn from those mistakes, continue to get better, and we also have programs in place – and Google is striving to be an industry leader in this area of things like Machine Learning fairness.

¿Qué está haciendo mal YouTube? | Entrevista con un jefe de YouTube
What is YouTube doing wrong? | Interview with a YouTube boss
Luisito Comunica
https://youtu.be/sGBr0Bl8y2U

This quote strongly suggests these moderation algorithms are reactive, learning models, presumably learning from user reports and input, and probably also from staff. That they do not have an a-priori blacklist of words does not mean the models do not form such a list themselves. And if Google leads the way in Machine Learning Fairness, it is perhaps only insofar as being the prime example of how not to do it.
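To illustrate the point, here's a minimal sketch – emphatically not YouTube's actual system – of how a simple classifier trained only on user reports can end up encoding a de facto keyword blacklist without ever being given one. The training data here is invented for illustration:

```python
# A minimal sketch, NOT YouTube's actual system: a classifier trained
# only on user reports can still end up encoding a word blacklist.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Hypothetical training data: transcripts labelled 1 if users reported
# the video, 0 otherwise.
transcripts = [
    "today we cover the battle of stalingrad and the nazi defeat",
    "handling uranium and mercury safely in the laboratory",
    "top ten cute cat moments compilation",
    "my favourite recipes for summer picnics",
]
reported = [1, 1, 0, 0]

vectoriser = CountVectorizer()
X = vectoriser.fit_transform(transcripts)
model = LogisticRegression().fit(X, reported)

# No word list was supplied a priori, but ranking the vocabulary by
# learned weight recovers one: the top terms act as a blacklist.
words = vectoriser.get_feature_names_out()
weights = model.coef_[0]
print(sorted(zip(weights, words), reverse=True)[:5])
```

With only four examples the weights are noisy, but at scale the effect is the same: whatever vocabulary happens to correlate with reports becomes, functionally, a blacklist.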

Should be fine, right? Well, achieving this with no false positives nor false negatives is an impossible task. Understanding something as complex and subjective as context alone requires at least human-level intelligence, as even we can't agree amongst ourselves. Examples of false positives – cases where legitimate content creators have their work censored – aren't hard to come by, and it seems YouTube is not handling the appeals process well, being frequently accused of persistent, skittish unfairness and deafness to the exasperated protests of creators and their fans, the very people the platform relies upon to exist.

In pursuit of an online bubble of safe, advertiser-friendly content, free from affront, there will inevitably be unintended casualties. Given the scale and speed of content hitting these platforms, content moderation cannot be done by humans alone and must be done by machines. YouTube do say publicly that they use 'algorithms' to aid in content moderation. This is mostly due to the scale of the moderation problem: YouTube claims there are over 500 hours of video uploaded every minute, so you'd need a moderation team of around 33,000 people working 24/7 to view and moderate all of it manually. There are certainly humans in the loop somewhere, but they'll only handle a fraction of appeals. Blacklists of undesirable topics and phrases will be built, and machine learning models will process information about the content, such as the transcript of what's said and even the imagery in the video, determining its content-worthiness and in the end spitting out a simple yea or nay.
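As a sanity check on that head count, here's the back-of-envelope arithmetic, assuming each moderator watches exactly one minute of video per minute, with no breaks, shift handovers or double-checking:

```python
# Back-of-envelope check of the moderator head count, assuming
# real-time viewing with no breaks and no double-checking.
hours_uploaded_per_minute = 500                        # YouTube's own figure
minutes_uploaded_per_minute = hours_uploaded_per_minute * 60
moderators = minutes_uploaded_per_minute               # 1 min viewed per min each
print(moderators)  # 30,000 watching 24/7; ~33,000 allows a little slack
```

And that is the absolute floor: any time spent actually making and recording moderation decisions, rather than just watching, pushes the number higher still.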

Educational creators on YouTube, covering topics such as Science, History and Politics, are particularly afflicted. This is due both to the subject areas they cover and to the amount of time they have to invest in creating their video content. The soft threat of demonetisation, as opposed to outright removal, may seem like a fair compromise, but this is not so. Demonetisation hurts genuine content creators much more than those looking to shock or disrupt – it not only stops the stream of ad revenue to the creator, but as the video now just costs YouTube money to deliver, it vanishes from recommendations, having much the same effect as complete removal – so a more appropriate name may be Video Suppression. Loss of livelihood and followers are things we can safely assume won't faze the ephemeral provocateurs and shitposters.

We would expect videos covering war history, for instance, to be laced with words such as battle, fight, war, kill, capture, suicide, bomb, Nazi, fascist and Holocaust. Videos covering scientific topics can also be struck from the platform because they demonstrate dangerous procedures, or perhaps even because they contain 'nasties' such as Lead, Uranium, Mercury and Cyanide – all of which can be safe when handled correctly, and have no good reason for being excluded from a chemist's repertoire. I hope I don't need to point out that discussion of differing politics causes some to lash out and demand censorship. While YouTube does publish policy pages – the YouTube Community Guidelines (for removal from the site) and the Advertiser-friendly content guidelines (for demonetisation) – YouTube's track record of fairly and consistently applying these guidelines is patchy at best.
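To see why context-blind matching punishes exactly this material, consider a toy filter built from an assumed blacklist – the words are illustrative, not YouTube's real list:

```python
# Toy context-blind keyword filter; the blacklist is illustrative,
# not YouTube's real list.
BLACKLIST = {"battle", "war", "kill", "bomb", "nazi", "holocaust",
             "lead", "uranium", "mercury", "cyanide"}

def flagged_words(transcript: str) -> set[str]:
    """Return blacklist hits, ignoring all context."""
    return {w for w in transcript.lower().split() if w in BLACKLIST}

history = "the battle of kursk was the largest tank battle of the war"
chemistry = "mercury is safe to handle with gloves and good ventilation"
print(flagged_words(history))    # {'battle', 'war'} -> demonetised
print(flagged_words(chemistry))  # {'mercury'} -> demonetised
```

Both transcripts get flagged despite being educational; nothing in the filter can tell a history lesson from incitement, or a lab safety demonstration from a poisoning guide.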

YouTube just wants to be like a traditional television channel, where they have late night hosts, and no swearing, and no controversial content, and that kind of thing. […] They added to their rules that you're not allowed to talk about war, you're not allowed to talk about battles, you're not allowed to talk about anything like that. […] I don't see how a channel that is all about Military History is supposed to continue.

In self-defence, creators have taken to blurring swastikas or swapping them for Iron Crosses, and to using coded language to avoid uttering the new N and F words for fear of automated detection – see, for example, Anti-Centrism: Extreme Niche Political Ideologies (Absolutist Post-Left Hoppean Neoaccelerationism) by Jreg (30th July 2019, https://youtu.be/kkufPSw_eMU) and WW2 - OverSimplified (Part 1) by OverSimplified (15th March 2018, https://youtu.be/_uk_6vfqwTA?t=225). The accurate telling of history is being self-censored to placate the moderation overlords, and creators' willingness to keep covering the more difficult topics in the face of this constant struggle against the platform is diminishing.

History is surely the most powerful tool in first understanding, and then combating dangerous and violent ideologies. As the saying goes, those that do not learn history are doomed to repeat it. Is the use of coded language what we really want to be encouraging?

It’s almost like censoring specific words independent of context creates a culture of fear around that idea and makes the idea itself a protected class, making the people that believe in the idea forced [sic] to live in an echo chamber of their own beliefs where they use coded language to get around algorithms like this.

Jreg

Some creators have found limited success with alternative sources of income like Patreon, selling merchandise, and handling advertising themselves through brand deals and sponsored videos. This seems a reasonable reaction to the forces YouTube has put in place, but it can't offset the loss of views and exposure brought by demonetisation, which are vital to building and maintaining an audience. For channels that have to put a lot of effort into making videos, two demonetisations in a row can mean the loss of a whole month's income, and a third could kill the channel.

You could say, "Well, if you don't like it, leave." And many are trying to leave, but what credible alternatives to YouTube are there? Sure, there are other platforms available like Vimeo, but they currently simply don't compare for viewership, or they cover a more specific niche. The current outflow spreads thin between creators' own sites, network sites such as Nebula from Standard, and services like LBRY. This raises the question of whether anyone does like it this way – viewers and creators alike are frustrated.

If YouTube continues to pursue their ideal of a platform free from harm without taking a principled and clearly reasoned approach to moderation – such as what Twitter are attempting, though Twitter too has many disgruntled users accusing them of unfairness, bias and unclear rules – they may end up killing all that's valuable about their own platform. I'd argue that adopting Twitter's approach of trying only to eliminate targeted harassment is much more viable, and less likely to cause the fundamental problem of alienating their own creators and users.

As a viewer, my concern for the freedom to express content, especially educational content, is growing…


Further Sources

List of YouTube Demonetized Words REVEALED (YouTube Analyzed) [YouTube]
List of YouTube Demonetisation Words on Google Docs

Youtube’s Biggest Lie (Nerd City) [YouTube]

Interviewing The CEO of YouTube Susan Wojcicki [YouTube]