In June 2019, Vox video journalist Carlos Maza posted a video compilation with clips from two years of malice served up via YouTube.
Over those two years, prominent right-wing personality Steven Crowder imitated Maza’s accent and called him, among other things, a “lispy sprite,” a “little queer,” “Mr. Gay Vox,” “Mr. Lispy queer from Vox,” “an angry little queer,” “gay Mexican,” and “gay Latino from Vox.”
The response from Google-owned YouTube at the time: Crowder’s videos didn’t violate its policies, so it wouldn’t remove them.
Our teams spent the last few days conducting an in-depth review of the videos flagged to us, and while we found language that was clearly hurtful, the videos as posted don’t violate our policies.
As an open platform, it’s crucial for us to allow everyone — from creators to journalists to late-night TV hosts — to express their opinions w/in the scope of our policies. Opinions can be deeply offensive, but if they don’t violate our policies, they’ll remain on our site.
Whistling a different tune
Cue a torrent of criticism.
Fast-forward six months, and YouTube is whistling a far different tune. On Wednesday, YouTube updated its policy to ban malicious threats, veiled insults, and strings of malicious drips across videos and comments: each poisonous pearl may not violate its policies per se, but strung together, they create coordinated abuse campaigns.
Neal Mohan, chief product officer at YouTube, had this to say to the BBC:
Even if a single video doesn’t cross the line, with our new harassment policy we can take a pattern of behavior into account for enforcement.
Old policy: no explicit hate speech
Up until this new policy, YouTube had explicitly forbidden hate speech, which it defined as “content promoting violence or hatred” against people or groups based on attributes such as race, sexuality, nationality, and immigration status.
Its previous policies also barred the use of stereotypes that promote hatred, and it forbade “behavior intended to maliciously harass, threaten, or bully others,” including content that “is deliberately posted in order to humiliate someone” or that “makes hurtful and negative personal comments/videos about another person.”
But hey, a YouTube spokesperson told media outlets back in June with regard to the Maza-Crowder situation: Crowder never instructed his viewers to harass Maza, nor did he dox Maza by publishing his personal information, so… no harm, no foul?
Yes harm, yes foul. YouTube actually wound up reversing itself and stripping monetization from Crowder’s channel.
New policy: no implied hate speech
And now, under its new policy, it’s not just explicit threats that are prohibited. Veiled or implied threats are banned too, including content that simulates violence against an individual or suggests that violence may occur.
No individual should be subject to harassment that suggests violence.
It’s also building on its hate speech policy to prohibit racial, gender-based, and anti-LGBTQ abuse:
We will no longer allow content that maliciously insults someone based on protected attributes such as their race, gender expression, or sexual orientation.
YouTube says that this goes for everyone, whether they’re private individuals, YouTube creators, or public officials.
Easier said than done. How well YouTube manages to carry out this ambitious plan is another question entirely. Ars Technica’s Kate Cox presents a number of cases in which YouTube has failed to enforce its abuse policies over the years, most notably a) making exceptions for popular, lucrative influencers and b) wrestling with the very gnarly problem of politicians whose content is both highly newsworthy… and abusive.
Malice: Do we know it when we see it?
YouTube says that some content will be exempt from the new policy, including insults used in “scripted satire, stand-up comedy, or music”. Another exception is content featuring actual or simulated harassment that’s created for documentary purposes, to combat cyberbullying, or to raise awareness.
There will be howling from those who consider it their right to express themselves when it comes to, say, making fun of somebody’s appearance, but YouTube says it’s not going to tolerate it anymore. It gave a number of examples of content that’s no longer welcome:
- Repeatedly showing pictures of someone and then making statements like “Look at this creature’s teeth, they’re so disgusting!”, with similar commentary targeting intrinsic attributes throughout the video.
- Targeting an individual based on their membership in a protected group, such as by saying: “Look at this filthy [slur targeting a protected group], I wish they’d just get hit by a truck.”
- Using an extreme insult to dehumanize an individual based on their intrinsic attributes. For example: “Look at this dog of a woman! She’s not even a human being – she must be some sort of mutant or animal!”
- Depicting an identifiable individual being murdered, seriously injured, or engaged in a graphic sexual act without their consent.
- Accounts dedicated entirely to focusing on maliciously insulting an identifiable individual.
You and what army?
For those who show a pattern of repeated behavior across multiple videos or comments, YouTube’s going to hit ’em where it hurts: it’s going to snip monetization.
It’s tightening its policies for the YouTube Partner Program (YPP): channels that “repeatedly brush up against our harassment policy” will be suspended from the program, eliminating their ability to make money on YouTube.
Channels that keep up the harassment may see content removed. If they still don’t stop, YouTube may take further action, and could terminate channels altogether.
Starting on Wednesday, videos that violate the new policy may be removed, but their channels won’t be given strikes. YouTube says it will gradually ramp up enforcement in the coming months.