YouTube Removes More Videos but Still Misses a Lot of Hate

YouTube said a new policy and better technology helped it remove five times as many videos for violating its hate speech rules. But extremists can beat the system.
A study of over 60 popular far-right YouTubers found that the platform was “built to incentivize” polarizing political creators and shocking content. Elena Lacey; Getty Images

On Tuesday, YouTube said it removed more than 17,000 channels and over 100,000 videos between April and June for violating its hate speech rules. In a blog post, the company pointed to the figures, roughly five times the totals from the previous quarter, as evidence of its commitment to policing hate speech and its improved ability to detect it. But experts warn that YouTube may be missing the forest for the trees.

“It’s giving us the numbers without focusing on the story behind those numbers,” says Rebecca Lewis, an online extremism researcher at Data & Society whose work primarily focuses on YouTube. “Hate speech has been growing on YouTube, but the announcement is devoid of context and is missing [data on] the moneymakers actually pushing hate speech.”

Lewis says that while YouTube reports removing more videos, the figures lack the context needed to assess its policing efforts. That’s particularly problematic, she says, because YouTube’s hate speech problem isn’t necessarily one of quantity. Her research has found that users who encounter hate speech are most likely to see it on a prominent, high-profile channel rather than from a random user with a small following.

A study of over 60 popular far-right YouTubers conducted by Lewis last fall found that the platform was “built to incentivize” polarizing political creators and shocking content. “YouTube monetizes influence for everyone, regardless of how harmful their belief systems are,” the report found. “The platform, and its parent company, have allowed racist, misogynist, and harassing content to remain online—and in many cases, to generate advertising revenue—as long as it does not explicitly include slurs.”

A YouTube spokesperson said changes in how the platform identifies and reviews content that may violate its rules likely contributed to the dramatic jump in removals. YouTube began cracking down on so-called borderline content and misinformation in January; in June, it revamped its policies prohibiting hateful conduct in an attempt to more actively police extremist content, like that produced by neo-Nazis, conspiracy theorists, and other hate mongers who have long used the platform to spread their toxic views. The update prohibited content asserting the superiority of one group or person over another based on age, gender, race, caste, religion, sexual orientation, or veteran status. It also banned videos that espouse or glorify Nazi ideology, as well as those promoting conspiracy theories about mass shootings and other “well-documented violent events,” like the Holocaust.

It makes sense that the broadening of YouTube’s hate speech policies would result in a larger number of videos and channels being removed. But the YouTube spokesperson said the full effects of the changes weren’t felt in the second quarter. That’s because YouTube relies on an automated flagging system that takes a couple of months to get up to speed when a new policy is introduced, the spokesperson said.

After YouTube introduces a new policy, human moderators work to train its automated flagging system to spot videos that violate the new rule. Once they have seeded the system with an initial data set, the moderators are sent a stream of videos that YouTube’s detection systems have flagged as potentially violating those rules, and they are asked to confirm or reject each flag. That feedback helps train YouTube’s detection system to make more accurate calls on permissible and impermissible content, but the process often takes months to ramp up, the spokesperson explained.
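YouTube hasn’t published the internals of that pipeline, but the process the spokesperson describes follows a familiar human-in-the-loop pattern: a model surfaces likely violations, moderators confirm or reject them, and those decisions become new training data. The sketch below illustrates only that general pattern; the class and function names, the confidence threshold, and the classifier interface are hypothetical assumptions, not YouTube’s actual system.

```python
# A minimal, hypothetical sketch of a human-in-the-loop review loop.
# Nothing here reflects YouTube's actual code; the names, threshold,
# and classifier interface (predict_score/retrain) are assumptions.

from dataclasses import dataclass
from typing import List, Optional


@dataclass
class ReviewItem:
    video_id: str
    model_score: float                  # model's confidence the video violates policy
    human_label: Optional[bool] = None  # filled in once a moderator reviews it


class HumanInTheLoopModeration:
    def __init__(self, classifier, flag_threshold: float = 0.7):
        self.classifier = classifier          # assumed to expose predict_score() and retrain()
        self.flag_threshold = flag_threshold
        self.review_queue: List[ReviewItem] = []
        self.labeled: List[ReviewItem] = []

    def flag_candidates(self, videos):
        """Score new uploads and queue likely violations for human review."""
        for video in videos:
            score = self.classifier.predict_score(video)
            if score >= self.flag_threshold:
                self.review_queue.append(ReviewItem(video["id"], score))

    def record_decision(self, item: ReviewItem, violates_policy: bool):
        """A moderator confirms or rejects the flag; the decision becomes training data."""
        item.human_label = violates_policy
        self.labeled.append(item)

    def retrain(self):
        """Periodically fold moderator decisions back into the model."""
        if self.labeled:
            self.classifier.retrain(self.labeled)
            self.labeled.clear()
```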

Once the system has been properly trained, it can automatically detect whether a video is likely to violate YouTube’s hate speech policies based on a scan of images, plus keywords, title, description, watermarks, and other metadata. If the detection system finds that some aspects of a video are highly similar to other videos that have been removed, it will flag it for review by a human moderator, who will make the final call on whether to take it down, the spokesperson said.
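Again, the details are YouTube’s alone, but the similarity check the spokesperson describes can be sketched as comparing a new video’s metadata features against those of previously removed videos and escalating close matches to a human reviewer. The feature representation, threshold, and function names below are illustrative assumptions, not a description of YouTube’s detection system.

```python
# Hypothetical illustration of "highly similar to removed content" flagging.
# Assumes each video's title, description, keywords, and other metadata have
# already been turned into a numeric feature vector by some upstream step.

import math
from typing import Iterable, Sequence


def cosine_similarity(a: Sequence[float], b: Sequence[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm if norm else 0.0


def should_flag_for_review(
    video_features: Sequence[float],
    removed_video_features: Iterable[Sequence[float]],
    threshold: float = 0.9,
) -> bool:
    """Flag a video for human review if its features are very close to those
    of any previously removed video. A human still makes the final call."""
    return any(
        cosine_similarity(video_features, removed) >= threshold
        for removed in removed_video_features
    )
```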

Lewis says this approach can be effective at policing spam or scams, but it can be gamed by users, including far-right influencers who generate income from YouTube ads. “These types of influencers are very savvy at avoiding the sort of signals that an automated system would catch,” Lewis explained. “As a human, if you watch [many of] these videos from beginning to end, you can see they do involve targeted harassment and are absolutely in violation of YouTube’s policies.” But, she said, the videos often use coded language “to obscure the context.”

YouTube has hesitated to take down some big-name far-right creators in recent months, even when their videos appear to violate the platform’s rules. Shortly after YouTube announced its new hate speech policies in June, the company decided not to remove controversial videos posted by popular far-right creator Steven Crowder, who used slurs to attack a Cuban-American journalist over his ethnicity and sexual orientation. After public outcry over the decision, YouTube briefly revoked Crowder's ability to run ads next to his videos, then an hour later said his ad privileges would be reinstated if he removed a link to a product with an offensive slogan. YouTube later said it removed the ads because of a pattern of Crowder’s behavior, not the link.

YouTube has made similar policy flip-flops in recent weeks. In August, YouTube banned, then reinstated, the channels of multiple controversial far-right YouTubers following reporters’ inquiries. One of the men whose ban was reversed was banned from entering Britain last year and reportedly was in contact with the suspected gunman in the Christchurch mosque shootings; another is known for promoting the ethnic-replacement conspiracy theory pushed by white nationalists.

