A year after YouTube’s chief executive promised to curb “problematic” videos, it continues to harbor and even recommend hateful, conspiratorial videos, allowing racists, anti-Semites and proponents of other extremist views to use the platform as an online library for spreading their ideas.
YouTube is particularly valuable to users of Gab.ai and 4chan, social media sites that are popular among hate groups but have scant video capacity of their own. Users on these sites link to YouTube more than to any other website, thousands of times a day, according to the recent work of Data and Society and the Network Contagion Research Institute, both of which track the spread of hate speech.
The platform routinely serves videos espousing neo-Nazi propaganda, phony reports portraying dark-skinned people as violent savages and conspiracy theories claiming that large numbers of leading politicians and celebrities molested children. Critics say that even though YouTube removes millions of videos on average each month, it is slow to identify troubling content and, when it does, is too permissive in what it allows to remain.
The struggle to control the spread of such content poses ethical and political challenges to YouTube and its embattled parent company, Google, whose chief executive, Sundar Pichai, is scheduled to testify on Capitol Hill on Tuesday amid several controversies. Even on the House of Representatives YouTube channel that is due to broadcast the hearing, viewers on Monday could see several videos peddling conspiracy theories recommended by the site’s algorithm.
“YouTube is repeatedly used by malign actors, and individuals or groups, promoting very dangerous, disruptive narratives,” said Sen. Richard Blumenthal (D-Conn.). “So whether it is deliberate or simply reckless, YouTube tends to tolerate messaging and narratives that seem to be at the very, very extreme end of the political spectrum, involving hatred, bias and bigotry.”
YouTube has focused its cleanup efforts on what chief executive Susan Wojcicki in a blog post last year called “violent extremism.” But she also signaled the urgency of tackling other categories of content that allow “bad actors” to take advantage of the platform, which 1.8 billion people log on to each month.
“I’ve also seen up-close that there can be another, more troubling, side of YouTube’s openness. I’ve seen how some bad actors are exploiting our openness to mislead, manipulate, harass or even harm,” Wojcicki wrote. But a large share of videos that researchers and critics regard as hateful don’t necessarily violate YouTube’s policies.
False claims live on
The recommendation engine for YouTube, which queues up an endless succession of clips once users start watching, recently suggested videos claiming that politicians, celebrities and other elite figures were sexually abusing or consuming the remains of children, often in satanic rituals, according to watchdog group AlgoTransparency. The claims echo and often cite the discredited Pizzagate conspiracy, which two years ago led to a man firing shots into a Northwest Washington pizzeria in search of children he believed were being held as sex slaves by Democratic Party leaders.
One recent variation on that theory, which began spreading on YouTube this spring, claimed that Democrat Hillary Clinton and her longtime aide Huma Abedin had sexually assaulted a girl and drunk her blood — a conspiracy theory its proponents dubbed “Frazzledrip.”
Although some of these clips were removed after first appearing in April and being quickly debunked by fact-checkers, a Washington Post review found that dozens of videos alleging or discussing these false claims remain online and have been viewed millions of times over the past eight months. YouTube’s search box highlighted the videos when people typed in seemingly innocuous terms such as “HRC video” or “Frazzle.”
YouTube does not have a policy against falsehoods, but it does remove videos that violate its guidelines against hateful, graphic and violent content directed at minorities and other protected groups. It also seeks to give wide latitude to users who upload videos, out of respect for speech freedoms and the free flow of political discourse.
“YouTube is a platform for free speech where anyone can choose to post videos, subject to our Community Guidelines, which we enforce rigorously,” the company said in a statement in response to questions from The Washington Post.
In an attempt to counter the huge volumes of conspiratorial content, the company also has worked to direct users to more-reliable sources — especially after major news events such as mass shootings.
But critics say YouTube and Google generally have faced less scrutiny than Twitter and Facebook — which have been blasted for the hate and disinformation that were spread on their platforms during the 2016 election and its aftermath — and, as a result, YouTube has not moved as aggressively as its rivals to address such problems.
The Pizzagate shooter reportedly had watched a YouTube video about the conspiracy days before heading to Washington from his home in North Carolina, telling a friend that he was “raiding a pedo ring. ... The world is too afraid to act and I’m too stubborn not to.”
The Network Contagion Research Institute found that Robert Bowers, the man charged in a mass shooting that killed 11 at a Pittsburgh synagogue in October, used his Gab account to link to YouTube videos 71 times. These included neo-Nazi propaganda, clips depicting black people as violent thugs and videos calling Jewish people “satanic.”
Data and Society found that 22 percent of Gab users link to videos on YouTube and that people pushing racist and anti-Semitic views — often cloaked in engaging but false conspiracy theories — link to one another’s clips on YouTube, make guest appearances on one another’s online shows and use the platform’s paid “super chats” feature to highlight their comments during live streams. These tactics, the researchers found, bolster the popularity of the videos and fuel the spread of extremist ideologies.
“Sites like Gab rely on YouTube as a media archive for hate and conspiracy content,” said Joan Donovan, a Data and Society researcher. “These videos are often used as ‘evidence’ in debates.”
Some of the Frazzledrip clips purport to show grainy images of Clinton and Abedin committing crimes and speak of invoking the death penalty. One video, which has been viewed 77,000 times and remains online, has a voice-over that says, “Will these children become the dessert at the conclusion of the meal?”
Users of Gab and 4chan’s “Politically Incorrect” message board discussed Frazzledrip avidly in April and linked to videos on the subject dozens of times, said the Network Contagion Research Institute. The allegations were even more popular on Twitter, which is a vastly larger platform, generating thousands of comments a day at its peak and hundreds of links to YouTube, according to Clemson University researchers.
YouTube said only one of the 16 videos identified by The Washington Post as featuring various versions of the baseless Frazzledrip claims — in a mix of images and verbal discussions — violated its policies. It removed that video after The Post’s inquiry.
That video included images of a body on a table before restrained children and also of Clinton with a bloodied mouth and fangs, claiming that she and Abedin drank the blood of their victim.
Another video, largely consisting of an apparent copy of the video that was removed, remained online.
YouTube declined to explain the discrepancy. Gab declined to comment. The owner of 4chan did not reply to a request for comment.
Clinton and Abedin declined to comment through a spokesman.
A 'vortex' for hate
Researchers increasingly are detailing the role YouTube plays in the spread of extremist ideologies, showing how those who push such content maximize the benefits of using various social media platforms while seeking to evade the particular restrictions on each.
“The center of the vortex of all this stuff is often YouTube,” said Jonathan Albright, research director at Columbia University’s Tow Center for Digital Journalism.
Although YouTube doesn’t ban conspiracy theories or false news stories, Facebook, YouTube and Twitter have made efforts to reduce the reach of such content this year. YouTube’s community guidelines define hate speech as content that promotes “violence against or has the primary purpose of inciting hatred against individuals or groups based on certain attributes.” Moderators evaluate each post based on a strike system, with three strikes in a three-month period resulting in termination of an account.
YouTube does not publish statistics describing its effectiveness in detecting hate speech, which the company concedes is among its biggest challenges. Facebook, by contrast, recently began publishing such data, and the results highlight the challenge: Between July and September, its systems caught about half of the posts it categorized as hate speech before users reported them, compared with more than 90 percent of posts it determined to be terrorism-related. Artificial intelligence systems are even less capable of detecting hate speech when it appears in video rather than text.
Google overall now has more than 10,000 people working on maintaining its community standards. The company declined to release a number for YouTube alone.
But YouTube officials acknowledge that finding and removing hateful videos remains difficult, in part because of the technical limitations of analyzing such a vast and fast-growing repository of video content. Users upload 400 hours of video to YouTube each minute, according to the company.
YouTube reported that 6.8 million of the 7.8 million videos it removed in the second quarter of this year for violating standards were first flagged by computerized systems. But detecting terrorists waving identifiable flags or committing violence is comparatively easy, according to experts, both because the imagery is more consistent and because government officials keep lists of known or suspected terrorist groups and individuals whose content is monitored with particular care.
There is no equivalent list of hate groups or creators of hateful content. YouTube and other social media companies routinely face accusations from conservatives of acting too aggressively against videos that — while treading close to violating restrictions against hateful or violent content — also carry political messages.
“Their enforcement of their ‘community guidelines’ seems arbitrary and selectively enforced, to say the least,” said the creator of one of the videos about Clinton and Abedin, in an email in which he identified himself by only his first name, Sean. “At worst, it’s punitive and targeted at speech they do not like.”
Sean said that YouTube suspended his account after videos on his SGT Report channel received “three strikes” for violations of its guidelines, but the account was later reinstated after followers tweeted to the company.
YouTube declined to say why Sean’s account was restored. People who receive strikes from YouTube can appeal those decisions to the company.
Sean’s original video highlighting baseless allegations that Clinton and Abedin terrorized a child was first posted to YouTube in April and viewed 177,000 times before being removed. It is no longer on his channel, although verbatim copies exist on at least one other YouTube channel.
Unlike several of the Frazzledrip clips, Sean’s video did not include images depicting the alleged crime, and it attributed the most disturbing allegations to a tweet shown on screen.
Power of recommendations
Former YouTube engineer Guillaume Chaslot, an artificial intelligence expert who once worked to develop the platform’s recommendation algorithm, says he discovered the severity of the problem, which he believes he helped create, on a long bus ride through his native France in 2014, the year after he left the company. A man sitting on the seat next to him was watching a succession of videos claiming that the government had a secret plan to kill one-quarter of the population. Right after one video finished, another started automatically, making roughly the same claim.
Chaslot tried to explain to the man that the conspiracy was obviously untrue and that YouTube’s recommendation engine was simply serving up more of what it thought he wanted. The man at first appeared to understand, Chaslot said, but then concluded: “But there are so many of them.”
The platform’s recommendation engine adds the power of repetition, allowing similar claims — no matter how preposterous — to be served again and again to people who show an initial interest in a subject.
YouTube has built software this year to direct users to more credible sources in breaking-news situations.
Chaslot, who founded the AlgoTransparency watchdog group, said one of the Frazzledrip videos with the words “Lost Hillary snuff tape” in the title was recommended to YouTube users at least 283,000 times. He found that another with the word “Frazzledrip” in its title and several others making references to “pedovores” — people who supposedly eat children — were also offered by YouTube’s recommendation algorithm to users.
“The big problem is people trust way too much what’s on YouTube — in part because it’s Google’s brand,” Chaslot said.
YouTube said in a statement that its recommendation algorithm continues to improve. “No part of the recommendation system that Chaslot worked on during his time at Google is in use in the YouTube recommendations system today,” the statement said.
Julie Tate contributed to this report.