Jewish Journal

Twitter rife with hate speech, terror activity

by Jonah Lowenfeld

Posted on Mar. 20, 2013 at 9:08 am

When it comes to stemming the proliferation of hate speech on social media, Rabbi Abraham Cooper, associate dean of the Simon Wiesenthal Center, likes Facebook’s attitude.

“They have rules,” Cooper said of the $63.5 billion social media giant, which has a policy to remove content that is threatening or constitutes hate speech. “When you send in a request, you get an answer. They have removed thousands of pages.”

There is no such policy at Twitter, a privately owned online network that allows users to post any thought or message of 140 characters or fewer and to follow posts from any other user on the system. Twitter’s practice is to allow just about any kind of speech, intentionally serving as an open forum, without an identifiable moderator to whom users can reach out with complaints.

This hands-off policy has faced pushback: A group of French Jewish students, outraged by a rash of anti-Semitic jokes being circulated using the hashtag “#unbonjuif” (“a good Jew”), won a judgment against Twitter in a French court in January that could force the company to divulge the identities of the users behind the tweets.

Twitter recently demonstrated some willingness to follow the laws of a particular country when it comes to hate speech: For example, as of October 2012, the San Francisco-based company ensured that tweets from one neo-Nazi group outlawed by the German government won’t show up in the feeds of Twitter users in Germany.

But Twitter has been much more reluctant to practice any other kind of self-enforced censorship or monitoring of content. In its 15th Annual Digital Terror & Hate Report, released on March 13, the Simon Wiesenthal Center found that hate speech on Twitter has risen about 30 percent from the previous year.

“Twitter accounted for almost all the increase in hate and terror material we were looking at,” Cooper said.

Cooper said the Wiesenthal Center has reached out to Twitter in the past year, but has received no response. A spokesperson from Twitter declined to comment on the record for this story.

Ryan Budish, a fellow at the Berkman Center for Internet and Society at Harvard University who studies Internet accessibility, said Twitter’s reaction to questions like these in the past has been to point out that people don’t need to follow anyone on Twitter whose views they find objectionable.

“Users aren’t being forced to listen to hate speech,” Budish said, taking care to emphasize that he has no formal links to Twitter. “I believe that Twitter’s generally taken the approach that more speech is a better response than censorship.”

The Wiesenthal Center has been monitoring hate expression online since the late 1990s, and initially its biggest concerns were the thousands of individual Web sites that proliferated throughout the Internet. Now, the center is focused on uncovering hatred and terror activity on the world’s most popular social media platforms, where the vast majority of posts are either “pointless babble” or merely “conversational.”

Unlike Facebook, which since 2011 has required users to use their real names on their profiles, Twitter allows the creation of aliases. The results can be amusing — @ElBloombito, for example, pokes fun at New York City Mayor Michael Bloomberg’s mediocre command of the Spanish language. But they can also be racist and hate-filled, such as Twitter handles that impersonate figures of the Third Reich, including Adolf Hitler, Josef Mengele and filmmaker Leni Riefenstahl.

Hate speech is only one of Cooper’s concerns about Twitter: so-called “digital terror” activity is easy to find on the platform as well.

When Inspire magazine, the English-language Web publication of the al-Qaeda affiliate in Yemen, released its latest issue in late February, Cooper said that “the way they chose to announce it was through Twitter.”

“That’s not a speech issue,” Cooper said, “that’s a terrorism issue.”

But even when confronted with allegations that its users are involved in terrorist activity, Twitter has been reluctant to give law enforcement the names of its users. In the handful of cases in which Twitter has handed over information, it has done so only after being compelled by a court order.

“When you look at Twitter’s policies, their big picture tends to be hands off — as they put it, ‘Let the tweets fly,’ ” the Berkman Center’s Budish said. “They want to encourage an open atmosphere for dialogue and the exchange of information.”

Facebook, by contrast, actively monitors content that its users post. The company’s policy on hate speech is rather brief: “You will not post content that: is hate speech, threatening, or pornographic; incites violence; or contains nudity or graphic or gratuitous violence.” But in February 2012, a former employee of a company that does content moderation for Facebook leaked a document to the Web site Gawker.com with far more specific guidelines.

Along with generic prohibitions against “slurs or racial comments of any kind,” and “hate symbols,” the one-page document included a specific note about “Holocaust denial which focuses on hate speech,” saying that such posts had to be “escalated” for additional monitoring.

Although Cooper would like Twitter to adopt a policy similar to Facebook’s, Budish pointed out that Facebook’s approach has also had its problems.

Both liberal and conservative groups have complained of Facebook’s censoring some of their postings. In September 2012, The New Yorker had one of its cartoons temporarily removed by a Facebook censor because it depicted not just Adam’s nipples, but also Eve’s. Activists on both sides of the abortion debate have accused Facebook of being biased against them, and passionate supporters of breastfeeding have taken issue with the prohibition on certain images of nursing mothers.

“When you start policing content like that, it isn’t cost-free, and it has led to all of these issues,” Budish said. “I can believe that Facebook does not have a bias in what they’re doing. But in order to police the content being uploaded, Facebook censors spend less than one second in determining what content is appropriate or not.

“It just seems very difficult, if not impossible,” Budish continued, “to make highly informed judgments when you have to spend so little time.”

Cooper said he knows he’s not going to root out hatred from across social media platforms; he also has his own disagreements with Facebook over what constitutes hate speech.

What he would like from Twitter, at least, is a conversation.

“The really bad guys are having a field day,” Cooper said, “and we have to make sure that they’re not allowed, unchallenged, to leverage these technologies to promote hate, and worse.” 
