"Kathleen" is an outspoken Hillary Clinton supporter. Last Tuesday she took to Twitter to criticize the Trump campaign's Skittles refugee poster, calling it a "disgusting ad." Shortly after, @leslymill — who goes by the name Adorable Deplorable — replied, "i LOVE THE AD. Describes the complexity of the "PROBLEM perfectly."
The political disagreement — a common occurrence on Twitter — escalated when @leslymill replied to Kathleen's tweet with an unsolicited photo of a child holding a knife and a newly severed head, captioned, "your heading for a deep hole." The photo, according to the website tangentcode.org, is from a video whose translated title reads “Information Office of the State of Homs offers families the liquidation of a Captain in the Army Alnasiri”; it shows a child soldier, believed to be associated with ISIS, beheading a man and posing with his head.
After seeing the photo, Kathleen reported the tweet to Twitter using its report forms. Soon after, Twitter replied that its investigation found the alleged violent and threatening tweet did not violate Twitter’s rules, which prohibit tweets involving violent threats, harassment, and hateful conduct. Twitter’s rules explicitly state that one may not “threaten other people on the basis of race, ethnicity, national origin, sexual orientation, gender, gender identity, religious affiliation, age, disability, or disease.”
This is not uncommon. In a recent BuzzFeed News survey of more than 2,700 Twitter users about abuse on the platform, 90% of respondents said Twitter did nothing when they reported abuse.
For Kathleen — who asked to remain anonymous (and use a pseudonym) so as not to receive more targeted abuse — the harassment is unsurprising, but unnerving. "I've worked online since 1985, so I've seen it all," she told BuzzFeed News. "But that doesn't mean I think it is ok."
Kathleen's case also raises questions about Twitter's ability to protect its users from unwanted graphic imagery — the kind frequently used by abusers and trolls to threaten others. Reached for comment, Twitter directed BuzzFeed News to a passage from an August blog post on countering violent extremism. The passage notes that "there is no one 'magic algorithm' for identifying terrorist content on the Internet." It also cites "proprietary spam-fighting tools, to supplement reports from our users and help identify repeat account abuse." These tools, according to the post, identified "more than one third of the accounts we ultimately suspended for promoting terrorism."
The post, however, doesn’t address terroristic or graphic imagery that has been co-opted by Twitter accounts that do not explicitly promote terrorism or violence against others. In @leslymill's case, horrific images of death are often used in rebuttal to opposing views, or to express sentiments like "This Is the Real Face of Islam."
When asked to clarify if the company evaluates graphic images such as beheadings on an individual basis, granting exceptions for newsworthiness, Twitter directed BuzzFeed News to a past statement noting that when evaluating media removal requests, "Twitter considers public interest factors such as the newsworthiness of the content and may not be able to honor every request." The company declined to provide further details about its handling of Kathleen's abuse report.
But roughly three hours after BuzzFeed News contacted Twitter about Kathleen's report, the tweet she'd flagged as abusive disappeared from @leslymill's timeline. Twitter did not respond to queries about its deletion.
Charlie Warzel is a senior writer for BuzzFeed News and is based in New York. Warzel reports on and writes about the intersection of tech and culture.
Contact Charlie Warzel at firstname.lastname@example.org.