This past weekend, when Unite the Right organizers used Twitter to rally supporters following the removal of their Facebook event, Twitter let the tweet — which advertised the time and location of a white supremacist rally near which one person would later be killed and dozens more injured — remain. And nobody knows exactly why, thanks to Twitter’s policy of not commenting on individual accounts for “privacy and security reasons.”
Since the turn of the decade, Twitter has effectively used this policy to shield itself from accountability; now, it’s denying the public and the press crucial information about how one of the world’s most visible platforms — not to mention the president’s go-to communication mechanism — makes decisions about what’s acceptable and what’s not. Even some present at the policy’s creation say it should no longer be used as a means to dodge questions about Twitter’s motivations.
The “individual accounts” policy, initially created when political action was simply a blip on Twitter's radar, has long made rule enforcement surrounding online abuse and harassment — which have dogged the social network for a decade — appear arbitrary and unclear. The policy has been invoked in lieu of serious, detailed explanations when Twitter has taken action against harassers, banned and then reinstated the white nationalist Richard Spencer, and kept up dozens of threatening and harassing images and tweets even after users filed reports.
And following the election of Donald Trump, who consistently uses the platform to threaten opponents and push his viewpoints, the policy has taken on even greater importance. The president has appeared to violate Twitter’s rules by unleashing mobs on opponents, or threatening violence. But you would never know how Twitter feels about it, because the president possesses an “individual account” himself.
Back in December 2016, after Trump used his Twitter account to attack Chuck Jones, an Indiana union organizer who had criticized him, the Washington Post reported that Jones was inundated with threatening phone calls. The Jones incident was tricky and unprecedented territory for Twitter: The president had tweeted something that some thought was a clear violation of Twitter’s abuse incitement policy (the New York Times dubbed him the “Cyberbully in Chief”). And Twitter refused to comment on it.
Twitter owes its users an explanation when it comes to the leader of the free world. For example, in the case of Chuck Jones, was Trump’s tweet just barely within the realm of acceptable behavior, or was the service making an exception for the then-president-elect? The same goes for Trump’s tweet this summer that Morning Joe host Mika Brzezinski was “bleeding badly from a face-lift.” Was the tweet within the bounds of Twitter’s rules on targeted abuse? Or was it an exception? We won’t know for sure, as Twitter’s response to both incidents was that it does not comment on individual accounts.
As a defense, Twitter and other tech companies suggest that by revealing nothing, they make it harder for trolls to exploit the terms of service. But in practice, the policies make it difficult for journalists or anyone else to hold Twitter accountable for its seemingly inconsistent enforcement decisions. And there’s reason to believe the policy may actually be working in the favor of bad actors who exploit it — an effective trolling tactic is to turn Twitter’s harassment reporting infrastructure and tools against the very people who are fighting trolls or being targeted by them.
The policy originated in Twitter’s early days, when it didn’t have the bandwidth to deal with the onslaught of inquiries that could show up during major news events, nor a point of view on how to handle those inquiries, according to one former executive who was at the company when the policy was formed. But Twitter has grown significantly in the years since, and branded itself as a news app — its move to the news section of the iOS App Store was an indication of how it sees itself. But while the company has evolved, its policy has not.
“It definitely seems from the outside that the company is relying on a playbook that was established all those years ago,” the former Twitter executive told BuzzFeed News. “If you declare yourself the most relevant speech platform in the world, then you can’t stonewall the media when people want to know your speech rules.”
Another former Twitter executive told BuzzFeed News the policy mirrors other tech company policies. PayPal, for example, declined to comment on individual accounts recently after banning a number of alt-right personalities from its platform. The executive also said that if Twitter started commenting on individual accounts, it would be overwhelmed by the number of statements it would be required to draft.
There are also valid privacy reasons — especially pertaining to regular citizens — for not sharing sensitive information with the press and greater public, and Twitter is quick to note them. "Twitter takes user privacy and security very seriously and our users count on us to defend and respect their voice. That's why we do not comment publicly on individual accounts, and instead only communicate with the user directly affected by any content, privacy, or security issues,” a Twitter spokesperson told BuzzFeed News.
But there is a middle ground: Twitter could easily explain big decisions — such as when the president threatens war in a tweet, or when white nationalists organize on its platform — but decline to comment on less consequential decisions.
Facebook is often no better. It regularly hides behind terms like “glitch” and “error” when it removes important content from its site, giving little insight into the process that got it removed in the first place. Still, in a conversation with BuzzFeed News earlier this year, Facebook CEO Mark Zuckerberg acknowledged that his company needed to be more transparent and had room to grow with its approach to handling content. “There's a lot of things that we need to get better on [about] this,” he said.
Twitter has also publicly expressed a desire to be more transparent. At the end of 2016, CEO Jack Dorsey tweeted, “we definitely need to be more transparent about why and how. Big priority for this year,” and added that the company was “working to better explain and be transparent and real-time about our methods.” But Twitter hasn’t really been more transparent in 2017. In July, the company touted its progress on combating harassment and released some internal figures on abuse prevention — but the stats offered little context or basis for comparison. And that same month, when BuzzFeed News presented the company with 27 explicit examples of harassment, Twitter did not address them, instead providing a boilerplate statement.
Alex Kantrowitz is a senior technology reporter for BuzzFeed News and is based in San Francisco. He reports on social and communications.
Charlie Warzel is a senior writer for BuzzFeed News and is based in New York. Warzel reports on and writes about the intersection of tech and culture.