By now you know the drill: massive news event happens, journalists scramble to figure out what’s going on, and within a couple hours the culprit is found — Russian bots.
Russian bots were blamed for driving attention to the Nunes memo, a Republican-authored document on the Trump-Russia probe. They were blamed for pushing for Roy Moore to win in Alabama’s special election. And here they are wading into the gun debate following the Parkland shooting. “[T]he messages from these automated accounts, or bots, were designed to widen the divide and make compromise even more difficult,” wrote the New York Times in a story following the shooting, citing little more than “Twitter accounts suspected of having links to Russia.”
This is, not to mince words, total bullshit.
The thing is, nearly every time you see a story blaming Russian bots for something, you can be pretty sure that the story can be traced back to a single source: the Hamilton 68 dashboard, founded by a group of respected researchers, including Clint Watts and JM Berger, and currently run under the auspices of the German Marshall Fund.
But even some of the people who popularized that metric now acknowledge it’s become totally overblown.
“I’m not convinced on this bot thing,” said Watts, the cofounder of a project that is widely cited as the main, if not only, source of information on Russian bots. He also called the narrative “overdone.”
The dashboard monitors 600 Twitter accounts “linked to Russian influence efforts online,” according to its own description, which means the accounts are not all directly traced back to Kremlin efforts, or even necessarily to Russia. “They are not all in Russia,” Watts said during a phone interview last week. “We don’t even think they’re all commanded in Russia — at all. We think some of them are legitimately passionate people that are just really into promoting Russia.” So, not bots.
We’ll likely never know the contents of the list for sure — because the researchers decline to divulge the identities of the accounts they are monitoring. (The reasons they give for secrecy include worries that the accounts would then change their behavior and concerns over identifying accounts that are not, in fact, linked to Russian influence efforts, aka making a mistake.)
So that’s strike one: In what other world would we rely on a single-source tool of anonymous provenance?
And then there’s strike two. Let’s say, despite that, you still really want to put your faith in those conclusions about Russian influence. Why would you do that? Twitter is actually clogged with bots — and has been for years — so taking a major vulnerability of the platform and using it to tidily explain something murky and complicated is appealing. Add to that the fact that Russia really did run an operation to meddle in the US election, hacking the DNC, running real propaganda campaigns, and deploying trolls to mess with the discourse. At times, the conversation seems more like an attempt to keep attention on Russia than anything else. Everyone seems to want to believe that Russian trolls are ruling the internet.
And here we get to strike three. One of the hardest things to do — either with the accounts “linked to Russian influence efforts online,” whatever that means, or with the Internet Research Agency trolls who spent many months boosting Donald Trump and denigrating Hillary Clinton — is to measure how effective they really were. Did Russian troll efforts influence any votes? How do we even qualify or quantify that? Did tweets from “influencers” actually affect the gun debate in the United States, already so toxic and partisan before “bot” was a household word?
Even Watts thinks the “blame the bots” shtick has gotten out of control. “It’s somewhat frustrating because sometimes we have people make claims about it or whatever — we’re like, that’s not what it says, go back and look at it,” Watts said. “There are certain times when it does give you great insights, but it’s not a one-time, I look at it for five seconds and write a newspaper article and then that’s it. That doesn’t give you any context about it.”
Jumping to blame the bots is something that’s not just happening in newsrooms around the country, but in government offices around the world. Watts recalled hearing from a couple of Senate staffers half a year ago “that were jumping off a cliff” because of something they saw on the dashboard. “It’s like — whoa, whoa, whoa,” he said, “do you understand what you’re looking at?” Apparently not.
Take the Nunes memo for example — headlines proclaimed that an army of Russian bots was behind the push to declassify the document, all thanks to Hamilton 68. The real culprit? None other than Julian Assange, whose sympathy for Russia — and antipathy to the Trump-Russia investigation — is no secret.
“When Julian Assange says something, Russian influence networks always repeat it,” Watts said. “So he weighed in on the Nunes memo; that’s what made it trend.”
Perhaps, but the reality is much more nuanced. Assigning blame to sock puppets doesn’t take into account the agency of loud and influential online voices like that of the pro-Trump media Twitter personalities who pushed and spun the memo and fashioned hundreds of memes to go viral in the fever swamps. Nor does that explanation credit Sean Hannity, Fox News, or the GOP lawmakers who championed the memo as a Watergate-esque revelation of government malfeasance and kept it in the news for the better part of a month. To chalk the entire memo incident up to foreign interference and automated messaging is to dismiss the call that’s coming from inside the house and give a pass to savvy media manipulators by suggesting they’re nothing more than useful idiots to the Kremlin.
Further complicating the Russian bot narrative? The notion that plenty of automated social media influence campaigns are orchestrated right here in the United States. As BuzzFeed News has reported, MicroChip, “a notorious pro-Trump Twitter ringleader,” has orchestrated — and continues to orchestrate — automated networks of Twitter accounts to help push trending topics and advance pro-Trump narratives. And when, after the election, Columbia Tow Center for Digital Journalism professor and researcher Jonathan Albright set out to find the most influential political bot account, he didn’t end up in Saint Petersburg, but in Chicago. “I ended up proving that the most influential account through the last week of the 2016 election (across every major election-related hashtag) was a 68-year-old guy [named Daniel John Sobieski] in a basement,” Albright told BuzzFeed News. “He had two accounts, both top five in my group of bots, and his tweets were getting between 20 and 30 million impressions on Twitter every 24 hours.”
The story made the front page of the Washington Post in early 2017 and detailed that Sobieski, who spent his pre-Twitter days writing letters to the editor, tweeted prolifically during the day and let an automated account do the talking at night. To dismiss him as solely a bot is to ignore part of the story. “‘Omg that 68-year-old blind dude who posts with a 5,000-strong automated patriot Twitter sock-puppeting group that can make things trend in 15 minutes stole the election’ is more like it,” Albright said.
So why the endless talk about Russian bots?
Part of the issue may be in the terminology. In the same way that most modern “hacks” aren’t hacks in the traditional sense but instead a series of small lies, fake links, web pages, and simple human deception, bots are often not deployed the way we’ve come to imagine. “I think ‘enhanced voice’ or ‘false amplification’ is a better term than bots,” Albright told BuzzFeed News. “Bots exist, no doubt, but not in the ways many report on and/or assume. Every bot is different, but all can be linked to human goals, desired outcomes, and programming. And most bots are associated directly with a human identity that is managed by an individual or group of people.”
But what does it mean if the “bots” measured by Hamilton 68 have nothing to do with the Kremlin at all? “We never connect any of this directly to if you see one thing on the dashboard, that this somehow is a Kremlin-approved influence operation,” said Bret Schafer, an analyst with the dashboard.
Watts acknowledges this too: “I still question the effectiveness of bots, having watched it for a lot of time. Like a real influencer, I’m going to say Assange — I'm not saying he’s a Russian operative — but it’s like Assange creates far more impact with lower reach or lower volume or content than a bot that randomly tweets Assange content.”
The bot detectives aren't the only ones catering to anti-Trump confirmation bias and a desire to see Russia’s hand as stronger than it might have been. Since Trump's election, a number of vigilante Twitter investigators have threaded together breathless tweetstorms implying multidimensional Trump-Russia plots — the smoking gun always just out of reach. And the anxiety around Russian intervention has also produced some research with distressingly lax methodology. A recent Oxford research paper, for example, argued that Trump voters are much more likely to share junk news, a conclusion reached in part because the researchers categorized many mainstream conservative sites like the National Review as "junk," as well as outlets like the New York Daily News. Similarly, among some propaganda monitoring outfits, there's a tendency in the research to overinflate the influence of particular Russian state news outlets, like Sputnik or RT. Russian trolls and propaganda efforts exist, but that’s no excuse for the increasingly shoddy research driving so much analysis these days.
One of the most disorienting parts of today’s geopolitical information warfare is that all sides feel and act as if they’re winning, and it’s nearly impossible to know who or what has the most influence.
The Great Bot Panic, for instance, poses a series of contradictions. It is true that bots are a serious problem. It is also true that the bot problem is exaggerated. It is true that Russian bots are a conspiracy theory that provides a tidy explanation for complicated developments. It is also true that Russian influence efforts may be happening before our eyes without us really knowing the full scope in the moment. It is true that the reflexive blaming of bots suggests that there's real fear of believing that sentient American human beings (people who might be your neighbor or your kids' teachers, or even just a 68-year-old in a basement) share incendiary, potentially unsavory political opinions. And yet it is also true that dismissing the problem isn’t acceptable either. Fifteen months after the election, despite the reports and indictments and congressional testimony, we’re not all that much closer to truly understanding the scope or influence of the Russian bots. The only thing both sides of the argument have in common is that they distrust the conclusions and motives of the other and find them dangerous.
So, in that sense, maybe the bots have won. ●
Bret Schafer's name was misspelled in an earlier version of this post.
Kevin Collier contributed additional reporting to this story.
Miriam Elder is the world editor for BuzzFeed News and is based in New York. Her secure PGP fingerprint is 5B5F EC17 C20B C11F 226D 3EBE 6205 F92F AC14 DCB1
Charlie Warzel is a senior writer for BuzzFeed News and is based in New York. Warzel reports on and writes about the intersection of tech and culture.