Since summer’s end, Silicon Valley’s biggest tech companies have been embroiled in an endless series of missteps and mini scandals.
In early October, after the Las Vegas shooting, Facebook’s crisis response page was flooded with spammy and hyperpartisan news, and Google search queries served up links to 4chan, which was spreading hoaxes to politicize the tragedy. Then Facebook CEO Mark Zuckerberg toured storm-ravaged Puerto Rico in a VR hangout, his cartoon avatar high-fiving a fellow Facebook employee while the two waded in floodwaters. Google unveiled a new Maps feature that showed users how many calories they would burn by walking — and measured those estimates in mini cupcakes, “for perspective.” Twitter briefly suspended actor Rose McGowan in the middle of a series of tweets about sexual harassment; two weeks later, an employee deactivated the account of the president of the United States on their last day at work; then a bug restricted search results for hashtags like #bisexual, making it look as if they were being censored. Apple rolled out an update with a glitch that caused every iPhone in the world to replace the word “i” with a capital A and a question mark in a box. And after the Texas mass shooting, Google spread misinformation by listing conspiratorial tweets about the shooter as “Popular on Twitter,” and YouTube’s algorithm surfaced videos propagating a conspiracy theory that the Texas shooter was linked to antifa. Earlier this month, Twitter verified Jason Kessler, a white supremacist — and then apologized and paused all account verification indefinitely after a backlash.
Viewed separately, each of these missteps could be seen as a reasonably small but unfortunate error. All were remedied and apologized for fairly quickly. Each one has its own explanation that, in the right context, feels at least somewhat understandable. Google’s and Facebook’s algorithms weren’t ready for the speed at which misinformation popped up in the aftermath of the Vegas shooting (and Google’s failures in the aftermath of the Texas shooting prove it is still struggling). Zuckerberg’s disaster infomercial was meant to showcase the (highly dubious) cliché that VR could make us feel deeper empathy for the devastated territory (for which Facebook did raise millions in disaster relief). Twitter’s freeze of McGowan’s account was meant to prevent the spread of private information. Google was just trying to help people be healthier. And on and on.
But these unforced errors couldn’t be coming at a worse time. Between election interference, misinformation, and Washington’s growing unease at its unprecedented scale and influence, Big Tech is in the middle of a legitimate crisis of confidence. At a moment when more people than ever are questioning the platforms’ roles as engines of social power, legitimately wondering whether they’re responsible enough to safeguard us from algorithmically destabilizing democracy, every sloppy mistake feels particularly unsettling. How are we supposed to trust Facebook with safeguarding our elections if it can’t get through a VR demo without offending an entire sovereign US territory? Can we trust Twitter as the president’s primary tool of international diplomacy if a contract employee can just click a button and wipe him off the network?
Baked into the frustration surrounding these gaffes is a sense that Silicon Valley’s biggest companies are incapable of the necessary introspection to see themselves the way their critics might.
“The issue here is that the platform creators are hobbled in their ability to see beyond their own best intentions,” former White House chief digital officer and Silicon Valley veteran Jason Goldman told BuzzFeed News after the Facebook VR debacle. “There's a preexisting bias toward ‘we're doing good.’ The irony is that Facebook is itself a machine for serving people content that confirms their own biases,” he said.
Google’s inability to imagine that a cupcake calorie counter you can’t opt out of might offend — or Twitter’s failure to see how verifying a white supremacist would read like it was conferring legitimacy on his movement — suggests that these companies are emotionally stunted, frozen in an era when their intentions were rarely questioned and their ambitions lauded as novel and almost universally pure. They’re stuck in the circa 2014 model of “delight” — but Easter egg–y doohickeys like calorie counters and VR demos simply land differently when they’re coming from companies that many believe are at least partially responsible for the toxic political and cultural climate of the internet.
Similarly, amateurish mistakes like accidentally blocking a hashtag or promoting fake news feel more forgivable when they’re coming from a young company trying to change the world than they do from one that’s already changed it. As anyone who watched Big Tech’s hearings before Congress can attest, Facebook is no longer just the place you go to kill time, and Google is no longer just a really good search engine, and Twitter isn’t just a collection of hashtag games and dumb jokes — they’re massive companies with world-historic power, and they don’t seem to recognize that consumers aren’t approaching them with the good faith they used to.
Trustworthiness, perhaps more than ever, is Silicon Valley’s most coveted feature. The Googles, Facebooks, and Apples of the world have long been savvy about rebranding fundamental rights such as privacy as incentives to use their products — it’s a marketing tactic that’s crucial to the success of their businesses, which aim to be your everything, and need your data to do that. But the last year has revealed that trust goes well beyond individual user privacy and into more complicated territory: Are these companies aware enough or capable enough or even willing to be responsible stewards of the unprecedented systems they’ve built?
Which is why the last two months of unforced errors have been so damaging to Big Tech. Twitter’s suspension of Rose McGowan and its verification of Kessler aren’t just small blunders — they’re signals to frustrated users, proof in their minds that, despite a renewed commitment from leadership to curb harassment, the promises are hollow.
Similarly, Facebook’s confusing rollout of its revenge porn upload test program isn’t just a gaffe, but a sign that it still has significant blind spots when it comes to how its users will interpret its actions. Truly solving issues like revenge porn is intensely difficult to navigate while preserving privacy, but, as Slate wrote, the “tone-deaf approach” of “asking women who have been victims to upload naked photos of themselves” shows a profound lack of empathy on the part of the social network and an inability to intuit the reaction of its users.
Even Apple has contributed to this feeling of uneasiness with its “i” glitch, which is, reportedly, the result of the fact that “the machine learning algorithm for autocorrect was learning something it never should have learned.” The glitch was fixed within a week but lasted long enough for many to realize how one of our primary modes of simple person-to-person communication is subject to the whims of proprietary, opaque algorithms that are learning independently of humans.
Of course, it’s unlikely that any of these tone-deaf decisions or oversights will lead more than a few handfuls of users to quit the platforms. Public opinion of Big Tech companies is still high — 88% of respondents view Google favorably; Facebook hovers around 60%. But each unforced error peels back the curtain for a brief moment, allowing us to contemplate their centrality in our lives and the greater culture. They get us asking questions. If Apple’s machine learning can unlearn the letter “i,” what else could it decide to forget? Why doesn’t Google have algorithmic guardrails in place for breaking news? And why did it make the same mistakes after Texas that it did during Vegas?
Judging by these companies’ responses, it appears that Big Tech is beginning to feel the heat too. Not long ago, these kinds of mistakes would merit little more than a casual blog update and a reminder that Silicon Valley is always testing, tweaking, and moving fast and breaking things. Today, they’re cause for internal reevaluations and major policy shifts. After its cupcake calorie counter sparked outrage, Google didn’t make it opt-in — it simply killed the feature. Just recently, in response to the backlash to verifying a white supremacist, Twitter appears to have completely changed a verification policy that had stood for nearly a decade.
For years, Silicon Valley has operated under the assumption that consumers would take it at its best intentions. But the last few months suggest that we’ve entered a new era of tech skepticism. The platforms have long demanded more — more trust, more data, more goodwill — from their users. Now, perhaps, it’s the users’ turn to make demands.
Charlie Warzel is a senior writer for BuzzFeed News and is based in New York. Warzel reports on and writes about the intersection of tech and culture.