In November, Twitter introduced a trio of features intended to combat abuse, hate speech, and trolling — a mute filter, muted conversation threads, and new user report infrastructure. Today, the company unveiled three more, making good on its public pledge last week to work more quickly to make its platform a safer place. Twitter introduced new "safe search" results, a timeline change intended to collapse "potentially abusive or low-quality" tweets, and a new effort to crack down on the creation of new abusive accounts from repeat offenders.
On Twitter, shutting down serial harassers requires a time-consuming, inefficient whack-a-mole style of policing because it's so easy for trolls to quickly return to the platform with a new identity after they've been banned. To stop this, Twitter is implementing a stricter policy that will "identify people who have been permanently suspended and stop them from creating new accounts." The company didn't say how it plans to enforce this policy.
Twitter is also rolling out new "safe search" results to filter out tweets that "contain potentially sensitive content and Tweets from blocked and muted accounts." While it's unclear exactly how Twitter defines "potentially sensitive" content — and how effective its filters will be in parsing and weeding out abuse — the change could help people more easily avoid content they don't want to see.
Finally, Twitter is changing its timeline to identify "potentially abusive and low-quality replies" and collapse them. These tweets will still be discoverable, but you'll need to click a "Show less relevant replies" button to see them.
Like safe search, Twitter's "show less relevant replies" feature is largely cosmetic — it hides abuse instead of fixing it. That said, it could help shield people who've suffered abuse on Twitter from further unwanted interactions.
These anti-abuse updates come as Twitter finds itself at the center of the political world, a key communication tool of the Trump administration, its supporters, and opponents. As such, Twitter is under increasing scrutiny for the harassment that sometimes occurs on its platform, its efforts to police it, and how its rules forbidding that kind of behavior might apply to one of the most powerful Twitter accounts around.
Amid the chaos, Twitter is focusing unprecedented attention on its abuse problem. Last week, its VP of engineering, Ed Ho, tweeted that the company was making harassment a "primary focus" inside Twitter, with an eye toward faster iteration and response to pressing product problems. Shortly after, Twitter announced a small but important tweak: users can now report abusers even if the abuser has blocked them. (Previously, a long-standing flaw in abuse reporting meant trolls could harass a user and then block them to avoid being reported.) "We heard your feedback," Twitter said at the time.
That Twitter finally appears to be listening to users who've called for better anti-harassment tools is heartening. But after a decade of inaction, winning back user trust won't come easily.
Charlie Warzel is a senior writer for BuzzFeed News and is based in New York. Warzel reports on and writes about the intersection of tech and culture.