British far-right groups should be banned from the BBC and other media because they end up being the “acceptable face” for extremist ideologies used by terrorists, according to a new report by former prime minister Tony Blair’s think tank.
The Tony Blair Institute for Global Change will release two new reports on Wednesday aiming to start a debate around the platforming of far-right and extremist groups in the UK media.
One of the reports looked at four far-right activist groups — Generation Identity, Britain First, For Britain, and the British National Party — and found that there was overlap between their core messages and the manifestos of Norwegian terrorist Anders Breivik and alleged Christchurch mosque terrorist Brenton Tarrant.
It showed Generation Identity and the BNP were both pushing the central extremist themes used by Breivik and Tarrant in their manifestos around a “government-enforced ‘white genocide’” and the “great replacement theory”.
Azmina Siddique, a policy adviser at the Tony Blair Institute, said one recommendation was for the Home Office to come up with a new classification for “hate groups” that didn’t necessarily incite violence but, rather, pushed extremist ideas that demonise minority groups.
She said these “hate groups” would then be prevented from appearing in the UK media or “engaging with public institutions” like universities.
“These groups present themselves as legitimate spokespeople for broader communities; you see this with Islamist groups as well,” Siddique said.
“There's one thing about having a free and fair debate and giving people representation, and then there's overemphasising how much influence they have.”
Siddique pointed to the BBC’s recent coverage of Generation Identity. Earlier this year, the BBC’s Newsnight interviewed a leader of the UK branch of Generation Identity in the immediate aftermath of the mosque massacre in Christchurch, New Zealand. The BBC defended the decision at the time. It later turned out that the suspected Christchurch shooter had been in contact with and donated to the far-right group.
The BBC has since followed up with more interviews with different branches of the “far-right youth network”: a video profile shot at the group’s European headquarters and an explainer featuring an interview with its leader in Austria.
“It's about acknowledging where their ideological roots are, and acknowledging that they're not innocuous,” Siddique said. “This isn't part of the broader public discourse; it stems from extremist ideology.
“That can get lost because they have such an acceptable face.”
Unlike existing UK law around terrorist and proscribed organisations — where the Home Office outlaws groups which incite violence, like Hezbollah and the IRA — Siddique said the new classification could work like a “time-limited, warning label” which would be up for review: “It’s saying, ‘Look, these groups believe in the same ideologies terrorists do.’”
But free speech campaigners are unimpressed. Jodie Ginsberg, chief executive of the Index on Censorship, told BuzzFeed News the UK government didn’t need new laws; rather, it should focus on enforcing the ones already in place.
"The government already has plenty of legislation it can use to tackle those who deliberately stir up racial and religious hatred, and in particular laws that address incitement to violence,” Ginsberg said.
“Governments have for decades tried — and failed — to find a way of defining extremist language in a way that would not end up simply scooping up vast swathes of legal political speech.
“Proposals to give the government greater powers to define and outlaw new kinds of speech simply open the door to more and more state censorship of speech.”
The Tony Blair Institute also called for consistency in the way tech companies deal with misinformation spread by activists on their platforms. In April, the Sri Lankan government blocked access to social media to stem the spread of disinformation in the wake of deadly attacks on churches and hotels.
Authors of the report criticised Sri Lanka’s response but said the UK government should work with tech companies to come up with “emergency algorithms” to remove misinformation posted during a crisis, or, as they put it, “nip online hate in the bud before it spills offline”.
Ginsberg called the idea that there should be “emergency periods” for when certain speech should be removed from the internet “troubling”.
“Relying on tech companies to implement emergency algorithms that only isolate ‘bad speech’ betrays a woeful ignorance of the way in which such algorithms work,” she said.
“Using this kind of blunt tool inevitably ends up preventing much essential expression — including distribution of lifesaving information and evidence gathering of crime — as well as any hateful content.”