Facebook Bans British Far-Right Groups and Their Leaders, Says They ‘Spread Hate’

Facebook has banned a number of far-right groups and their leaders in the United Kingdom after determining that they “spread hate.”

The social network announced Thursday that it had imposed bans on the British National Party, Britain First, the English Defence League, Knights Templar International and the National Front.

Seven individuals associated with the groups were also banned, including former British National Party leader Nick Griffin, and Paul Golding and Jayda Fransen of Britain First.

Facebook said the groups had been removed under a policy that bans “those who proclaim a violent or hateful mission or are engaged in acts of hate or violence.”

The restrictions also apply to Instagram.

It’s the latest example of social media companies stepping up efforts to police content on their platforms after years of criticism that they were not doing enough to stop hate speech and extremist content.

In February, Facebook removed British far-right activist Tommy Robinson, whose real name is Stephen Yaxley-Lennon, for posting hate speech.

The company has come under increased scrutiny after a terror attack targeting two New Zealand mosques in March was streamed live on its platform.

Following the attack, Facebook announced that it would ban all “praise, support and representation of white nationalism and separatism.”

The British groups banned on Thursday trafficked in some of the objectionable content that Facebook has promised to remove from its platforms.

Britain First is an ultra-nationalist group that opposes immigration. President Donald Trump caused outrage in 2017 when he retweeted three anti-Muslim videos posted by Fransen, its former deputy leader.

Facebook said that posts expressing praise or support for individuals and groups removed from its platforms on Thursday will also be banned.

Mark Skilton, a professor who studies artificial intelligence at Warwick Business School in England, said the bans were “long overdue” but still not enough.

“Social media platforms need stronger automated controls beyond the occasional ‘token gesture’ of media announcements banning people,” he said.

Regulators are increasingly pushing for new powers to punish social media companies for failing to tackle hate speech. The European Union is threatening to legislate new rules if voluntary arrangements don’t work.

The UK government proposed new rules earlier this month that would make internet companies legally responsible for unlawful content and for material deemed harmful to individuals or national security.

Tech executives could face fines and criminal penalties under the proposal.
