Facebook, Google, Microsoft and Twitter on Monday announced they had joined forces in an attempt to curb explicit terrorist imagery online.
The move follows criticism from Brussels that big US social media groups have made insufficient effort to clamp down on hate speech.
In a statement, the technology groups said they were building new technology that would identify extremist content, including terrorist recruitment videos and images of executions, via a digital fingerprint known as a “hash”, which would then be compiled into a shared global database. Once computed, the hash would act like a digital watermark: any matching copy of the content would then be easy to identify and take down.
“Our companies will begin sharing hashes of the most extreme and egregious terrorist images and videos we have removed from our services,” the companies said. “By sharing this information with each other, we may use the shared hashes to help identify potential terrorist content on our respective hosted consumer platforms.”
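The matching workflow the companies describe can be sketched in a few lines. This is a hedged illustration, not the actual system: the shared database reportedly relies on robust fingerprints designed so that edited copies still match, whereas the exact SHA-256 hash below only matches byte-identical files. All function and variable names are hypothetical.

```python
import hashlib

# Hypothetical shared database of hashes of removed content.
shared_hashes: set[str] = set()

def fingerprint(content: bytes) -> str:
    """Compute a fingerprint of a media file's raw bytes.

    Real systems use perceptual hashes; SHA-256 is used here only
    to illustrate the lookup mechanics.
    """
    return hashlib.sha256(content).hexdigest()

def share_removed_content(content: bytes) -> str:
    """One platform removes content and contributes its hash."""
    h = fingerprint(content)
    shared_hashes.add(h)
    return h

def is_known_terrorist_content(content: bytes) -> bool:
    """Another platform checks an upload against the shared database."""
    return fingerprint(content) in shared_hashes

# Example: a file removed on one service is flagged on another.
video = b"example video bytes"
share_removed_content(video)
print(is_known_terrorist_content(video))       # True
print(is_known_terrorist_content(b"other"))    # False
```

The design choice matters: only hashes are exchanged, not the media itself, so companies can flag matches without redistributing the underlying material.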
The project will be presented at the EU Internet Forum on Thursday, with the database launching in early 2017.
The companies said the collaboration was not a knee-jerk response to European demands but had been under development for several months.
Social media companies have long been accused of failing to take enough action to prevent terrorists using their platforms for propaganda and recruitment. This year, the British parliament’s home affairs committee said they were “consciously failing” to staunch the flow of terrorist propaganda, adding that it was “alarming” that the companies had only a few hundred employees monitoring networks with billions of accounts.
The European Commission on Sunday signalled new laws might be implemented if the companies did not make a concerted effort to delete hate speech and radical content posted by terrorist groups more quickly.
In the US, the state and justice departments have encouraged technology companies to develop ways to spread counter-extremism content, which targets those at risk of radicalisation with more moderate messages. The Counter Extremism Project, a US non-profit organisation, recently launched technology to identify terrorist content, based on techniques used to locate and take down child pornography.
“For social media platforms, their business models depend on being open, so they are all afraid of that blunt instrument of legislation,” said Zahed Amanullah, head of counter narratives at the Institute for Strategic Dialogue, a London think-tank. “They are all trying to improve algorithms that explicitly identify extremist content from a database, even though it’s not an exact science.”
Although Brussels has heaped pressure on big internet companies to be more assiduous, the commission has proved reluctant to fundamentally alter the legal framework that means they are not liable for content posted by users as long as they act swiftly to remove illicit material once they are aware of it.
But this legal protection does not exist if an internet company becomes an active curator by searching for such content.
Officials in Brussels are discussing a “good Samaritan” clause that would give tech groups more legal protection when they sought to root out content such as hate speech or material that infringed copyright. Industry groups have been lobbying for such a rule since last year.
Critics say this would tilt the law too far in favour of censorship, with internet platforms incentivised to take down content rather than erring on the side of free speech.
“The commission keeps complaining that the online companies are too powerful — and their solution is to have those companies, in the absence of responsibility, use their algorithms to make decisions about what we can say and do online,” said Joe McNamee, executive director of EDRI, a group that lobbies for civil rights online.
Facebook and Twitter insist the new hash database will not act as a generalised censorship tool but will instead help human reviewers flag the most egregious content.
Additional reporting by Hannah Kuchler in San Francisco