University team launches AI-driven approach to ‘decode’ online antisemitism


King's College London has helped put together a team of discourse analysts, computational linguists and historians trying to develop a highly complex system to fight online hate

Computer (Photo by Glenn Carstens-Peters on Unsplash)

A British university team is spearheading a major drive to use Artificial Intelligence to tackle subtle online antisemitism in ground-breaking work being revealed this week.

King's College London has helped put together a team of discourse analysts, computational linguists and historians trying to develop a highly complex, AI-driven approach to identifying online antisemitism – both explicit and implicit.

“The combination of these research disciplines is unique to date, both in its set-up as well as in the subject matter of the analysis itself,” said organisers.

AI and machine learning are key to combating the deluge of often untraceable anti-Jewish hatred online, with researchers hoping that the advanced algorithms now being developed can stay one step ahead of armchair antisemites by identifying and flagging hateful content before it can be seen and shared.

The ‘Decoding Antisemitism’ project, announced on Monday, is being funded by the Alfred Landecker Foundation, which joined forces with King’s as well as the Center for Research on Antisemitism at the Technical University of Berlin and other scientific institutions in Europe and Israel.

Analysts said computers would help sift through vast amounts of data and images that humans simply could not assess, owing to the sheer quantity involved.

Studies show that most antisemitic defamation is expressed in implicit ways, such as by writing ‘juice’ instead of ‘Jews’, or using allusions to certain conspiracy narratives or the reproduction of stereotypes, especially through images.

This implicit antisemitism is much harder to detect – and harder to punish – but researchers are hoping to train their new AI programmes by inputting examples so that they can soon act autonomously, having learned how antisemitism expresses itself.
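The article does not describe the project’s actual models or data, but the general idea of a classifier learning from labelled examples can be illustrated with a toy Naive Bayes text classifier in plain Python. Everything below – the sentences, the labels and the stand-in token “codedterm” – is an invented, neutral placeholder for this sketch, not real project material.

```python
# Toy sketch of supervised learning from labelled examples: a tiny
# Naive Bayes text classifier with Laplace smoothing, plain Python only.
# All training sentences and the "codedterm" token are neutral placeholders.
from collections import Counter
import math

def tokenize(text):
    return text.lower().split()

def train(examples):
    """examples: list of (text, label) pairs. Returns per-class token
    counts and per-class document totals."""
    counts = {0: Counter(), 1: Counter()}
    totals = Counter()
    for text, label in examples:
        counts[label].update(tokenize(text))
        totals[label] += 1
    return counts, totals

def predict(text, counts, totals):
    """Return the label (0 or 1) with the higher smoothed log-probability."""
    vocab = set(counts[0]) | set(counts[1])
    scores = {}
    for label in (0, 1):
        n = sum(counts[label].values())
        score = math.log(totals[label] / sum(totals.values()))  # class prior
        for tok in tokenize(text):
            # Laplace-smoothed token likelihood
            score += math.log((counts[label][tok] + 1) / (n + len(vocab)))
        scores[label] = score
    return max(scores, key=scores.get)

# Labelled examples: 1 = contains the placeholder coded term, 0 = benign.
examples = [
    ("the usual codedterm conspiracy again", 1),
    ("codedterm control the banks apparently", 1),
    ("they keep posting codedterm memes", 1),
    ("lovely weather in london today", 0),
    ("the match ended in a draw", 0),
    ("new bakery opened on the high street", 0),
]
counts, totals = train(examples)
print(predict("another codedterm conspiracy post", counts, totals))  # 1
print(predict("the bakery sells fresh bread", counts, totals))       # 0
```

A real system would of course need far richer features (images, context, conversation history) and vastly more data, but the mechanism is the same: the model weights tokens by how strongly they are associated with each labelled class.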

“We have seen in the past that social media companies, already found wanting when it comes to limiting hate speech on their platforms, are very reluctant to act upon such hidden hatred against Jews,” they said.

“The effect is that online users are emboldened to continue to spread and share their hateful messages. The problem has recently been exacerbated, as seen by the rise in conspiracy myths accusing Jews of creating and spreading COVID-19.”

A spokesman for the Alfred Landecker Foundation said it wants to promote “a public discourse in which hateful voices are not allowed to dominate”, adding: “This is why one of the aims of the project is to develop an open source tool that can be used for websites and that is compatible with social media profiles.

“The idea is to support freedom of speech while making sure that antisemites and racists don’t drive away all those interested in respectful discussions.”

Foundation chief executive Dr Andreas Eberhardt said: “Antisemitism and hatred directed against minorities are putting the future of our open society in jeopardy. The problem is only getting worse in the digital sphere. It is essential that we use innovative approaches – such as AI – to tackle these issues head on.”

Berlin-based linguist Dr Matthias Becker said: “Hate speech online and hate crimes are to some extent always connected. In order to prevent more and more users being radicalised on the web, it is important to identify the real dimensions of antisemitism, taking into account the implicit forms that might become more explicit over time.”

Dr Daniel Allington, an AI lecturer at King’s, said the task was difficult because “hatred is often expressed in subtle ways and constantly changes form”.

He added: “Machine learning can serve as a force multiplier, extending the ability of human moderators to identify content that may need to be removed. It is only through partnerships such as this that we can hope to make progress towards protecting minorities in these hard to reach spaces.”

The focus of the project is initially on Germany, France and the UK, but will later be expanded to cover other countries and languages.

Danny Stone, chief executive of the Antisemitism Policy Trust, said there was reason to be cautious, however.

“There is an unhappy history when it comes to AI and antisemitism,” he said. “Tay, the Microsoft-designed chatterbot, was quickly shut down having learned from its social media interactions to post, amongst other content, anti-Jewish racism.”

 
