The digital world is not the same as the offline world. What works in one doesn’t necessarily work in the other.
Hate actors are adept at understanding and creatively exploiting the digital world. One way they spread antisemitism and Holocaust denial is the parasitic tactic of ‘trolling’ public figures on social media with abuse, hateful ideas and misinformation in order to trigger a reaction that amplifies their spread.
It helps if we consider an actual example. If we saw someone preaching Holocaust denial in the middle of the street, we would challenge them, in part on moral grounds, in part to ensure onlookers didn’t hear only lies.
In the digital world, however, direct engagement can be counterproductive. Densely interconnected but low-reach hate actors may have hundreds of followers, but only because they all follow each other. When they spout hate, in reality they spout it only to each other, and the broader public doesn’t see it.
But when a public figure targeted by trolls reacts to antisemitic abuse, responding directly or retweeting lies with a comment, they first rebroadcast the troll’s message to their own followers and, second, signal to the platform’s algorithm that this is a post that drives engagement, time spent on the platform, and thus advertising dollars, pushing it up other users’ timelines.
Engaging them online is the offline equivalent of handing them a megaphone. A better response is to ignore the post, block trolls and, if the content breaches rules or laws, report it to the platform or law enforcement. In the battle for truth, it is far more powerful not to amplify untruths but to share educational material with our audiences from, say, the Holocaust Educational Trust or Yad Vashem. This may even help to inoculate a public figure’s followers against exposure elsewhere.
The truth is that we have been doing quite the opposite for far too long, inadvertently pushing the arguments of antisemites into millions of users’ timelines and making antisemitism and Holocaust denial feel more widespread and normal than they really are. This counterproductively helps to normalise them.
The Center for Countering Digital Hate (CCDH) was set up to inform people about the nature of hate in the online world, the tactics it uses and how to counter them.
This is why we launched, with endorsements from Rachel Riley, Gary Lineker and a host of other celebrities, our #DontFeedTheTrolls initiative, telling social media users to ignore, block and report trolls and then to take time for self-care, because getting abuse is always horrid.
We also believe that technology companies must act, but they have proven reluctant. To date, their responses to campaigners have ranged from denial and delay to outright gaslighting. No more. In December, our celebrity patron, Rachel Riley, tweeted images of posts CCDH found on Facebook groups in which Nazi-level hate was being preached to thousands of members.
The groups – one of which was first reported to Facebook by the Community Security Trust more than two years ago – were shut down within days thanks to the harsh glare of public exposure by a high-profile figure.
In 2020, we are escalating our efforts to force them to do the right thing. It’s hard for tech giants to argue publicly against shutting down groups in which images of Hitler, black people portrayed as apes, and Jews with cloven hooves are spread widely. We’ve had enough of cynical diversions.
The time of arguing over algorithms, artificial intelligence detection and what constitutes hate is over. Tech giants need to demonstrate the will to act, now, to stop hate spreading.
- Imran Ahmed is chief executive of the Center for Countering Digital Hate