Earlier this week, a study was released examining the number of antisemitic incidents that take place on social media worldwide. It found that in 2016 alone, more than 382,000 antisemitic posts were made on platforms such as Facebook, Twitter, Instagram, and YouTube.
That’s one antisemitic post every 83 seconds.
The survey further detailed that an overwhelming 63 per cent of those posts had been found on Twitter (reaffirming past surveys that labelled it the go-to platform for anti-Jewish hatred), followed by 16 per cent on blogs, 11 per cent on Facebook, six per cent on Instagram, two per cent on YouTube, and two per cent on other forums.
Although the survey doesn’t specify as much, I imagine its algorithm only detected antisemitism on public accounts, meaning that accounts set to private would not be included in the results, nor would any posts using the (((echoes))) symbol, which are nearly impossible to find with typical computer searches.
Therefore, it’s important to remember that this number is actually skewed to the low side.
These statistics make a pretty strong case to call on social media platforms to better monitor the content they host. Twitter, for example, claims to adhere to its General Policies and “Rules,” which explicitly state, “You may not promote violence against or directly attack or threaten other people on the basis of race, ethnicity, national origin, sexual orientation, gender, gender identity, religious affiliation, age, disability, or disease. We also do not allow accounts whose primary purpose is inciting harm towards others on the basis of these categories.”
Meanwhile, I could log onto Twitter right this second and find, within my first minute of searching, anywhere from five to ten accounts that are explicitly violating these rules. Should it be my responsibility to report these accounts to Twitter? Given that we’re able to easily conduct surveys to locate the purveyors of antisemitism, wouldn’t platforms like Twitter be able to develop algorithms that automatically flag these accounts? It’s not like it would be difficult – when a Twitter account uses words that are derogatory and offensive to visible identity groups, that account could automatically be flagged and a Twitter staff member could evaluate it.
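As a rough illustration of the kind of automated flagging described above, a first pass could be as simple as a keyword match that routes suspect posts to a human moderator. The term list and helper names here are hypothetical placeholders for the sake of the sketch, not anything Twitter actually uses, and a real system would need far more nuance (context, deliberately obfuscated spellings, multiple languages):

```python
# Illustrative sketch only: a naive keyword-based flagger of the kind
# described above. FLAGGED_TERMS is a hypothetical placeholder list.
FLAGGED_TERMS = {"slur1", "slur2"}  # stand-ins, not real terms

def should_flag(post_text: str, flagged_terms=FLAGGED_TERMS) -> bool:
    """Return True if the post contains any flagged term, so the
    account can be queued for review by a staff member."""
    words = post_text.lower().split()
    # Stripping punctuation also catches terms wrapped in the
    # (((echoes))) symbol, which plain text search tends to miss.
    return any(word.strip(".,!?()") in flagged_terms for word in words)

# Posts that match go into a queue for human evaluation
posts = ["a harmless post", "a post containing (((slur1)))"]
review_queue = [p for p in posts if should_flag(p)]
```

The point of the human-review queue is the same one made above: the algorithm only surfaces candidates, and a staff member makes the final call.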
Mind you, if Twitter handles reports of discrimination the way Facebook does – for example, a colleague of mine recently flagged two photos that were blatantly antisemitic and quickly learned that neither “violated Facebook’s community standards” – such a system would only serve to enable racist and antisemitic behaviour.
If this survey establishes anything, it’s that it is time for social media platforms to properly monitor for hateful content and start holding accountable those who promote such hatred.
Fortunately, some companies have started doing just that.
Last week, reports indicated that several major companies – including Walmart, PepsiCo, Starbucks, and General Motors – are participating in an advertising blackout on YouTube after learning that their advertisements were running alongside videos containing racist and antisemitic content.
This latest wave of companies pulling advertisements from the Google-owned video service comes on the heels of a public apology from Google’s chief business officer Philip Schindler, who promised that Google will be “taking a tougher stance on hateful, offensive and derogatory content.” This includes “taking a hard look at our existing community guidelines to determine what content is allowed on the platform – not just what content can be monetized.”
We know that those who look to spew racist and discriminatory views do so under the guise of “free speech” in a bid to persuade platforms that any attempt to stop them would be an infringement of their rights. But let’s not forget: in Canada, hate speech is against the law. The Criminal Code of Canada prohibits “hate propaganda,” and the Canadian Human Rights Act prohibits discrimination on various grounds. If someone engages in hate speech in an online forum, should the law no longer apply?
Furthermore, if platforms like Facebook and Twitter insist that their Community Guidelines are there to ensure an inclusive environment free of hatred and discrimination, why not take these guidelines seriously? Why not follow the example of companies like Walmart and Starbucks and eradicate racist and antisemitic hatred from social media? When there are a whopping 382,000 antisemitic posts per year (and that’s not counting hate speech against other minority groups), I’m pretty sure that’s a clear sign that it’s time for a real change.
Sara McCleary has written extensively on a wide range of topics while working as a news reporter and freelancer. She has also completed a master’s degree in history, and further graduate work in interdisciplinary humanities.