I find the debate surrounding social media platforms’ ethical obligations to be fascinating.
The conversation raises questions about whether sites like Facebook and Twitter should be policed in a manner similar to the physical world, and it almost always turns into a heated discussion about free speech and whether hate speech laws (at least in Canada) should be upheld online.
Every social media site has its own guidelines for users regarding what it allows and what can be flagged and/or removed. Facebook, for example, says that it will remove “hate speech, which includes content that directly attacks people based on their: race, ethnicity, national origin, religious affiliation” etc.
And yet, documents leaked in recent weeks outlining the company’s guidelines for moderators have drawn attention to Facebook’s stance on Holocaust denial, which is clearly an example of hate speech. Apparently, Facebook “does not welcome local law that stands as an obstacle to an open and connected world,” so the company prefers to flout laws against Holocaust denial.
According to a report in The Guardian, Holocaust denial is illegal in 14 countries, but only four of those – France, Germany, Israel, and Austria – actively ensure that Facebook upholds their laws. These are the only four countries in which moderators are expected to remove such content, “not on grounds of taste, but because the company fears it might get sued.”
Now, I’m all for encouraging open dialogues. Facebook’s Community Standards states that, “People can use Facebook to challenge ideas, institutions, and practices. Such discussion can promote debate and greater understanding.” That’s a great ideal to aspire to, but is that really what’s going on? Are the people who are posting Holocaust denial memes (or anti-Zionist comics or anti-refugee images or any other such material) really doing it to spark a rational discussion?
And, as B'nai Brith's 2016 Annual Audit of Antisemitic Incidents shows, Holocaust denial jumped by 15 per cent last year in comparison to 2015, with several high-profile cases having taken place, including those of University of Lethbridge Professor Anthony Hall and former Green Party candidate Monika Schaefer.
Sadly, the evidence points to the contrary. Many users on social media make racist and antisemitic comments and posts to spread their hatred and to instil similar feelings in others. They’re doing it to undermine the confidence and trust of minority groups they’ve decided are beneath them, and in the hope that their message of hate will reach members of those groups so that they’ll know where they stand. It doesn't matter if anyone replies or comments – they’ve put their beliefs out there, and that’s what’s important to them. And even if someone does reply – have you ever tried to have a rational conversation with a Holocaust denier, or a flat-earther, or a believer in any such conspiracy theory? In my experience, at least, they don’t tend to be particularly open to opposing viewpoints.
This is why the debate about the obligations of social media platforms vis-à-vis free speech vs. hate speech is so controversial. Obviously Facebook doesn’t want to lose users, so it defends the inflammatory content by arguing that it’s promoting dialogue, even though most people know that’s not what’s going on. But is it really up to Facebook to make those calls and determine who has the right to use the platform?
One interesting idea has been floated in the British Parliament, and I’m hoping we’ll hear more about it soon. I couldn’t find much about the idea beyond a brief mention in a news report, but following an All-Party Parliamentary Inquiry into antisemitism, MPs called for an examination of whether it would be possible to ban social media users who spread racism and hatred. They point out that this is not without precedent: sex offenders can have their internet access restricted, and a racism ban could work the same way.
The idea will inevitably prompt all kinds of debate about freedom of speech (the shield of most antisemitic internet users), but it’s an intriguing proposal that I think needs to be explored, not only in the UK but here in Canada as well. It would leave the decision about what is permissible and what is ‘hate speech’ in the hands of the courts rather than Facebook moderators, who likely aren't legal or human rights experts.
Sara McCleary has written extensively on a wide range of topics while working as a news reporter and freelancer. She has also completed a master's degree in history, and further graduate work in interdisciplinary humanities.