“If governments are to retain a firm hold of authority and not be compelled to yield to agitators, it is imperative that freedom of judgment should be granted, so that men may live together in harmony, however diverse, or even openly contradictory their opinions may be. In a democracy…everyone submits to the control of authority over his actions, but not over his judgment and reason.”
So wrote Dutch philosopher Baruch Spinoza in his Theological-Political Treatise of 1670. At the time, most rulers and thinkers believed that a policy of free speech would lead to bloodshed, sedition and atheism. Spinoza, on the other hand, argued that freedom of conscience and speech were necessary preconditions for pluralism, tolerance and liberty. The Portuguese-born Jewish philosopher wasn’t blind to the potential harms of free speech, but thought that they were outweighed by the benefits. “I confess that from such freedom [of speech], inconveniences may sometimes arise,” he wrote. “But what question was ever settled so wisely that no abuses could possibly spring therefrom?”
But in this online age, with free, instant global communication available to all, trust in the benefits of uninhibited speech has eroded—even amongst those who benefit most from these freedoms. In particular, the apparent unwillingness of tech companies to resist demands for a “safe” internet, free from all “harmful” content, is a recipe for the repudiation of the free-speech legacy that first began to take shape in the Enlightenment. In some cases, it may even lead to new forms of digital McCarthyism.
In a much-discussed Washington Post op-ed published in March, for instance, Facebook CEO Mark Zuckerberg explicitly invited governments to regulate online content, writing that “lawmakers often tell me we [companies] have too much power over speech, and frankly I agree. I’ve come to believe that we shouldn’t make so many important decisions about speech on our own.” According to Zuckerberg, “internet companies should be accountable for enforcing standards on harmful content. It’s impossible to remove all harmful content from the Internet, but when people use dozens of different sharing services—all with their own policies and processes—we need a more standardized approach.”
To be fair, massive social-media platforms such as Facebook do face hard questions when it comes to the governance of online speech. Facebook has tried to meet this challenge by producing evolving content standards enforced by an army of thousands of moderators, who apply opaque rules running to some 1,400 pages. The result has too often been arbitrary content deletions and user bans, fueling accusations of political bias. Call it “moderation without representation.”
The recent terrorist attacks in New Zealand, part of a marked increase in deadly white supremacist attacks, have only exacerbated existing concerns over far-right extremism. Accordingly, Facebook has moved to ban even non-violent expressions of “white nationalism,” in addition to “white supremacism,” which already was prohibited. The most recent purge resulted in a permanent ban of a number of right-wing conspiracy theorists and provocateurs, including Alex Jones and Milo Yiannopoulos, as well as the leader of Nation of Islam, Louis Farrakhan. Yet Facebook still allows Hizb-ut-Tahrir, banned in Germany, despite its leaders’ advocacy of a global caliphate that would substitute Islamic law for secular democracy. Advocating revolution and the forcible subjection of one socioeconomic class over another also is kosher on major online platforms. Antifa, a loose network of far-left groups, which sometimes incite, glorify and engage in violence at rallies and demonstrations, also is allowed to organize protests on Facebook.
These paradoxes exist because Facebook’s cobbled-together censorship regime tends to reflect the moral contradictions that already were embedded in so-called WEIRD intellectual spheres—i.e., the concerns of people who are Western and Educated, and from Industrialized, Rich Democracies. In WEIRD societies, the promotion of anti-discrimination has become a guiding value, reflecting both traumatic historical experiences and a modern spirit of genuine idealism.
This is exemplified by American college students’ attitudes toward freedom of speech. The majority of such students value free speech highly in the abstract. But they also typically think this freedom sometimes conflicts with diversity and inclusivity. And when conflicts occur, most favor the latter over the former. When it comes to hate speech, a full 68% of students think social-media companies should be responsible for limiting such expression. The WEIRD values underlying Silicon Valley censorship regimes are embedded in Twitter’s rules on “hateful conduct.” They emphasize the protection of “women, people of color, lesbian, gay, bisexual, transgender, queer, intersex, asexual individuals, marginalized and historically underrepresented communities.” Specific examples of hate speech include “deadnaming” and the misgendering of trans people.
So unlike in dictatorships such as China, where censorship is a tool of authoritarian state control, WEIRD censorship generally is born of good intentions. But it still has negative consequences for free expression, since it serves to value the speech and dignity of groups differently. This not only creates a new form of discrimination, it also encourages the classification of human beings as members of specific ethnic, racial or religious groups—as opposed to treating people as individuals who may hold different views and have different thresholds for offense and emotional hurt. In fact, research suggests that multicultural policies of emphasizing racial and ethnic differences may increase popular beliefs in “race essentialism.” Such effects are difficult to square with any principled defense of freedom of expression, or even cultural pluralism, especially since group essentialism can express itself as tribalism.
By noting the problem of “hate speech” on digital platforms, Zuckerberg himself seems to have implicitly conceded that any future system of government regulation should not be based on robust First Amendment principles. That’s because hateful speech generally is protected from censorship in the United States, except in such cases where it would have the intended and reasonably foreseeable effect of inciting imminent lawless action. Of course, Facebook isn’t governed directly by the First Amendment, which constrains government, not private citizens or companies. But there is an important difference between Facebook adopting restrictive community standards (a) on its own initiative, and (b) as a means to comply with a universally applied government standard that governs all major platforms and even has extraterritorial reach.
One problem is that the WEIRD values that inform the community standards of Facebook and Twitter are, themselves, far from universal. In 2015, a Pew study surveyed global attitudes toward free speech in 38 countries. It showed that Americans and Western Europeans (in that order) are much more tolerant of speech that is sexually explicit or religiously offensive than people in most other nations—especially survey respondents in Asia, Africa and the Middle East.
Both Facebook and Twitter ban speech deemed to be hateful on the basis of sexual orientation and gender identity. In countries such as Russia, Uganda, Egypt and Singapore, however, censorship is used for the opposite purpose—to “protect” traditional values from the supposed “moral corruption” of LGBT influence. Facebook, like most popular social media networks, has users all over the world. But there is no one set of global norms that can satisfy both the demands of the LGBT community and homophobes claiming to be harmed by displays of “deviant moral practices” or “gay propaganda.”
Religion poses a similar problem. The rise of anti-Muslim bigotry in the West has led to a crackdown on hate speech, which sometimes now includes even blasphemy. In particular, social media companies have sometimes removed content from atheist and ex-Muslim groups critical of Islam. But in many Muslim-majority countries, non-Muslim minorities, secularists and atheists are openly discriminated against, including through nebulous laws that prohibit the criticism of Islam. And so it is difficult to imagine, say, Pakistan, Iran and Saudi Arabia on one hand, and Denmark, Canada and The Netherlands on the other, agreeing on global content standards in this area.
In other words, if we are to realize Zuckerberg’s idea of an internet that is “safe” from “harmful” content, we will have to choose which groups get to enjoy a digital safe space. Ironically, this could pave the way for a weakening of progressive-friendly censorship regimes in cases where global attitudes toward women, the LGBT community and secularism differ sharply from those found in the WEIRD world of Silicon Valley.
Globally binding standards also could serve to make free-speech protections less robust in the aftermath of a crisis, when national-security concerns and moral panics induce democracies to compromise their citizens’ civil liberties. Such would be the case with the EU proposal that tech companies be required to remove “terrorist content” within an hour of being notified, or face huge fines. France’s law against “fake news” provides another example. Facebook already has taken the unprecedented step of allowing French regulators to “embed” within its moderator corps, increasing the risk of collusion between private companies and government. Following Germany’s far-reaching NetzDG law against fake news and dangerous agitation, Facebook has fallen in line by hiring more than a thousand new moderators. And on April 8, the UK government presented a “White Paper” aimed at “mak[ing] Britain the safest place in the world to be online,” giving its regulators new wide-ranging powers to force tech companies to remove “harmful” content or risk harsh penalties.
Some might insist that these initiatives constitute an attempt to strengthen rather than weaken democratic values. But increasing government control over the internet sets a dangerous precedent. Had Facebook been operating in the 1950s, Zuckerberg’s censorship-friendly attitude would have played into the era’s worst McCarthyist tendencies. And how many of those in favor of the assertive French model of regulation would feel comfortable if the Trump Administration were armed with the same powers to snoop and censor American users?
If the countless terabytes uploaded to internet platforms every second were to be policed by human beings, even the most restrictive and detailed content standards would be unenforceable—much as the Catholic Church’s Index of Prohibited Books couldn’t keep up with the flood of “heretical” texts inspired by the Reformation, Scientific Revolution and Enlightenment. (In the words of a 16th-century censor, “what we need is a halt to printing, so that the Church can catch up with this deluge of publications.”) In fact, that may be one of the main reasons why the internet remains as free as it is today.
But things may be changing, thanks to the possibility of cross-network (and even cross-border) censorship protocols, and new artificial-intelligence technologies. As Mark Zuckerberg himself put it, “building AI tools is going to be the scalable way to identify and root out most of this harmful content.” Even now, the vast majority of porn uploaded to Facebook never makes it on to users’ feeds, as it is identified by algorithms and automatically deleted. When it comes to hate speech, around 50% of such content is flagged by AI. The algorithms are always getting better, and democratic governments are likely to seize on such models since they can be implemented at relatively low cost.
It’s also notable that the EU’s new copyright directive includes provisions that hold internet hosting services responsible for copyright infringements on their platforms. As UN Special Rapporteur David Kaye warned, this development “appears destined to drive internet platforms toward monitoring and restriction of user-generated content even at the point of upload.” Such innovations may move us uncomfortably close to the reintroduction of pre-publication censorship (also known as “prior restraint”), the abolishment of which was one of the most important victories for Enlightenment ideals. In regard to the English Licensing Act, whose prior-restraint provisions finally lapsed in 1695, John Locke wrote to lawmakers: “I know not why a man should not have liberty to print whatever he would speak; and to be answerable for the one, just as he is for the other, if he transgresses the law in either. But gagging a man, for fear he should talk heresy or sedition, has no other ground than such as will make chains necessary, for fear a man should use violence if his hands were free, and must at last end in the imprisonment of all who you will suspect may be guilty of treason or misdemeanor.”
Perversely, tech giants such as Facebook and Google may actually benefit from online censorship, which may explain why Zuckerberg seems willing to compromise the freedoms he relied upon to build his empire. The enforcement of a standardized global content regulation scheme would create a formidable barrier to entry for potential competitors, as compliance would require either armies of censors, or large-scale software systems, or both. Zuckerberg has admitted as much, albeit not in so many words. In his testimony to Congress, he noted that “when you add more rules that companies need to follow, that’s something that larger companies like ours just has the resources to go do and it just might be harder for a smaller company just getting started to comply with.”
This would hardly be the first time in history that market incumbents with a monopoly on communication technology lobbied for censorship. The aforementioned English Licensing Act of 1662 gave England’s private Stationers’ Company a monopoly on the publication of books—which meant that the company also had to police the licensing, trading and production of print to ensure that English laws against seditious libel, blasphemy and heresy were respected. Naturally, the Stationers’ Company was among the most vocal supporters of licensing.
In fact, much of Locke’s opposition to the Licensing Act was aimed at the “lazy, ignorant Company of Stationers,” whose monopoly effectively limited the availability of books. Locke highlighted the much more liberal Dutch Republic (where, in exile, he had written his Letter Concerning Toleration). In the early Enlightenment, the Dutch Republic became a European hub for the printing of daring newspapers, books and pamphlets. These were then smuggled into less liberal European states and shared through clandestine networks of liberals, philosophers, freemasons, religious dissidents and others keen on expanding their minds with heterodox, subversive and shocking ideas—much in the same way that, in our own age, social media serves as the only method of spreading uncensored news and opinion in many authoritarian states. Such a function would surely be hampered by legally binding global-content standards.
It is true that speech may cause harm, and some content really should be off limits. This year marks the 25th anniversary of the genocide in Rwanda, which was sparked during a period when incitement to mass extermination was amplified by radio transmissions. Evidence also suggests that the use of social media has contributed to ethnic cleansing in Myanmar and Sri Lanka. And the internet continues to be a useful tool for both right-wing and jihadist terrorists alike.
Yet the novelty of social media and the lack of a settled communication culture may also lead us to overestimate the harms and underestimate the benefits of uninhibited global communications. No tweet, YouTube video or Facebook update has ever come close to inspiring the amount of hatred, extremism and mass killing that ultimately arose from Mein Kampf or Mao’s Little Red Book. Still, few countries ban the sale, purchase or distribution of these old-school print publications. In fact, you can buy them both on Amazon. Likewise, new research suggests that the scale and effect of “fake news” was greatly exaggerated in the aftermath of the 2016 U.S. presidential election.
A more promising way to mitigate the real harms of online speech would be for tech companies to voluntarily commit to heeding the more limited concept of Dangerous Speech, a term coined by scholar Susan Benesch. Dangerous Speech, defined as expression “that can increase the risk that its audience will condone or commit violence against members of another group,” is a narrower and more precise category than “hate speech,” and is aimed at preventing mass violence rather than more subjective forms of harm.
No doubt, such an approach would entail its own problems, and hard cases would be unavoidable. But it would be preferable to legally binding global standards that would almost inevitably dilute existing free-speech protections in states that are liberal, while legitimizing further restrictions in those that are authoritarian. As Spinoza wisely cautioned, “he who seeks to regulate everything by law is more likely to arouse vices than to reform them.”