In May 2019, France called for increasing government oversight over Facebook. Now Facebook has agreed to hand over to French judges the identification data of French users suspected of hate speech on its platform, according to France’s Secretary of State for the Digital Sector, Cédric O.
Previously, according to a Reuters report, “Facebook had refrained from handing over identification data of people suspected of hate speech because it was not compelled to do so under U.S.-French legal conventions and because it was worried countries without an independent judiciary could abuse it”. Until now, Reuters noted, Facebook had only cooperated with the French judiciary on matters related to terrorist attacks and violent acts by transferring the IP addresses and other identification data of suspected individuals to French judges who formally demanded it.
Now, however, “hate speech” — as speech that fails to comply with current political orthodoxy is conveniently labelled — appears to have become comparable to terrorism and violent crime. How autocratic, yet Cédric O apparently loves it: “This is huge news, it means that the judicial process will be able to run normally”.
It is highly probable that other countries will want to have a similar agreement with Facebook; it also appears likely that Facebook would comply. In May, for instance, as France was debating legislation that would give a new “independent regulator” the power to fine tech companies up to 4% of their global revenue if they do not do enough to remove “hateful content” from their network, Facebook’s CEO Mark Zuckerberg commented: “I am hopeful that it [the French proposal] can become a model that can be used across the EU”.
France is the first and so far only country to have entered into such an agreement with Facebook.
The new agreement could signal the de facto end of free speech on Facebook for French citizens. Self-censorship in Europe is already widespread: a recent survey in Germany found that two thirds of Germans are “very careful” about which topics they discuss in public, with Islam and migrants being the most taboo. Knowing that a mere Facebook post could land you in court before a judge is very likely to put a decisive damper on anyone’s desire to speak freely.
French authorities are already in the process of setting an extremely public example of what can happen to those who use their freedom of speech on the internet. Marine Le Pen, leader of the National Rally Party, was recently ordered to stand trial and could face a maximum sentence of three years in prison and a fine of 75,000 euros ($85,000) for circulating “violent messages that incite terrorism or pornography or seriously harm human dignity”. In 2015, she had tweeted images of atrocities committed by ISIS in Syria and Iraq to show what ISIS was doing.
If Facebook’s agreement with France is replicated by other European countries, whatever is left of free speech in Europe, especially on the internet, is likely to dry up fast.
In early July, France’s National Assembly adopted a draft bill designed to curtail online hate speech. The draft bill gives social media platforms 24 hours to remove “hateful content” or risk fines of up to 4% of their global revenue. The bill has gone to the French Senate and could become law after parliament’s summer recess. If it does, France will be the second country in Europe, after Germany, to pass a law that directly requires a social media company to censor its users on behalf of the state.
Also in early July in Germany, where the censorship law known as NetzDG likewise requires Facebook to remove content within 24 hours or face fines of up to 50 million euros, the Federal Office of Justice imposed a €2 million regulatory fine on Facebook “for the incomplete information provided in its published report [its transparency report for the first half of 2018, required under NetzDG] on the number of complaints received about unlawful content. This provides the general public with a distorted image both of the amount of unlawful content and of the social network’s response”.
According to Germany’s Federal Office of Justice, Facebook does not inform its users sufficiently of the option to report “criminal content” in the specific “NetzDG reporting form”:
“Facebook has two reporting systems in place: its standard feedback and reporting channels on the one hand, and the ‘NetzDG reporting form’ on the other. Users who wish to submit a complaint about criminal content under the Network Enforcement Act find themselves steered towards the standard channels, since the parallel existence of standard channels and the ‘NetzDG reporting form’ is not made sufficiently transparent, and the ‘NetzDG reporting form’ is too hidden… Where social networks offer more than one reporting channel, this must be made clear and transparent to users, and the complaints received via these channels are to be included in the transparency report. After all, procedures to handle complaints of unlawful content have a considerable impact on transparency.”
In response, Facebook said:
“We want to remove hate speech as quickly and effectively as possible and work to do so. We are confident our published NetzDG reports are in accordance with the law, but as many critics have pointed out, the law lacks clarity.”
While Facebook claims to be fighting hate speech online and says it has removed millions of pieces of terrorist content from its platform, a recent report from the Daily Beast found that 105 posts featuring some of Al Qaeda’s most notorious terrorists are still up on Facebook, as well as on YouTube.
The terrorists include Ibrahim Suleiman al-Rubaish, who was imprisoned for more than five years in Guantanamo Bay for training with Al Qaeda and fighting alongside the Taliban in Afghanistan against the United States, and Anwar al-Awlaki, an American-born terrorist; both were killed by American drone strikes. According to one US counter-terrorism official, speaking in September 2016:
“If you were to look at people who had committed acts of terrorism or had been arrested and you took a poll, you’d find that the majority of them had some kind of exposure to Awlaki.”
Awlaki was preaching and spreading his message of jihad in American mosques as early as the 1990s. Between 1996 and 2000, two of the future 9/11 hijackers attended his sermons at the Masjid Ar-Ribat al-Islami mosque in San Diego. He is also reported to have inspired several other terrorists, such as the Fort Hood terrorist, Major Nidal Malik Hasan, with whom he exchanged emails, and the Tsarnaev brothers, who bombed the 2013 Boston Marathon. Apparently, that sort of activity does not bother Facebook: the Daily Beast reportedly found the videos through simple searches in Arabic using only the names of the jihadists.
That Facebook appears to be “creatively” selective in how it chooses to follow its own rules is nothing new. As previously reported by Gatestone Institute, Ahmad Qadan in Sweden publicly raised funds for ISIS for two years. Facebook deleted the posts only after the Swedish Security Service (Säpo) contacted the company. In November 2017, Qadan was sentenced to six months in prison for using Facebook to collect money to fund weapons purchases for the ISIS and Jabhat al-Nusra terror groups, and for posting messages calling for “serious acts of violence primarily or disproportionately aimed at civilians with the intention of creating terror amongst the public.”
In September 2018, Canadian media revealed that a Toronto terrorist leader, Zakaria Amara, while serving a life sentence for plotting Al Qaeda-inspired truck bombings in downtown Toronto, nevertheless maintained a Facebook page on which he posted prison photos and notes about what had made him a terrorist. Only after Canadian media outlets contacted Facebook to ask about the account did Facebook delete it “for violating our community standards.”
When will Facebook — and YouTube — make it a priority to remove material featuring the terrorist Awlaki, whose incitement has inspired actual terrorists to kill people?