The Canadian government is considering new rules governing how social media platforms must handle potentially harmful user content. Internet law scholars around the world have criticized the proposal as the worst of its kind anywhere.
Surprisingly, the proposal reads like a list of the most widely condemned policy ideas in the world. These ideas have been strongly opposed by human rights groups and dismissed as unconstitutional. There is no doubt that the proposed federal law poses a serious threat to human rights in Canada.
The government’s intentions are good. The law aims to reduce five types of harmful online content: child sexual exploitation content, terrorist content, content that incites violence, hate speech, and the non-consensual sharing of intimate images.
Although this content is already illegal, further reducing its distribution is a reasonable goal. Governments around the world, particularly in Europe, have introduced legislation to combat these harms. The problem is not the government’s intention. The problem is the government’s solution.
Serious privacy issues
The proposed rules are simple. First, online platforms would be required to monitor all user speech and assess its potential for harm. Service providers would need to take “every precautionary measure,” including the use of automated programs, to identify harmful content and block its visibility.
Second, anyone would be able to flag content as harmful. The platform would then have 24 hours from the first flag to determine whether the content is actually harmful. Failure to remove harmful content within that window could result in a substantial penalty: up to 3 percent of the service provider’s global revenue or $10 million, whichever is higher. For Facebook, that could mean a penalty of $2.6 billion per post.
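The penalty arithmetic behind the $2.6 billion figure can be checked with a quick calculation; the $86 billion figure used for Facebook’s annual global revenue is an illustrative assumption (roughly its reported 2020 revenue), not a number from the proposal itself.

```python
def penalty(global_revenue: float) -> float:
    """Maximum penalty under the proposed rule, in dollars:
    the greater of 3 percent of global revenue or a $10 million floor."""
    return max(0.03 * global_revenue, 10_000_000)

# Assumed ~$86B annual revenue for Facebook (illustrative only):
print(penalty(86_000_000_000))  # 3% of $86B = $2.58 billion, i.e. ~$2.6B
# A small platform falls back to the $10 million floor:
print(penalty(100_000_000))     # 3% would be $3M, so the $10M floor applies
```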
Active monitoring of user speech raises significant privacy concerns. Without limits on this surveillance, the government would gain a dramatic expansion of its monitoring capabilities.
The Charter of Rights and Freedoms protects all Canadians from unreasonable search. But under the proposed law, service providers, acting as agents of the government, would conduct searches without any requirement of reasonable suspicion. All content posted online would be searched. Potentially harmful content could be retained by the service provider and handed over, secretly, to the government for prosecution.
Canadians who have nothing to hide may still be caught up. Social media platforms process enormous volumes of content every day, so active monitoring is only possible with automated systems. Automated systems, however, are notoriously inaccurate. Even Facebook’s human content moderators are reported to be accurate less than 90 percent of the time.
Social media platforms are not like newspapers; careful review of every piece of content is simply not feasible at that scale. The result is predictable: many innocent Canadians will be referred for criminal prosecution under the proposed law.
But it gets worse. If an online communications service provider decides during the critical 24-hour review window that your content is not harmful, and the government later decides otherwise, the provider risks losing 3 percent of its global revenue. Any rational platform will therefore remove far more content than what is actually illegal. Human rights advocates call this troubling practice “over-removal.”
Identifying illegal content is difficult, so the risk of wrongful removal is high. Hate speech illustrates the problem well. The proposal expects platforms to apply the Supreme Court of Canada’s definition of hate speech. Identifying hate speech is difficult even for the courts, let alone for algorithms or for low-paid content moderators who must make decisions in a matter of seconds. Because offensive speech is not necessarily hate speech, platforms may simply remove anything that is even mildly offensive. Ironically, the minority groups this law seeks to protect are the ones most likely to have their speech removed.
We must demand better
So what should be done about online harms? One step in the right direction is to recognize that not all harms are the same. For example, child pornography is far easier to identify than hate speech. Removal deadlines, accordingly, should be shorter for the former than for the latter.
And while revenge pornography might qualify for removal on a victim’s application alone, offensive speech might require submissions from the poster and review by an independent agency or a court before removal is required by law. Other jurisdictions already draw these distinctions. Canada should too.
Controlling online harm is a serious problem that the Canadian government, like governments everywhere, must address to protect its citizens. Child pornography, terrorist content, incitement to violence, hate speech, and revenge pornography have no place in Canada. There is much that can be done to curb their spread online.
But the proposed law creates more problems than it solves. It reads like a collection of the worst policy ideas introduced anywhere in the world over the past decade. No other free and democratic country has been willing to accept these restrictions.
The threats to privacy and freedom of expression are obvious. Canadians deserve better.