require social media companies to perform a duty of care to users


Hate speech is proliferating online, and governments, regulators and social media companies are struggling to keep pace in their efforts to combat it.

This week, the racist abuse of Black English soccer players on Facebook and Twitter again brought the issue to the fore, showing how slow and ineffective tech companies have been in trying to control it, and underscoring the urgent need for more stringent laws.

Australia’s piecemeal approach

In Australia, the regulation of this type of harmful online conduct is still in its infancy.

In February, a digital industry body drew up an Australian code of practice on disinformation and misinformation, which most major tech companies have adopted.

However, this self-regulatory approach has been criticized by community organizations for its voluntary, opt-in nature. Some have argued the code sets its threshold too high by requiring an “imminent and serious” threat from any of the identified harms, which could render it ineffective.

The Australian Communications and Media Authority has been tasked with reporting on the effectiveness of the code, which is expected to happen soon.



Read more: 6 actions the Australian government can take right now to target racism online


Australia also has the “Safety by Design” framework, developed by the eSafety Commissioner. This is another voluntary code of practice, encouraging tech companies to mitigate risk in the way they design their products.

At the end of June, the federal parliament also passed a new online safety law. This legislation was developed in the wake of the live-streamed Christchurch massacre. It regulates certain specific types of harm, such as cyberbullying of children and live streams that could promote or incite extreme violence.

The law creates a complaints-based system for removing harmful material, and in some cases gives the eSafety Commissioner the power to block sites. It also has a broad remit, covering a wide variety of online services. Yet it addresses only a few specific types of harm, not all harmful speech.

An inquiry into extremist movements and radicalization is also underway by the Parliamentary Joint Committee on Intelligence and Security (PJCIS). It is charged with examining steps the federal government could take to disrupt and deter online hate speech, terrorism and extremism, as well as the role of social media and the internet in enabling extremists to organize.

The inquiry was due to report to the home affairs minister in April; it is now overdue.

A much deeper problem

These measures are a step in the right direction, but they still treat each specific type of hate speech as a separate issue.

ASIO recently reported on the problem of online hate echo chambers. And Australian researchers have highlighted how right-wing extremists routinely dehumanize Muslims, Jews and immigrants as a way to rally support behind radical worldviews and to socialize people toward violent responses.

On mainstream platforms, this is achieved through a relentless diet of warped news that supports conspiracy theories and the “othering” of marginalized communities.



Read more: Facebook’s inability to pay attention to languages other than English allows hate speech to flourish


We know that the existing laws to combat these types of hate speech and disinformation are inadequate.

For example, we have civil laws against discrimination and hate speech, but they rely on victims to take legal action themselves. Members of targeted communities can deeply fear the repercussions of legal action, which can also come at enormous personal cost.

What other countries are doing

Governments around the world are grappling with this problem as well. In New Zealand, for example, there has been considerable debate about reforming hate speech laws, particularly whether a clear link to violence is needed before the regulation of hate speech can be justified.

Germany has enacted one of the toughest online hate speech laws, fining social media companies up to €50 million for failing to remove “clearly illegal” content. Civil rights activists, however, argue it infringes on freedom of expression.

France also passed a law last year that would have required online platforms to remove hateful content reported by users within 24 hours, but a court struck down the provision on the grounds that it infringed freedom of expression in a way that was not necessary, appropriate and proportionate.

A potential new model in the UK

A more holistic approach being developed in the UK, however, could lead the way.

The Carnegie Trust has developed a proposal to introduce a legal duty of care in response to online harms. Just as we require the builders of roads, buildings or bridges to exercise a duty of care to the people who use them, the idea is that social media companies should be required to address the harms their platforms may cause to users.

The UK government incorporated the idea into its Online Safety Bill, released in May for public consultation. Presented as a “new framework to tackle harmful online content”, the sprawling legislation (145 pages) is built around duty-of-care obligations.

There are still some concerns. The Carnegie Trust itself has criticized a number of aspects of the bill. And the powers conferred on the culture secretary are of particular concern to free speech advocates.

Despite these concerns, there is a lot to be said for the comprehensive approach being pursued. First, the legislation fits within the existing framework of negligence law, in which companies owe a duty of care to the general public who use the facilities they create and operate.

Second, it places the burden of responsibility on social media companies to protect people from the harms their products might cause. This is a better approach than the government penalizing social media companies after the fact for hosting illegal or harmful content (as happens under the German law), or leaving an eSafety commissioner to do the heavy lifting on regulation.



Read more: Will the government’s online safety laws for social media come at the expense of free speech?


Most importantly, this approach allows broad coverage of existing and emerging types of online harm in a rapidly changing environment. For example, online speech that poses a threat to the democratic process would fall under the new law.

While the details of the UK bill will no doubt be debated in the coming months, it offers an opportunity to effectively tackle a problem that many agree is growing ever larger, yet is very difficult to solve. A legal duty of care may be just what is needed.


Rita Jabri Markwell, adviser to the Australian Muslim Advocacy Network (AMAN), contributed to this article. The civil society organization monitors online hate and engages directly with Facebook, Twitter and the Global Internet Forum to Counter Terrorism.

