Regulating Online Safety: Lessons from Australia

Author: Dr. Rys Farthing

Seven years ago, Australia passed its first online safety bill, the Enhancing Online Safety Act, updating and expanding it in 2021 with the Online Safety Act. While both Acts had problems and pitfalls, they were 'global firsts': attempts to legislate to address the problem. As the UK's Online Safety Bill slowly makes its way, under a now-caretaker government, through its third reading and into the House of Lords, it is timely to reflect on some of the lessons from the Australian experience over the past seven years. Below are four reflections on how the UK can ensure that its reforms adequately tackle online abuse in all its forms.

The Take-Down Strategy

First, focusing on notice and take-down won't fix things. No country can delete its way out of this problem, one piece of content at a time. While this may sound a little obvious, when Australia was forging the path for the world's first online safety law, take-down was the central strategy.

Australia's first legislative attempt, the Enhancing Online Safety Act 2015, embraced this straightforward, 'single-minded' approach. If content was deemed to be cyberbullying targeting children, it had to be taken down. While the scale of the risks the digital world poses is immense, and by today's standards a 'cyberbullying only' focus seems woefully inadequate, it was a bold first move. New ways to define what cyberbullying was, new mechanisms to report it, new responsibilities for digital service providers to take it down, and new authorities to oversee all of this needed to be imagined first, then implemented.

This mammoth effort created a take-down-centric path that Australian regulation has been stuck on ever since. In 2018, for example, non-consensual image sharing was added to the Act as the second type of unsafe content to address. And the 2021 update added another type of unsafe content to the list: cyber-abuse of adults (as well as 'abhorrent violent material' as defined by the Criminal Code, and material denied classification by Australia's Classification Board, bringing the Act into line with existing regulations).

One of the key problems of this approach might have crossed your mind already. What exactly is cyberbullying or cyber-abuse material? Under the 2021 Act, cyber-abuse is defined as content that an 'ordinary reasonable person' would agree was intended to harm an adult, and would consider 'threatening, harassing or offensive'. That's a frightfully open definition, bound to clash with all sorts of cultural and class expectations, as well as the obvious clash between victims' experiences and the privileged perspective of perpetrators. What feels deeply insulting or offensive to someone on the receiving end might be considered 'just in jest' by offenders. The definition is also focused entirely on individual safety, missing online threats to social or community safety. If your approach centers on deleting 'bad content', someone has to define it. And that's always going to be a problem.

In the UK, this has been partly kicked into the long grass in the Online Safety Bill. While there's clarity about addressing already-illegal content, there's an expectation that regulators can and will define legal-but-harmful content later. While the threshold is expected to be high, going beyond mere disagreement or causing offense, it's still open. The lesson from Australia is that this isn't easy: the definitions matter and deserve close attention.

Another problem with this approach, as implemented in Australia, is that it puts all of the burden on victims to report content after the harm has occurred. The Australian Acts lack any proactive responsibilities for platforms or monitoring by the Commissioner. Harm inevitably has to happen before the Acts 'kick in'. The requirements in the UK's Bill around increasing transparency (especially around legal-but-harmful content) are welcome. They should shift the balance of responsibility from victims to platforms.

Focus on Systems and Processes

Secondly, and flowing from this, the central flaw of a take-down-centric approach becomes apparent: its impact is always going to be modest. In 2020-21, Australia issued 2 take-down notices regarding image-based abuse, 5 Abhorrent Violent Material notices, and addressed 954 complaints of cyberbullying directed at children. Regulators — and victims — are stuck playing whack-a-mole, requesting this or that piece of content be taken down as quickly as it is posted. Without a systemic focus, or a trillion-dollar budget for regulators to become de facto global content moderators, it just doesn't work. What's needed is a focus on systems and processes, and on what digital services themselves can do to reduce the risks online before harm happens.

This is where the UK's draft Online Safety Bill shows its potential, in the multiple overlapping duties of care it creates for platforms. Incidentally, a systemic focus was somewhat included in Australia's updated 2021 approach, as a sort of add-on that will shortly see a co-regulatory approach to "basic online safety" standards. But while Australia has adopted a content-first, systemic-safety-second approach, the UK has reversed this, which has the potential to be far more effective. At the very least, both countries will prove excellent case studies for global comparison for years to come.

An Independent Regulator Running a Public Complaints Process

While our first two points have a 'what not to do' flavour, our third and fourth are Australian innovations notably lacking from the current UK proposals, an absence that might weaken their overall impact. The very first version of the Act, way back in 2015, established the politically popular office of the eSafety Commissioner: an independent regulator who is also tasked with running a public complaints mechanism, alongside a more significant education mandate. The independence of the regulator and the public-facing complaints procedure have been the key ingredients in the albeit limited gains Australia has made in the online safety space.

The public complaints mechanism has meant that, under every version of Australia's online safety legislation, members of the public have been able to access a complaints service that operates as a 'backstop'. Children, parents, women and those targeted by some of the worst forms of online content, often left with no recourse from the platforms themselves, have been able to avail themselves of an independent office able to compel platforms to remove content. This is not a systemic solution, and the remedies on offer are limited, but it provides a sense of safety. It's a hard sell to convince a voting public that legislation is working and keeping them safer if their own individual experiences of harm have no avenue for redress.

In the unique Australian milieu, this popularity has been problematic. The accessibility and popularity of these individualized solutions may have provided cover for the lack of systemic ones. Projecting the perception of safety without a systemic underpinning can in fact be disingenuous, and can facilitate the perpetuation of harms. But an Online Safety Bill that includes both might be genuinely effective and popular.

Australia's eSafety Commissioner is independent from politics (although the appointees themselves have come from Big Tech, which has been criticized). The current proposals in the UK open up space for potential executive influence over regulatory oversight. In Australia, political independence has afforded the eSafety Commissioner public trust, as well as enduring influence within their remit; the Australian experience could be instructive here too.

Final Thoughts

The proposals on the table in the UK appear to be a very distant cousin of the Australian legislation. These differences will — hopefully — avoid some of the significant problems that have hampered the impact in Australia. But there may be elements of the Australian model missing, roles that Ofcom simply cannot fulfill. It will be interesting to see, when the Bill is finally passed, the nature and scale of the impact it can create and how this compares to a very different Antipodean approach.
