By Will Berrington, PR and Media Manager
From the early 1990s, all anyone needed to access the internet’s vast content was to know the right URL or search terms, whether it was news, mental health information, pornography, or extremely violent videos. Occasionally, you just had to click to tell a website you were 18 – with no checks that you really were.
Last year, this changed with the Online Safety Act. Certain types of content which were legal but dangerous became age-gated, with the host websites required to verify that their users were over 18.
This was not universally popular. Memes about the restrictions flourished. People bragged about tricking the age-gating. Others argued that protecting children online should be down to parents, not the government.
But for me, it’s clear that the Online Safety Act has been a huge success.
From dial-up to distress
Being born in 1995, I was part of the first generation to grow up with the internet. I remember the tones of the dial-up, spent far too long on Pokémon forums, and once accidentally got my AOL account banned for saying what I later found out was a rude word in a chat room. When I realised I was LGBT+, I found so much validation in online spaces, at a time when my schooling still felt the lingering effects of the recently repealed Section 28.
This positive side of the internet isn’t something limited to my experiences. When the Mental Health Foundation surveyed young people last year, we found that around three quarters of young people felt very or somewhat connected with others through online communities, and two thirds had been in an online community which had made them feel more confident or supported in who they are.
But this was not all the internet was. I was fourteen years old when I was first passed a phone on the school bus showing someone being subjected to extreme violence. I quickly passed it back – but I had classmates who didn’t, and saw horrifying images.
This was not the only dark side of the internet. People would occasionally trick you into viewing distressing or extreme sexual content with links supposedly to something else. The first time I ever saw someone die was because I clicked on the wrong link on a forum.
And as I grew up, this content was also served by the platforms themselves. I remember browsing Tumblr, and seeing guides on how to self-harm alongside graphic images, in the same places as the LGBT+ positive pages I’d followed. When I asked a friend about content they’d put up alluding to self-harm, they said “it’s just what you do on Tumblr.”
Seven in ten people aged 16 – 21 have similarly seen disturbing or harmful content in online communities. More than a third of young people have seen suicide or self-harm content and more than a quarter have been exposed to pro-eating disorder posts. Similar to my experiences, research from the Office of the Children’s Commissioner suggests that many children are exposed to harmful content like pornography not because they choose to look, but because they stumble across it by accident.
This content had a very real impact on me: I felt less safe with every non-consensual exposure to self-harm, to sexual content, or to extreme violence. While some of my friends have shrugged this off when asked about it, others have offered a different perspective – “of course it messed us up, but that was just part of being a teenager online at the time.”
But now? Things can be different.
The impact and potential of the Online Safety Act
The Online Safety Act changes quite a bit for online platforms, giving these businesses responsibilities similar to those long placed on providers of adult content and services:
- They became legally required to identify and remove illegal content, and ensure it doesn’t feature on your timeline.
- Any platform likely to be accessed by children must implement effective age-gating to block access to harmful content such as pornography.
- Platforms must also prevent access to harmful content like self-harm encouragement and eating disorder content.
- All users – including adults – must be given a greater degree of control over the content they see on their timelines.
Each of these would have helped prevent the harmful content young people were exposed to for years. No longer would my teenage self have had the self-harm videos served directly to him. My classmates would have never shared the extreme violence that they’d come across on a major social media platform. Any exposure to this sort of content would have been a deliberate choice – not an experience thrust upon me and my friends.
Some people complain that this is government over-reach, and that parents should take responsibility for what their children see online. Leaving aside the practical impossibility of this – with parents increasingly overstretched by work, caring responsibilities, and the general demands of life – I ask you: what could my parents ever have done about the links I was tricked into clicking? About the extreme violence passed around on the school bus?
Parents cannot be everywhere. And as much as our parents may be able to keep us safe from certain threats, parental oversight is little match for the might of a multi-trillion dollar industry of social media sites and other online communities. Many platforms showed little progress over two decades on preventing people from seeing this content or assisting parents in protecting children – until the Online Safety Act came into force.
Others contend that this is all pointless, because anyone can download a VPN to bypass many of the safety improvements created by the Online Safety Act, or simply trick the age-gating. But the difference is that circumventing these protections is a deliberate choice for individuals. This is where parental responsibility – and honest conversations about the content teenagers are looking at – comes in. And while I would of course not endorse it, children have always “snuck in” to adult spaces; that is the risk with all age-gating.
These aren’t valid arguments for simply letting young people access these spaces freely. Instead, both the fallibility of age-gating and the need to empower parents highlight why the responsibility should fall on websites. These huge companies are best placed not only to control the content as its hosts, but also to identify patterns of access from young people and patch any ‘holes in the fence.’
Ultimately, the Online Safety Act’s greatest strength is giving us control. Those of us who wish to avoid this content altogether can do so by choosing to filter out adult content. No longer will people accidentally click on links to extreme violence, or be suggested self-harm content on their timelines.
We have been given protection from, and control over, the content we see - something I wish I’d had as a teenager. And for that, we have to praise the Online Safety Act.