Safeguarding and the protection of children and vulnerable adults are everyone’s responsibility, whether in a personal (moral) or a professional (statutory) capacity. But the global nature of the internet, the ease of communication, the fast pace of technological change and the fact that connected devices are integral to people’s lives mean the online world has added complexity to safeguarding and protecting children and vulnerable adults.
Illegal and unacceptable content and activity are widespread online; the most serious threatens national security and the physical safety of children. Online platforms can be, amongst other things:
- a tool for abuse and bullying
- a means to undermine democratic values and debate, including through mis- and disinformation
- used by terrorist groups to spread propaganda and radicalise
- a way for sex offenders to view and share illegal material, or groom and live stream the abuse of children
- used by criminal gangs to promote gang culture and incite violence.
Additionally, platforms can be used for other online behaviours or content which may not be illegal but may be detrimental to both children and adults, for example:
- content with a potential negative impact on mental health and wellbeing
- echo chambers and filter bubbles driven by algorithms, where users are presented with one side of an argument rather than a range of opinions (illustrated in the sketch after this list).
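The filter-bubble point in the last bullet can be made concrete with a small simulation. The following is a hypothetical, deliberately simplified sketch in Python, not how any real platform ranks content: items are ranked purely by how often a user has already engaged with each viewpoint, so a single early interaction quickly skews the whole feed to one side.

```python
# Toy illustration only: an engagement-driven ranking loop that narrows what a
# user sees (a "filter bubble"). All names and data here are hypothetical.
from collections import Counter
import random

random.seed(1)  # make the toy example reproducible

# A pool of items, each presenting one of two viewpoints, "A" or "B".
ARTICLES = [{"id": i, "viewpoint": random.choice(["A", "B"])} for i in range(200)]

def recommend(history, candidates, k=10):
    """Rank candidates purely by how often the user has engaged with each viewpoint."""
    weights = Counter(item["viewpoint"] for item in history)
    ranked = sorted(candidates, key=lambda a: weights.get(a["viewpoint"], 0), reverse=True)
    return ranked[:k]

# A single early click on viewpoint "A" seeds the feedback loop.
history = [{"id": -1, "viewpoint": "A"}]
for round_no in range(1, 6):
    feed = recommend(history, ARTICLES)
    history.extend(feed[:3])  # assume the user engages with the top items shown
    mix = Counter(item["viewpoint"] for item in feed)
    print(f"round {round_no}: viewpoints in feed = {dict(mix)}")

# The output shows the feed rapidly dominated by "A": ranking on engagement
# alone reinforces whichever side the user happened to see first.
```

Real recommender systems are far more sophisticated, but the underlying feedback loop, in which past engagement shapes future exposure, is the mechanism behind the concern described above.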
It is widely recognised that the internet and the world wide web were never designed with children in mind; many protective measures are reactive and inconsistent rather than proactive. Historically, tech companies have largely self-regulated the content, contact and conduct of their users, seemingly responding only when there is public outcry. A large proportion of the blame is often attributed to legislation from the United States, specifically Section 230 of the Communications Decency Act (CDA) 1996. Whilst this is US legislation, its effects are worldwide, given that many tech companies are based in the US.
Often cited as “the twenty-six words that created the internet”, CDA s230 was well intentioned, allowing users freedom of speech whilst protecting the platforms on which user-generated content is published. Whilst it offers no protection for illegal content, without CDA s230 there would be no Amazon reviews or Facebook comments, YouTube videos would be severely restricted, and much more.
Section 230 of the Communications Decency Act 1996 provides immunity to owners of any ‘interactive computer service’ for anything posted online by third parties:
“No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”
But this creates a challenge: whilst there are statutory measures to prevent or remove content that is illegal, what about content that is legal but potentially harmful? Most interactive online services have age restrictions, commonly 13, to comply with data protection and privacy laws (in the US, the Children’s Online Privacy Protection Act (COPPA); in the UK, the General Data Protection Regulation (GDPR)), yet very few have effective age-verification processes or parental controls. Furthermore, potentially harmful content does not just affect children; it can significantly affect adults too, for example misinformation and disinformation about COVID-19, the efficacy of vaccines, election campaigns and much more.
The UK has started to lead the way in this area with the introduction of the Age-Appropriate Design Code, often called the Children’s Code (described further below).
But the Children’s Code is UK legislation and the internet is global. Whilst everyone is at some level of risk, front of mind for any service delivery should be protection for those who are vulnerable, children and adults alike. The evidence is clear that those with a real-world vulnerability are not only more likely to experience online risks but also suffer greater harm than their non-vulnerable peers.
But what is meant by ‘vulnerable’? In the context of online harms, vulnerability is widespread: the term is often applied to children and/or adults with additional needs, children in care, young people in pupil referral units and many others. But anyone can be vulnerable. Consider, for example, an election period and a prospective elected member who uses public social media as part of the campaign: that person is now vulnerable to abuse, harassment and much more. One study sampled 4.2 million tweets during the 2019 General Election campaign and found abuse of candidates in nearly 4.5 per cent of all replies, compared with just under 3.3 per cent during the 2017 General Election.
What is the UK doing?
The UK led the way in this area with the introduction of the Age-Appropriate Design Code. Often called the Children’s Code, it is a statutory code of practice under the Data Protection Act 2018, in force since September 2020, which applies a baseline of protection automatically, by design and by default.
The Online Safety Act 2023 went much further, placing world-first legal duties on social media platforms. The strongest protections in the Act are designed for children, requiring social media platforms and search services to prevent children from accessing harmful and age-inappropriate content. Adult users also receive protections, with platforms required to be more transparent about harmful content and to give users more control over the types of content they see. Platforms that fail to comply with the rules face significant fines, and those running platforms risk prison if they fail to protect children.