
Must Know: Online harms

The purpose of this guide is to raise awareness of online harms and empower councillors by providing an introduction to online risks, an overview of the Online Safety Act, key considerations, signposting to useful resources, and a checklist to support effective decision making.

Introduction

Safeguarding and the protection of children and vulnerable adults is everyone’s responsibility, whether in a personal (moral) or a professional (statutory) capacity. But the global nature of the internet, the ease of communication, the fast pace of technological change and the fact that connected devices are integral to people’s lives mean the online world has added complexity to safeguarding and protecting children and vulnerable adults.

Illegal and unacceptable content and activity are widespread online, the most serious of which threatens national security and the physical safety of children. Online platforms can be, amongst other things:

  • a tool for abuse and bullying
  • a means to undermine democratic values and debate, including mis and disinformation
  • used by terrorist groups to spread propaganda and radicalise
  • a way for sex offenders to view and share illegal material, or groom and live stream the abuse of children
  • used by criminal gangs to promote gang culture and incite violence.

Additionally, platforms can be used for other online behaviours or content which may not be illegal but may be detrimental to both children and adults, for example:

  • the potential impact on mental health and wellbeing
  • echo chambers and filter bubbles driven by algorithms; being presented with one side of an argument rather than seeing a range of opinions.

It is widely recognised that the internet and the world wide web were never designed with children in mind; many protective measures are reactive and inconsistent rather than proactive. Historically, tech companies have largely self-regulated in relation to the content, contact and conduct of users, seemingly only responding when there is public outcry. A large proportion of the blame is often attributed to legislation from the United States, specifically Section 230 of the Communications Decency Act (CDA) 1996. Whilst this is US legislation, the effects are worldwide given that many tech companies are based in the US.

Often cited as “the 26 words that made the internet”, the intentions behind CDA s230 were good: allowing users freedom of speech whilst protecting the platforms on which user-generated content is published. Whilst it offers no protection for illegal content, without CDA s230 there would be no Amazon reviews or Facebook comments, YouTube videos would be severely restricted, and much more.

What is CDA s230?

Section 230 of the Communications Decency Act 1996 is US legislation which provides immunity to owners of any ‘interactive computer service’ for anything posted online by third parties.

“No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”

But this creates a challenge: whilst there are statutory measures to prevent or remove content that is illegal, what about content that is legal but potentially harmful? Most interactive online services have age restrictions, commonly 13, to comply with privacy and data protection laws (in the US the Children’s Online Privacy Protection Act, and in the UK the General Data Protection Regulation (GDPR)), yet very few have effective age-verification processes or parental controls. Furthermore, potentially harmful content doesn’t just relate to children; it can significantly affect adults too, for example misinformation and disinformation related to COVID-19 or the efficacy of vaccines, election campaigns and much more.

The UK has started to lead the way in this area with the introduction of the Age-Appropriate Design Code. Often called The Children’s Code, this is a statutory code of practice under the Data Protection Act 2018 brought into legislation in September 2020 which places a baseline of protection automatically by design and default.

But the Children’s Code is UK legislation; the internet is global. Whilst everyone is at some level of risk, front of mind for any service delivery should be protections for those who are vulnerable, children and adults alike. Evidence is clear that those with a real-world vulnerability are not only more likely to experience online risks but suffer more than their non-vulnerable peers. 

But what is meant by ‘vulnerable’? In the context of online harms, vulnerability is widespread: the term is often used of children and/or adults with additional needs, children in care, young people in pupil referral units and more. But anyone can be vulnerable. Consider, for example, an election period and a prospective elected member who uses public social media as part of the campaign; that person is now vulnerable to abuse, harassment and much more. A study that sampled 4.2 million tweets during the 2019 General Election campaign found candidate abuse in nearly 4.5 per cent of all replies, compared with just under 3.3 per cent at the 2017 General Election.

What is the UK doing?

As noted above, the UK led the way in this area with the introduction of the Age-Appropriate Design Code (the Children’s Code), a statutory code of practice under the Data Protection Act 2018 which came into force in September 2020 and builds in a baseline of protection by design and default.

The Online Safety Act 2023 went much further, placing world-first legal duties on social media platforms. The strongest protections in the Act are designed for children, requiring social media platforms and search services to prevent children from accessing harmful and age-inappropriate content. Adult users also receive protections, with platforms required to be more transparent about harmful content and to give users more control over the types of content they want to see. Platforms that fail to comply with the rules face significant fines, and those running platforms risk prison if they fail to protect children.
 

Purpose of the guide

The purpose of this guide is to raise awareness and empower councillors by providing:

  • an introduction to online risks and harms with examples
  • an overview of the Online Safety Act
  • signposting to helpful resources and legislation
  • key implications and considerations for councillors
  • a checklist for councillors to support them in effective decision making.

Online harms

If you were asked to make a list of real-life risks, it would be never-ending. The same is true online. One of the key messages to understand is: almost any behaviour can be enacted online.

For example, can self-harm be enacted online? The answer is yes; it can be carried out by an individual sending vile or derogatory messages to themselves that others can see. These messages are often sent using other accounts, sometimes anonymised, and there is emerging evidence that this is a growing problem, particularly amongst teenagers.

This gives rise to questions: how many professionals are aware of this? Is it taken into consideration during interventions? How widespread is it within your communities? Anecdotal evidence suggests it is not widely known about, and a recent review of children and youth services shows poor awareness of the breadth of online risks amongst professionals and frontline staff, with a narrow focus on child sexual exploitation.

The example above gives you an understanding of the enormous scope of online risks that may lead to a harmful situation. To help our understanding of risk it is useful to simplify it into categories, commonly referred to as the 3Cs, which are:

  • content (person as recipient)
  • contact (person as participant)
  • conduct (person as actor).

Whilst these categories are often discussed in the context of children, they apply equally to adults and can be further sub-divided into commercial, aggressive, sexual and values, for example:

  • Content: advertising (commercial); hateful content (aggressive); sexual content (sexual); misleading information (values)
  • Contact: tracking (commercial); being bullied or harassed (aggressive); being groomed or exploited (sexual); self-harm (values)
  • Conduct: hacking (commercial); bullying or harassing others (aggressive); stalking (sexual); image-based abuse, health and wellbeing (values)

 

These examples only scratch the surface of the risks and harms that we currently know about; as time passes we learn about new risks and harms, particularly given the fast-paced nature of technology and how it can diversify in terms of reach and impact.

Risk and harm

Offline and online, risk is inevitable in all of our lives, none more so than when growing up. Without taking risks, children and adults would not know how to recognise, risk-assess and mitigate situations where there is a likelihood of harm and therefore build resilience. Over the years there have been a number of campaigns to increase the resilience of young people in particular. One such example would be the Rise Above Campaign from Public Health England and the Rise Above for Schools programme which gave schools the resources to help build crucial life-skills, boosting resilience and improving mental health and wellbeing across a number of different aspects such as bullying and cyberbullying, positive relationships and friendships, body image in a digital world and more.

These campaigns and programmes are vital in order to build awareness, because risk does not necessarily mean harm: risk is a probability of harm, and different factors can make a person more resilient or more vulnerable. For example, a 2020 UK study of 6,000 young people aged 13-16 showed that some had shared nude images of themselves because they wanted to, either within a relationship, for fun, or because they thought they looked good. The majority stated that nothing bad happened and that they therefore ignore online safety advice given at school or at home. In contrast, those with one or more vulnerabilities, in their eagerness to be accepted, are far more likely to be pressured or blackmailed into sharing nudes, often with terrible consequences such as further blackmail or being bullied. Within this study, among those who shared nude images, 18 per cent were pressured or blackmailed into it.

The Government has committed to strengthening the law to tackle those who exploit children for criminal purposes.

Child exploitation

What is it?

There are various forms of child exploitation which follow the same general pattern: an offender takes advantage of an imbalance of power to coerce, deceive or manipulate a child. In this part of the guide, we will briefly look at child criminal exploitation (CCE) and child sexual exploitation (CSE). Child exploitation often (but not always) involves grooming, which is when an emotional connection is built to gain trust.

Child criminal exploitation

This includes being forced into shoplifting or threatening other young people. Increasingly it includes forcing young people and vulnerable adults into moving drugs, known as county lines, where gangs and criminal networks export illegal drugs into other areas of the UK and use dedicated mobile phone lines, called deal lines. Children and young people can be contacted and groomed online before taking part in ‘real world’ activity.

Child sexual exploitation

As technology advances, new variations of CSE emerge; over recent years there has been a huge rise in a new form of CSE via live streaming, in which victims are coerced into taking and sharing indecent images/videos (known as self-generated) using apps or online services with webcams. Although all forms of CSE are of significant concern, live streaming is fast becoming the most significant area of concern and, according to the UK-based Internet Watch Foundation’s April 2021 report, sibling self-generated abuse, where a child is groomed and exploited into abusing their brother or sister on camera, is on the rise.

What does the research say?

Knowing the real figures is impossible: victims often won’t tell anyone, for reasons such as shame or guilt, or may not be aware they are being groomed or exploited. Since 2019, the national County Lines Programme has closed more than 5,600 county lines and arrested more than 16,000 people, with nearly 9,000 people referred by police to safeguarding services. The National Crime Agency estimates there are between 680,000 and 830,000 people in the UK who pose a physical and online sexual threat to children. But the internet is global and communication with a child is easy. In 2023 the UK’s Internet Watch Foundation dealt with 275,652 webpages confirmed to contain child sexual abuse imagery, and 92 per cent of content removed contained “self-generated” images/videos. This includes a 1,815 per cent increase since 2019 in the number of reports featuring children aged 7-10 (up from 5,443 to 104,282); 99 per cent of these reports were girls.

Bullying and intimidation

What is it?

Bullying and intimidation involves the repetitive, intentional hurting of one person or group by another person or group, where the relationship involves an imbalance of power. It can take many different forms, from the relatively easy to spot, for example abuse and threats on public social media or websites, to the more difficult to uncover such as private messaging or anonymous apps. It can also happen within online gaming, for example continually being targeted and killed early in the game (referred to as ‘griefing’ by young people).

Sometimes it takes more indirect forms, such as spreading gossip and rumours, or isolating people from their online social groups, for example by leaving them out of a conversation amongst friends in a WhatsApp group. Motivations for bullying and intimidating behaviour can be widespread, but commonly include attitudes towards:

  • appearance
  • sexuality
  • race
  • culture
  • religion.

What does the research say?

The overall trend is one of a problem that is increasing. Some studies indicate online bullying has overtaken traditional real-world bullying, while other studies indicate most bullying is face-to-face, with ‘online’ used as an extension.

The 2020 Ditch the Label study of 13,387 young people indicates that 25 per cent of young people have been bullied and 3 per cent have bullied others. Of those who had been bullied:

  • 44 per cent said they felt anxious
  • 36 per cent said they felt depressed
  • 33 per cent had suicidal thoughts
  • 27 per cent had self-harmed
  • 18 per cent had truanted from school/college.

Extremism and radicalisation

What is it?

Defined as ‘the vocal or active opposition to fundamental British values, including democracy, the rule of law, individual liberty and mutual respect and tolerance of different faiths and beliefs,’ extremism refers to an ideology considered to be outside the mainstream attitudes of society.

Radicalisation is the process where someone changes their perception and beliefs to become more extremist.

Extremists use the online space to target and exploit vulnerable people, and to spread divisive propaganda and disinformation.

There are no typical indicators that point to a risk of radicalisation, but vulnerabilities are often exploited which would include:

  • low self-esteem or social isolation
  • being a victim of bullying or discrimination
  • confusion about faith or identity.

Equally, radicalisation can be difficult to spot but indicators would include:

  • isolation from family and friends
  • unwillingness or inability to discuss their views
  • increased levels of anger
  • talking as if from a scripted speech
  • sudden disrespectful attitude towards others
  • increased secretiveness, particularly around internet use.

Beyond radicalisation, online extremist narratives can stoke division and sow mistrust between communities, impacting on local cohesion and helping to fuel hate crime and other forms of criminality.

What does the research say?

Research from Hope Not Hate (State of Hate 2021) concludes that the pandemic has quickened the demise of many traditional far-right groups whilst younger, more tech-savvy activists have thrived, often using unmoderated platforms or gaming sites. One example is a new extreme-right group called the National Partisan Movement, an international Nazi group made up of 70 teenagers from 13 countries, eight of whom are in the UK. Whilst this may seem like a low number, it is worth noting that activists on some platforms have a considerable number of followers.

Furthermore, a Commission for Countering Extremism study into how extremists exploited the pandemic shows that extremists used it to engage in disinformation to incite hatred and divide communities, creating conditions conducive for extremism.

The Met Police noted the role of online misinformation in fuelling widespread anti-immigration riots in the UK in August 2024, with prison sentences handed to some individuals for their online activity during the riots.

Misinformation and disinformation

What is it?

Misinformation and disinformation are widespread online, often circulated via social media or YouTube videos and can cover every conceivable or inconceivable topic. 

The terms misinformation and disinformation are similar and often used interchangeably, but there is an important distinction:

  • misinformation refers to false or out of context information, which is presented as factual, regardless of an intent to deceive
  • disinformation is false information where there is intent to deceive.

It can sometimes be difficult to discern between what is true or false, misleading or merely an opinion, up-to-date or out-of-date and even, particularly online where emotional cues can be lacking, between a joke and malicious intent.

The consequences can be varied but include mistrust, confusion, fear and bias which lead to political polarisation, undermining democracy and much more.

The pandemic is a good example of the spread of mis and disinformation, but there are many other situations where sharing can be heightened, for example elections and not-quite-truths, or bad-actor interference such as foreign states spreading disinformation by targeting particular groups on social media.

What does the research say?

It is impossible to know the scale of mis and disinformation. However, there are triggers, such as elections and the pandemic, which give a greater understanding. In week one of lockdown, Ofcom reported that nearly 50 per cent of people were seeing information online about the pandemic that they thought to be false or misleading, with this figure at almost 60 per cent for 18–36 year olds. An analysis of the most viewed YouTube videos related to coronavirus found that over 25 per cent of the top videos contained misinformation, with views totalling 62 million.

In relation to children, misinformation and disinformation is commonplace, often enacted through so-called online challenges or enticements such as gifts. A relatively common example is the enticement of free in-game currency, eg FIFA coins, Robux and V-Bucks. These offers often circulate on YouTube and other social media channels, where a link is shared for the child to enter their username and password in order to receive the free gift. However, this is phishing: a false link is used to deceive a person into revealing user credentials.

Addiction

What is it?

Addiction is most commonly associated with things such as drugs, nicotine, gambling and alcohol, but it has also become a commonly used term to describe a broad range of online behaviours, such as online gaming addiction (internet gaming disorder), online gambling addiction, social media addiction, mobile phone addiction, or even just internet addiction in general, which then spans into other areas such as screen time.

It could be argued that online exacerbates an existing addiction or leads to addictive behaviour such as gambling, but currently the science is contradictory. More often than not, the term addiction is used in the colloquial sense, particularly by concerned parents.

What does the research say?

Online addiction is an area with many different arguments and little agreement amongst scientists: for example, does online activity (eg social media, smartphones) cause addiction, or is addiction merely correlated with it?

The causation/correlation argument is an important one. With the exception of internet gaming disorder, which has been criticised by some scientists due to a lack of robust evidence, there are no recognised online disorders. In the words of leading UK psychologist, Dr Amy Orben, “There is very little evidence and even less high quality, robust and transparent evidence”. However, there is widespread concern in relation to the use of social engineering tactics by tech companies, such as nudge techniques and persuasive design to keep users within apps and games for the purpose of making money. A number of countries, such as Belgium, have banned the use of ‘loot boxes’ in games which are thought to promote gambling-like behaviours, particularly with children. In the UK, the Government published its “High stakes: gambling reform for the digital age” white paper in 2023 outlining a range of plans to ensure safety in gambling, including protecting children and young people. Following consultation, the Government decided not to adjust the legal definition of gambling to capture loot boxes, but committed to first pursue industry-led protections and support better longer term research into the impacts of video games.  

Fraud and identity theft

What is it?

Regardless of age, your identity is one of your most important assets. Put simply, ‘your name, address and date of birth provide enough information to create another you’. These details can be used to open bank accounts, make loan applications, take out mobile phone contracts, order goods and much more.

Identity theft is when your personal/private details are stolen. Identity fraud is when those stolen details are used for fraudulent purposes.

Criminals are increasingly using technology in more complex ways, often using social engineering tricks such as fear or urgency to lure people into revealing personal and private information, for example phishing scams. But whilst many people are aware of the basic safeguards to protect their identity, such as storing documents safely and shredding or destroying old documents, this information can be relatively easy to find online. For example, you may not publish your birthday celebration on Facebook or Instagram, but a friend may wish you a happy 40th birthday on a public account, meaning your date of birth is now public. Or consider an image taken in a restaurant during a lovely meal, with a credit card sitting on the table ready to pay the bill. These are innocent, everyday examples, yet the consequences can be significant: using very simple search techniques, criminals can in many cases find personal and private information with relative ease. Equally, company data breaches, something over which individuals have little control, can be a key enabler of fraud.

What does the research say?

Fraud is the most commonly experienced crime in the UK. The Crime Survey for England and Wales estimated that there were 3.2 million fraud offences in the year ending March 2024. Around one in seven fraud offences was reported to the police or Action Fraud. According to Cifas, the UK’s fraud prevention service, identity theft accounts for the majority (64 per cent) of fraud cases with increasing use of Artificial Intelligence (AI) and data harvesting techniques to fraudulently open and abuse accounts, steal identities and takeover customer accounts.

Online Safety Act

The Online Safety Act 2023 places duties on social media companies and search services, making them more responsible for their users’ safety on their platforms. This includes implementing systems and processes to reduce risks that services are used for illegal activity, and taking down illegal content when it does appear.

Platforms will be required to prevent children from accessing harmful and age-inappropriate content and provide parents and children with clear and accessible ways to report problems online when they do arise.

Providers’ safety duties are proportionate to factors including the risk of harm to individuals, and the size and capacity of each provider. 

Ofcom, the independent regulator, is required to take users’ rights into account when setting out the steps providers should take, and providers have simultaneous duties to pay particular regard to users’ rights when fulfilling their safety duties.

The Government has published a full explainer of the Act.

Who does the Act apply to?

The Act’s duties apply to search services and services that allow users to post content online or to interact with each other. This includes a range of websites, apps and other services, including social media services, consumer cloud file storage and sharing sites, video sharing platforms, online forums, dating services, and online instant messaging services.

The Act applies to services even if the companies providing them are based outside the UK, where they have links to the UK. This includes where the service has a significant number of UK users, where the UK is a target market, or where the service is capable of being accessed by UK users and there is a material risk of significant harm to such users.

What is covered by the Act?

The Act requires all companies to take robust action against illegal content and activity including that related to:

  • child sexual abuse
  • controlling or coercive behaviour
  • fraud
  • racially or religiously aggravated public order offences
  • intimate image abuse
  • terrorism.

Companies with platforms likely to be accessed by children need to take steps to protect children from harmful content, through preventing access (Primary Priority Content) or only giving age-appropriate access (Priority Content). The types of content in these categories are:

Primary Priority Content

  • pornography
  • content that encourages, promotes, or provides instructions for either:
    • self-harm
    • eating disorders or
    • suicide

Priority Content

  • bullying
  • abusive or hateful content
  • content which depicts or encourages serious violence or injury
  • content which encourages dangerous stunts and challenges; and
  • content which encourages the ingestion or inhalation of, or exposure to, harmful substances.

Major platforms will be required to offer adult users greater control over the kinds of content they see and who they engage with online, including filtering out unverified users to help stop anonymous trolls from contacting people.

How will the Act be implemented and enforced?

Ofcom is leading work on implementation, developing guidance and codes of practice to set out how online platforms can meet their duties. Once these have been published, Ofcom will monitor how effective platforms are at protecting internet users from harm and will have powers to take action where necessary.

Companies can be fined up to £18 million or 10 per cent of qualifying worldwide revenue, whichever is greater. Ofcom can hold companies and senior managers (where they are at fault) criminally liable if they fail to comply with Ofcom enforcement notices in relation to specific child safety duties.

In the most extreme cases, with the agreement of the courts, Ofcom will be able to require payment providers, advertisers and internet service providers to stop working with a site, preventing it from generating money or being accessed from the UK.

A series of criminal offences have been introduced by the Act and are now in force, including encouraging or assisting serious self-harm, sending false information intended to cause non-trivial harm, sending threatening communications, and cyberflashing.

Implications

All councils are committed to ensuring the best possible access to their services to ensure parity, economy of scale and democratic freedoms. This means that consideration needs to be given to how all users are able to access services, that they are protected and that risks are mitigated.

But as you have seen within this guide, the scope of online risks and harms is enormous, and the potential impact on individuals, communities, public authorities and others is significant. Meaningful intervention requires a collaborative approach, and whilst the Online Safety Act should have a positive impact, we cannot rely on technical solutions to wholly prevent what are largely behavioural issues. Furthermore, whilst the Online Safety Act initially targets the tech companies that have the most impact on our daily lives, any service provider, including public authorities, should give due consideration to a number of aspects to ensure they are delivering the most appropriate and cost-effective services, including:

  • Awareness – do councillors, officers and frontline staff such as social care, children’s and adult services and other professionals have a good, up-to-date understanding of online risks and harms? This is the cornerstone, the one aspect that affects all others and is fundamental to any service delivery, without which priority areas cannot be established and meaningful interventions put in place.
  • Scrutiny and challenge – do councillors offer scrutiny and challenge in relation to programme and project development and online harms? Are the right questions being asked and are there effective risk mitigations in place?
  • Collaboration – is there effective collaboration between teams including multi-agency teams? As mentioned previously within the guide, almost any behaviour can be enacted online, is this taken into account and recorded appropriately to inform future initiatives and interventions?
  • Funding – is funding allocated to those priority areas and services? Do initiatives and interventions have a positive impact and are they cost effective?
  • Service delivery – when providing access to information or engaging with constituents via third-party platforms, for example Facebook, Twitter or Instagram, those platforms are bound by the Online Safety Act. However, it is best practice for councillors and officers to understand the Act and to risk assess the use of these platforms. Equally, any in-house platforms should be risk-assessed in relation to content, the method of engagement and, importantly, moderation, to ensure that engagement does not run counter to the Act.
  • Corporate parenting – looked-after children will not only access services online but will most likely be given one or more devices (which could be owned by the local authority or could become owned by the child). How are children taught about keeping themselves safe online? What settings have been used to prevent access to unsuitable content?

Personal responsibility and online harms

Though the Online Safety Act does not specifically address councillors’ personal responsibilities and behaviours, there are significant reputational risks that all councillors should be aware of when conducting themselves online. 

Social media and other online forums have become important public spaces where councillors can share political information and engage with other councillors and residents. Modelling good behaviours in the online world is crucial to promote positive democratic engagement and protect elected members’ individual reputations and that of their councils. Councillors will want to avoid promoting or assisting in any of the risks mentioned above, ensuring that they are not, for example, promoting mis or disinformation or deploying any bullying tactics.

Councillors should seek support from social media or communications professionals if they need to and should promote and signpost credible sources of support and information to others.

The LGA has developed a range of guidance to support councillors in their online communications, as well as guidance on being a good digital citizen, including how to avoid spreading misinformation, which councillors may find helpful.

Checklist

These questions are designed to help you to support your organisation in developing best practice.

Awareness 

Raising awareness of online harms and the risks they can pose to individuals is crucial. All stakeholders therefore need to be aware of the risks and their impact, and this needs to be considered in the development of projects and campaigns.

  • Are relevant stakeholders, including councillors and members of staff, aware of the risks of online harms?
  • Have identified stakeholders been trained in online risks?
  • Are residents aware of the risks of the broad range of online harms and how they can report incidents?
  • Is the council promoting and exemplifying best practice?
  • Has the council considered the implications of the Online Safety Act and the compliance issues?

Scrutiny and challenge

Scrutiny and challenge is a key role of all councillors. Online harms need to be factored in and challenged in the same way as other projects and programmes.

  • Do councillors have confidence that any project or programme of work has considered the risk of online harms and put in place effective mitigation? (For example, do communications plans consider risks around misinformation, or do financial inclusion plans consider the risks of identity fraud?)

Collaboration

Lessons learnt should be shared across multi-disciplinary and multi-agency teams so that the most effective responses to online harms or the risk of online harms can be identified.

  • Are there cross-council and multi-agency approaches in place to mitigate risks and tackle online harms where appropriate?
  • How is learning shared to ensure effective approaches?

Funding 

  • Is funding available to tackle online harms, for example through communications campaigns, youth services or offline support to deal with the impacts of harms?
  • Where programmes to tackle online harms are introduced, what evidence is available that these are effective?

Service delivery 

Where funding is assigned to digital projects or projects that involve technology, it is essential that risks are mitigated for users.

  • Do digital projects factor in mitigating online risks, eg adopting the Age Appropriate Design Code as a method of best practice?
  • Is digital the most effective way of delivering the service?
  • Are users protected?
  • Are partners factoring in online risks, and has there been due diligence on their arrangements?

When deciding on how to deliver services to constituents and users it is important that digital service delivery is considered especially where a third-party provider will be used.

  • Has the platform been risk assessed?
  • Are there plans in place to help to mitigate online risks?
  • How will any online incidents be handled?
  • Are there additional safeguards that need to be in place to support adults at risk and children?

Governance 

Online harms and mitigating risks should be factored across all of the established governance arrangements.

  • Are leaders aware of their responsibilities around online harms?
  • Are online harms factored into project, campaign and programme initiation?
  • Are online harms risk assessed?
  • Do online harms feature in the risk log?
  • Are online harms considered alongside offline safeguarding arrangements?

Annex A: Resources

Annex B: The law

Extremism and radicalisation

The Prevent duty refers to Section 26 of the Counter-Terrorism and Security Act 2015 which states that specified authorities, which includes colleges and universities, adult education providers and sub-contractors, should have due regard to the need to prevent people from being drawn into terrorism.

Child exploitation

Child criminal exploitation is covered by the Modern Slavery Act 2015. Child sexual exploitation is covered by the Sexual Offences Act 2003.

Bullying and intimidation

Intimidation may constitute an offence under the Protection from Harassment Act 1997, but unlike in some other countries there’s no specific crime of bullying. Perpetrators may be prosecuted under a number of pieces of legislation, for example:

  • Protection from Harassment Act 1997
  • Malicious Communications Act 1988
  • Computer Misuse Act 1990
  • Defamation Act 2013.

Misinformation and disinformation

There is currently no legislation dealing specifically with mis and disinformation, but this is something the Online Safety Act aims to tackle by imposing legal duties of care on companies, ensuring disinformation is tackled effectively while respecting freedom of expression and promoting innovation.

Addiction

There is little law directly relevant to addiction in the context of this guide. However, in relation to gambling-like behaviours within online games, a call for evidence on the impact of loot boxes was launched in 2020, following the Digital, Culture, Media and Sport (DCMS) Select Committee inquiry into Immersive and Addictive Technologies, as part of a commitment to review the Gambling Act 2005. That call for evidence closed on 22 November 2020, and the Government’s subsequent position, including its decision not to change the legal definition of gambling to capture loot boxes, is set out in the ‘Addiction’ section above.

Fraud and identity theft

There are numerous laws which cover various aspects of fraud and identity theft, but the main Act is the Fraud Act 2006, which contains offences relevant to identity crime.

About the authors

This guide was produced by Charlotte Aynsley and Alan Mackenzie.

Charlotte Aynsley – Rethinking Safeguarding

Charlotte has a broad range of experience in the field of digital safeguarding, having spent the last 10 years supporting Government, local authorities, charities and schools to keep children safe online. Her work has included high-profile initiatives such as the NSPCC’s Share Aware campaign, the It Starts With You online safety campaign from Walt Disney’s Club Penguin, and national safeguarding advice on sexting in schools and colleges.

More recently Charlotte has been working with high-profile organisations including the National Cyber Security Centre (NCSC), NCA-CEOP (National Crime Agency - Child Exploitation and Online Protection Centre), the Prince’s Trust, Girlguiding, the Mayor of London and the NSPCC to develop leading-edge safer platforms, advice and resources for professionals working with children to keep them safer online.

Alan Mackenzie – E-safety Adviser

Alan is a consultant with extensive experience of working in the public, private and third sectors, specifically in relation to online safety and the use of technology by children, young people and adults. Coming from a local authority background, after retiring from the Royal Navy in 2005 he was the Service Manager for 367 schools on behalf of Children’s Services, which included responsibility for county-wide online safety, working in partnership with the Safeguarding Children’s Board, the police, the third sector and others to fulfil national and county council priorities in relation to policy, education and awareness. In 2011 Alan became an independent consultant with a focus on the education of children, young people, staff, parents, governing bodies and trustees, as well as ensuring that schools are fulfilling their statutory obligations by conducting comprehensive audits.

Alan is regularly commissioned for projects by charities such as the NSPCC and by organisations related to online safety and online harms, including writing position papers and white papers, risk assessments, educational resources and briefings for a wide range of organisations.