
Online Safety Bill, Second Reading, House of Commons, 19 April 2022

The Local Government Association (LGA) supports the overall aims of the Online Safety Bill (OSB), which makes provisions for the regulation by Ofcom of certain internet services.


Key Messages

  • The Local Government Association (LGA) supports the overall aims of the Online Safety Bill (OSB), which makes provisions for the regulation by Ofcom of certain internet services. The regulation proposed in this Bill is aimed at ensuring platforms have systems and processes in place to deal with illegal and harmful content and its associated risks. The Bill primarily does this by introducing duties of care for some user-to-user services and search engines. The Bill also imposes duties on such providers in relation to the protection of users’ rights to freedom of expression and privacy.
  • Under the provisions of the Bill, all regulated services will have a duty of care in relation to illegal content and, if their service is deemed accessible by children, a duty to protect children from harm. Further to this, the Bill sets out that regulated services will be placed into three categories, with different duties applying to each. For example, providers of user-to-user services which meet a specified threshold (“Category 1 services”) are subject to additional duties. Further clarity is needed as to how services will be categorised.
  • The central premise of the Bill is to protect users from harm and associated risk. Whilst the Bill provides a definition of harm and sets out how harmful content will be prioritised for child and adult users, it remains unclear what types of harm platforms will need to protect users against. We urge the Government to set out their proposed categories of content harmful to children and adults as soon as possible and to consult with independent experts to define and prioritise harmful content. Ofcom must also be given adequate resources so that they can be agile and produce guidance at pace in line with emerging ‘harmful’ issues.
  • The LGA recognises the delicate balance this legislation must maintain between preserving users’ freedom of expression and civil liberties whilst also protecting users from harmful content. We therefore welcome the user verification and user empowerment duties within this Bill that apply to Category 1 services. The LGA welcomes these duties as they give adult users choice over what content and users they engage with whilst also allowing users to remain anonymous should they want or need to.
  • The OSB introduces four new criminal offences: a harm-based communication offence, a false communication offence, a threatening communication offence and a cyber-flashing offence. The LGA previously called for cyber-flashing to be made a criminal offence, so we welcome its inclusion within the Bill. Overall, these offences are a useful provision to ensure that individuals posting harmful content are held to account. However, their effective use will depend on the police and CPS being given adequate resources, training, and comprehensive guidance to ensure these offences are applied appropriately.
  • To ensure services adhere to their new responsibilities, the Bill introduces new regulatory powers and responsibilities for Ofcom. Ofcom will be responsible for drafting codes of practice for all duties and ensuring services have the systems in place to adhere to these responsibilities; they also have powers to hold services to account should they need to. We ask Ofcom to engage fully with relevant groups such as political parties and the LGA when developing its codes of practice to ensure there is consideration of unintended consequences.
  • The LGA supported the Draft Online Safety Bill Joint Committee’s recommendation calling for Ofcom to publish a ‘safety by design’ code of practice. It is disappointing that this has not been adopted; the LGA encourages the Government to produce an overarching ‘safety by design’ code of practice that can be referenced and adopted within the individual codes of practice.
  • The LGA broadly welcomes the new harm-based, threatening, and false communication offences, as well as the user empowerment and verification duties that will enable users to control what content and users they interact with. However, we encourage the Government and Ofcom to go further and adopt clearer and more robust provisions to manage ‘low-level’ abuse experienced by councillors that falls below the criminal threshold. As part of this, the LGA would like assurances from the Government that the democratic and journalistic protections set out in this Bill will not inadvertently protect perpetrators of abuse.
  • Councillors are experiencing increasing levels of online intimidation, abuse and threats made against them, which can prevent elected members from representing the communities they serve and undermine public trust in democratic processes. We hope this Bill will go some way in addressing the concerns we have heard from our membership.

Further Information

The Online Safety Bill was published on 17 March 2022. The explanatory notes to the Bill set out the case for a new regulatory framework for internet services. Many of these proposals were set out in the Draft Online Safety Bill and have already been subject to consultation, following the Government’s Online Harms White Paper and scrutiny from the Draft Online Safety Bill Joint Committee.

This Bill is of significant interest to councils, covering a wide range of issues from child protection and public health issues to abuse and intimidation and free speech. The wide-ranging nature of the Bill, and the significant role of the internet in the lives of most residents, means there are likely to be additional issues of importance for councils beyond the scope of this initial briefing.

This briefing will cover the LGA’s views on the Bill on issues relevant to local communities and councils.

Categorisation of services

Clause 81 places a duty on Ofcom to establish and publish a register of each regulated service that meets the various threshold conditions set out in Schedule 10. Services will be designated as Category 1, 2A or 2B services based on the following factors:

  • Category 1 services (user to user): number of users of the user-to-user part of the service, and functionalities of that part of the service.
  • Category 2A services (search engine): number of users of the search engine, and any other factors relating to the search engine that the Secretary of State considers relevant.
  • Category 2B services (user to user): number of users of the user-to-user part of the service, functionalities of that part of the service, and any other factors relating to that part of the service that the Secretary of State considers relevant.

When making regulations, the Secretary of State must take into account the impact of the above factors on the level of risk of illegal and harmful content on those services. This categorisation is critical as it determines which duties services will have to adhere to, with Category 1 services subject to additional duties.

LGA view

The LGA had previously encouraged careful consideration of the categorisation of sites and the potential implications of this. The Bill broadly outlines how services will be categorised; however, further clarity and consideration is needed as to how this applies to specific services.

For example, the Bill only requires Category 1 services to act on content that is legal but harmful to adults. If this covers only those services with a very large number of users and functionalities, some services that are capable of spreading harmful content will fall outside the scope of this duty.

We urge the Government to set out as soon as possible which companies will fall into which category, and to broaden the categorisation to include other factors such as a service’s reach, safety performance and business model.

Requirements to undertake risk assessments

Clause 31 requires all regulated services to conduct an assessment to determine whether their service is likely to be accessed by children; the ‘child user condition’ is met if a ‘significant number’ of children use or are likely to use that service. A provider is only entitled to conclude that it is not possible for children to access a service, or a part of it, if there are systems or processes in place (for example, age verification, or another means of age assurance). If a service is deemed accessible by children, it will need to adhere to a duty to protect children and carry out a harm risk assessment.

Clause 10 requires user-to-user services to conduct children’s risk assessments and states that these assessments need to consider factors such as the risk of children encountering different kinds of harm, the nature and severity of the potential harm encountered, the impact on children in different age groups and children with certain characteristics, the impact of algorithms, the level of functionality of the platform, the design and operation of the service, and the different ways in which the service is used. Clause 25 places a similar duty to conduct a children’s risk assessment on search engines likely to be accessed by children.

Clause 12 requires Category 1 user-to-user services to conduct an adults’ risk assessment and states that these assessments need to consider factors such as the user base, the risk of adults encountering different types of harm and the severity and nature of that harm, the impact on adults with certain characteristics, the impact of algorithms, the level of functionality of the platform, the design and operation of the service, and the different ways in which the service is used.

LGA view

Whilst the LGA broadly welcomes Clause 31, we are concerned about the ‘child user condition’. We ask for further clarity on what is meant by a ‘significant’ number of children so as to ensure children are protected across as many services as possible.

Exposure to risk should reflect stages of child development, and so proposals in the Bill for safety duties and risk assessments to reflect the different needs and vulnerabilities of different age groups are welcome. Learning to manage and respond to risk is an important part of growing up. It is important to ensure that schools, parents, youth workers and others are supported to understand the risks and are able to help children and young people to also understand them and learn to navigate them in all areas of their lives. The LGA therefore also welcomes that media literacy is a distinct consideration within both the children’s and adults’ risk assessments.

We had previously stated the need for the Bill to take into account cumulative harm. Evidence from both Facebook and the National Society for the Prevention of Cruelty to Children (the “NSPCC”) supports this ask in relation to harms experienced by children and young people. Facebook researchers noted that teenagers struggling with the psychological effects of Instagram felt ‘addicted’ and unable to stop themselves from consuming more content, yet struggled to log off the app. Cumulative misinformation, extremist material or cumulative abuse also has an impact on adult users.

The Bill provides that risk assessments for both children and adults will explicitly cover the nature and severity of harm that children and adults might suffer. The LGA would welcome explicit mention of the cumulative impact of harm in both risk assessments. This is part of a broader call for Ofcom to be clear on services’ responsibility to manage ‘low-level’ abuse and harm directed at children and adults.

Further to the issue of cumulative impact, the OSB does not discuss cross-platform approaches, despite much online harm occurring in this way – for example, young people playing games together on one platform but talking to each other via a separate service, or meeting on a more ‘child-friendly’ site before moving to another that offers alternative contact options. We therefore welcome that the Government have stated that Ofcom’s overall risk assessment will cover risks associated with harms moving across different services subject to the safety duties. Services’ own risk assessments will also have to consider this. However, given the cross-platform risk posed to children, we would ask that the Government go further and instruct Ofcom to specifically address cross-platform risks, and place a clear requirement on platforms to co-operate and respond to cross-platform harms when discharging their safety duties.

Definitions of harm

Clauses 53 to 56 define what is harmful to both children and adults, set out that the Secretary of State can define these harms, and instruct Ofcom to prepare and publish, at least every three years, reports on content that is harmful to children and to adults on user-to-user services and content that is harmful to children on search services.

LGA view

The central premise of the Bill is to protect users from harm and associated risk. Whilst the Bill provides a definition of harm and sets out how harmful content will be prioritised for child and adult users, it remains unclear what harms platforms will need to protect users against. The LGA believes that content that encourages, promotes, or instructs users in harmful behaviour should be considered harmful content within the Bill. We would encourage the Government to set out during the passage of the Bill their proposed categories of primary priority content harmful to children and priority content harmful to adults.

The Bill gives the Secretary of State powers to determine harmful content, requiring consultation only with Ofcom before making such regulations. We urge the Government to consult with independent experts, including user advocacy groups, in order to define and prioritise harmful content. The fast-moving nature of the internet means that content, trends, or new sites can gain traction amongst users quickly. Clause 56 requires Ofcom to prepare and publish reports on this at least every three years. Ofcom must therefore be given the appropriate resources needed to be agile and respond quickly as ‘harmful’ issues emerge.

Ofcom codes of practice

Ofcom must produce individual codes of practice for the following duties:

  • sections 9 and 24 (illegal content)
  • sections 11 and 26 (children’s online safety)
  • section 13 (adults’ online safety)
  • section 14 (user empowerment)
  • section 15 (content of democratic importance)
  • section 16 (journalistic content)
  • sections 17 and 27 (content reporting)
  • sections 18 and 28 (complaints procedures)

LGA view

The LGA supported the Draft Online Safety Bill Joint Committee’s recommendation that the Bill should tackle design risks by placing a ‘responsibility on service providers to have in place systems and processes to identify reasonably foreseeable risks of harm and take proportionate steps to mitigate risks of harm’. The Committee recommended that Ofcom would be required to produce a mandatory ‘Safety by Design Code of Practice’ setting out steps for services to address risks relating to, but not limited to, ‘algorithms, auto-playing content, data collection, frictionless cross-platform activity and default settings on geolocation’.

It is therefore disappointing that this has not been adopted. The LGA would encourage the Government to produce an overarching ‘safety by design’ code of practice that can be referenced and adopted within the individual codes of practice.

User verification and user empowerment duties

A provider of a Category 1 service must offer all adult users of the service the option to verify their identity, and must offer all adult users tools to avoid seeing harmful content, to be warned about such content, and to filter out non-verified users. This is set out in Clause 57 (user verification duty) and Clause 14 (user empowerment duty).

LGA view

The LGA recognises the delicate balance this legislation must maintain between preserving users’ freedom of expression and civil liberties whilst also protecting users from harmful content. There have, for example, been recent calls for a ban on anonymity on social media to tackle online abuse, with proponents of a ban highlighting that users can feel ‘protected’ by their anonymity and emboldened to say things they would not say in person. At the same time, the police can find it difficult to trace anonymous users who have committed existing communication offences. The LGA has sympathy for these calls, with some councillors reporting receiving abuse from anonymous accounts.

However, the LGA also recognises the benefits that can come with maintaining options for using anonymous accounts, from whistleblowing to protecting the voice of those who are not safe to speak out using their own names, such as those suffering from domestic abuse or LGBTQIA+ young people living in unaccepting homes or communities.

Given the negative impact social media has been repeatedly shown to have on users, it is helpful that the Bill ensures users have more control of their use of social media, for example, through more choice over content shown and limiting notifications.

The LGA had previously called for the OSB to focus on preventing abusive content before it is posted and ensuring appropriate responses to abusive content; this is a more appropriate approach to tackling online abuse and harassment than banning anonymous accounts. We therefore welcome the user verification and user empowerment duties within this Bill that apply to Category 1 services, as they give adult users choice over what content and users they engage with whilst also allowing users to remain anonymous should they want or need to.

New criminal communications offences

Clause 150 introduces a new harm-based communication offence, making it unlawful to intentionally send or post a communication that is likely to cause serious distress to a likely audience.

Clause 151 introduces a false communication offence, making it an offence for a person to send a communication they know to be false that is likely to cause non-trivial emotional, psychological, or physical harm. Certain bodies are excluded from the offences in Clauses 150 and 151, including recognised news publishers and providers of on-demand programme services or films made for cinema.

Clause 152 introduces a threatening communication offence, which prohibits the sending of “genuinely threatening communications…where communications are sent or posted to convey a threat of serious harm”.

LGA view

The LGA broadly welcomes these offences as they are a useful provision to ensure that individuals posting harmful content are held to account. However, the successful use of these offences will depend on the police and CPS being given adequate resources, training, and comprehensive guidance.

We are, however, concerned about the unintended consequences of introducing a false communication offence (Clause 151), particularly regarding what the burden of proof will be for this offence. For example, if a person shares or re-shares false information that they do not know to be untrue, it is unclear whether they will be liable for prosecution under this offence. We ask that the Government set out this threshold during the passage of the Bill. Regarding the threatening communication offence, the LGA asks that the Government set out what types of threatening communication this offence would cover that are not otherwise covered by existing legislation.

The LGA remains concerned about low-level abusive and false communications directed towards elected representatives that these offences might not capture. Low-level abuse online is a common experience for councillors and can significantly harm individuals and democracy when its cumulative impact is considered. This includes abuse that pushes people out of politics or puts them off standing for election, and smear campaigns that damage candidates’ reputations when running for election or re-election.

Protection of democratic content

Clause 15 sets out a duty for Category 1 services to protect content of democratic importance. Category 1 services must take into account the importance of freedom of expression when designing proportionate systems and processes for taking decisions about content of democratic importance or about users who post such content. This includes decisions about whether to take the content down, restrict access to it, or act against a user of the service. For example, Category 1 services could adopt processes to identify democratically important content and ensure users have access to it, even where it might otherwise be removed.

This duty requires service providers to apply these systems and processes in the same way to a wide diversity of political opinion. This duty relates to both news publisher content and regulated content which is, or appears to be, specifically intended to contribute to democratic political debate in the United Kingdom. Examples of such content would be content promoting or opposing government policy and content promoting or opposing a political party.

LGA view

The LGA is concerned that the language throughout this clause is very broad. We had previously called for the Bill to set clear parameters around what content is “of democratic importance” – content related to elections, elected members and political processes must be subject to clear rules around accuracy and mis- and disinformation.

We welcome the commitment by the Government for Ofcom to set up an expert advisory committee on mis- and disinformation that looks at the impact of false information on democracy. However, given the harm that inaccurate information has had on public health throughout the pandemic, the LGA believes the Government should bring forward a stronger plan to tackle online misinformation than currently proposed.

As currently drafted, it is unclear whether Clause 15 could unintentionally protect harmful disinformation that is classified as ‘political speech’. We ask that the Government and Ofcom engage fully with relevant groups such as the LGA and political parties to ensure unintended consequences are fully considered when developing the relevant code of practice.

The LGA is asking social media platforms and search engines to introduce specific safeguards for those holding elected office, including fast track routes to report abuse, intimidation, and harassment.

Protection of journalist content

Clause 16 places a duty on Category 1 services to consider the importance of free expression when designing proportionate systems and processes for taking decisions about journalistic content, or about users who post such content. This includes decisions about whether to take the content down, restrict access to the content, or take action against a user of the service. The duty requires category 1 services to create a dedicated complaints procedure for decisions relating to journalistic content.

The duty defines “journalistic content” as news publisher content or regulated content that is generated for the purposes of journalism, and which is ‘UK-linked’. This includes, but is not limited to, content generated by news publishers, freelance journalists, and citizen journalists.

LGA view

The LGA has previously highlighted challenges around the Bill’s reference to protection of “journalistic content.” High quality journalism (from large organisations through to citizen journalists providing important local information) is an essential element of any democracy which must be protected; however, there must also be safeguards to ensure that this is not abused. For example, some extremist groups and individuals present their rhetoric as journalism and use live political issues as opportunities to stoke division and encourage harassment of others. Given the broad language within this clause, it remains unclear if individuals will be able to claim ‘journalistic protection’ for harmful content.

The use of bots

Clause 49 defines what is meant by regulated user-generated content, user-generated content and news publisher content, and outlines what is exempt. User-generated content includes bots. A bot is defined as an autonomous program on the internet or another network that can interact with systems or users. This means that the duties of care that apply to Category 1 user-to-user services in relation to adults, and to all regulated services likely to be accessed by children, will also limit the use of bots.

LGA view

A report on computational propaganda published in 2018 by the University of Oxford found that in Brazil, both professional trolls and automated ‘bots’ have been “used aggressively to drown out minority and dissenting opinions during three presidential campaigns”. The LGA has previously called for consideration of the role of ‘bots’ in the spread of mis- and disinformation and the trolling of individuals, and for consideration of how they may influence our democratic processes.

We therefore ask the Government to develop measures to prevent bots being set up purely to troll individuals, and ask that the Government clarify how the verification duty can be used to prevent individuals engaging with unverified accounts such as bots.

Duty to prevent children accessing pornographic content

Clause 68 requires service providers to prevent those under the age of 18 from viewing regulated provider pornographic content on their service. One example of a measure which service providers could use to prevent children from accessing this content is age verification. The OSB repeals Part 3 of the Digital Economy Act 2017.

The Government have stated that this section ensures that all services that would have been captured by Part 3 of the Digital Economy Act, as well as all the user-to-user and search services covered by the OSB, will be required to protect children from pornography. This new duty will be enforced by Ofcom, with providers being subject to the same enforcement measures as services subject to the safety duties.

LGA view

Ofsted’s 2021 review of sexual abuse in schools and colleges found that leaders were concerned about problems created by children and young people’s easy access to pornography. The review cited evidence that viewing pornography can shape unhealthy attitudes, including sexual aggression towards women, with more frequent consumption associated with victim-blaming attitudes.

The LGA has previously called for robust age verification controls for all commercial providers of online pornography, with the option for payment providers to withdraw their services from infringing sites, in line with Part 3 of the Digital Economy Act 2017. We therefore welcome this clause but ask that Ofcom set out the age verification methods services are expected to use, and that age verification is enforced as soon as possible. This would ensure methods are sufficiently robust while keeping up with the latest technology.

Cyber-flashing offence

Section 156 sets out the offence of sending etc. a photograph or film of genitals. This offence outlaws cyber-flashing.

LGA view

The LGA welcomes this offence, having previously called for cyber-flashing to be made a criminal offence. However, we would urge the Government to take a safeguarding approach for children and young people, rather than a criminal approach.

Extremism and hate speech

Section 103 sets out the notice to deal with terrorism content or child sexual exploitation and abuse (CSEA) content (or both), and Section 104 sets out matters relevant to a decision to give a notice under Section 103(1).

LGA view

The LGA welcomes steps to help remove illegal terrorism content from the online space. However, we are also concerned about the impact of legal but harmful extremist content, particularly where algorithms often prioritise and promote posts that provoke outrage and polarisation. In many cases, content will stop short of being explicitly and overtly illegal, with extremists often adept at remaining on the right side of the law. The LGA has called on providers to work with extremism experts to identify and ban individuals and groups behind coordinated and/or repeated publication of extremist content, and to proactively check whether they are on their sites and breaching rules.

Financial harm

Sections 34 and 35 include duties to prevent and remove fraudulent advertising, which apply to Category 1 and 2A services.

LGA view

The LGA has previously called for online platforms to take more responsibility for what is advertised and sold via their sites given concerns about fraud and safety. We therefore welcome this first step which will require services to prevent fraudulent advertising on their sites and take swift action to address it when concerns are raised.