Large language models and generative AI - policy brief

This brief provides a quick overview of the House of Lords Communications and Digital Committee's inquiry into Large Language Models (LLMs) and what its findings mean for local government.


Background

The House of Lords Communications and Digital Committee opened an inquiry into Large Language Models (LLMs) in July 2023. The inquiry aimed to assess the actions the UK needs to take over the next three years to respond appropriately to the risks and opportunities that LLMs present.

The LGA responded to the initial call for evidence (July-September 2023) alongside the Society for Innovation, Technology and Modernisation (Socitm) and the Society of Local Authority Chief Executives (Solace).

Our response highlighted that local government sees potential benefits from AI, especially large language models (LLMs), while acknowledging potential risks. At the same time, there is pressure on councils to adopt AI due to resource constraints and rising public expectations. Opportunities include improved service delivery, better data analysis, and streamlined administrative tasks. Identified risks include a lack of expertise, digital exclusion, and a potential loss of public trust if AI is not implemented ethically. Our response called for a framework promoting ethical principles and transparency in AI use, alongside supportive and specific regulation and guidance for local government.

In addition to written evidence, the inquiry held multiple oral evidence sessions between September and November 2023, bringing together 41 leading thinkers on LLMs. Sessions covered topics including how the AI Safety Institute will prioritise its spending, security risks and early warning indicators, options for regulation, and concerns around copyright infringement and potential policy responses.

In February 2024, the Communications and Digital Committee released its report on LLMs, drawing on the oral and written evidence collected and outlining its recommendations to government.

Summary of the inquiry report into LLMs

The 93-page report predicts a paradigm shift driven by large language models (LLMs). LLMs are poised to become even more powerful, user-friendly, and integrated into our lives. However, navigating the opportunities and potential risks of LLMs will be a complex challenge for policymakers in the coming years, as the technology rapidly evolves.

The report offers a clear guide for governments and businesses, outlining strategies for LLM deployment and development, alongside generative AI as a whole. Striking a balance between innovation and risk mitigation is a key focus. Regulatory frameworks for LLMs, copyright considerations in an AI-generated world, and fostering healthy market competition are all addressed with recommendations for navigating these complexities.

The key themes and recommendations within the report are summarised below:

Opportunities and Risks:

  • Opportunities: LLMs can significantly benefit the economy and society. The report urges a more balanced government approach that fosters innovation while mitigating risks. It recommends increased funding for research and development, along with programs to cultivate talent and support UK start-ups in the LLM space.
  • Risks: Security concerns like cybercrime and disinformation are amplified by LLMs. The report identifies more speculative long-term risks as well. It recommends a multi-pronged approach, including collaboration between government and industry to establish a risk register. Additionally, the report calls for the AI Safety Institute to prioritise research into catastrophic AI risks and mitigation strategies.
  • Societal Risks: LLMs could exacerbate existing social issues like discrimination. The report suggests a collaborative effort between the AI Safety Institute, the government, and the Department for Science, Innovation and Technology (DSIT) to mitigate these risks.

Regulation and Governance:

  • Regulators: The report emphasises the need for well-resourced regulators to manage AI risks effectively. It recommends standardised powers for information gathering, audits, and penalties. Additionally, it highlights the risk of "regulatory capture" by the private sector and proposes enhanced governance measures to mitigate this.
  • Data Protection: The report identifies the need for clear data protection guidelines for LLM data use. It suggests collaboration between the Information Commissioner's Office (ICO) and DSIT to address this.
  • Copyright: The report argues that copyright law needs to be updated to address the challenges posed by LLMs. It emphasises responsible data use in LLM development.
  • Market Competition: The report warns against a limited-supplier LLM market and recommends fostering competition, particularly for UK businesses (SMEs). It suggests collaboration between government and the Competition and Markets Authority to achieve this.
  • LLM Developers: The report highlights that there is uncertainty about developers' liability for misuse of their models. It suggests the government needs to go beyond voluntary measures and introduce mandatory safety tests, adaptable testing metrics and accredited standards.

Overall, the report offers a roadmap for the UK government to navigate the opportunities and challenges presented by large language models. It encourages the government to accelerate delivery of the plans outlined in its pro-innovation white paper.

What does this mean for Local Government?

This report makes several important asks of government, in line with the LGA's original response. There is an overarching challenge to government to seize the opportunities AI and LLMs provide. The inquiry highlights the need for increased funding for research and development, along with support for small and medium-sized enterprises (SMEs), to capitalise on these opportunities. We believe additional research funding would be beneficial, creating a safe environment for local authorities to experiment with AI and LLMs and ultimately boosting the sector's preparedness.

We strongly endorse the inquiry's call for further investigation into AI risks, echoing our initial response. The emphasis on establishing a risk register to combat cyber threats and disinformation aligns with our position. We also appreciate the focus on broader societal risks that LLMs could exacerbate.

While a framework for government AI deployment exists, similar to the Nolan principles, specific guidance for local governments on applying these principles to service delivery remains absent. We anticipate more clarity on the framework when key regulators publish their approach in April 2024. Although the inquiry acknowledges the need to support regulators in mitigating AI risks, it lacks a specific call for sector-specific guidance on the safe and ethical use of AI and LLMs for local government. In fact, the inquiry report doesn't mention local government at all. As the regulatory landscape evolves, we urge a stronger focus on local government and service-specific provisions to address concerns that the sector will be left behind.

The inquiry's call for mandatory developer measures and clear data protection guidelines for LLMs aligns with our recommendations. In our response, we advocated for clear standards for safe and ethical AI development, placing more responsibility on developers. While the inquiry scrutinises developers, it offers less focus on how suppliers use AI and LLMs. Transparency around supplier use of AI and LLMs remains a significant concern for local governments and needs to be addressed more directly than the current pro-innovation framework allows.

The inquiry makes well-informed demands of government, holding them accountable for the pro-innovation approach and regulatory development. However, further guidance is needed for local government, explaining how existing regulations apply to different service areas. Clearer guidance will enhance local government's preparedness for AI adoption and ensure a unified approach to implementing AI and LLMs across local authorities.