The Department for Science, Innovation and Technology (DSIT) has launched a consultation on its self-assessment tool, AI Management Essentials (AIME). AIME is the latest in a series of interconnected tools released by DSIT aimed at supporting the public sector to deploy AI safely.
Background
The AIME tool complements existing resources including the Algorithmic Transparency Recording Standard (ATRS) (see the Algorithmic Transparency Recording Standard Hub) and the Model for Responsible Innovation, released earlier this year.
AIME is a resource that aims to assist organisations, particularly SMEs, in navigating the complex landscape of AI standards and frameworks by developing and implementing effective AI management practices, ensuring compliance with best practice and ethical standards. It brings together key principles from existing frameworks, regulations and standards into a questionnaire that allows AI suppliers to assess and improve their AI management processes and systems. The tool draws primarily on three core frameworks from international practice: ISO/IEC 42001, the NIST AI Risk Management Framework and the EU AI Act, and aims to provide clear guidance and actionable steps to enhance AI management practices.
Responses to the consultation are due by 23:55 on 29 January 2025.
Summary of key points
- AI Management Essentials is a self-assessment questionnaire aimed at SMEs but applicable to any organisation that develops, provides or uses AI.
- AIME provides a baseline from which organisations can identify improvements to their processes and systems in support of compliance.
- There are no assurance mechanisms built into AIME, and it is not a certification.
- Completion of AIME is voluntary; however, the government is exploring integrating it into public sector procurement processes.
- AIME is structured to cover three thematic areas: internal processes, managing risks, and communication.
A summary of the AI management essentials self assessment
AIME is a self-assessed questionnaire comprising a series of yes/no questions, with no current plans for responses to be externally validated. It is structured under three headings: internal processes, managing risks, and communication, covering ten key topics:
Internal processes
This section explores the overarching structures and principles related to the AI management system, and covers the following topics:
- AI System record: Maintaining a complete and up-to-date record of all AI systems being used and developed by an organisation
- AI policy: Holding a clear, suitable and accessible AI policy for the organisation
- Fairness: Ensuring that AI systems which directly impact individuals are fair and do not produce biased outcomes.
Managing risks
This section explores the process and procedures by which risks are managed, mitigated and prevented, and covers the following topics:
- Impact assessment: Identifying and documenting the possible impacts of deploying and using AI
- Risk assessment: Effectively managing any risks caused by the AI system
- Data management: Responsibly managing the use of data to train, fine-tune and develop AI solutions
- Bias mitigation: Protecting against any unfair algorithmic and data biases
- Data protection: Implementing data protection by design and default approach in AI systems.
Communication
This section explores organisational communication about AI with external parties, and covers the following topics:
- Issue reporting: Reporting mechanisms are in place for any failures or negative impacts to be raised
- Third party communications: Interested parties are informed about how to safely use the AI system, and what its requirements are.
The tool comprises 53 questions and, while designed for SMEs, can be used by any organisation that develops, provides or uses AI, including councils. The self-assessment questionnaire is accompanied by guidance on how the tool can best be used. While participation in the AIME self-assessment is currently voluntary, the government is considering integrating it into relevant public sector AI procurement processes, such as purchasing frameworks. As outlined in the briefing, the AIME process will involve additional steps to gather responses from the questionnaire and provide feedback: the tool will evaluate each section of the self-assessment and present users with a concise summary of the overall health of their AI management system, derived from their self-assessed answers.
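The consultation does not specify the scoring mechanics, but the flow described above (yes/no answers grouped into three sections, summarised per section) can be sketched roughly as follows. The section names come from the tool itself; the example questions, tallying logic and output wording are purely illustrative assumptions, not DSIT's actual method:

```python
from collections import defaultdict

def summarise(answers):
    """Tally yes/no answers per section and return a simple health summary.

    answers: list of (section, question, answered_yes) tuples.
    This is a hypothetical sketch of a section-level summary, not AIME's
    real evaluation logic.
    """
    totals = defaultdict(lambda: [0, 0])  # section -> [yes count, total]
    for section, _question, answered_yes in answers:
        totals[section][1] += 1
        if answered_yes:
            totals[section][0] += 1
    return {section: f"{yes}/{total} practices in place"
            for section, (yes, total) in totals.items()}

# Illustrative answers only; the real questionnaire has 53 questions.
answers = [
    ("Internal processes", "Is an AI system record maintained?", True),
    ("Internal processes", "Is an AI policy in place?", False),
    ("Managing risks", "Has an impact assessment been completed?", True),
    ("Communication", "Is an issue reporting mechanism in place?", True),
]
print(summarise(answers))
```

The point of the sketch is simply that a self-assessed yes/no instrument can yield a per-section picture of strengths and gaps without any external validation step, which is why the accuracy of the underlying answers matters so much.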
Whilst the tool is designed to highlight key considerations and indicate system improvements for the management of AI, it can also help councils understand the key questions they may want to ask when procuring and deploying AI. AIME will also contribute to setting a baseline of the systems and processes expected to be in place for vendors providing AI tools and systems.
What does this mean for local government?
In a previous consultation response on the use of artificial intelligence in government, we highlighted the challenges of an immature assurance ecosystem for AI and the difficulty of identifying genuinely beneficial and trustworthy products and vendors. The introduction of the AIME tool is a welcome development for councils, providing a valuable resource to benchmark AI vendors effectively. This tool is also beneficial for local government practitioners involved in procurement, as it can inform due diligence checks and ensure councils pose the right questions to vendors.
Additionally, we welcome that the tool has been built on existing internationally recognised standards and aligned with UK regulatory principles; this could help provide clarity, particularly if the tool is made mandatory across public sector procurement in the future.
Despite the broadly positive potential of the AIME tool for local government, challenges remain for the implementation and adoption of a tool of this nature.
- Voluntary adoption and data accuracy: As the tool is voluntary, there is no guarantee of widespread vendor participation. Moreover, the lack of formal verification mechanisms raises concerns about the accuracy of the information provided. While we acknowledge the significant challenges and resource requirements associated with providing assurance, more must be done to incentivise the adoption of AIME.
- SME capacity and support: SMEs may lack the capacity and expertise to respond to compliance questions. Local government must consider how to support these businesses, especially in light of the Procurement Act 2023.
- Integration into procurement practices: Without being integrated into specific frameworks, the AIME tool's implementation could require significant additional resources from already overburdened local government teams.
Overall, the successful implementation of AIME could create a powerful tool enabling councils to assess whether AI suppliers have robust processes and systems in place for safe and responsible AI management. It could also serve as a crucial first step in establishing clear expectations for AI vendors, thereby driving higher industry standards and beginning to develop a baseline level of assurance.
The Cyber, Digital, and Technology team hosted a successful roundtable on 7 November with DSIT, bringing together 24 senior local government practitioners. The engaged group included council officers and representatives from fellow local government bodies Solace and Socitm.
We look forward to continuing to engage the sector on this crucial topic, which underpins the safe and responsible deployment of AI in local government. The insights gathered from the roundtable, along with further network engagement, will form the foundation of the LGA's response to the consultation.
Responses to the consultation are due by 23:55 on 29 January 2025. We encourage individual councils to respond online through the link on DSIT's webpage.
Relevant LGA responses and key lines
The LGA has previously responded to several consultations on AI, which are relevant to this consultation. Please visit our consultations page to review our key messages on the responsible deployment of AI.
Our most recent consultation response on AI focused on the use of artificial intelligence in government and offers the most current perspective on the sector's AI-related needs.