UK Government Generative AI Framework

The UK government has published the first edition of its generative AI (GenAI) framework, which sets out 10 common principles that developers and government staff using the technology should take into account. The principles provide guidance on the safe, responsible and effective use of GenAI in government organisations.

David Knott, chief technology officer for government, said that the guidance offers practical considerations for anyone planning or developing a generative AI system.

“GenAI has the potential to unlock significant productivity benefits. This framework aims to help readers understand generative AI, to guide anyone building GenAI solutions, and, most importantly, to lay out what must be taken into account to use generative AI safely and responsibly,” he stated in the foreword introducing the guidance report.

The report calls on government decision-makers to appreciate the limitations of the technology. For instance, large language models (LLMs) lack personal experiences and emotions and do not inherently possess contextual awareness, although some now have access to the internet, the Generative AI Framework for His Majesty’s Government (HMG) notes.

The technology also needs to be deployed lawfully, ethically and responsibly. The second principle in the guidance report urges government department decision-makers to engage early on with compliance professionals, such as data protection, privacy and legal experts. The report states: “You should seek legal advice on intellectual property, equalities implications and fairness, and data protection implications for your use of generative AI.”

Security is the third focus area, followed by what the report’s authors call “the need to have meaningful human control at the right stage”.

The guidance report states: “When you use generative AI to embed chatbot functionality into a website, or other uses where the speed of a response to a user means that a human review process is not possible, you need to be confident in the human control at other stages in the product life-cycle.

“You must have fully tested the product before deployment, and have robust assurance and regular checks of the live tool in place. Since it is not possible to build models that never produce unwanted or fictitious outputs (i.e. hallucinations), incorporating end-user feedback is vital.”
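The report does not prescribe how that end-user feedback should be captured, and approaches will differ between departments. As a minimal sketch only, assuming a simple thumbs-up/thumbs-down rating and a JSON Lines audit file (both illustrative choices, not drawn from the framework), a feedback record for a chatbot answer might look like this in Python:

```python
# Minimal sketch (hypothetical): log end-user feedback on chatbot answers
# so reviewers can audit a live generative AI tool after deployment.
import json
import time
from dataclasses import dataclass, asdict

@dataclass
class FeedbackRecord:
    session_id: str   # identifies the conversation
    prompt: str       # what the user asked
    response: str     # what the model returned
    helpful: bool     # thumbs-up / thumbs-down from the user
    timestamp: float  # when the feedback was given

def record_feedback(record: FeedbackRecord, path: str = "feedback.jsonl") -> None:
    """Append one feedback record to a JSON Lines audit file."""
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(record)) + "\n")

# Example: a user flags a suspected hallucination for human review.
record_feedback(FeedbackRecord(
    session_id="abc-123",
    prompt="When was the framework published?",
    response="It was published in 2019.",  # wrong answer the user flags
    helpful=False,
    timestamp=time.time(),
))
```

Records flagged in this way could then feed the “robust assurance and regular checks” of the live tool that the report calls for.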

The lifecycle of generative AI products is covered in the fifth principle, which looks at understanding how to monitor and mitigate generative AI drift, bias and hallucinations. The report recommends government department decision-makers have a robust testing and monitoring process in place to catch these problems.
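The framework leaves the monitoring mechanics to individual departments. One common pattern, sketched below under assumed names (the prompt suite, similarity threshold and `ask_model` stand-in are all hypothetical, not taken from the report), is to re-run a fixed set of prompts against the live model and flag answers that drift from previously approved references:

```python
# Hypothetical sketch: a regression-style drift check that re-runs a fixed
# prompt suite against the deployed model and flags answers that diverge
# from previously approved reference outputs.
from difflib import SequenceMatcher

REGRESSION_SUITE = [
    # (prompt, previously approved reference answer)
    ("What is the capital of the UK?",
     "London is the capital of the United Kingdom."),
]

def ask_model(prompt: str) -> str:
    # Stand-in for a department's real inference call; replace as needed.
    return "London is the capital of the United Kingdom."

def similarity(a: str, b: str) -> float:
    """Crude textual similarity in [0, 1]; real checks may use richer metrics."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def run_drift_check(threshold: float = 0.8) -> list[str]:
    """Return the prompts whose current answers diverge from the references."""
    flagged = []
    for prompt, reference in REGRESSION_SUITE:
        if similarity(ask_model(prompt), reference) < threshold:
            flagged.append(prompt)
    return flagged

if __name__ == "__main__":
    print("drifted prompts:", run_drift_check())  # empty list: no drift found
```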

The sixth and seventh principles cover choosing the right tool for the job and the need for open collaboration.

The eighth principle urges government departments to work with commercial colleagues from the start. The report states: “Generative AI tools are new and you will need specific advice from commercial colleagues on the implications for your project. You should reach out to commercial colleagues early in your journey to understand how to use generative AI in line with commercial requirements.”

The need to have the right skills in place and an established assurance process round off the 10 principles. The report’s authors recommend that government departments put in place clearly documented review and escalation processes, such as a generative AI review board or a programme-level board.
