
AI in Government: The rise of public-sector AI models

02 October 2025

The UK Government’s recent partnership with OpenAI represents a significant step in its ambition to be a global leader in artificial intelligence (“AI”). 

Strategic collaboration and policy context

The UK Government and OpenAI announced their strategic partnership on 21 July 2025, which will see frontier AI technologies deployed across public services, including in the justice, education, and defence sectors. The relationship, formalised by way of a Memorandum of Understanding, complements the UK Government's AI Opportunities Action Plan, published in January 2025, which sets out a strategic framework for sovereign AI infrastructure, rapid adoption, and global leadership.

In parallel, the UK Government issued Procurement Policy Note (PPN) 017 (Improving transparency of AI use in procurement) in February this year, which encourages suppliers to disclose their use of AI in public procurement processes. The guidance reflects a growing awareness of the legal and ethical complexities associated with AI deployment in public sector decision-making.

Legal considerations in public procurement

As AI becomes increasingly embedded in procurement workflows, public bodies must navigate a widening range of legal risks. The main risk areas to look out for are:

  • Data protection

AI systems trained on procurement documents may inadvertently process personal data (possibly even sensitive personal data) or confidential information, raising potential issues under the Data Protection Act 2018 and the UK GDPR. PPN 017 advises against using sensitive documents as training data and recommends appropriate and proportionate due diligence when evaluating AI tools. Preliminary market engagement, as provided for in sections 16 and 17 of the Procurement Act 2023, may be one approach that enables public sector bodies to gain vital insight into the AI tools available in the market and how such tools are deployed.

  • Algorithmic bias

Algorithmic bias has been flagged as another critical issue. If AI is used to assess tenders or evaluate supplier performance, authorities must ensure that systems are free from discriminatory outcomes. Failure to do so could result in breaches of the Equality Act 2010, exposing public bodies to legal challenge and reputational harm. Public bodies should consider the need to prepare Public Sector Equality Duty Impact Assessments prior to selecting an AI tool.

  • Market fairness

AI-enabled procurement could entrench incumbent suppliers if performance data is used to improve future bids, potentially undermining competition and limiting access for SMEs. In high-risk sectors such as defence or healthcare, additional scrutiny is required to ensure that AI systems are secure, auditable, and compliant with relevant standards.

Careful consideration must be given to these areas to minimise both actual harm and reputational damage. Public bodies should also consider using the UK Government's AI Playbook, published in March 2025, to ensure that any use of AI is undertaken with the necessary due diligence. For example, public bodies must ensure that any AI being used is secure, safe and resilient to cyber-attacks, and complies with the UK Government's Cyber Security Strategy, the Secure by Design principles, and the UK Government's Cyber Security Standard. Please see our article on the UK Government's AI Playbook for more information.

International perspectives: Switzerland’s Public LLM

While the UK pursues a hybrid model of commercial partnership and regulatory oversight, other governments are exploring alternative approaches. Switzerland, for example, has developed a publicly funded, open-source large language model (LLM) through its federal institutes of technology, ETH Zurich and EPFL. The model is designed to serve the public good, with multilingual capabilities, transparent training data, and applications in science, education, and climate research.

The Swiss initiative exemplifies what is known as a ProSocial AI framework, which prioritises ethical design, societal benefit, and democratic oversight. Unlike proprietary models governed by corporate policies, Switzerland’s LLM is open and reproducible, offering a potential blueprint for governments seeking to retain control over critical digital infrastructure. It also demonstrates how public-sector AI can be developed in compliance with European data protection standards, without compromising transparency or accountability.

Comparative governance models around the globe

Globally, governments are adopting varied approaches to AI governance. The European Union’s AI Act enforces a risk-based regulatory model, imposing strict requirements on high-risk applications. Conversely, the United States favours a more market-driven approach, with federal guidelines and state-level regulation. Singapore, on the other hand, has developed a GenAI Framework focused on explainability and fairness, while countries such as Estonia and Finland are experimenting with citizen-centric platforms that integrate AI into public services.

These models reflect a broader debate about the role of government in AI development. Should public authorities rely on private sector expertise to drive innovation, or invest in sovereign capabilities that prioritise public interest? The UK’s partnership with OpenAI suggests a pragmatic middle ground by leveraging commercial technology while developing governance frameworks to mitigate risk.

Conclusion: Balancing innovation and compliance

As AI becomes more deeply embedded in public procurement, legal frameworks must evolve to ensure that innovation does not come at the expense of fairness, transparency, or compliance. Public bodies will need to strike a careful balance between efficiency and accountability, particularly as they navigate the complex interplay between proprietary technologies and public-sector obligations.

The choices made now about infrastructure, partnerships, and regulation will shape not only how AI is used in government, but how it is trusted by the public. Switzerland’s open-source model offers one vision of ethical AI development; the UK’s collaboration with OpenAI offers another. Both raise important questions about the future of public-sector procurement and the legal safeguards required to ensure that AI serves society, rather than undermines it.

We are market leading in advising public bodies on the use of AI. Contact us for more information.

Thank you to Gabriella Rasiah for contributing to the production of this article.
