
UK government publishes public sector algorithmic transparency standard

08 December 2021

The Central Digital and Data Office has published an algorithmic transparency standard to help public sector organisations provide clear information about algorithmic tools they use to support decisions. Read our summary of the key points.

On 29 November 2021 the Central Digital and Data Office (CDDO) published a pilot version of an algorithmic transparency standard to help public sector organisations provide clear information about the algorithmic tools they use to support decisions. This forms part of the government's National Data Strategy and was developed in response to the Centre for Data Ethics and Innovation (CDEI) recommendation to place a mandatory transparency obligation on public sector organisations that use algorithms in a way that significantly affects individuals. The first wave of the pandemic showed how unpopular the use of an algorithm to decide exam grades proved to be. Transparency may have helped, but given the strength of public feeling and the significant impact on pupils' lives many years into the future, it may not have been sufficient, meaning that further action may be necessary.

The standard is made up of two parts:

  1. the algorithmic transparency data standard; and
  2. the algorithmic transparency template and guidance to help public sector organisations provide the information required by the data standard.

The definitions section states that "algorithmic tool" is a deliberately broad term that includes products, applications and devices which support or solve a specific problem using complex algorithms.

Tools that are in scope

The guidance starts by setting out which tools are in scope. While the government encourages public sector organisations to provide information about all the algorithmic tools they are using, during the initial phase it will prioritise publishing information about tools that either:

  • engage directly with the public, e.g. a chatbot; or
  • meet at least one criterion in each of these three areas:
  1. Technical specifications – does the tool involve one of the following?
  • complex statistical analysis;
  • complex data analytics; or
  • machine learning.
  2. Potential public effect – does the tool do one of the following?
  • have a potential legal, economic or similar impact on individuals or populations;
  • affect procedural or substantive rights; or
  • affect eligibility for, receipt of, or denial of a programme, e.g. receiving benefits.
  3. Impact on your decisions – does the tool do either of the following?
  • replace human decision making; or
  • assist or add to human decision making, e.g. by providing evidence for decisions.

If a tool is in scope, you need to complete the algorithmic transparency template to provide information about how you use it. The sections of the template correspond to the algorithmic transparency data standard. You will need to provide the following information (the guidance provides more details of what to include):

Tier 1: short non-technical description:

  • how you're using the algorithmic tool; and
  • why you're using the algorithmic tool.

Tier 2: more detailed information:

  • who is accountable for deploying the tool;
  • what the tool is for, including:
    • its scope;
    • what it has been designed for/what it is not intended for;
    • your justification for using the tool; and
    • the tool's technical specifications;
  • how the tool affects decision making:
    • how the tool is integrated into the decision-making process, and what influence it has;
    • how humans have oversight of the tool; and
    • how you let members of the public review or appeal a decision;
  • a list and description of the datasets you've used to train the model and on which the model is or will be deployed;
  • details of the impact assessments you have carried out, including a DPIA (data protection impact assessment); and
  • a detailed description of common risks for your tool, plus the actions you've taken to mitigate those risks.

If your organisation uses any form of automated decision making, you may wish to speak to one of our data protection specialists, who can work with you to ensure that you comply with the relevant law and other legal requirements. Please speak to your usual DWF contact, who can put you in touch with the most appropriate person.
