Algorithms and transparency in the public sector

19 July 2021

The Centre for Data Ethics and Innovation (CDEI) has published a blog post which examines how the public sector can increase transparency around the use of algorithms in decision-making to build public trust.  Read our summary of the key points.

On 21 June 2021 the Centre for Data Ethics and Innovation (CDEI) published a blog post, 'Engaging with the public about algorithmic transparency in the public sector'.  This follows the CDEI's November 2020 review into bias in algorithmic decision-making, in which it recommended that the government impose a mandatory transparency obligation on all public sector organisations using algorithms to make significant decisions affecting individuals.  See our article 'UK government publishes public sector guidance on automated decision-making' for more details on the guidance published following this review.

In the blog post, the CDEI states that its research has revealed low awareness and understanding of the use of algorithms in the public sector.  To address this, the CDEI has identified the information that public sector organisations should provide about the algorithms they use.  To keep the presentation of that information simple and easy to use, the CDEI recommends dividing it into two tiers:

Tier 1

The organisation should use active, up-front communication to tell affected individuals that an algorithm is in use, and should provide the following information at the point of, or in advance of, interacting with the algorithm:

  • Description of the algorithm;
  • Purpose of the algorithm; and
  • How to access more information or ask a question.

Tier 2

The following information should be easily accessible to anyone who chooses to seek it out, which may be an expert or journalist acting on the individual's behalf (an illustrative sketch of both tiers follows this list):

  • Description
  • Purpose
  • Contact
  • Data privacy (see our comments below)
  • Data sets
  • Human oversight
  • Risks
  • Impact
  • Commercial information and third parties
  • Technicalities

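Purely as an illustration of how an organisation might capture these two tiers in a structured, publishable record, the sketch below models the CDEI's suggested categories as TypeScript interfaces.  The field names and example values are our own, not CDEI terminology; they simply mirror the lists above.

// Illustrative sketch only: field names are our own, mirroring the
// CDEI's two-tier lists above; they are not CDEI-defined terminology.

// Tier 1: up-front information provided at, or in advance of, the
// point of interacting with the algorithm.
interface Tier1Record {
  description: string;     // What the algorithm is and does
  purpose: string;         // Why it is being used
  moreInformation: string; // How to access more detail or ask a question
}

// Tier 2: fuller information, easily accessible to anyone who seeks it out.
interface Tier2Record extends Tier1Record {
  contact: string;                   // Who to contact about the algorithm
  dataPrivacy: string;               // Data protection considerations
  dataSets: string[];                // Data sets used to build and run it
  humanOversight: string;            // How humans review or override decisions
  risks: string;                     // Identified risks and mitigations
  impact: string;                    // Assessed impact on individuals
  commercialAndThirdParties: string; // Commercial information and suppliers
  technicalities: string;            // Technical detail of how it works
}

// Hypothetical Tier 1 entry for an imaginary benefits triage tool.
const example: Tier1Record = {
  description: "A tool that prioritises benefit applications for manual review.",
  purpose: "To reduce processing times for straightforward applications.",
  moreInformation: "Contact the department's data ethics team.",
};
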
While the CDEI's blog post does not focus on data protection, data privacy is one of the issues it flags, and using algorithms for decision-making raises data protection issues around automated decision-making and artificial intelligence (AI):

  • Under the GDPR (both the EU and UK versions), there is an obligation to inform individuals about the use of automated decision-making, including meaningful information about the logic involved, as well as the significance and the envisaged consequences of such processing.  Note that this must be provided at the time of collecting the individual's personal data and communicated clearly to comply with the transparency principle. 
  • The GDPR also gives individuals the right to obtain an explanation of a decision reached by automated means, to challenge the decision, and not to be subject to a decision based solely on automated processing which produces legal effects concerning them or similarly significantly affects them, subject to limited exceptions.
  • A data protection impact assessment (DPIA) will usually be required before starting to use automated decision-making.

In recent issues of DWF Data Protection Insights, we've reported on a number of related developments at UK and EU level.  See the March 2021 issue, where we discuss the Information Commissioner's Office's AI and data protection risk mitigation and management toolkit, and our recent article 'Artificial intelligence: key updates'.

If you would like advice on any aspect of data protection, including the use of algorithms or AI or how to conduct a DPIA, please contact one of our specialist lawyers, or ask your usual DWF advisor to put you in touch.
