On 18 July 2022, the Secretary of State for Digital, Culture, Media and Sport released a policy paper confirming that the UK will not follow the EU's proposed approach to regulating AI, as set out in the draft EU 'AI Regulation'. Instead, the UK Government intends to adopt a lighter-touch, 'pro-innovation' approach that is industry-agnostic, but overlaid with sector-specific guidelines tailored to the different settings in which AI may be used. Regulators, including the Information Commissioner's Office, the Financial Conduct Authority, the Competition and Markets Authority and Ofcom, will be asked to implement these principles, e.g. by issuing guidance or creating regulatory sandboxes.

On a related note, the UK's intention to deviate from the EU position is also evident from the Data Protection and Digital Information Bill currently before Parliament, in which the UK Government appears to believe it has found a better way to regulate data protection than the current regime, which it views as disproportionately burdensome on organisations. Whether that belief proves correct, particularly for the many multinational organisations operating in both the UK and the EU, remains to be seen.
The AI proposals affect both organisations that develop AI and organisations that use it, so an understanding of at least the principles set out below is important. The remit of the proposals appears to be broad, extending to large datasets, platforms, edge computing and more.
The lighter-touch approach stems from the high-level principles described below, upon which sector-specific regulation will be built, with an expectation that regulators focus on "high risk concerns rather than hypothetical or low risks associated with AI". This suggests a pragmatic approach, rather than one that contemplates every theoretical possibility.
- AI should be safe to use. The policy paper places particular emphasis on safety in the contexts of healthcare and critical infrastructure. It suggests the principle would apply primarily to physical safety (e.g. medical diagnoses and autonomous vehicles), but also to areas such as AI-driven cybersecurity algorithms capable of automatically shutting down energy plants.
- AI should be technically secure, and function as designed. A proven record of proper functioning should be established before AI is deployed, and the datasets used to train AI should be relevant, high quality, representative and contextualised.
- AI should be appropriately transparent and explainable, particularly in high-risk settings. In low-risk contexts, however, transparency requirements ought not to apply, given the need for proportionality and to avoid unduly hindering innovation.
- AI should be fair, but the definition of 'fairness' should vary depending on the sector. Regulators should have the autonomy to define fairness in specific contexts via their respective guidelines.
- Legal persons' responsibility for AI governance should be clarified. In other words, responsibility for the recommendations and actions of AI should be clearly assignable to an identified or identifiable legal person, whether a corporation or an individual.
- Routes to redress and contestability should be clear. In regulated situations, regulators should provide pathways to contest outcomes produced by AI, especially where those outcomes cause harm, infringe rights or reflect unfair bias.
The proposals are open for public consultation until 26 September 2022, following which the UK Government will publish a white paper on these topics in late 2022. In the meantime, for clients operating in this space, or considering data optimisation strategies that may involve AI, it makes sense to factor the principles set out above into the analysis when performing data protection risk assessments (e.g. DPIAs), and to document that they have been considered.
For any questions relating to AI or the UK Government's proposed approach to reforming data protection regulation in the UK, contact Shervin Nahid or your usual contact in the DWF Data Protection & Cyber Security team.