
AI in insurance – the need for governance

08 December 2023

Increased adoption of AI to drive efficiency requires robust governance frameworks to ensure responsible and ethical practices. AI governance should address issues of fairness, transparency, accountability, and explainability to build trust and mitigate potential risks.

In a recent address to insurance professionals at DWF's InsureInsight event, Andrew Jacobs, Partner and Head of Regulatory Consulting at DWF, emphasised the transformative power of AI for insurers while calling for a robust governance strategy.

AI is already contributing substantial value to many insurers by delivering savings and efficiencies while acting as a catalyst for growth, but the widespread adoption of GenAI is still in its early stages. In recognition of its undoubted future importance, several use cases continue to gain traction as insurers explore areas of the value chain where a competitive advantage can be secured.

The current landscape

As the UK seeks a central role in global AI regulation, its current position is centred on a pro-innovation approach that fosters responsible AI development and deployment. Recognising the enormous potential of AI while acknowledging its inherent risks, the UK seeks to create an environment where AI can flourish while safeguarding public trust and ethical principles. This approach is characterised by contextual principles and existing conduct-related regulation, tailored to specific sectors. However, it is acknowledged that there is presently something of a void: as the speed of AI evolution quickens, collaboration with stakeholders from all perspectives is necessary to ensure that a balanced and proportionate regulatory framework for AI results. Any such framework should underpin existing expectations of good governance under the UK Corporate Governance Code and the FCA's Senior Managers & Certification Regime, to name just two of the current overarching frameworks through which prudent conduct relating to AI falls within the expectations placed on senior managers in firms.

The inaugural global AI Safety Summit, recently held at Bletchley Park in the UK, brought together leading AI nations, technology companies, researchers, and other parties to foster a global dialogue on the potential risks and challenges associated with frontier AI. It emphasised the need for a shared commitment to responsible AI development and a pledge to collaborate on AI safety research and initiatives, resulting in the signing of a historic shared communiqué, 'The Bletchley Declaration'.

Within the EU, the AI Act establishes a comprehensive framework for regulating AI and aims to ensure the safety, reliability, and fairness of AI systems. Recognising the emergence of generative AI as a sign of the rapid evolution of AI technology, the AI Act aims to retain the flexibility to adapt to future developments. However, certain characteristics of the legislation suggest additional standards and guidance may be necessary to facilitate its implementation in the insurance sector.

The UK, by contrast, has adopted an outcome-based approach built on regulating the outcomes of AI use rather than the technology itself, ensuring that AI systems are fair, transparent, and accountable. These principles are outlined in the non-statutory publication from the Department for Science, Innovation & Technology. However, the government and the FCA are moving towards a more prescribed framework for the governance and regulation of AI. In speeches, the FCA's CEO and other senior figures have placed responsibility for financial data with Big Tech firms, and companies designated as Critical Third Parties by the FCA and PRA will potentially face FCA regulation to ensure stability. The Consumer Duty provides higher and clearer standards of consumer protection across financial services and mandates that products and services be designed to secure good consumer outcomes; it is regarded as the foundation for the governance of AI until a specific framework of governance and regulation emerges.

Core principles

Andrew compared and contrasted the expectations set out by the UK government, the European bloc and the US Executive Order relating to the governance of AI, and some common tenets were clear: transparency, human accountability, safety, security and fairness. He pointed to the European Insurance and Occupational Pensions Authority (EIOPA), which published a report outlining AI governance principles for the European insurance sector, emphasising the need for a robust governance framework to ensure that AI is used ethically and in a way that protects consumer interests and promotes responsible innovation.

The report consolidated the findings of a consultative expert group comprising various stakeholders on the opportunities and risks of increased AI adoption in insurance across Europe. Among the results was the determination that insurers must be accountable for the decisions made by their AI systems, fully understand the reasoning behind those decisions and take responsibility for the outcomes. Furthermore, AI systems should not amplify or perpetuate existing biases or prejudices, ensuring fair and non-discriminatory outcomes.

A further output of the report was to recommend insurers be transparent about their use of AI and provide consumers with clear information about how AI is used in decision-making processes. It went on to point out the importance of human oversight for ensuring AI systems are employed safely and responsibly and that robust data governance practices are essential to protect the privacy and security of data used in AI models. Lastly, insurers should adopt a comprehensive approach to managing the risks associated with AI models, including identifying, assessing, and mitigating potential hazards.

Andrew concluded his presentation by highlighting the need to apply the same sound governance principles to AI as to any other area of business risk management. The regulatory landscape continues to evolve, but in the absence of a formal process, firms should take the lead in embedding processes suited to their unique operations.

For more on Generative AI and the crucial cybersecurity and data privacy concerns that may arise through its implementation, please see our article: The role of Generative AI in cybersecurity and privacy | DWF Group

DWF delivers market-leading solutions that optimise the claims management process, bringing together extensive insurance market and legal expertise with superior service and award-winning software to deliver bespoke strategies that manage risk, create greater efficiencies, and drive better decision-making.

Contact Head of Regulatory Consulting, Andrew Jacobs, to discuss how you can support future DWF InsureInsight events.
