
The role of Generative AI in cybersecurity and privacy

08 December 2023

Generative AI is poised to revolutionise the UK insurance sector, but its implementation raises crucial cybersecurity and privacy concerns. Striking a balance between innovation and risk mitigation is essential for insurers to harness the transformative power of generative AI responsibly.

The hot topic of generative AI was on the agenda at DWF’s final InsureInsight event of 2023, where senior industry professionals were presented with insights on the transformative opportunities of the technology by Jacob Palmer and Iman Karimi from Boston Consulting Group.

Jacob and Iman's combined expertise in insurance and AI positions them as thought leaders in the industry, enabling them to navigate the complexities of GenAI applications and highlight the cybersecurity and privacy considerations.

A branch of artificial intelligence, GenAI takes things to the next level by augmenting human imagination and creating higher-order opportunities. Unlike traditional AI, which focuses on analysing and understanding existing data, generative systems possess the ability to produce entirely new data without explicit programming and enable a variety of use cases across multiple applications based on an intuitive, creative approach to problem-solving.

What GenAI can bring to the sector

GenAI has emerged as a transformative force in the insurance sector, offering the opportunity to automate and expedite tasks across many core and corporate functions including underwriting, claims management, fraud detection, and customer service. 

The ability to train large language models (LLMs) to analyse mass claims data enhances underwriting and fraud detection by uncovering anomalies and suspicious patterns to reduce risk exposure. GenAI can also be engaged to create synthetic data that replicates real-world datasets, allowing cybersecurity and fraud-detection models to be trained to better detect and respond to emerging threats without exposing sensitive customer data.
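To make the synthetic-data approach concrete, the sketch below illustrates the idea in miniature: a fraud-detection model is trained entirely on invented claims records, so no real customer data is ever exposed. The features, figures, and distributions are all hypothetical, chosen purely for illustration, and a production system would use far richer data and more sophisticated models.

```python
# Illustrative sketch only: anomaly detection trained on synthetic claims data,
# so the model never sees real customer records. All figures are invented.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Synthetic "normal" claims: claim amount and days taken to report the claim
normal_claims = np.column_stack([
    rng.lognormal(mean=7.0, sigma=0.5, size=500),  # typical claim amounts
    rng.integers(1, 30, size=500),                 # typical reporting delays (days)
])

# A handful of synthetic suspicious claims: very large and reported unusually late
suspicious_claims = np.column_stack([
    rng.lognormal(mean=10.0, sigma=0.3, size=10),
    rng.integers(90, 365, size=10),
])

# Fit on normal behaviour only; the model then flags departures from it
model = IsolationForest(contamination=0.02, random_state=42).fit(normal_claims)
flags = model.predict(suspicious_claims)  # -1 = flagged as anomalous
print(f"{(flags == -1).sum()} of {len(suspicious_claims)} suspicious claims flagged")
```

In practice the synthetic data itself could be produced by a generative model fitted to real portfolios, which is where GenAI adds value over hand-written simulations like this one.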

Solutions can be instrumental in developing personalised risk assessments and tailoring policies to individual customers' unique risk profiles. By analysing browsing behaviour, social media activity, and device usage, GenAI can identify potential vulnerabilities and recommend appropriate security measures. This approach enhances cybersecurity while fostering stronger customer relationships.

Implications and Mitigation

The very strengths that make GenAI a game-changer, such as its ability to process vast amounts of data and generate human-like text, also raise concerns regarding potential misuse and exploitation. Ensuring the ethical and responsible use of this technology is paramount, so continuous monitoring and adaptation are essential to staying ahead of evolving threats and mitigating organisational risk.

The vast amounts of data required to train LLMs raise privacy concerns, placing the onus on insurers to ensure that AI systems are deployed in a manner that upholds customer privacy and complies with GDPR. Safeguards are needed to ensure these datasets do not contain inherent biases, as those biases can be perpetuated in the generated outputs and lead to discriminatory practices in underwriting, pricing, and claims processing.

There are growing concerns over the use of deepfake AI technologies and the significant role GenAI will play in future cyber attacks. The technology’s ability to generate realistic synthetic content creates opportunities for cybercriminals to generate sophisticated phishing attacks and deceptive content to initiate fraudulent claims, potentially leading to unwarranted payouts and financial losses.

In addition to making sure internal controls are robust, insurers that rely on third-party vendors to develop and implement their GenAI solutions must conduct thorough due diligence and implement risk management practices to ensure there are no cybersecurity vulnerabilities or data privacy procedure gaps that fall short of industry standards.

Finally, GenAI models can lack a deep understanding of the underlying concepts and industry-specific language in their training data, which can result in outputs that are superficially credible but fundamentally flawed, particularly in complex domains such as insurance. Models contextualised for insurance will be better able to identify sources accurately for subrogation or liability-related claims and to manage risk.

The technology should not be viewed as a substitute for human judgment and expertise. Overreliance on LLM outputs can lead to uninformed decisions with potentially severe consequences, so human oversight and critical thinking remain essential in decision-making processes.

For more on Generative AI and effective governance frameworks, please see our article: AI in insurance – the need for governance | DWF Group

DWF delivers market-leading solutions that optimise the claim management process, bringing together extensive insurance market and legal expertise with superior service and award-winning software to deliver bespoke strategies that manage risk, create greater efficiencies, and drive better decision-making.

Contact Global Head of the Insurance Sector Claire Bowler to discuss how you can support future DWF InsureInsight events.
