Opinion: AI and insurance - who’s making the decisions?

By Brian Mullins, CEO, Mind Foundry
Brian Mullins, CEO of Mind Foundry, discusses decision-making responsibilities in an AI-driven space

Businesses in all sectors are integrating Artificial Intelligence (AI) systems to speed up operations and increase margins. This is most obvious in manufacturing, where machines build cars without a break, and in retail, where bots can tailor the whole shopping experience. However, in the insurance industry, leaders aren’t just handing one-off tasks to AI – they’re creating a framework for their teams to collaborate with AI to develop entirely new business models, products, and services.

What does AI bring to insurance? 

The amount of information currently available on customers has reached unprecedented levels, and humans can’t process it alone. AI enables insurers to uncover subtle patterns in data, giving human insurance agents insight into emerging trends, opportunities, and threats.

The benefits of these huge data sets are widespread, but most importantly they enable insurers to offer hyper-tailored and flexible products. For instance, an insurer might provide information on the safest route to take to work, or how and when a customer should drive to reduce risk. 

Taking this one step further, usage-based insurance (UBI) in car insurance uses the driver's real-time location to cover them on-demand, offering a price adjusted for the risk they incur in that moment. For example, companies like Marmalade allow their customers to add temporary drivers to their car insurance for a specific length of time.

In short, these trends are bringing insurers closer to their customers. 

The flip side of the ‘hyper-personal’

However, AI systems are not perfect, and although they provide benefits they are not without risk. As AI in insurance becomes more pervasive, its ability to perpetuate inequality and create long-term negative consequences for society has increased too.

Insurance is a profession affecting people's lives at scale, and decisions require careful judgment. If every decision is left solely to AI systems – which learn from imperfect existing datasets – problems can quickly become amplified. If an AI system has learned that people of a certain age or race are more prone to risk, it may automatically refuse them cover, deepening discrimination in society. Taking this one step further, what if these seemingly ‘one-off’ incidents happen over and over again? Certain demographics or groups in society could become marginalised as they are repeatedly denied access to key services, creating vulnerable populations in a way that could take generations to rectify.

One insurtech company to grab the headlines recently is Lemonade, a New York-based insurance provider that offers hyper-tailored services, very low prices, and “instant” everything. In May 2021, the company tweeted (and later deleted) about how it used AI to detect signs of insurance fraud based on non-verbal cues given by the claimant in video recordings.

The use of facial recognition in this context is particularly concerning because it has been widely proven that AI tends to misclassify individuals with darker skin colours – and that’s before considering the potentially exclusionary impact on neurodiverse users or those with motor function conditions, who might not exhibit “typical” non-verbal behaviour. 

It remains to be seen how Lemonade will address these issues but it’s clear that while innovation is to be encouraged, a strong ethical framework should prevent technologies like this from being tested in the customer marketplace.

How should insurance companies incorporate AI then? 

Businesses should keep in mind that there is no such thing as a ‘perfect dataset’. AI systems are constantly learning from data that can be flawed by human bias, which means we cannot, and should not, outsource our decisions solely to AI systems, as their actions will inevitably reflect that bias.

Human-AI collaboration is essential in helping mitigate the risks of AI. This involves developing AI systems that understand their limitations and when they need human guidance, as well as how to communicate with other agents and humans. This creates a much more intuitive and efficient relationship between humans and AI – and ultimately a more responsible and ethical outcome for the lives affected. 

Transparency must also be a priority for AI systems to operate responsibly. Currently, a lot of what AI does happens silently in the background, and its impact is felt before it even enters the systems we recognise and engage with. It is not only the individuals who work with the technology, but also those whose lives it affects (and who may not have technical knowledge), who should be able to understand how AI works and how decisions are made.

Businesses should also avoid focusing only on the output of AI products; otherwise, they risk missing the potentially harmful systemic impact of the AI component, anything from learned racism to AI’s environmental impact. By adopting a first-principles approach, insurance companies can truly understand what their algorithms and models are trying to predict, why, and how this can impact the customer.

Let’s not forget that, as in many industries, AI offers insurance companies the opportunity to become more innovative, competitive and efficient. Insurance affects every person in some way, meaning the industry as a whole has a responsibility to take its tech implementations seriously. Whether related to bias, discriminatory practices, or climate change, we must be sure that responsible AI is developed in such a way that it makes society and the world a better place.

About the author: Brian Mullins is the CEO of Mind Foundry, an Artificial Intelligence company that spun out of the University of Oxford.
