Using AI to improve the insurance experience for good
Artificial intelligence (AI) is bringing about a revolution in the insurance industry – for use in customer service and claims handling, certainly, but also in areas like fraud prevention and analysis. Indeed, such is the importance of AI to the insurance sector that McKinsey, in its Insurance 2030 report published in 2021, described AI as having the potential to “transform every aspect of the industry”.
“Artificial intelligence (AI) offers benefits across the entire insurance sector,” says Meghana Nile, CTO for Insurance at Fujitsu. “Claims settlement is one area where automated technology is increasingly playing a significant role. For example, in auto-insurance, insurers can use AI to assess simple claims in just six seconds based on smartphone photos sent by the customer, compared to humans who take an average of six minutes and 48 seconds with the same information.
“Enhancement of the customer experience in claims settlement is a big AI bonus too. Technology makes buying a policy much simpler, with fewer customers discouraged by the complexity of typical policy forms. And with simple queries fielded by AI, human agents can focus on more difficult service areas, which means issues are dealt with much faster, smoothing out the experience.”
Nile also believes that AI is transforming fraud detection, monitoring potentially fraudulent activities continuously through data analysis and using third-party, unstructured data analysis to give insurers extra context into those patterns of behaviour.
Nigel Lombard, CEO and Founder of Peppercorn AI, summarises: “To date, insurers have mainly focused on using AI in customer service and claims processing. However, while the use of AI in insurance is still in its early days, adoption will accelerate over the next one or two years. Insurers that aren’t adopting the technology will be in trouble as they’ll be stuck with high expense ratios.”
Where are insurers missing out on AI?
Despite the rapid adoption of AI in insurance over the past few years, there are still areas where insurers are not fully embracing the technology, leaving themselves exposed to risk, inefficiency, or avoidable cost.
“We have only started to scratch the surface of what’s possible when it comes to applications of AI in the insurance industry,” Nigel Lombard continues. “In addition to customer service and claims processing, AI has the potential to support underwriting and fraud detection, which can drastically improve both loss ratios and expense ratios. This kind of activity has been relatively underutilised to date but has the potential to disrupt the legacy systems and traditional business models that traditional providers have had in place for decades.
“AI also has the potential to use predictive analytics to analyse demand, create new products, improve pricing precision, and even determine changes to customer risk, for example. Predictive AI will be the next step once AI becomes more mainstream, but this area is still in its infancy. Once AI becomes more widely adopted and models have captured sufficient levels of data, we will start to see real-world applications of predictive AI.”
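The loss and expense ratios Lombard refers to are standard underwriting metrics: claims paid and operating costs, each expressed as a share of premiums earned. A minimal sketch, using entirely hypothetical figures, shows how they combine:

```python
def loss_ratio(claims_incurred, premiums_earned):
    """Claims paid out as a share of premiums earned."""
    return claims_incurred / premiums_earned

def expense_ratio(expenses, premiums_earned):
    """Operating costs (underwriting, admin, acquisition) as a share of premiums."""
    return expenses / premiums_earned

# Hypothetical book of business: £10m premiums, £6m claims, £3.5m expenses.
premiums, claims, expenses = 10_000_000, 6_000_000, 3_500_000

combined = loss_ratio(claims, premiums) + expense_ratio(expenses, premiums)
print(f"loss={loss_ratio(claims, premiums):.0%} "
      f"expense={expense_ratio(expenses, premiums):.0%} "
      f"combined={combined:.0%}")  # loss=60% expense=35% combined=95%
```

A combined ratio under 100% means the insurer makes an underwriting profit, which is why even small AI-driven reductions in claims leakage or running costs matter.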
When we think about AI in insurance, one of the first applications that comes to mind is customer service. The technology can automate the first point of contact for customer enquiries, freeing human agents to handle more complex queries or other tasks requiring judgement or discretion. According to Lombard, this represents a shift away from customer service interfaces that work for the insurer and towards customer service that works for the customer, improving overall satisfaction.
Fujitsu’s Meghana Nile elaborates: “Customers want an omnichannel experience, which is much more achievable with the help of AI. It makes self-service claims processing much easier, dramatically improving customer experience. But insurance can feel like quite a personal experience to many and there are times where there will be more complex claims and customers expect the ‘human touch’.
“According to HubSpot, 40% of customers who couldn’t find someone to help them with their problem are still having issues with the product or service. So, it’s clear that when implementing AI, insurers must strike the balance between digital and human interaction; not everything should be done by a machine.
“Most important, however, is that AI in insurance is ethical. To be beneficial to both customers and the insurers, AI models have to be fair, transparent, and explainable. As AI evolves, becoming more complex, the companies that develop and provide the technology – and all stakeholders involved in AI – must practise ethics in each process.
“If insurers aren’t careful, unconscious bias will creep into AI if the algorithms are set up by a narrow group of people. If there’s a lack of diversity among data scientists – the experts that develop and test these AI models – then they’ll only further reinforce unconscious bias. And that is why we must consciously build solutions that constantly look out for these biases, preventing them from manifesting and causing harm.”
How important is data input for predictive modelling?
Those well-versed in AI will be familiar with the acronym ‘GIGO’, which stands for ‘garbage in, garbage out’. This refers to the principle that, if your AI algorithm is using poor data, it will return poor results. For example, if an insurer is using AI to identify problematic patterns of behaviour as part of its fraud prevention strategy, then bad data will diminish the algorithm’s ability to effectively spot fraud. This speaks to a much broader theme of bias within AI.
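The GIGO effect can be shown with a deliberately simplified sketch: a one-feature fraud "detector" that learns a flagging threshold from historical claim amounts. All figures and labels below are invented for illustration; real systems use far richer features, but the principle is the same, as mislabelled training data shifts the threshold and degrades detection.

```python
# Toy illustration of "garbage in, garbage out" in fraud scoring.

def fit_threshold(claims):
    """'Train' a one-feature detector: flag claims above the midpoint
    between the average legitimate and average fraudulent amounts."""
    legit = [amt for amt, fraud in claims if not fraud]
    fraud = [amt for amt, fraud in claims if fraud]
    return (sum(legit) / len(legit) + sum(fraud) / len(fraud)) / 2

def accuracy(threshold, claims):
    """Share of claims where (amount > threshold) matches the true label."""
    hits = sum((amt > threshold) == fraud for amt, fraud in claims)
    return hits / len(claims)

# Clean training data: fraudulent claims are clearly inflated.
clean = [(900, False), (1100, False), (1000, False),
         (4800, True), (5200, True), (5000, True)]

# "Garbage" data: the same claims, but most labels recorded incorrectly.
garbage = [(900, True), (1100, True), (1000, True),
           (4800, False), (5200, False), (5000, True)]

# Unseen claims to score.
holdout = [(950, False), (2800, False), (3200, True), (5100, True)]

print(accuracy(fit_threshold(clean), holdout))    # 1.0
print(accuracy(fit_threshold(garbage), holdout))  # 0.75 — misses a fraud
```

The corrupted labels drag the learned threshold upwards, so a genuinely fraudulent claim slips through, exactly the kind of quiet degradation poor data causes in a fraud prevention pipeline.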
Peppercorn’s Nigel Lombard says: “Currently, risk analysis is a linear experience; it’s a one-size-fits-all approach that’s designed to favour the provider. AI on the other hand can collate volumes of data and identify behavioural patterns and trends, allowing providers to listen and react to their customers. In practice, this could mean tweaking the way a provider speaks to a customer based on what mood they’re in or creating new products following feedback, for example. Predictive modelling can take this one step further, but it’s entirely dependent on the quality of data inputted into the models.”
Meghana Nile adds: “While AI carries ethical risks if used incorrectly, applied right it can be exceptionally powerful. AI can address potential bias in underwriting by identifying and eliminating decision-making disparities due to race, gender, age, or ethnicity, and that’s what can make for fairer pricing.
“Another positive impact AI will have on premiums is its ability to detect fraud and identify high-risk customers. This enhances risk monitoring and, in turn, reduces pricing. With regulations like the Financial Conduct Authority’s (FCA) Consumer Duty, this will steer the industry towards a more holistic and analytical approach to pricing. Because AI can play a big role in estimating equitable and fair premiums, we’re likely to see its presence in insurance massively increase.”
Should insurance technology translate into lower premiums?
When implementing new technologies like AI, customer buy-in is extremely important. Yet no matter how useful AI may be for an insurer – or how much simpler it makes the customer experience – customers will not fully embrace it until it translates into lower premiums. That is the cost of change, even change for the better: customers want to see an actual financial gain. The savings might come from reducing the incidence of fraud, and so minimising losses for the insurer, or from elsewhere.
Nigel Lombard explains that insurance customers often misjudge the amount of cover they need, taking out the wrong policy and leaving themselves under- or over-insured. This is one way that AI can help achieve savings.
“There is an opportunity for conversational AI to right this wrong,” he says. “By putting the customer in control of the conversation, they’re able to ask the right questions and AI can pick up on verbal triggers that ensure customers have the right cover in place. This can result in fairer pricing.
“Furthermore, by focusing on creating efficiencies, AI can also result in leaner operational costs and lower expense ratios, which can ultimately be passed back onto customers.”
The legal implications of AI for insurers
By Katie Simmonds, Managing Associate at Womble Bond Dickinson
“One of the issues with using AI is that it is opaque – sometimes we simply cannot explain how the AI system works. This creates several potential risks. The key danger of becoming overly reliant on these technologies is that you can no longer understand how you are using individuals' personal data, or verify that an answer or response is 'right'. Even where the answer is 'right', there is a risk of perpetuating historic biases and discrimination in future decisions. In health insurance, for example, an individual may be unfairly blocked from certain policies. This could have knock-on effects for that individual, who may require the health insurance for a mortgage, potentially removing housing opportunities.
“For businesses, overly efficient use of location data could mean higher rates of rejection based on historical crime rates or anti-social behaviour. This postcode lottery begs the question: how could anywhere that meets these parameters ever realistically level up, even with sensible mitigations? Whether it’s a third-party application or something bespoke, you will need a complete view and understanding of what goes in and what process leads to what comes out.
“Such is the nature of AI that it will be constantly learning, so this understanding must remain agile. A system can only do exactly what you tell it to do, and poor instruction or understanding will not earn a pass if the machine behaves in a way that is illegal. For this reason, we’re likely to see more appointments of a Chief AI Officer or similar role to bridge the understanding between tech, ethics, and the legalities.”