A cynic would say insurance should be an easy nut to crack. Consumers want insurers to pay out when there’s a valid claim, they want an easy and stress-free process for claiming, and they want the money in their bank account quickly. Insurers have employed artificial intelligence (AI) to help them realise this consumer-friendly utopia, but there’s still progress to be made on utilising machine learning (ML) and deep learning (DL).
Alex Johnson, Head of Insurance Solutions at Quantexa, says the main difference between ML and DL is a model’s ability to ‘think like a human’ and make informed decisions through self-learning, rather than relying on previously observed outcomes on which to base decisions. He says that insurers should ‘define the context’ of data in order to make better use of it.
The state of ML and DL within the insurance industry
“The richer the context generated as a feed into an ML/DL algorithm, the more accurate the decision you can obtain from each model,” Johnson says. “For example, in deciding whether to pay a claim, a handler would have to gather insight from multiple sources in an iterative way, related to more than just the claim itself. This may include looking at the policy history (such as adjustments), understanding information regarding circumstances, third-party involvements, locations, IoT signals, estimations, suppliers, payee information and so on. However, this is time-consuming, costly, and subject to human error.
“What DL enables, when combined with contextual analytics, is the ability to automatically ingest, connect and analyse multiple siloed sources of information (both real-time and historic), aggregating the context and iterating a human thought process with a rapid and deep analytical simulation. This effectively imitates human decisions albeit faster and more accurately, and can also be applied to net new decisions with far less ‘model training’.”
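The ‘aggregate the context, then decide’ process Johnson describes can be pictured in miniature. The sketch below is illustrative only, not Quantexa’s implementation: every source name, field name and weight is invented, and where a real system would feed the merged context into a trained model, this toy simply combines a few hand-picked risk signals into a score.

```python
# Hypothetical sketch: merging siloed data sources into one context
# record for a claim, then scoring it. All fields/weights are invented.

def build_context(claim_id, sources):
    """Merge every source's view of a claim into a single context dict."""
    context = {}
    for source in sources:
        context.update(source.get(claim_id, {}))
    return context

def score_claim(context):
    """Toy risk score over the aggregated context; a real system
    would pass these features to a trained ML/DL model instead."""
    score = 0.0
    score += 0.4 if context.get("recent_policy_adjustment") else 0.0
    score += 0.3 if context.get("third_party_flagged") else 0.0
    score += 0.3 if context.get("supplier_on_watchlist") else 0.0
    return score

# Three siloed sources, each holding a partial view of claim "C1"
policy_history = {"C1": {"recent_policy_adjustment": True}}
third_parties  = {"C1": {"third_party_flagged": False}}
suppliers      = {"C1": {"supplier_on_watchlist": True}}

ctx = build_context("C1", [policy_history, third_parties, suppliers])
print(round(score_claim(ctx), 1))  # 0.7
```

The point of the sketch is the shape of the pipeline: context is assembled once, automatically, from every silo, rather than gathered iteratively by a handler.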
Johnson acknowledges that traditional ML approaches have limitations in terms of the breadth of data context being analysed and require large numbers of ‘good’ historical outcomes – something that contextual analytics and DL can overcome.
John Ardelius, CTO and Co-Founder of Hedvig, agrees that claims handling is a prime area of application: “Claims service automation is a clear use case for embracing ML, thanks to the ability to reduce manual tasks by up to 50%. By using ML to unlock smarter pricing and underwriting based on non-trivial combinations of features and patterns, insurers can decrease their loss ratio by several percentage points. Integrating ML across onboarding processes means smarter flows, which can increase conversion rates as well as cross-sales, increasing portfolio premiums.”
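What ‘non-trivial combinations of features’ means in pricing can be shown with a deliberately simple example. The figures and loadings below are entirely invented: the point is the interaction term, which only fires when two risk features occur together, something a model trained on feature combinations can learn but single-factor rating misses.

```python
# Invented illustration of pricing on a feature *combination*:
# the interaction loading applies only when both risks co-occur.

def annual_premium(base, driver_age, engine_kw):
    multiplier = 1.0
    if driver_age < 25:
        multiplier += 0.20                       # young-driver loading
    if engine_kw > 150:
        multiplier += 0.15                       # high-power loading
    if driver_age < 25 and engine_kw > 150:
        multiplier += 0.25                       # interaction: far riskier together
    return round(base * multiplier, 2)

print(annual_premium(500, 22, 180))  # 800.0 - both risks plus interaction
print(annual_premium(500, 40, 180))  # 575.0 - high-power loading only
```

In practice an ML model discovers such interactions from historical loss data rather than having them hand-coded, but the pricing effect is the same.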
Ardelius pinpoints another prominent area of focus: “We’re seeing great potential for improved pricing strategies and service offerings as a result of improved pattern recognition powered by large quantities of structured data, such as fraud detection and customer segmentation.”
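As a minimal sketch of the fraud-detection side of this pattern recognition, the example below flags claims whose amount deviates sharply from the historical distribution. The data and threshold are invented, and real systems score far richer structured features than a single amount, but the flag-the-outlier shape is the same.

```python
# Toy fraud-flagging sketch: flag claim amounts far from the
# historical mean. Data and z-score threshold are invented.
from statistics import mean, stdev

def flag_outliers(amounts, z_threshold=2.0):
    """Return amounts more than z_threshold standard deviations from the mean."""
    mu, sigma = mean(amounts), stdev(amounts)
    return [a for a in amounts if abs(a - mu) / sigma > z_threshold]

history = [120, 150, 130, 140, 125, 135, 128, 2500]
print(flag_outliers(history))  # [2500]
```

Customer segmentation works the same way in reverse: instead of isolating the records that don’t fit the pattern, the model groups the records that do.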
Have insurers been too slow to embrace ML and DL?
Clearly, then, to take advantage of the possibilities within ML and DL, insurers must get their house in order when it comes to data.
Quantexa’s Alex Johnson continues: “For the overwhelming majority of insurers, the key challenge is generating a contextual, connected view of this data – or, in other words, a single customer profile. Insurance organisations that can fully leverage ML and DL techniques are typically those that have looked to resolve a holistic customer view first.”
In its latest State of AI in Financial Services report, Quantexa found that a third of respondents in financial institutions ranked data readiness and the ability to integrate internal and external data sources into a single source of truth as key challenges for AI adoption.
“Despite the opportunities in front of the industry, insurance’s adoption of ML and DL has been fairly slow,” says Hedvig’s John Ardelius. “Incumbent players are still struggling to make the best use of their data due to its distributed and unstructured nature.”
According to Ardelius, increased adoption will result in more opportunities for self-service and tailoring of insurance products to smaller segments of users, but warns that there is a need for collaboration and sharing of anonymised structured data across companies.
What can insurers do to accelerate uptake of ML and DL?
“There are a few key actions that insurance firms might take to improve data readiness,” says Alex Johnson. “The critical first step is to ensure that single views of both retail personal lines and commercial lines customers are available across all brands, channels, and internal data systems.
“Then the external data assets used throughout the value chain by operational teams should be integrated and embedded within this view, in order to enrich the internal data and build a bigger picture.
“Next is to ensure that the full context is uncovered by using solutions that automatically build upon relationships and network associations through knowledge graphs across the data. This will provide a solid foundation to ensure that AI models are maximising their accuracy.
“As well as this, there is a cultural aspect to the adoption of analytical techniques. Insurance companies should also ask themselves if their C-suite has the appetite for a transformative AI ‘moon shot’ or if there is more focus on using AI to capture low-hanging fruit – easy-to-accomplish applications that will deliver short-term value. If a company doesn’t have the necessary data science and analytics capability in-house, then they will need to enlist a network of service providers. On the other hand, if a firm expects to be implementing longer-term AI projects, it will need to recruit expert talent.”
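The knowledge-graph step Johnson mentions can be sketched with nothing more than an adjacency list and a breadth-first search. The entities and links below are invented; the point is that graph traversal surfaces indirect associations, such as a second customer sharing an address, that a flat, record-by-record view would miss.

```python
# Sketch of the knowledge-graph step: link entities (customers,
# addresses, policies, suppliers) and surface indirect associations.
# All graph data here is invented.
from collections import deque

edges = {
    "customer:A":  ["address:1", "policy:P9"],
    "address:1":   ["customer:A", "customer:B"],
    "customer:B":  ["address:1", "supplier:S3"],
    "supplier:S3": ["customer:B"],
    "policy:P9":   ["customer:A"],
}

def connected_entities(start, max_hops=2):
    """Breadth-first search: all entities within max_hops of start."""
    seen, frontier = {start}, deque([(start, 0)])
    while frontier:
        node, hops = frontier.popleft()
        if hops == max_hops:
            continue
        for neighbour in edges.get(node, []):
            if neighbour not in seen:
                seen.add(neighbour)
                frontier.append((neighbour, hops + 1))
    return sorted(seen - {start})

print(connected_entities("customer:A"))  # ['address:1', 'customer:B', 'policy:P9']
```

Dedicated graph engines do this at scale across billions of nodes, but the principle is the same: relationships, not just records, become features for the AI models downstream.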
Arguably, insurers have been guilty of embracing AI but treating ML and DL like bolt-ons, rather than integral parts of their business. They have utilised AI to automate processes that would otherwise require some level of human intelligence, but are not actively exploiting ML and DL to gain greater insights.
Johnson continues: “Insurers looking to gain real-world results from their AI application should begin by creating a single customer view. Data pertaining to individuals, organisations or real-world concepts should be joined together in a process called Entity Resolution. Match rates at insurance companies can be as poor as 50%, due to data quality issues or the difficulty of matching complex entities, such as business hierarchies, across poorly formatted or sparsely populated datasets. Fortunately, Entity Resolution techniques can improve match accuracy to 98-99% and reduce manual data cleansing by 85%+. Spending time preparing and cleaning data sets will set insurers up to take advantage of AI and its subsets.”
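One small ingredient of Entity Resolution, pairwise name matching, can be sketched in a few lines. This is a hedged illustration, not a production matcher: the normalisation rules and similarity threshold are invented, and real systems combine many attributes (addresses, dates of birth, identifiers) rather than names alone.

```python
# Illustrative fuzzy name matching for Entity Resolution.
# Threshold and normalisation rules are invented for the example.
from difflib import SequenceMatcher

def normalise(name):
    """Lower-case, strip commas, collapse repeated whitespace."""
    return " ".join(name.lower().replace(",", " ").split())

def same_entity(a, b, threshold=0.85):
    """True if the normalised names are similar enough to merge."""
    return SequenceMatcher(None, normalise(a), normalise(b)).ratio() >= threshold

print(same_entity("Acme Insurance Ltd", "ACME  Insurance, Ltd"))   # True
print(same_entity("Acme Insurance Ltd", "Apex Underwriting plc"))  # False
```

The gap between a naive exact-match join and even this crude fuzzy comparison hints at why match rates jump so dramatically once proper Entity Resolution is applied.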