AI is a victim of its own hype in Insurance

By Josh Odmark, Co-Founder and CTO, Pandio
Josh Odmark from Pandio explores the development of AI within insurance and why it might not be achieving its full potential...

The P&C insurance industry continues its march towards digital transformation in order to meet rising customer expectations, requiring incumbent insurers to vet myriad existing and emerging technologies to find the ones that meet organisational needs. The sheer number of choices makes it a challenge to decide where to invest limited IT budget and resources. Artificial intelligence (AI) is a prime example of this challenge: a technology that is both fundamental to the future of the industry and difficult to execute properly at scale.

According to The Economist Intelligence Unit, banks and insurance companies expect to increase AI investments by 86% by 2025. However, the implementation steps will be the difference between success and a wasted investment. Many startups and established companies within insurance proudly tout their solutions as AI and machine learning when, in reality, they are often dabbling in analytics, which involves much more linear data processing. This distinction matters because, according to Accenture, 84% of business executives believe they won’t achieve their business strategy without scaling AI. Below are the considerations insurers must address in order to operationalise true AI that can match the lofty expectations of the technology.

Level of AI maturity in insurance

Understanding true AI versus what certain companies believe to be AI is a crucial first step. Presently, most of the P&C industry is investing in piecemeal predictive analytics projects. These can include machine learning and AI components, but they remain siloed use cases that often support a single business function (e.g. underwriting). Predictive modelling also has a more linear data pathway than true AI. Current machine learning models are often trained in batches every few months or years, whereas true AI operationalisation (where the model is ever-evolving) involves supporting a constant feedback loop: ingesting data from disparate systems and formats, training the model in real time, evaluating its decisions and then serving predictions/inference.
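To make that feedback loop concrete, the sketch below contrasts it with one-off batch training: each incoming micro-batch is scored, the resulting decisions are evaluated, and the feedback is folded straight back into the model. The data stream and feature layout are hypothetical, and scikit-learn’s incremental SGDClassifier stands in for whatever model an insurer would actually deploy.

```python
# Minimal sketch of a continual-learning feedback loop, as opposed to
# retraining in batches every few months. The data is synthetic and the
# model is a stand-in; this is an illustration, not a reference design.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(42)
model = SGDClassifier()
classes = np.array([0, 1])          # e.g. claim auto-approved vs referred

def next_batch(size=256, n_features=10):
    """Stand-in for ingesting a fresh micro-batch from disparate source systems."""
    X = rng.normal(size=(size, n_features))
    y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)
    return X, y

for step in range(100):                              # in production this loop never ends
    X, y = next_batch()                              # 1. data ingestion
    if step > 0:
        preds = model.predict(X)                     # 2. prediction / inference on live data
        accuracy = (preds == y).mean()               # 3. evaluate decisions
        if step % 20 == 0:
            print(f"step {step:3d}  accuracy {accuracy:.2f}")
    model.partial_fit(X, y, classes=classes)         # 4. fold the feedback back into the model
```

In a batch-analytics setup, by contrast, the training step would run only a handful of times a year against a static extract, and the evaluation of live decisions would never reach the model at all.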

This process involves orders of magnitude more data and cloud resources than most analytics projects. Using AI for underwriting is one thing, but ingesting drone imagery of every major freeway in California to help determine driving behaviours is another. These complex and data-intensive AI applications can only be achieved by the largest companies with the most capital and resources. If insurers try to use the same platforms and methodologies from today’s analytics projects to achieve larger AI projects, the infrastructure will eventually burst at the seams. To reach the next level of AI success, they must address the common roadblocks that usually surface at the start of a project.

Overcoming roadblocks to AI

The common roadblocks to operationalising AI are data issues, a dearth of specialised talent, and operational expenditure outweighing the projected long-term ROI. While these may appear to be disparate challenges, they are all connected to the use of improper infrastructure. Most platforms used for analytics today, such as Apache Kafka, are not built to support the ever-increasing amounts of data needed for multiple AI models or advanced use cases. Consider that the current AI ecosystem requires many vendors and tools, spanning cloud providers, event streaming, pub/sub and message queuing. This creates many additional points of failure and demands more of the highly specialised, expensive and scarce MLOps talent to compensate. Instead, insurers must use platforms that unify distributed messaging for pub/sub, event streaming and message queuing, such as Apache Pulsar, which is more conducive to scaling AI workloads.
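To illustrate what “unified” means in practice, the sketch below uses the pulsar-client Python library: a single topic serves queue-style work sharing through a Shared subscription, and adding further subscriptions to the same topic would give pub/sub fan-out without bolting on a separate queuing system. The broker URL, topic name and payload are hypothetical, so treat this as a sketch under those assumptions rather than a reference integration.

```python
# Minimal sketch of Pulsar's unified messaging model (pub/sub, streaming and
# queuing on one platform) using the pulsar-client library.
import pulsar

client = pulsar.Client("pulsar://localhost:6650")   # assumes a locally running broker

# Producer: push freshly ingested records onto a topic.
producer = client.create_producer("persistent://public/default/claims-ingest")
producer.send(b'{"policy_id": "P-123", "event": "fnol"}')   # hypothetical payload

# Consumer: a Shared subscription spreads messages across model workers,
# the way a message queue would; other subscription types on the same
# topic provide classic pub/sub fan-out.
consumer = client.subscribe(
    "persistent://public/default/claims-ingest",
    subscription_name="model-training-workers",
    consumer_type=pulsar.ConsumerType.Shared,
)
msg = consumer.receive(timeout_millis=5000)   # raises if nothing arrives in time
print("received:", msg.data())
consumer.acknowledge(msg)

client.close()
```

The point is not the specific calls but that ingestion, fan-out and work distribution can sit on one platform rather than being stitched together across several vendors.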

Streamlining workloads also means less of a need for scarce data scientists, MLOps talent and developers to sustain a project. The other aspect of operationalising true AI is that there comes a point where humans simply cannot manage model training on their own. Leveraging neural networks (similar to a computerised brain) will be crucial to handling the logistics: orchestrating the pipeline and ensuring data is captured and distributed appropriately to and from the model.

The correct way to approach AI initiatives is not to get caught up in “How can I get this done now?”, but rather, “How can I sustain AI five years into the future?”. AI projects cannot easily be lifted and shifted to new platforms once they’ve begun, so making sure they can routinely capture new and existing data from the variety of insurance systems, and process it in a scalable manner, is the difference between failure and success. Staying ahead of burgeoning data growth is the key to realising AI initiatives.

This article was contributed by Josh Odmark, Co-Founder and CTO of Pandio
