Three ways the insurance industry can adopt AI responsibly
This article was originally published on Property Casualty 360
As in every modern business, continued adoption of artificial intelligence is now a key way for insurers to stand out and rise above their competitors.
According to a recent survey published by KPMG, financial services has the highest rate of AI adoption of any industry, up 37% in one year. Machine learning and natural language processing help insurers in several ways, including discovering policyholder needs, managing sales and automating underwriting.
However, increasing skepticism about data protection, privacy and bias all require the industry to implement a responsible approach to using AI. Here are three things insurers should consider to establish responsible AI adoption.
When implementing AI, it is important to understand exactly how models are built and how they are used. This is particularly important in an industry like insurance, which is based on fundamental principles such as equality, mutualization and care for one another. It isn't always easy to decode the language of data scientists, but if you're implementing an AI-first product, you need to be able to explain how and why it works. Be sure you can answer simple questions like:
- Who designed this?
- What is the technology’s main purpose?
- Why does it make the decisions it does?
You'll also need to understand what data was used to train the model and be able to verify the authenticity of the dataset.
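To make the idea concrete, here is a minimal sketch of what answering "why does it make the decisions it does?" can look like for a simple, interpretable model. The feature names, data and labels are entirely illustrative, not from any real insurer or from this article:

```python
# Illustrative sketch: inspecting why a simple underwriting-style model
# decides the way it does. All data here is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
features = ["age", "claims_last_5y", "policy_tenure_years"]  # hypothetical names
X = rng.normal(size=(500, 3))
y = (X[:, 1] > 0).astype(int)  # synthetic label: prior claims drive the outcome

model = LogisticRegression().fit(X, y)

# With a linear model, the learned weights make the decision logic
# inspectable: each coefficient says how strongly a feature pushes
# the prediction up or down.
for name, coef in zip(features, model.coef_[0]):
    print(f"{name}: {coef:+.2f}")
```

More complex models need dedicated explainability tooling, but the principle is the same: for any individual decision, you should be able to point at which inputs drove it and in which direction.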
A deep understanding of AI models is crucial, but it is just as important to combat bias. As insurers scale AI, the potential for bias grows, and it must be actively and consistently mitigated. These biases can result in marginalized groups being deemed less creditworthy or being charged unfair prices.
Insurers have the paramount responsibility to ensure data sets are legitimate and to examine whether algorithms were built in a way that may inadvertently perpetuate discrimination.
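One common, simple spot-check for this kind of bias is demographic parity: comparing how often a model approves applicants from one group versus another. The sketch below uses made-up approval decisions and group labels purely for illustration:

```python
# Illustrative demographic-parity check on synthetic model outputs.
import numpy as np

# Hypothetical approval decisions (1 = approved) and group membership.
approved = np.array([1, 1, 0, 1, 0, 1, 1, 0, 0, 0, 1, 0])
group = np.array(["A", "A", "A", "A", "A", "A",
                  "B", "B", "B", "B", "B", "B"])

rate_a = approved[group == "A"].mean()
rate_b = approved[group == "B"].mean()
parity_gap = abs(rate_a - rate_b)

print(f"Approval rate A: {rate_a:.2f}, B: {rate_b:.2f}, gap: {parity_gap:.2f}")
```

A large gap does not by itself prove discrimination, but it flags the model for exactly the kind of human review and dataset scrutiny described above.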
Who is responsible for ensuring the insurance industry remains transparent in its use of AI? It will take the effort of multiple stakeholders, including data scientists, government, business owners, agencies and more, to regulate the industry and hold players accountable.
As regulators and compliance teams continue to analyze the use of AI across the insurance industry, there has been an increase in the number of laws that have been put into place. This includes the Insurance Distribution Directive in Europe, the Explainable Artificial Intelligence project in the U.S., and the California Consumer Privacy Act. There is also a general AI Code of Conduct created by the National Association of Insurance Commissioners.
These top-down approaches can protect consumer data and increase public trust in AI. But while government regulation is key to monitoring the responsible use of AI, it isn't the only option for companies looking to use AI responsibly. Look within your organization to ensure it is building a culture of transparency and fostering enough trust that employees feel able to come forward and report bias.
Finally, while AI can free up employees to apply their skills elsewhere in the company, insurers should keep people an essential part of the equation. According to a report by Accenture, only 35% of insurers have inclusive or human-centric design principles in place to support human-machine collaboration. To get the most out of AI, humans will have to remain involved across many parts of the company. Customer-experience teams need to be thoroughly educated on the AI tools available to them and understand which processes can be streamlined. Bottom line: human expertise plus AI is stronger than either alone.
The insurance industry was historically one of the first users of data and statistics, and today it is moving fast toward widespread adoption of cutting-edge AI. For example, the number of insurers reporting that they use AI to build risk models for decision-making and to reduce manual input has doubled in the last two years, and 60% of companies are targeting such capabilities by 2021. AI is increasingly integrated into daily insurance workflows, and the companies that adopt it rapidly and responsibly will have the upper hand.
Christophe Bourguignat is co-founder and CEO of Zelros, makers of an AI-driven insurance distribution platform. The opinions expressed here are the author's own.