
AI and Insurance: Key Considerations for Successful Adoption and Use

Artificial Intelligence and its related technologies are revolutionizing the insurance industry – and that pace is set to increase dramatically. Here, Monalisa Samal of Xceedance, which offers insurance consulting, technology, and management services, analyzes AI’s impact on the industry and examines the ethical considerations businesses must address to adopt it successfully.

In my 15 years in the field of property analytics, I have seen the world change drastically. In the past, rule-based systems encoded most engineering and underwriting guidelines, and data was checked automatically against those rules and corrected whenever a rule was violated. The result was accurate data that helped underwriters price risks appropriately. Now, artificial intelligence is replacing those rule-based systems.
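To make the contrast concrete, here is a minimal sketch of the kind of rule-based check such systems performed. The field names and thresholds are hypothetical placeholders, not taken from any specific platform.

```python
# Hypothetical rule-based validation of a property record before underwriting.
RULES = [
    ("year_built", lambda v: 1800 <= v <= 2021, "year built outside plausible range"),
    ("construction", lambda v: v in {"frame", "masonry", "steel", "concrete"},
     "unknown construction class"),
    ("total_insured_value", lambda v: v > 0, "TIV must be positive"),
]

def validate(record):
    """Return a list of rule violations for one property record."""
    violations = []
    for field, check, message in RULES:
        value = record.get(field)
        if value is None or not check(value):
            violations.append(f"{field}: {message}")
    return violations

# Example record with one violation flagged for correction
print(validate({"year_built": 1790, "construction": "frame",
                "total_insured_value": 2_500_000}))
```

A rules engine like this catches only the violations someone thought to write down; the appeal of AI is that it can also surface patterns nobody encoded explicitly.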

AI is a more evolved way of using computer algorithms to enable intelligent decision-making. In the insurance industry, AI can deliver superior value across the policy lifecycle: it starts with customer service chatbots, continues with more efficient underwriting, and ends with automated claims management and fraud detection.

The data insurance organizations see every day includes hundreds of unstructured submission emails that contain vast amounts of information. On top of that sits a host of third-party data from social media posts, telematics, weather feeds, and news. Collecting and analyzing all that data to develop a clear view of an account and its associated risks is not easy. Yet every insurer wants to make sense of this combined data to improve underwriting processes and accuracy, and AI is essential to accomplishing that goal. Deep learning, machine learning, and natural language processing enable machines to mimic the cognitive functions of the human brain, and carriers, insurers, and reinsurers need to understand those technologies and respond to the new ways of doing business. For instance, how service providers onboard clients is different today than it was just a few years ago. One example is meeting notes captured by an AI virtual assistant rather than an individual.
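As a rough illustration of what pulling structure out of an unstructured submission email can look like, the sketch below uses spaCy’s general-purpose named-entity recognizer. The email text is invented, and a production pipeline would add insurance-specific models and validation on top of this.

```python
# Sketch: extract candidate entities from an unstructured submission email.
# Requires: pip install spacy && python -m spacy download en_core_web_sm
import spacy

nlp = spacy.load("en_core_web_sm")

email = (
    "Please quote property coverage for Acme Logistics, a warehouse operator "
    "in Houston, Texas. Total insured value is approximately $12 million, "
    "with a target effective date of January 1, 2022."
)

doc = nlp(email)
for ent in doc.ents:
    # ORG, GPE, MONEY, and DATE are the labels an underwriting team would
    # typically map to submission fields (insured name, location, TIV, dates).
    print(f"{ent.label_:>8}  {ent.text}")
```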

Connected devices and open-source data now provide us with more opportunities to offer personalized service and real-time resolution. We can now evaluate every component of a typical policy separately. The transition from traditional policies to usage-based pricing and on-demand, short-term insurance coverage is made possible by those advancements.
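For a concrete sense of what usage-based pricing means in practice, here is a minimal sketch of a pay-per-mile auto premium. The rates and the risk adjustment are hypothetical placeholders, not an actual rating plan.

```python
def usage_based_premium(miles_driven, base_rate=20.0, per_mile_rate=0.06,
                        risk_multiplier=1.0):
    """Illustrative monthly premium for a pay-per-mile auto policy.

    base_rate:        fixed monthly charge while the policy is active
    per_mile_rate:    variable charge per mile recorded by telematics
    risk_multiplier:  hypothetical adjustment from driving-behavior data
    """
    return (base_rate + miles_driven * per_mile_rate) * risk_multiplier

# A light-usage month versus a heavy-usage month
print(usage_based_premium(200))    # lower premium
print(usage_based_premium(1200))   # higher premium
```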

With the proliferation of AI, people often ask whether human interaction will cease in our industry. As a sci-fi movie fan, I like to point out that AI is a work partner, not a replacement for humans. We will soon see rule-based delegation of the day’s work between humans and machines, helping teams achieve goals and complete tasks quickly. Human emotion, intelligence, and the ability to differentiate will remain distinctive factors in making crucial judgments – and that makes the case for AI ethics all the more pertinent.

The COVID-19 pandemic and the consequent disruption to business caught insurers off guard, with 87 percent of those responsible for insurance operations saying it exposed shortcomings in their organization’s digital capability, according to Deloitte’s 2021 Insurance Outlook. When governments worldwide imposed lockdowns, consumers increasingly turned to digital channels to shop for insurance solutions, and major insurance companies reacted by improving their digital operations. Consumers of home and auto coverage have witnessed this shift more than commercial buyers, but changes are now impacting all market areas. According to Deloitte, 95 percent of insurers expect to accelerate their digital transformation efforts. AI platform revenues within insurance will grow by 23 percent, to $3.4 billion, between 2019 and 2024, according to GlobalData projections.

Big data, machine learning, and AI capabilities will significantly impact all aspects of the industry, from distribution to underwriting, including policy pricing and binding. Achieving maximum technology ROI will require careful consideration of how technology solutions are designed and whether appropriate algorithms are employed. Interestingly, it is not just a technology issue. Environmental, social, and governance (ESG) issues have become critically important recently, with investors and consumers looking for confirmation that companies are committed to addressing them. One example of ESG becoming a key business consideration is investment and divestment in the energy industry. Activist groups and others are pressuring insurance carriers to increase coverage for renewable energy projects while simultaneously non-renewing coverage for some fossil fuel-focused companies. To be effective in the long term, AI products must be designed in ethically and socially responsible ways, with a clear understanding of the reputational risks and opportunities relating to ESG criteria.

An example of this critical consideration comes from a TED Talk I watched several years ago. The presenter showed an image of a dog against a background of snow and sky. When a deep learning model reviewed the image, it classified the dog as a wolf: the model had been trained on images in which snow and sky almost always accompanied wolves, so it inferred that any creature of that size and shape in such a setting must be one. This phenomenon is sometimes called artificial stupidity in machine learning. Applied to critical decision-making, the example shows how machine learning systems can amplify existing bias in the data – which is why ethical, transparent algorithm design is so critical.
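The same failure mode is easy to reproduce on synthetic data. The sketch below (feature names invented, data simulated) trains a simple classifier in which a “snowy background” flag happens to correlate perfectly with the “wolf” label during training; the model leans on that shortcut and collapses when the correlation disappears at test time.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_data(n, snow_tracks_label):
    """Simulate images as two numeric features: one real signal, one background."""
    y = rng.integers(0, 2, n)                       # 1 = wolf, 0 = dog
    snout = y + rng.normal(0, 1.0, n)               # weak, noisy "real" feature
    if snow_tracks_label:
        snow = y.astype(float)                      # training: wolves always on snow
    else:
        snow = rng.integers(0, 2, n).astype(float)  # test: background is random
    return np.column_stack([snout, snow]), y

X_train, y_train = make_data(2000, snow_tracks_label=True)
X_test, y_test = make_data(2000, snow_tracks_label=False)

model = LogisticRegression().fit(X_train, y_train)
print("train accuracy:", model.score(X_train, y_train))  # typically near 1.0
print("test accuracy: ", model.score(X_test, y_test))    # typically near chance
print("weights [snout, snow]:", model.coef_[0])           # snow weight dominates
```

The gap between training and test accuracy, and the outsized weight on the background feature, are exactly the signals a transparent, well-monitored design process is meant to catch before a model reaches production.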

Despite advances in AI, many people are fearful of the technology, especially where data security and privacy are concerned. While AI can make many activities easier, it requires large amounts of data and the consent of the humans who provide it, creating a tension between privacy and convenience. AI also carries a stigma rooted in its complexity, in misconceptions about how it is applied, and in questions about what control individuals have over their data. Companies must show they both understand and appreciate those concerns. Organizations that want to gain and maintain the trust of investors, consumers, and employees should consider sharing how they collect and use information. Education will play a fundamental role in tackling and overcoming confusion around AI, which can, in turn, enable the industry to embrace the technology effectively. However, clearly articulated declarations on how data is gathered and used by AI systems can pave the way forward most effectively: if employees and customers know how their data is used, there is less room for confusion or mistrust.

There is tremendous potential for AI to increase efficiencies and usher in economies of scale in the global insurance industry. Good governance will be a constant concern as we advance, and ensuring investors, consumers, and employees understand how AI is used will be critical to implementing it successfully for the benefit of business and society. The framework for developing AI should always ensure transparency, with continual monitoring of the quality, accuracy, and completeness of the available data.

Monalisa Samal is senior vice president, risk management and innovation at Xceedance

September 22, 2021