Artificial intelligence (AI) and related technologies are creating new opportunities and challenges for organizations across the insurance value chain. Incumbents are adopting AI-driven automation at different speeds, and new entrants are attempting to use AI to gain an advantage over the incumbents. This research explored four case studies of insurers’ digital transformation. The findings suggest that a technology-focused perspective on insurance business models is necessary and that the transformation is at a stage where the prevailing approaches can be identified. The findings identify five prevailing insurance business models that utilize AI for growth: (1) focus on a smaller part of the value chain and disaggregate, (2) absorb AI into the existing model without changing it, (3) incumbent expanding beyond the existing model, (4) dedicated insurance disruptor, and (5) tech company disruptor adding insurance services to its existing portfolio of services (Zarifis & Cheng 2022).

Figure 1. Updated model of five business models in insurance with disruptors split into two types

In addition to the five business models illustrated in Figure 1, this research identified two useful avenues for further exploration. Firstly, many insurers combined the first two business models: for some products, often the simpler ones such as car insurance, they focused and disaggregated, while for other parts of their organization they did not change their model but absorbed AI into it. Secondly, new entrants can be separated into two distinct subgroups: (4) disruptor focused on insurance and (5) disruptor focused on tech but adding insurance.

Reference

Zarifis A., & Cheng X. (2022). AI Is Transforming Insurance With Five Emerging Business Models. In Encyclopedia of Data Science and Machine Learning (pp. 2086–2100). IGI Global. https://www.igi-global.com/chapter/ai-is-transforming-insurance-with-five-emerging-business-models/317609 (open access)

By Alex Zarifis and the TrustUpdate.com team

The capabilities of artificial intelligence are increasing dramatically, and it is disrupting insurance and healthcare. In insurance, AI is used to detect fraudulent claims, and natural language processing enables chatbots to interact with the consumer. In healthcare, it is used to make a diagnosis and plan treatment. The consumer is benefiting from customized health insurance offers and real-time adaptation of fees. Currently, however, the interface between the consumer purchasing health insurance and AI raises some barriers, such as insufficient trust and privacy concerns.

Consumers are not passive toward the increasing role of AI. Many consumers have beliefs about what this technology should do. Furthermore, regulation is moving toward making it necessary for the use of AI to be explicitly revealed to the consumer (European Commission 2019). Therefore, the consumer is an important stakeholder, and their perspective should be understood and incorporated into future AI solutions in health insurance.

Dr Alex Zarifis discussing Artificial Intelligence at Loughborough University

Recent research at Loughborough University (Zarifis et al. 2020) identified two scenarios: a first with limited AI that is not in the interface and whose presence is not explicitly revealed to the consumer, and a second in which there is an AI interface and AI evaluation, and this is explicitly revealed to the consumer. The findings show that trust is lower when AI is used in the interactions and is visible to the consumer. Privacy concerns were also higher when the AI was visible, but the difference was smaller. The implications for practice relate to how the reduced trust and increased privacy concerns with visible AI can be mitigated.

Mitigate the lower trust with explicit AI

The causes of lower trust are reduced transparency and explainability. Firstly, a statement at the start of the consumer journey about the role AI will play and how it works will increase transparency and reinforce trust. Secondly, the importance of trust increases as the perceived risk increases, so the risks should be reduced. Thirdly, it should be illustrated that the increased use of AI does not reduce the inherent humanness: for example, it can be shown how humans train AI and how AI adopts human values.

Mitigate the higher privacy concerns with explicit AI

The consumer is concerned about how AI will utilize their financial, health, and other personal information. Health insurance providers offer privacy assurances and privacy seals, but these do not explicitly refer to the role of AI. Assurances can be provided about how AI will use, share, and securely store the information. These assurances can include some explanation of the role of AI and cover confidentiality, secrecy, and anonymity. For example, while the consumer’s information may be used to train machine learning models, it can be made clear that it will be anonymized first. The consumer’s perceived privacy risk can also be mitigated by making the regulation that protects them clear.
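To make the anonymization assurance concrete, the following is a minimal illustrative sketch of stripping direct identifiers from a record before it is used for model training. The field names and the salted-hash pseudonym scheme are assumptions for illustration, not part of the research or any specific insurer’s practice.

```python
import hashlib

# Hypothetical direct identifiers to remove before training (illustrative only).
DIRECT_IDENTIFIERS = {"name", "email", "policy_number"}

def anonymize(record: dict, salt: str = "rotate-this-salt") -> dict:
    """Drop direct identifiers and replace them with a salted pseudonym,
    so the remaining attributes can be used for training without exposing
    who the consumer is."""
    pseudonym = hashlib.sha256(
        (salt + record.get("policy_number", "")).encode()
    ).hexdigest()[:12]
    cleaned = {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}
    cleaned["pseudonym"] = pseudonym
    return cleaned

record = {"name": "Jane Doe", "email": "jane@example.com",
          "policy_number": "P-1001", "age_band": "35-44", "claims": 2}
print(anonymize(record))
```

Note that removing direct identifiers is only a first step; genuine anonymity may also require techniques such as aggregation or differential privacy, since quasi-identifiers can still allow re-identification.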

References

European Commission (2019). ‘Ethics Guidelines for Trustworthy AI.’ Available from: https://ec.europa.eu/digital

Zarifis A., Kawalek P. & Azadegan A. (2020). ‘Evaluating if Trust and Personal Information Privacy Concerns are Barriers to Using Health Insurance that Explicitly Utilizes AI’, Journal of Internet Commerce, pp. 1–19. https://doi.org/10.1080/15332861.2020.1832817 (open access)

This article was first published on TrustUpdate.com: https://www.trustupdate.com/news/are-trust-and-privacy-concerns-barriers-to-using-health-insurance-that-explicitly-utilizes-ai/