New Fintech and Insurtech services are popular with consumers as they offer convenience, new capabilities and, in some cases, lower prices. Consumers like these technologies, but do they trust them? The role of consumer trust in the adoption of these new technologies is not entirely understood. From the consumer's perspective, there are concerns about the lack of transparency these technologies can have. It is unclear whether these systems powered by artificial intelligence (AI) are trusted, and how many interactions with consumers they can replace. Several recent adverts emphasizing that a company will not force you to communicate with AI, and will instead provide a real person to talk to, are evidence of some push-back by consumers. Even pioneers of AI like Google are offering more opportunities to talk to a real person, an indirect acknowledgment that some people do not trust the technology. Therefore, this research attempts to shed light on the role of trust in Fintech and Insurtech, and especially on whether trust in AI in general and trust in the specific institution play a role (Zarifis & Cheng, 2022).

Figure 1. A model of trust in Fintech/Insurtech

This research validates a model, illustrated in Figure 1, that identifies the four factors that influence trust in Fintech and Insurtech. As with many other models of human behavior, the starting point is the individual's psychology and the sociology of their environment. The model then separates trust in a specific organization from trust in a specific technology like AI. This is an important distinction: consumers bring with them pre-existing beliefs about the organization, as well as separate pre-existing beliefs about AI, and their beliefs about AI may have been shaped by experiences with other organizations.

Therefore, the validated model shows that trust in Fintech or Insurtech is formed by the (1) individual’s psychological disposition to trust, (2) sociological factors influencing trust, (3) trust in either the financial organization or the insurer and (4) trust in AI and related technologies.
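To make the structure concrete, the sketch below expresses the four factors as a simple path specification in Python using the semopy library. The variable names, the input file, and the treatment of each construct as a single observed composite score (rather than a latent variable measured by multiple survey items) are illustrative assumptions, not the paper's actual measurement model.

```python
import pandas as pd
from semopy import Model  # pip install semopy

# Hypothetical per-respondent composite scores; all column names are assumptions.
# disposition  = psychological disposition to trust
# sociological = sociological factors influencing trust
# trust_org    = trust in the financial organization or insurer
# trust_ai     = trust in AI and related technologies
df = pd.read_csv("survey_scores.csv")  # assumed input file

# Trust in the Fintech/Insurtech service regressed on its four antecedents.
spec = "trust_service ~ disposition + sociological + trust_org + trust_ai"

model = Model(spec)
model.fit(df)
print(model.inspect())  # path estimates, standard errors, p-values
```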

The model was first tested separately for Fintech and Insurtech, and the two resulting models were then compared to see whether they are equally valid or different. For example, if one variable were more influential in one of the two models, this would suggest that trust does not form in the same way in both. The results of the multigroup analysis show that the model is indeed equally valid for Fintech and Insurtech. Having a model of trust that suits both is particularly useful, as these services are often offered by the same organization, or even side by side in the same mobile application.
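A rough analogue of that comparison can be sketched by fitting the same specification to each sub-sample and inspecting the path estimates side by side. The paper's multigroup analysis involves formal invariance testing, so this is only an illustrative simplification; the `service` grouping column and its values are assumptions.

```python
import pandas as pd
from semopy import Model

df = pd.read_csv("survey_scores.csv")  # same assumed input as above
spec = "trust_service ~ disposition + sociological + trust_org + trust_ai"

# Fit the identical specification to each group; large gaps between
# corresponding path estimates would suggest the two models differ.
# The paper found the model equally valid for both groups.
for name, group in df.groupby("service"):  # assumed values: "fintech", "insurtech"
    m = Model(spec)
    m.fit(group)
    print(name)
    print(m.inspect())  # compare the four path estimates across the two groups
```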

Reference

Zarifis, A. & Cheng, X. (2022) 'A model of trust in Fintech and trust in Insurtech: How Artificial Intelligence and the context influence it', Journal of Behavioral and Experimental Finance, vol. 36, pp. 1-20. Available from (open access): https://doi.org/10.1016/j.jbef.2022.100739

This research was featured by Duke University:

Zarifis, A. (2022) 'Trust in Fintech and trust in Insurtech are influenced by Artificial Intelligence', Duke University (Global Financial Economics Center). Available from: https://sites.duke.edu/thefinregblog/2022/11/11/trust-in-fintech-and-trust-in-insurtech-are-influenced-by-artificial-intelligence/

By Alex Zarifis and the TrustUpdate.com team

The capabilities of Artificial Intelligence are increasing dramatically, and AI is disrupting insurance and healthcare. In insurance, AI is used to detect fraudulent claims, and natural language processing powers the chatbots that interact with the consumer. In healthcare, it is used to make diagnoses and plan treatment. The consumer benefits from customized health insurance offers and real-time adaptation of fees. Currently, however, the interface between the consumer purchasing health insurance and AI raises some barriers, such as insufficient trust and privacy concerns.

Consumers are not passive to the increasing role of AI. Many consumers have beliefs about what this technology should do. Furthermore, regulation is moving toward making it necessary for the use of AI to be explicitly revealed to the consumer (European Commission 2019). Therefore, the consumer is an important stakeholder, and their perspective should be understood and incorporated into future AI solutions in health insurance.

Dr Alex Zarifis discussing Artificial Intelligence at Loughborough University

Recent research at Loughborough University (Zarifis et al. 2020) identified two scenarios: in the first, AI plays a limited role, is not part of the interface, and its presence is not explicitly revealed to the consumer; in the second, there is an AI interface and AI evaluation, and this is explicitly revealed to the consumer. The findings show that trust is lower when AI is used in the interactions and is visible to the consumer. Privacy concerns were also higher when the AI was visible, but the difference was smaller. The implications for practice relate to how the reduced trust and increased privacy concerns with visible AI can be mitigated.

Mitigate the lower trust with explicit AI

The causes are reduced transparency and explainability. First, a statement at the start of the consumer journey about the role AI will play, and how it works, will increase transparency and reinforce trust. Second, the importance of trust increases as the perceived risk increases, so the risks should be reduced. Third, it should be illustrated that the increased use of AI does not reduce the inherent humanness of the service; for example, it can be shown how humans train AI and how AI adopts human values.

Mitigate the higher privacy concerns with explicit AI

The consumer is concerned about how AI will utilize their financial, health and other personal information. Health insurance providers offer privacy assurances and privacy seals, but these do not explicitly refer to the role of AI. Assurances can be provided about how AI will use, share and securely store the information. These assurances can include some explanation of the role of AI and cover confidentiality, secrecy and anonymity. For example, while the consumer's information may be used to train machine learning models, it can be made clear that it will be anonymized first, as in the sketch below. The consumer's perceived privacy risk can also be mitigated by making the regulation that protects them clear.
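As a minimal illustration of that anonymization step, the sketch below strips direct identifiers and replaces the customer ID with a salted hash before records are handed to model training. The column names are assumptions, and this is pseudonymization rather than a complete anonymization pipeline; stronger guarantees would need techniques such as k-anonymity or differential privacy.

```python
import hashlib
import pandas as pd

def pseudonymize(records: pd.DataFrame) -> pd.DataFrame:
    """Drop direct identifiers and hash the customer ID before the data
    is used to train models. Column names are illustrative assumptions."""
    direct_identifiers = ["name", "email", "phone", "address"]
    out = records.drop(columns=direct_identifiers)
    # In practice the salt would be a secret managed outside the code.
    salt = "example-salt"
    out["customer_id"] = out["customer_id"].astype(str).map(
        lambda v: hashlib.sha256((salt + v).encode()).hexdigest()
    )
    return out

# Usage: train only on the pseudonymized frame.
# train_df = pseudonymize(raw_claims_df)
```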

References

European Commission (2019) 'Ethics Guidelines for Trustworthy AI'. Available from: https://ec.europa.eu/digital

Zarifis, A., Kawalek, P. & Azadegan, A. (2020) 'Evaluating if Trust and Personal Information Privacy Concerns are Barriers to Using Health Insurance that Explicitly Utilizes AI', Journal of Internet Commerce, pp. 1-19. Available from (open access): https://doi.org/10.1080/15332861.2020.1832817

This article was first published on TrustUpdate.com: https://www.trustupdate.com/news/are-trust-and-privacy-concerns-barriers-to-using-health-insurance-that-explicitly-utilizes-ai/