Fintech is changing the services offered to consumers, and their relationship with the organizations that offer them. This change is neither top-down nor bottom-up: it is being driven by many different stakeholders in many different parts of the world, making its final form hard to predict. This research identifies five Fintech business models that are well suited to AI adoption, growth and building trust (Zarifis & Cheng, 2023).
The five models of Fintech are (a) an existing financial organization disaggregating and focusing on one part of the supply chain, (b) an existing financial organization utilizing AI in its current processes without changing its business model, (c) an existing financial organization, an incumbent, extending its model with AI to access new customers and data, (d) a startup disruptor focused solely on finance, and finally (e) a tech company disruptor adding finance to its portfolio of services.
Figure 1. The five Fintech business models that are optimized for AI
The five Fintech business models give an organization five proven routes to AI adoption and growth. Trust is not always built at the same point in the value chain, or by the same type of organization. Trust building should usually happen where customers are attracted and onboarded. This means that while a traditional financial organization must build trust in its financial services, a tech-focused organization builds trust when customers are attracted to its other services.
This research also finds support for the view that, for all Fintech models, the way trust is built should be part of the business model. Trust is often not addressed at the level of the business model and is left to operations managers to handle, but for the complex, ad-hoc relationships in Fintech ecosystems it should be resolved before Fintech companies start trying to interlink their processes.
Alex Zarifis
Reference
Zarifis A. & Cheng X. (2023) ‘The five emerging business models of Fintech for AI adoption, growth and building trust’. In Zarifis A., Ktoridou D., Efthymiou L. & Cheng X. (eds.) Business digital transformation: Selected cases from industry leaders, London: Palgrave Macmillan, pp.73-97. https://doi.org/10.1007/978-3-031-33665-2_4
Artificial intelligence (AI) and related technologies are creating new opportunities and challenges for organizations across the insurance value chain. Incumbents are adopting AI-driven automation at different speeds, and new entrants are attempting to use AI to gain an advantage over them. This research explored four case studies of insurers’ digital transformation. The findings suggest that a technology-focused perspective on insurance business models is necessary and that the transformation has reached a stage where the prevailing approaches can be identified. The findings identify five prevailing insurance business models that utilize AI for growth: (1) focus on a smaller part of the value chain and disaggregate, (2) absorb AI into the existing model without changing it, (3) incumbent expanding beyond the existing model, (4) dedicated insurance disruptor, and (5) tech company disruptor adding insurance to its existing portfolio of services (Zarifis & Cheng 2022).
In addition to the five business models illustrated in Figure 1, this research identified two useful avenues for further exploration. Firstly, many insurers combined the first two business models: for some products, often the simpler ones such as car insurance, they focused and disaggregated, while for other parts of their organization they did not change their model but absorbed AI into it. Secondly, new entrants can be separated into two distinct subgroups: (4) disruptors focused on insurance and (5) disruptors focused on tech but adding insurance.
The capabilities of artificial intelligence are increasing dramatically, and AI is disrupting insurance and healthcare. In insurance, AI is used to detect fraudulent claims, and natural language processing is used by chatbots to interact with the consumer. In healthcare, it is used to make diagnoses and plan treatment. The consumer benefits from customized health insurance offers and real-time adaptation of fees. Currently, however, the interface between the consumer purchasing health insurance and AI raises barriers such as insufficient trust and privacy concerns.
Consumers are not passive toward the increasing role of AI. Many consumers have beliefs about what this technology should do. Furthermore, regulation is moving toward making it necessary for the use of AI to be explicitly revealed to the consumer (European Commission 2019). The consumer is therefore an important stakeholder, and their perspective should be understood and incorporated into future AI solutions in health insurance.
Recent research at Loughborough University (Zarifis et al. 2020) identified two scenarios: one with limited AI that is not in the interface and whose presence is not explicitly revealed to the consumer, and a second where there is an AI interface and AI evaluation, and this is explicitly revealed to the consumer. The findings show that trust is lower when AI is used in the interactions and is visible to the consumer. Privacy concerns were also higher when the AI was visible, but the difference was smaller. The implications for practice relate to how the reduced trust and increased privacy concerns caused by visible AI can be mitigated.
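To make the two-scenario comparison concrete, the minimal sketch below shows one way such group differences could be analysed, assuming Likert-scale trust scores for two independent respondent groups. The scores, group sizes and means here are hypothetical illustrations, not the data or results of Zarifis et al. (2020).

```python
# Hypothetical illustration of comparing trust scores across two scenarios
# (AI not visible vs. AI explicitly revealed). The simulated scores below are
# made up for this sketch; they are not the study's actual data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Simulated 1-7 Likert trust scores for two independent respondent groups
trust_ai_hidden = rng.normal(loc=5.4, scale=1.0, size=120).clip(1, 7)
trust_ai_visible = rng.normal(loc=4.9, scale=1.1, size=120).clip(1, 7)

# Welch's t-test (does not assume equal variances) to compare group means
t_stat, p_value = stats.ttest_ind(trust_ai_hidden, trust_ai_visible, equal_var=False)

print(f"Mean trust, AI hidden:  {trust_ai_hidden.mean():.2f}")
print(f"Mean trust, AI visible: {trust_ai_visible.mean():.2f}")
print(f"Welch t = {t_stat:.2f}, p = {p_value:.4f}")
```

The same comparison could be repeated for the privacy-concern scores, where the study found a smaller difference between the two scenarios.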
Mitigate the lower trust with explicit AI
The main causes are reduced transparency and explainability. A statement at the start of the consumer journey about the role AI will play, and how it works, will increase transparency and reinforce trust. Secondly, the importance of trust increases as perceived risk increases, so the risks should be reduced. Thirdly, it should be illustrated that the increased use of AI does not reduce the inherent humanness of the service. For example, it can be shown how humans train the AI and how the AI adopts human values.
Mitigate the higher privacy concerns with explicit AI
The consumer is concerned about how AI will utilize their financial, health and other personal information. Health insurance providers offer privacy assurances and privacy seals, but these do not explicitly refer to the role of AI. Assurances can be provided about how AI will use, share and securely store information. These assurances can include some explanation of the role of AI and cover confidentiality, secrecy and anonymity. For example, while the consumer’s information may be used to train machine learning models, it can be made clear that it will be anonymized first. The consumer’s perceived privacy risk can also be mitigated by making clear the regulation that protects them.
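As a rough sketch of the anonymization point above, the example below removes direct identifiers and replaces them with salted, irreversible hashes before records are passed to any model training. The field names, records and salt handling are hypothetical assumptions; a real deployment would need a full de-identification process and legal review.

```python
# Illustrative sketch only: strip direct identifiers from consumer records
# before they are used to train a model. Field names are hypothetical and
# this is not a substitute for a complete de-identification process.
import hashlib
import pandas as pd

SALT = "replace-with-a-secret-salt"  # assumption: salt managed securely elsewhere

def pseudonymize(value: str) -> str:
    """Replace an identifier with a salted, irreversible hash."""
    return hashlib.sha256((SALT + value).encode("utf-8")).hexdigest()[:16]

records = pd.DataFrame({
    "name": ["A. Consumer", "B. Consumer"],
    "email": ["a@example.com", "b@example.com"],
    "age": [34, 51],
    "claims_last_year": [0, 2],
})

# Keep a pseudonymous key for de-duplication, drop the direct identifiers,
# and hand only the remaining attributes to the training pipeline.
anonymized = (
    records.assign(consumer_id=records["email"].map(pseudonymize))
           .drop(columns=["name", "email"])
)
print(anonymized)
```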
References
European Commission (2019). ‘Ethics Guidelines for Trustworthy AI.’ Available from: https://ec.europa.eu/digital
Zarifis A., Kawalek P. & Azadegan A. (2020). ‘Evaluating if Trust and Personal Information Privacy Concerns are Barriers to Using Health Insurance that Explicitly Utilizes AI’, Journal of Internet Commerce, pp.1-19. https://doi.org/10.1080/15332861.2020.1832817 (open access)
This article was first published on TrustUpdate.com: https://www.trustupdate.com/news/are-trust-and-privacy-concerns-barriers-to-using-health-insurance-that-explicitly-utilizes-ai/
Trust is necessary whenever there is risk, which means it is more important in some contexts than in others. While trust has been researched for many decades, it became a more prominent concern with the introduction and expansion of the Internet. The loss of face-to-face interaction raised the perceived risk and, with it, the importance of trust. Once solutions were found to reduce the risk and build trust, this became a smaller challenge.
Insurtech is another phenomenon where concern about trust is increasingly important, so trust must be explored. Indeed, trust emerges as a problem whenever a new technology is widely adopted, such as blockchain, 5G or AI. For example, chatbots and virtual assistants that utilize AI are widely used to interact with the person purchasing insurance or making a claim (Zarifis et al. 2020). From the consumer’s perspective there are some concerns: it is unclear whether these systems are trusted and how many interactions with the consumer they can replace.
In this blog I outline the possible constituent factors of trust in Insurtech. I start with the psychology and sociology of trust, then discuss trust in other areas and trust in AI and data technologies. I then draw these threads together to propose a model of trust in Insurtech.
2) The psychology and sociology of trust
There is literature on trust in many different areas, such as business, collaboration and education, but the foundations are usually psychology and sociology. Each specific context, such as business or, more specifically, Insurtech, brings its own idiosyncratic twists on the common themes from psychology and sociology.
Each person has a different physiology and different experiences that shape their psychological disposition, so many models of trust start with this variable (McKnight et al. 2002). In most cases, a general model of trust that ignores individual disposition is hard to support with data; having personally tried to explore and validate models of trust, I can confirm that it is usually hard to take this variable out and still have a model the data supports. To put it simply, at one extreme some people’s default approach is to trust, while at the other extreme some people’s default is to mistrust; most of us are somewhere in the middle. Across various contexts the psychology of trust is similar, as it does not come from the context but from the individual. In other words, someone inclined to trust is this way across several contexts.
The sociological factors influencing trust are not as consistent as the psychological ones because they are influenced by the context to some degree. They are, however, often similar across similar contexts. These factors can come from the broader society or from more specific subsets of society that are more closely related to the context in question. While we distinguish between the psychology and sociology of trust, it is important to clarify that the two shape each other over time, and how they interact depends on the specific instance of an interaction.
3) Trust in other areas
One prominent model of trust in e-commerce, widely considered the seminal work bringing trust theory into e-commerce and information systems, showed how dispositions to trust combine with contextual factors to create trust (McKnight et al. 2002). Once trust was brought into e-commerce and information systems, the model was adapted to several contexts so that it captures the consumer’s perspective accurately. My more recent research on trust found that in a multichannel retail environment including physical stores, 2D websites and 3D websites, trust can be built and transferred between channels (Zarifis 2019). Trust in blockchain-based transactions such as Bitcoin was found to combine the factors seen in e-commerce with some specific characteristics of this technology, such as the digital currency, the intermediary, and the level of regulation and self-regulation (Zarifis et al. 2014).
The examples we have seen so far involve a payment, which puts a monetary value at risk. Trust is also necessary, however, in contexts where no monetary value is involved. For example, in online collaboration trust evolves over several stages, and the interaction can be shaped with specific activities that reinforce it (Cheng et al. 2013). Another example where trust is important despite no monetary value being exchanged is education: in virtual and semi-virtual teams, non-homogeneous groups need more support so that they can build and sustain stable trust (Cheng et al. 2016).
4) Trust in AI and data technologies
Figure 1. The three levels of visibility of technologies from the consumer’s perspective
The introduction outlined why trust in Insurtech is important and how trust evolves. However, the consumer engaging with Insurtech already has some experience of, and beliefs about, its constituent technologies. As we have seen in the second section, the consumer’s trust evolves depending on which technologies they interact with. For example, while purchasing insurance online with a chatbot may be a new experience, they may have interacted with chatbots before. Someone who uses a virtual assistant in their home, and experiences the interaction and how their data is used, will already hold some beliefs on this issue. While AI dominates the headlines, other data technologies are also important, and each technology raises different issues. For example, blockchain technologies were designed to build trust, yet some people distrust them more than the existing alternatives: for some, blockchain technologies and a decentralized ledger reduce risk, while for others a traditional database controlled by one organization is less risky.
Therefore, we must understand the consumer’s perspective on the constituent technologies of Insurtech. Unfortunately, this is made harder by the differing visibility of these technologies. Some are fully visible, like a chatbot; others are not visible, but consumers know they are there; and others are mostly unknown to the consumer. The three levels of visibility are illustrated in Figure 1. The technologies that are visible to the consumer, and understood by them, can be seen as the ‘tip of the iceberg’ of what is actually used in the process of purchasing insurance or making a claim.
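One way to make this classification concrete is a simple mapping of example technologies to the three visibility levels. The sketch below is illustrative only: the level assigned to each technology is an assumption for the example, not an exhaustive or definitive taxonomy.

```python
# Illustrative classification of Insurtech technologies by how visible they
# are to the consumer. The level assignments are assumptions for this sketch.
from enum import Enum

class Visibility(Enum):
    VISIBLE_AND_UNDERSTOOD = 1   # consumer interacts with it directly
    KNOWN_BUT_NOT_VISIBLE = 2    # consumer knows it is there but does not see it
    MOSTLY_UNKNOWN = 3           # consumer is largely unaware of it

technology_visibility = {
    "chatbot": Visibility.VISIBLE_AND_UNDERSTOOD,
    "AI claims evaluation": Visibility.KNOWN_BUT_NOT_VISIBLE,
    "blockchain ledger": Visibility.MOSTLY_UNKNOWN,
    "5G connectivity": Visibility.MOSTLY_UNKNOWN,
}

for tech, level in technology_visibility.items():
    print(f"{tech}: {level.name}")
```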
5) Trust in Insurtech
Figure 2. A model of trust in Insurtech
The role of technology in insurance is increasing, and this is reflected in the growing popularity of the term Insurtech. The term only emerged recently, but it is now widely used in the insurance and technology sectors. AI-driven automation, utilizing additional technologies such as big data, the Internet of Things (IoT), blockchain and 5G, is making the role of technology even more central than before. What is trust in Insurtech, and is it different from other forms of trust? The first step to answering this question is to identify its constituent parts. My starting point is that trust in Insurtech is formed by (1) the individual’s psychological disposition to trust, (2) sociological factors influencing trust, (3) trust in the insurer and (4) trust in the related technologies (e.g. AI). This relationship is illustrated in Figure 2. Further research is needed to empirically test and validate this model, and it must also be explored whether additional factors, such as law and regulation, act as separate variables or moderate these relationships. The long journey of insurers, their consumers and AI has just started, and trust in each other is needed for it to be harmonious.
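To indicate how this model might eventually be tested, the sketch below specifies trust in Insurtech as a function of the four proposed factors using an ordinary least squares regression. Everything in it is hypothetical: the construct scores are simulated, the variable names are placeholders, and in practice a structural equation model with validated survey scales would be a more likely choice. It is only an illustration of one way to specify the relationships, not empirical support for them.

```python
# Hypothetical sketch of specifying the proposed trust-in-Insurtech model as
# an OLS regression. All construct scores are simulated for illustration;
# this is not empirical evidence for the model.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
n = 200

df = pd.DataFrame({
    "disposition_to_trust": rng.normal(4.5, 1.0, n),
    "sociological_factors": rng.normal(4.0, 1.0, n),
    "trust_in_insurer": rng.normal(4.8, 1.0, n),
    "trust_in_technology": rng.normal(4.2, 1.0, n),
})

# Assume, purely for the sketch, that trust in Insurtech depends on all four factors
df["trust_in_insurtech"] = (
    0.2 * df["disposition_to_trust"]
    + 0.1 * df["sociological_factors"]
    + 0.4 * df["trust_in_insurer"]
    + 0.3 * df["trust_in_technology"]
    + rng.normal(0, 0.5, n)
)

model = smf.ols(
    "trust_in_insurtech ~ disposition_to_trust + sociological_factors"
    " + trust_in_insurer + trust_in_technology",
    data=df,
).fit()
print(model.summary())
```

Whether law and regulation enter such a specification as additional predictors or as moderators (interaction terms) is exactly the open question noted above.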
References
Cheng X, Fu S, Sun J, et al (2016) Investigating individual trust in semi-virtual collaboration of multicultural and unicultural teams. Comput Human Behav 62:267–276. doi: 10.1016/j.chb.2016.03.093
Cheng X, Macaulay L, Zarifis A (2013) Modeling individual trust development in computer mediated collaboration: A comparison of approaches. Comput Human Behav 29:1733–1741.
McKnight H, Choudhury V, Kacmar C (2002) Developing and Validating Trust Measures for e-Commerce: An Integrative Typology. Inf Syst Res 13:334–359.
Zarifis A (2019) The Six Relative Advantages in Multichannel Retail for Three-Dimensional Virtual Worlds and Two-Dimensional Websites. In: Proceedings of the 11th ACM Conference on Web Science, WebSci 2019. Boston, MA, pp 363–372
Zarifis A, Efthymiou L, Cheng X, Demetriou S (2014) Consumer trust in digital currency enabled transactions. Lect Notes Bus Inf Process 183:241–254. doi: 10.1007/978-3-319-11460-6_21
Zarifis A, Kawalek P, Azadegan A (2020) Evaluating If Trust and Personal Information Privacy Concerns Are Barriers to Using Health Insurance That Explicitly Utilizes AI. J Internet Commer. doi: 10.1080/15332861.2020.1832817