New research!

Fintech is changing the services offered to consumers, and their relationship with the organizations that offer them. This change is neither top-down nor bottom-up, but is being driven by many different stakeholders in many different parts of the world, making it hard to predict its final form. This research identifies five business models of Fintech that are ideal for AI adoption, growth and building trust (Zarifis & Cheng, 2023).

The five models of Fintech are (a) an existing financial organization disaggregating and focusing on one part of the supply chain, (b) an existing financial organization utilizing AI in its current processes without changing the business model, (c) an existing financial organization, an incumbent, extending its model to utilize AI and access new customers and data, (d) a startup disruptor involved only in finance, and finally (e) a tech company disruptor adding finance to its portfolio of services.

Figure 1. The five Fintech business models that are optimised for AI

The five Fintech business models give an organization five proven routes to AI adoption and growth. Trust is not always built at the same point in the value chain, or by the same type of organization. Trust building should usually happen where customers are attracted and onboarded. This means that while a traditional financial organization must build trust in its financial services, a tech-focused organization builds trust when customers are attracted to its other services.

This research also finds support for the idea that, in all five Fintech models, the way trust is built should be part of the business model. Trust is often not covered at the level of the business model and is left to operations managers to handle, but for the complex, ad-hoc relationships in Fintech ecosystems it should be resolved before Fintech companies start trying to interlink their processes.

Alex Zarifis

Reference

Zarifis A. & Cheng X. (2023) ‘The five emerging business models of Fintech for AI adoption, growth and building trust’. In Zarifis A., Ktoridou D., Efthymiou L. & Cheng X. (eds.) Business digital transformation: Selected cases from industry leaders, London: Palgrave Macmillan, pp. 73-97. https://doi.org/10.1007/978-3-031-33665-2_4

New research!

Digital transformation is being driven by AI, which is acting as a catalyst for business advancement. We looked at eight cases of digital transformation in finance, tourism, transport, entertainment and social innovation, and found nine key themes (Zarifis et al. 2023).

Figure 1. The tightly coiled ‘spring’ of digital transformation leaders’ innovation, and the followers

The first of the nine main themes identified here is: (1) Digital transformation leaders constantly innovate, while digital transformation laggards have a stop-start approach. Leaders innovate rapidly, going through regular iterative evolutions of their technologies and moving through repeated cycles of agile development, metaphorically forming a ‘spring’. New innovations and in-house skills are built up in this process of constant innovation. Continuing with the metaphor, this tightly coiled ‘spring’ stores ‘energy’ that propels the organization forward. Laggards have a stop-start approach, copying certain solutions of the leaders but not keeping up: metaphorically, a far less tightly coiled ‘spring’.

The other eight themes identified are: (2) There are no simple answers, or a single way forward, with digital transformation. (3) Each sector of the economy has its own opportunities and challenges, and must find its own path forward. (4) Changes in one sector of the economy, such as the financial sector, will send a ripple of change across other sectors. (5) Change needs a shared vision, and digital transformation needs leaders to create that shared vision. (6) Digital transformation needs trust and cooperation at every level: teams, organizations, governments and supranational organizations like the EU. (7) People will still have a role: staff, customers and other stakeholders remain important. (8) There is a dark side to digital transformation that may not have been fully revealed to us yet. (9) Digital transformation should happen hand in hand with sustainability and resilience.

Those are the nine main themes of digital transformation identified from the cases we looked at. A leader of digital transformation must disassemble the technology, processes, business models and strategies involved, and then put together their own collage of what they want to achieve, and their own montage of the journey there.

Dr Alex Zarifis

Reference

Zarifis A., Efthymiou L. & Cheng X. (2023) ‘Sustainable digital transformation in finance, tourism, transport, entertainment and social innovation’. In Zarifis A., Ktoridou D., Efthymiou L. & Cheng X. (eds.) Business digital transformation: Selected cases from industry leaders, London: Palgrave Macmillan, pp. 1-16. https://doi.org/10.1007/978-3-031-33665-2_1

Mobile apps utilize the features of a mobile device to offer an ever-growing range of functionalities. These apps access our personal data, utilizing both the sensors on the device and big data from several sources. Nowadays, Artificial Intelligence (AI) is enhancing the ability to utilize more data and gain deeper insight. This increase in the access to, and utilization of, personal information offers benefits, but also challenges to trust. We are re-evaluating trust in this scenario because we need to re-calibrate for the increasing role of AI.

This research explores the role of trust, from the consumer’s perspective, when purchasing mobile apps with enhanced AI. Models of trust from e-commerce are adapted to this specific context. The model developed was tested, and the results support it.

Figure 1. Consumer trust and privacy concerns in mobile apps with enhanced AI

The intention to use a mobile app is found to be impacted by four factors: (1) propensity to trust, (2) institution-based trust, (3) trust in the mobile app, and (4) the perceived sensitivity of personal information.

The first three of those four are broken down further into their constituent parts. (1) Propensity to trust is based on a person’s (1a) trusting stance in general and (1b) their general faith in technology. (2) Institution-based trust is strengthened by (2a) structural assurance and (2b) situational normality. Structural assurance of the internet includes guarantees, regulation, promises and related laws. The user’s evaluation of situational normality can be formed by app reviews. Out of the whole model, the institution-based factors are the weakest.

Trust in the mobile app (3) is more complex: it is based on five variables. These are (3a) trust in the vendor, (3b) trust in the app’s functionality, (3c) trust in the genuineness of the app, (3d) how human the technology appears to be, and (3e) trust in how personal data is used.
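To make the structure of the model concrete, the sketch below expresses it in lavaan-style syntax using Python’s semopy library. This is an illustration rather than the authors’ code: the variable names (trusting_stance, faith_in_tech, survey_responses.csv and so on) are invented placeholders, and in the published study each construct is measured by multiple survey items rather than a single score.

```python
# Illustrative sketch of the hypothesized trust model (not the authors' code).
# Assumes each sub-dimension has been scored as one observed column per respondent.
import pandas as pd
from semopy import Model

MODEL_SPEC = """
# (1) Propensity to trust and its two parts
propensity =~ trusting_stance + faith_in_tech
# (2) Institution-based trust and its two parts
institution =~ structural_assurance + situational_normality
# (3) Trust in the mobile app and its five parts
trust_app =~ trust_vendor + trust_functionality + trust_genuineness + humanness + trust_data_use
# The four factors impacting intention to use the app
intention ~ propensity + institution + trust_app + perceived_sensitivity
"""

data = pd.read_csv("survey_responses.csv")  # hypothetical survey data
model = Model(MODEL_SPEC)
model.fit(data)
print(model.inspect())  # path estimates, standard errors and p-values
```

Comparing the estimated path coefficients then shows which of the four factors, and which sub-dimensions, contribute most to the intention to use the app.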

Those are the main findings of this research. The model is helpful because it can guide the stakeholders involved in mobile apps in how to build trust. By using the model, they can identify what they need to communicate better and what they need to change, in the apps or elsewhere in the ecosystem.

Reference

Zarifis A. & Fu S. (2023) ‘Re-evaluating trust and privacy concern when purchasing a mobile app: Re-calibrating for the increasing role of Artificial Intelligence’, Digital, vol.3, no.4, pp.286-299. https://doi.org/10.3390/digital3040018 (open access)

#trust #informationprivacy #artificialintelligence #mobilecommerce #mobileapps #bigdata

New Fintech and Insurtech services are popular with consumers as they offer convenience, new capabilities and, in some cases, lower prices. Consumers like these technologies, but do they trust them? The role of consumer trust in the adoption of these new technologies is not entirely understood. From the consumer’s perspective, there are some concerns due to the lack of transparency these technologies can have. It is unclear whether these systems powered by artificial intelligence (AI) are trusted, and how many interactions with consumers they can replace. Several recent adverts emphasizing that a company will not force you to communicate with AI, and will provide a real person to talk to, are evidence of some push-back by consumers. Even pioneers of AI like Google are offering more opportunities to talk to a real person, an indirect acknowledgment that some people do not trust the technology. Therefore, this research attempts to shed light on the role of trust in Fintech and Insurtech, and especially on whether trust in AI in general and trust in the specific institution play a role (Zarifis & Cheng, 2022).

Figure 1. A model of trust in Fintech/Insurtech

This research validates a model, illustrated in Figure 1, that identifies the four factors that influence trust in Fintech and Insurtech. As with many other models of human behavior, the starting point is the individual’s psychology and the sociology of their environment. The model then separates trust in a specific organization from trust in a specific technology like AI. This is an important distinction: consumers bring with them beliefs about the organization, as well as pre-existing beliefs about AI that may have been shaped by experiences with other organizations.

Therefore, the validated model shows that trust in Fintech or Insurtech is formed by the (1) individual’s psychological disposition to trust, (2) sociological factors influencing trust, (3) trust in either the financial organization or the insurer and (4) trust in AI and related technologies.

The model was first tested separately for Fintech and Insurtech, and the two sets of results were then compared to see whether they are equally valid or different. For example, if one variable were more influential in one of the two contexts, this would suggest that trust in one is not formed in the same way as in the other. The results of the multigroup analysis show that the model is indeed equally valid for Fintech and Insurtech. Having a model of trust that suits both is particularly useful, as these services are often offered by the same organization, or even side by side in the same mobile application.
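As a rough illustration of what such a multigroup comparison involves, the sketch below fits the same four-factor model separately to Fintech and Insurtech respondents and prints the path estimates side by side. It is a simplified stand-in for the paper’s multigroup analysis, again using Python’s semopy with invented column names (disposition, social_influence, trust_org, trust_ai and a context column).

```python
# Simplified multigroup comparison (illustrative, not the authors' procedure):
# fit the same model to each group and compare the estimated paths.
import pandas as pd
from semopy import Model

SPEC = "trust ~ disposition + social_influence + trust_org + trust_ai"

data = pd.read_csv("responses.csv")  # hypothetical data with a 'context' column
for context in ("fintech", "insurtech"):
    model = Model(SPEC)
    model.fit(data[data["context"] == context])
    est = model.inspect()
    paths = est[est["op"] == "~"][["lval", "rval", "Estimate", "p-value"]]
    print(f"--- {context} ---\n{paths}")
```

If the corresponding estimates do not differ significantly between the two groups, as the paper reports, the same model of trust can be used for both services.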

Reference

Zarifis A. & Cheng X. (2022) ‘A model of trust in Fintech and trust in Insurtech: How Artificial Intelligence and the context influence it’, Journal of Behavioral and Experimental Finance, vol. 36, pp. 1-20. https://doi.org/10.1016/j.jbef.2022.100739 (open access)

This research was featured by Duke University:

Zarifis A. (2022) ‘Trust in Fintech and trust in Insurtech are influenced by Artificial Intelligence’, Duke University (Global Financial Economics Center). Available from: https://sites.duke.edu/thefinregblog/2022/11/11/trust-in-fintech-and-trust-in-insurtech-are-influenced-by-artificial-intelligence/

By Alex Zarifis and the TrustUpdate.com team

The capabilities of Artificial Intelligence (AI) are increasing dramatically, and it is disrupting insurance and healthcare. In insurance, AI is used to detect fraudulent claims, and natural language processing is used by chatbots to interact with the consumer. In healthcare, it is used to make a diagnosis and plan the treatment. The consumer is benefiting from customized health insurance offers and real-time adaptation of fees. Currently, the interface between the consumer purchasing health insurance and AI raises some barriers, such as insufficient trust and privacy concerns.

Consumers are not passive to the increasing role of AI. Many consumers have beliefs on what this technology should do. Furthermore, regulation is moving toward making it necessary for the use of AI to be explicitly revealed to the consumer (European Commission 2019). Therefore, the consumer is an important stakeholder and their perspective should be understood and incorporated into future AI solutions in health insurance.

Dr Alex Zarifis discussing Artificial Intelligence at Loughborough University

Recent research at Loughborough University (Zarifis et al. 2020) identified two scenarios: one with limited AI that is not in the interface and whose presence is not explicitly revealed to the consumer, and a second where there is an AI interface and AI evaluation, and this is explicitly revealed to the consumer. The findings show that trust is lower when AI is used in the interactions and is visible to the consumer. Privacy concerns were also higher when the AI was visible, but the difference was smaller. The implications for practice relate to how the reduced trust and increased privacy concerns with visible AI can be mitigated.

Mitigate the lower trust with explicit AI

The causes are the reduced transparency and explainability that come with AI. A statement at the start of the consumer journey about the role AI will play, and how it works, will increase transparency and reinforce trust. Secondly, the importance of trust increases as the perceived risk increases, so the risks should be reduced. Thirdly, it should be illustrated that the increased use of AI does not reduce the inherent humanness of the service. For example, it can be shown how humans train the AI and how the AI adopts human values.

Mitigate the higher privacy concerns with explicit AI

The consumer is concerned about how AI will utilize their financial, health and other personal information. Health insurance providers offer privacy assurances and privacy seals, but these do not explicitly refer to the role of AI. Assurances can be provided about how AI will use, share and securely store the information. These assurances can include some explanation of the role of AI and cover confidentiality, secrecy and anonymity. For example, while the consumer’s information may be used to train machine learning models, it can be made clear that it will be anonymized first, as sketched below. The consumer’s perceived privacy risk can also be mitigated by making the regulation that protects them clear.
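As a small illustration of what “anonymized first” can mean in practice, the sketch below removes direct identifiers and replaces the customer ID with a one-way hash before records are passed to model training. The column names and the salting approach are assumptions for the example, not a prescription from the paper.

```python
# Illustrative pseudonymization step before training data leaves the insurer
# (assumed example; column names are hypothetical).
import hashlib
import pandas as pd

SALT = "replace-with-a-secret-salt"  # keeps hashes from being trivially reversed

def anonymize_for_training(records: pd.DataFrame) -> pd.DataFrame:
    """Drop direct identifiers and pseudonymize the customer ID."""
    out = records.drop(columns=["name", "email", "phone"])  # direct identifiers
    out["customer_id"] = out["customer_id"].map(
        lambda v: hashlib.sha256((SALT + str(v)).encode()).hexdigest()
    )
    return out
```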

References

European Commission (2019). ‘Ethics Guidelines for Trustworthy AI’. Available from: https://ec.europa.eu/digital

Zarifis A., Kawalek P. & Azadegan A. (2020). ‘Evaluating if trust and personal information privacy concerns are barriers to using health insurance that explicitly utilizes AI’, Journal of Internet Commerce, pp. 1-19. https://doi.org/10.1080/15332861.2020.1832817 (open access)

This article was first published on TrustUpdate.com: https://www.trustupdate.com/news/are-trust-and-privacy-concerns-barriers-to-using-health-insurance-that-explicitly-utilizes-ai/