Dr Alex Zarifis

Several countries’ economies have been disrupted by the sharing economy. However, each country and its consumers have different characteristics, including the language used. When the language is different, does it change the interaction? If we have a discussion in English and a similar discussion in German, will it have exactly the same meaning, or does language lead us down a different path? Is language a tool or a companion holding our hand on our journey?

This research compares the text in the profiles of those offering their properties in England in English, and in Germany in German, to explore whether trust is built, and privacy concerns are reduced, in the same way.

Figure 1. How landlords build trust in the sharing economy

The landlords make an effort to build trust in themselves and in the accuracy of the description they provide. They build trust with six methods: (1) The first is the level of formality in the description; more formality conveys a level of professionalism. (2) The second is distance and proximity: some landlords want to keep a distance so it is clear that this is a formal relationship, while others try to be more friendly and approachable. (3) The third is emotiveness and humour, which can create a sense of shared values. (4) The fourth is being assertive, or even passive-aggressive, which sends a message that the rules given in the description are expected to be followed. (5) The fifth is conformity to the platform’s language style and terminology, which suggests that the platform’s rules will be followed. (6) Lastly, the sixth method of building trust is setting boundaries that offer clarity and transparency.

Privacy concerns are not usually reduced directly by the landlord, as this is left to the platform. The findings indicate that language has a limited influence and that platform norms and habits have the largest influence. We can say that the platform has sufficiently choreographed this dance between the participants, so that different languages have a limited influence on the outcome.

Reference

Zarifis A., Ingham R. & Kroenung, J. (2019) ‘Exploring the language of the sharing economy: Building trust and reducing privacy concern on Airbnb in German and English’, Cogent Business & Management, vol.6, iss.1, pp.1-15. Available from (open access): https://doi.org/10.1080/23311975.2019.1666641

Dr Alex Zarifis

Ransomware attacks are not a new phenomenon, but their effectiveness has increased, causing far-reaching consequences that are not fully understood. The ability of these attacks to disrupt core services, along with their global reach, extended duration, and repetition, has increased their ability to harm an organization.

One aspect that needs to be understood better is the effect on the consumer. The consumer in the current environment is exposed to new technologies that they are considering adopting, but they also have strong habits of using existing systems. Their habits have developed over time, with their trust increasing in the organization they interact with directly and in the institutions supporting it. The consumer now shares a significant amount of personal information with the systems they habitually use. These repeated positive experiences create an inertia that is hard for the consumer to break out of. This research explores whether global, extended, and repeated ransomware attacks reduce trust and inertia sufficiently to change long-held habits in using information systems. The model developed captures the cumulative effect of this form of attack and evaluates whether it is sufficiently harmful to overcome the e-loyalty and inertia built over time.

Figure 1. The steps of a typical ransomware attack

This research combines studies on inertia and resistance to switching systems with a more comprehensive set of variables that cover the current e-commerce status quo. Personal information disclosure is included along with inertia and trust as it is now integral to e-commerce functioning effectively.

As the figure shows, the model covers the seven factors that influence the consumer’s decision to stop using an organization’s system because of a ransomware attack. The factors fall into two groups. The first group is the ransomware attack, which includes the (1) ransomware attack effect, (2) duration, and (3) repetition. The second group is the e-commerce environment status quo, which includes (4) inertia, (5) institutional trust, (6) organizational trust, and (7) information privacy.

Figure 2. Research model: The impact of ransomware attacks on the consumer’s intentions

The implications of this research are both theoretical and practical. The theoretical contribution is highlighting the importance of this issue to Information Systems and business theory: this is not just a computer science and cybersecurity issue. The model also links the ransomware literature to user inertia.

There are three practical implications: Firstly, by understanding the impact on the consumer better, we can develop a better strategy to reduce the effectiveness of ransomware attacks. Secondly, processes can be created to manage such disasters as they happen and maintain a positive relationship with the consumer. Lastly, organizations can develop a buffer of goodwill and e-loyalty that would absorb the negative impact of an attack on the consumer and stop them from reaching the point where they decide to switch systems.

Dr Alex Zarifis presenting research on ransomware

References

Zarifis A., Cheng X., Jayawickrama U. & Corsi S. (2022) ‘Can Global, Extended and Repeated Ransomware Attacks Overcome the User’s Status Quo Bias and Cause a Switch of System?’, International Journal of Information Systems in the Service Sector (IJISSS), vol.14, iss.1, pp.1-16. Available from (open access): https://doi.org/10.4018/IJISSS.289219

Zarifis A. & Cheng X. (2018) ‘The Impact of Extended Global Ransomware Attacks on Trust: How the Attacker’s Competence and Institutional Trust Influence the Decision to Pay’, Proceedings of the Americas Conference on Information Systems (AMCIS), pp.2-11. Available from: https://aisel.aisnet.org/amcis2018/Security/Presentations/31/

By Alex Zarifis and the TrustUpdate.com team

The capabilities of Artificial Intelligence (AI) are increasing dramatically, and it is disrupting insurance and healthcare. In insurance, AI is used to detect fraudulent claims, and natural language processing is used by chatbots to interact with the consumer. In healthcare, it is used to make diagnoses and plan treatment. The consumer benefits from customized health insurance offers and real-time adaptation of fees. Currently, however, the interface between the consumer purchasing health insurance and AI raises some barriers, such as insufficient trust and privacy concerns.

Consumers are not passive about the increasing role of AI. Many consumers have beliefs about what this technology should do. Furthermore, regulation is moving toward making it necessary for the use of AI to be explicitly revealed to the consumer (European Commission 2019). Therefore, the consumer is an important stakeholder, and their perspective should be understood and incorporated into future AI solutions in health insurance.

Dr Alex Zarifis discussing Artificial Intelligence at Loughborough University

Recent research at Loughborough University (Zarifis et al. 2020) identified two scenarios: one with limited AI that is not in the interface and whose presence is not explicitly revealed to the consumer, and a second with an AI interface and AI evaluation that is explicitly revealed to the consumer. The findings show that trust is lower when AI is used in the interactions and is visible to the consumer. Privacy concerns were also higher when the AI was visible, but the difference was smaller. The implications for practice relate to how the reduced trust and increased privacy concerns with visible AI can be mitigated.

Mitigate the lower trust with explicit AI

The causes of this lower trust are reduced transparency and explainability. Firstly, a statement at the start of the consumer journey about the role AI will play and how it works will increase transparency and reinforce trust. Secondly, the importance of trust increases as the perceived risk increases; therefore, the risks should be reduced. Thirdly, it should be illustrated that the increased use of AI does not reduce the inherent humanness. For example, it can be shown how humans train AI and how AI adopts human values.

Mitigate the higher privacy concerns with explicit AI

The consumer is concerned about how AI will utilize their financial, health, and other personal information. Health insurance providers offer privacy assurances and privacy seals, but these do not explicitly refer to the role of AI. Assurances can be provided about how AI will use, share, and securely store the information. These assurances can include some explanation of the role of AI and cover confidentiality, secrecy, and anonymity. For example, while the consumer’s information may be used to train machine learning models, it can be made clear that it will be anonymized first. The consumer’s perceived privacy risk can also be mitigated by making clear the regulation that protects them.

References

European Commission (2019). ‘Ethics Guidelines for Trustworthy AI.’ Available from: https://ec.europa.eu/digital

Zarifis A., Kawalek P. & Azadegan A. (2020). ‘Evaluating if Trust and Personal Information Privacy Concerns are Barriers to Using Health Insurance that Explicitly Utilizes AI’, Journal of Internet Commerce, pp.1-19, Available from (open access): https://doi.org/10.1080/15332861.2020.1832817

This article was first published on TrustUpdate.com: https://www.trustupdate.com/news/are-trust-and-privacy-concerns-barriers-to-using-health-insurance-that-explicitly-utilizes-ai/