My start as a lecturer was rather unusual. Most of the others from my PhD cohort at the University of Manchester progressed seamlessly to lectureships without having to change their topic. My first role was to take several face-to-face modules on different topics, convert them to online modules, retrain the lecturers to teach online and get the programmes validated for online delivery.
This meant I fell behind in my specialisation, e-commerce and trust, but it gave me an appreciation of the importance of studying different teaching approaches and adapting them to different contexts. There is a huge variety of approaches, each with different advantages. For example, one reputable university sets three compulsory assignments per week to ‘force’ students to engage, whilst another sets one assignment per module to avoid over-assessment.
There are certainly no simple answers or one model that always works. The only solution is to constantly learn different teaching approaches, understand our context and try to marry the two as best we can.
I have tried to understand the impact of context on education for many years. I have researched how collaborative patterns improve online collaboration among students and how to develop a course for cross-border e-commerce. More recently I have explored the potential of Artificial Intelligence in education, how to improve student satisfaction online and the impact of online learning on students during the pandemic.
When I was doing my PhD, I had a professor who told me I had to read three papers a day. I do not achieve this most days, but it is a clear target for me to aspire towards. I believe a clear target for us to aim for as lecturers is to read one paper or book chapter on education a week. I hope we do not have another pandemic, but if we do our homework and improve our craft, we should be ready for whatever new context we face.
1. Cheng, X., Wang, X., Huang, J., & Zarifis, A. (2016) ‘An Experimental Study of Satisfaction Response: Evaluation of Online Collaborative Learning’, International Review of Research in Open and Distributed Learning, 17, 60–78. Available from (open access): http://www.irrodl.org/index.php/irrodl/article/view/2110
4. Efthymiou, L., Zarifis, A. (2021) ‘Modeling students’ voice for enhanced quality in online management education’, The International Journal of Management Education, 19, 100464. Available from: https://doi.org/10.1016/j.ijme.2021.100464
5. Zuo, Y., Cheng, X., Bao, Y., Zarifis, A. (2021) ‘Investigating user satisfaction of university online learning courses during the COVID-19 epidemic period’, in Proceedings of the 54th Hawaii International Conference on System Sciences, pp. 1139–1148. Available from (open access): http://hdl.handle.net/10125/70751
Thank you to the University of Nicosia, especially Chara Zymara and Kasiani Pari, for featuring my article:
The capabilities of Artificial Intelligence are increasing dramatically, and the technology is disrupting insurance and healthcare. In insurance, AI is used to detect fraudulent claims, and natural language processing enables chatbots to interact with the consumer. In healthcare, it is used to make diagnoses and plan treatment. The consumer benefits from customized health insurance offers and real-time adaptation of fees. Currently, the interface between the consumer purchasing health insurance and AI raises some barriers, such as insufficient trust and privacy concerns.
Consumers are not passive in the face of the increasing role of AI. Many consumers have beliefs about what this technology should do. Furthermore, regulation is moving toward making it necessary for the use of AI to be explicitly revealed to the consumer (European Commission 2019). Therefore, the consumer is an important stakeholder, and their perspective should be understood and incorporated into future AI solutions in health insurance.
Recent research at Loughborough University (Zarifis et al. 2020) identified two scenarios: one with limited AI that is not in the interface and whose presence is not explicitly revealed to the consumer, and a second where there is an AI interface and AI evaluation, and this is explicitly revealed to the consumer. The findings show that trust is lower when AI is used in the interactions and is visible to the consumer. Privacy concerns were also higher when the AI was visible, but the difference was smaller. The implications for practice relate to how the reduced trust and increased privacy concerns with visible AI can be mitigated.
Mitigate the lower trust with explicit AI
The causes are reduced transparency and explainability. Firstly, a statement at the start of the consumer journey about the role AI will play, and how it works, will increase transparency and reinforce trust. Secondly, the importance of trust increases as the perceived risk increases, so the risks should be reduced. Thirdly, it should be illustrated that the increased use of AI does not reduce the inherent humanness of the service. For example, it can be shown how humans train AI and how AI adopts human values.
Mitigate the higher privacy concerns with explicit AI
The consumer is concerned about how AI will utilize their financial, health and other personal information. Health insurance providers offer privacy assurances and privacy seals, but these do not explicitly refer to the role of AI. Assurances can be provided about how AI will use, share and securely store the information. These assurances can include some explanation of the role of AI and cover confidentiality, secrecy and anonymity. For example, while the consumer’s information may be used to train machine learning models, it can be made clear that it will be anonymized first. The consumer’s perceived privacy risk can also be mitigated by making clear the regulation that protects them.
European-Commission (2019). ‘Ethics Guidelines for Trustworthy AI.’ Available from: https://ec.europa.eu/digital
Zarifis A., Kawalek P. & Azadegan A. (2020). ‘Evaluating if Trust and Personal Information Privacy Concerns are Barriers to Using Health Insurance that Explicitly Utilizes AI’, Journal of Internet Commerce, pp. 1–19. Available from (open access): https://doi.org/10.1080/15332861.2020.1832817
This article was first published on TrustUpdate.com: https://www.trustupdate.com/news/are-trust-and-privacy-concerns-barriers-to-using-health-insurance-that-explicitly-utilizes-ai/
Trust is necessary whenever there is risk. This means it is more important in some contexts than others. While trust has been researched for many decades, it became a more prominent concern with the introduction and expansion of the Internet. The loss of face-to-face interaction raised the perceived risk and the importance of trust. Once solutions were found to reduce the risk and build trust, this became a smaller challenge.
Insurtech is another phenomenon where concern about trust is increasingly important, so trust must be explored. Indeed, trust emerges as a problem whenever a new technology is widely adopted, like blockchain, 5G or AI. For example, chatbots or virtual assistants that utilize AI are widely used to interact with the person purchasing insurance or making a claim (Zarifis et al. 2020). From the consumer’s perspective there are some concerns: it is unclear whether these tools can be trusted and how many interactions with the consumer they can replace.
In this blog I outline the possible constituent factors to support trust in Insurtech. I start with the psychology and sociology of trust, then discuss trust in other areas and trust in AI and data technologies. I then draw these issues together to propose a model of trust in Insurtech.
2) The psychology and sociology of trust
There is literature on trust in many different areas such as business, collaboration and education, but the foundations are usually psychology and sociology. Each specific context, such as business or more specifically Insurtech, brings with it some idiosyncratic twists on the common themes from psychology and sociology.
Each person has a different physiology and different experiences that shape their psychological disposition. Therefore, many models of trust start with this variable (McKnight et al. 2002). Having personally tried to explore and validate models of trust, I can confirm that it is usually hard to take this variable out and still have a model that is supported by the data. To put it simply, at one extreme some people’s default approach is to trust, while at the other extreme some people’s default is to mistrust. Most of us are somewhere in the middle. Across various contexts, the psychology of trust is similar, as it comes not from the context but from the individual. In other words, someone inclined to trust is this way across several contexts.
The sociological factors influencing trust are not as consistent as the psychological ones because they are influenced by the context to some degree. They are, however, often similar across similar contexts. These factors can come from broader society or from more specific subsets of society closely related to the context. While we distinguish between the psychology and sociology of trust, it is important to clarify that the two shape each other over time, and how they interact depends on the specific interaction.
3) Trust in other areas
One prominent model of trust in e-commerce, widely considered to be the seminal paper bringing trust theory into e-commerce and information systems, showed how dispositions to trust combined with contextual factors to create trust (McKnight et al. 2002). Once trust was brought into e-commerce and information systems, it was adapted to several contexts so that it captures the consumer’s perspective accurately. My more recent research on trust has identified that in a multichannel retail environment including physical stores, 2D websites and 3D websites, trust can be built and transferred between channels (Zarifis 2019). Trust in blockchain-based transactions like Bitcoin was found to combine the factors from e-commerce with some specific characteristics of this technology, such as the digital currency, the intermediary and the level of regulation and self-regulation (Zarifis et al. 2014).
The examples we have seen so far involve a payment, which puts a monetary value at risk. However, trust is also necessary in contexts where no monetary value is involved. For example, in online collaboration it evolves over several stages, and the interaction can be shaped with specific activities to reinforce it (Cheng et al. 2013). Another example where trust is important despite no monetary value being exchanged is education. In virtual and semi-virtual teams, non-homogenous groups need more support so that they can build and sustain stable trust (Cheng et al. 2016).
4) Trust in AI and data technologies
Figure 1. The 3 levels of visibility of technologies from the consumer’s perspective
The introduction outlined why trust in Insurtech is important and how trust evolves. However, the consumer engaging with Insurtech already has some experience of, and beliefs about, its constituent technologies. As we saw in the second section, the consumer’s trust evolves depending on which technologies they interact with. For example, while purchasing insurance online with a chatbot may be a new experience, they may have interacted with chatbots before. Someone who uses a virtual assistant in their home, and experiences the interaction and how their data is used, will have some beliefs on this issue. While AI dominates the headlines, other data technologies are also important, and each raises different issues. For example, blockchain technologies were designed to build trust, but some people distrust them more than the existing alternatives. For some, blockchain technologies and a decentralized ledger reduce risk, while for others a traditional database controlled by one organisation is less risky.
Therefore, we must understand the consumer’s perspective on the constituent technologies of Insurtech. Unfortunately, this is made harder by the different visibility of each of these technologies. Some are fully visible, like a chatbot; others are not visible, but consumers know they are there; and others are mostly unknown to the consumer. The three levels of visibility are illustrated in figure 1. The technologies that are visible to the consumer and understood by them can be seen as the ‘tip of the iceberg’ of what is actually used in the process of purchasing insurance or making a claim.
5) Trust in Insurtech
Figure 2. A model of trust in Insurtech
The role of technology in insurance is increasing, and this is reflected in the increasing popularity of the term Insurtech. This term only emerged recently, but it is now widely used in the insurance and technology sectors. AI-driven automation, utilizing additional technologies such as big data, the Internet of Things (IoT), blockchain and 5G, is making the role of technology even more central than it was before. What is trust in Insurtech, and is it different to other forms of trust? The first step to answering this question is to attempt to identify its constituent parts. My starting point is that trust in Insurtech is formed by (1) the individual’s psychological disposition to trust, (2) sociological factors influencing trust, (3) trust in the insurer and (4) trust in the related technologies (e.g. AI). This relationship is illustrated in figure 2. Further research is needed to empirically test and validate this model. It must also be explored whether additional factors like law and regulation act as separate variables or moderate these relationships. The long journey of insurers, their consumers and AI has just started, and trust in each other is needed for it to be harmonious.
Cheng X, Fu S, Sun J, et al (2016) Investigating individual trust in semi-virtual collaboration of multicultural and unicultural teams. Comput Human Behav 62:267–276. doi: 10.1016/j.chb.2016.03.093
Cheng X, Macaulay L, Zarifis A (2013) Modeling individual trust development in computer mediated collaboration: A comparison of approaches. Comput Human Behav 29:1733–1741.
McKnight H, Choudhury V, Kacmar C (2002) Developing and Validating Trust Measures for e-Commerce: An Integrative Typology. Inf Syst Res 13:334–359.
Zarifis A (2019) The Six Relative Advantages in Multichannel Retail for Three-Dimensional Virtual Worlds and Two-Dimensional Websites. In: Proceedings of the 11th ACM Conference on Web Science, WebSci 2019. Boston, MA, pp 363–372
Zarifis A, Efthymiou L, Cheng X, Demetriou S (2014) Consumer trust in digital currency enabled transactions. Lect Notes Bus Inf Process 183:241–254. doi: 10.1007/978-3-319-11460-6_21
Zarifis A, Kawalek P, Azadegan A (2020) Evaluating If Trust and Personal Information Privacy Concerns Are Barriers to Using Health Insurance That Explicitly Utilizes AI. J Internet Commer. doi: 10.1080/15332861.2020.1832817
I am very happy and grateful that Loughborough University chose to showcase my research ‘Working from home (WFH): Management styles must evolve to work effectively during the coronavirus lockdown’ on the university website, newsfeed, staff resource page and Imago website. A big thank you to Peter Warzynski and Nadine Skinner:
Working from home (WFH): Management styles must evolve to work effectively during the coronavirus lockdown (9 April 2020)
Managers who are having to adapt and lead virtual teams should adopt a more people-focused style of leadership, according to new research.
Dr Alex Zarifis, of Loughborough University, has published a new paper which outlines the most effective ways of overseeing staff now that most people find themselves logging in from home.
The impact of coronavirus means that face-to-face interactions and office routines are no longer a feature of everyday working life.
This means that the style of management, known as transactional leadership, that until the lockdown had worked successfully, is no longer the most effective way of handling teams.
Instead, says Dr Zarifis, managers should adopt an approach known as transformational leadership – which puts staff members’ needs first.
The main characteristics of this method are:
· Focus on people’s needs, not the tasks
· Focus on motivating and inspiring
· Encourage innovation
Dr Zarifis said: “Transactional leadership focuses on tasks while transformational leadership focuses on people.
“This research found that in challenging times, with a high degree of uncertainty, people focused leadership is better.
“Put simplistically, teams that were forced to become virtual – due to COVID-19 – need a visionary leader, not an administrator.”
He added: “A leader of a virtual team should focus on people’s needs, motivate and be flexible – encouraging innovation.”
The main findings from the paper:
Focus on people’s needs, not the tasks
Understand the situation of each individual and support them in the way they need. If a team member has difficulty working because their children are at home, this would not be within the realms of responsibility of a transactional leader.
But it must be for a transformational leader.
Focus on motivating and inspiring
Accept that monitoring and controlling, often the priorities of functional management and transactional leadership, may be less achievable and effective in virtual teams.
Instead act more like a project manager and a transformational leader.
As a transformational leader, focus on motivating and inspiring a shared vision instead of controlling.
What is a shared vision during a COVID-19 outbreak? This could be to stay safe and meet the requirements of our role.
Accept that working during COVID-19 is a project and act more like a project manager. A functional manager should accept this change, recognise that they cannot act like a transactional manager during this non-routine period, and be flexible.
Managers whose instincts are to control every aspect of the work must learn to take a step back. Instead encourage innovation.
The current context of change must be accepted and utilised.
In the past year, Elon Musk and Tesla have fascinated the world with new innovations like the Tesla Cybertruck. There is excitement about most new Tesla products, but one hugely important one has been largely overlooked. With far less fanfare and no stage performance by Musk, Tesla started offering car insurance last September. In the long run, this is going to have a major impact on most of our lives – perhaps even greater than Tesla’s more eye-catching innovations.
Tesla Insurance is only available for Tesla vehicles in some states of the US at present. It will expand the number of territories gradually over time. But as with the Tesla Cybertruck, the company first wants to see how the business holds up to whatever is thrown at it and whether it cracks under pressure.
For those eligible for Tesla Insurance, the company claims to offer premiums 20% to 30% lower than rivals. Yet even if you are in an area where you can request a quote, Tesla won’t necessarily make you an offer. It sometimes still refers drivers to a traditional insurance partner instead. It may be that Tesla chooses the clearer, less risky cases and sends more complex ones to insurers with more experience and appetite to handle them.
So why is Tesla selling car insurance? For one thing, it has the real-time data from all its drivers’ behaviour and the performance of its vehicle technology, including camera recordings and sensor readings, so it can estimate the risk of accidents and repair costs accurately. This reliance on data may well mean it never branches into selling insurance to drivers of other manufacturers’ cars.
At the moment, Tesla is offering insurance premiums calculated with aggregated anonymous data. In future it could roll out more customised services, like the ones offered by insurers using telematic black boxes, to offer drivers (cheaper) quotes based on how they actually drive.
Another motivation for Tesla is that some insurers charge a relatively high premium for Tesla cars. One reason is that they still don’t have much historic information about the cost of repairs of electric vehicles. By vertically integrating insurance into its offering, Tesla brings down the price of owning its products.
At the same time, insurance is a barrier to many innovations that Tesla is targeting for the future. With the insurance taken care of, it will be easier to sell self-driving vehicles or send people to Mars (with sister company SpaceX). Like many things Elon Musk does, this both solves a short-term problem and fits the longer-term strategy. It’s a little like how Tesla focused on producing luxury vehicles first to finance the infrastructure for selling cheaper cars like the Tesla Model 3.
How insurance is changing
Tesla has one more reason for offering insurance, which is that the sector is changing: a tech company disrupting it fits the zeitgeist perfectly. My research at Loughborough University has looked into this disruption. I evaluated 32 insurance providers around the world including Tesla and found that artificial intelligence, big data, the internet of things, blockchain and edge computing were all rewiring insurance, both literally and metaphorically.
Broadly speaking, the work of the insurer is shifting from local human expert underwriters to automation driven by big data and AI. The existing industry players that I evaluated essentially fell into three categories. Some had recognised they cannot compete with tech companies. They were focusing on interacting with customers, branding and marketing, while outsourcing everything else to companies with the relevant skills.
Other insurers were trying to add new technologies to their existing business model. For instance, some are using chatbots that apply machine learning and natural language processing to offer live customer support. Yet another group had more fully embraced the new technological capabilities. For example, life insurers like Vitality and Bupa now encourage customers to use wearable monitoring devices to offer them guidance on improving their health and avoiding accidents.
Alongside all these were the new breed of insurers, with Tesla perhaps the best example. Others include Chinese giants Alibaba and Tencent. Just like Apple and Google are making incursions into banking and finance, these are tech-savvy companies with many existing customers who are adding insurance to their portfolio of services. In every case, the capabilities of AI and big data-driven automation have acted as a catalyst.
What it means for drivers
In the short term, Tesla drivers can look forward to insurance that is arguably more seamless and convenient and may well be cheaper – particularly if they clock up fewer miles and drive safely. (Drivers should still compare prices with other insurers: the likes of Progressive and GEICO are among those that insure Tesla vehicles.)
In the longer term, this is a sign that insurance – like banking, road tax and many services – will be driven by real-time data. It will probably change our behaviour for the better: we will likely drive more slowly, eat healthier food and exercise more – even if libertarians will be uneasy.
This shift will challenge our attitudes towards personal information privacy. Some of us will value the benefits of being open and transparent with our personal information, while others might seek solutions that keep their data with them. Edge computing has potential here, since it allows some data processing to be done on your device so that your personal data doesn’t need to be sent to a central server.
So Tesla and Elon Musk have not just added another revenue stream to their many successful endeavours. They are also helping to fundamentally change the way that we interact with insurance providers. In the future, insurers will be more like a partner on our journey both by car and on foot – both on Earth and beyond.
The article by Dr Alex Zarifis on ‘Why is Tesla selling insurance and what does it mean for drivers?’ was covered by several media including the following: