Tim Berners-Lee, the creator of the World Wide Web, has released an important new book about the problems we face online and how to solve them. It is called This Is for Everyone, a title that captures his conviction that the internet should be for all.
The philosophy espoused in the book is that the internet should not be a tool for the concentration of power among an elite. He wants the internet to function in a way that maximises the benefit to society.
His central idea, as he has written before, is that people should own their data. Personal data is any data that can be linked to us, such as our purchasing habits, health information and political opinions.
Everyone owning their data is a radically different approach from the one we have today, where big tech companies own most of it. This change is needed for two reasons.
The first is specifically about people’s right to privacy, so we do not all feel as though we live in a glass box, with everything we do monitored and affecting our careers and the prices we pay for services such as insurance. If AI is steered to make more money for an insurer it will do that, but it will not necessarily treat people fairly.
The second reason is that in a world being shaped by AI and data, if we do not own our data, we will have no power and no say in our future. For most of human history, workers’ labour was needed, and this gave them some power to pursue a fairer deal for themselves.
Most of us have the power to withhold our valuable labour if we feel we are not treated fairly, but this may not have the same effect in the future. For many of us, in the highly automated, AI-driven world we are moving towards, our labour will not always be needed. Our data, however, will be very valuable, and if people own their data, they will still have a voice. When a tech giant owns our data, it holds all the cards.
None of these ideas are new, but as with the creation of the world wide web, Berners-Lee excels in bringing the best ideas together into one coherent, workable vision.
Many people have pet hates about the internet: some dislike how algorithms promote controversial views, while others resent handing over more personal information for a service than is necessary. Berners-Lee’s ability to see the bigger picture comes from having had a front-row seat to the development of the World Wide Web from the start.
But what would this look like?
In practice, owning our data would mean having a data wallet app on our phone which internet companies might request access to. The internet companies could offer a small payment, or make their service free in exchange for the access. The individual could choose to manage access themselves on a case-by-case basis, or delegate the management of the data to a trusted third party such as a data union.
Berners-Lee recommends two possible solutions to break free from the oligopolistic situation we are in. The first is for government to intervene and create regulation that would maximise the social good of the internet while limiting the power of big tech.
This is highly unlikely in the United States where big tech is fully supported by the state. While a court in the US recently decided that Google had acted illegally to keep its monopoly status in search, it was not broken up under monopoly laws because it would be “messy”.
Elsewhere, though, for instance in the EU and Australia, there is a concerted effort to limit the internet’s negative outcomes for society. The EU continues to update its General Data Protection Regulation (GDPR) so that it offers some protection for citizens’ privacy, while Australia has passed a world-first social media ban for children under 16.
Berners-Lee’s vision would require governments to go further. He has repeatedly asked governments to regulate big tech, warning that failing to do so would lead to the internet being “weaponised at scale” to maximise profit rather than social good. The regulation would seek to broaden competition beyond a small number of giant tech companies.
Beyond state intervention, Berners-Lee presents other ways forward. Perhaps, he contends, people themselves can begin building better alternatives. For example, more people could use social media such as Mastodon.social, which is decentralised and does not promote polarising views.
As he sees it, a key part of the problem is that we become tied into platforms run by the giants. Owning our data would go some way towards creating a fairer relationship. Instead of being locked into an ever smaller number of big tech firms, this would open the door to new platforms offering a better deal.
Berners-Lee co-founded the Open Data Institute, which works to bring agreement on new online standards. He is promoting what he calls Solid (social linked data) and co-founded Inrupt, which offers an online wallet to store all our personal data. This could include our passport, qualifications, and information about our health.
This decentralised model would give people the ability to analyse their data locally within the wallet to gain insights on their finances and health, without giving their data away. They would have the option to share their data, but this would now be from a position of strength.
Access would be given to a specific organisation, to use specific personal data, for a specific purpose. AI, even more so than the internet, gives power to whoever has the data. If the data is shared, so is the power.
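To make this concrete, here is a minimal sketch, in Python, of what such a scoped, time-limited access grant could look like. The AccessGrant and DataWallet names and all fields are hypothetical illustrations, not part of Solid, Inrupt, or any existing wallet API.

```python
from dataclasses import dataclass, field
from datetime import date

# Hypothetical illustration of a scoped grant: a specific organisation,
# specific data categories, a specific purpose, with an expiry date.
@dataclass(frozen=True)
class AccessGrant:
    organisation: str           # who may access the data
    categories: frozenset[str]  # which personal data, e.g. {"health"}
    purpose: str                # what it may be used for
    expires: date               # grants are time-limited

@dataclass
class DataWallet:
    data: dict[str, dict]                  # category -> personal data
    grants: list[AccessGrant] = field(default_factory=list)

    def grant(self, g: AccessGrant) -> None:
        self.grants.append(g)

    def request(self, organisation: str, category: str,
                purpose: str, today: date) -> dict | None:
        """Release data only if a matching, unexpired grant exists."""
        for g in self.grants:
            if (g.organisation == organisation
                    and category in g.categories
                    and g.purpose == purpose
                    and today <= g.expires):
                return self.data.get(category)
        return None  # no matching grant: the wallet stays closed

wallet = DataWallet(data={"health": {"steps_per_day": 8000}})
wallet.grant(AccessGrant("acme-insurance", frozenset({"health"}),
                         "premium-quote", date(2026, 1, 1)))
print(wallet.request("acme-insurance", "health", "premium-quote",
                     date(2025, 6, 1)))  # released: grant matches
print(wallet.request("acme-insurance", "health", "advertising",
                     date(2025, 6, 1)))  # None: wrong purpose
```

The design choice this illustrates is that the default is refusal: data leaves the wallet only when organisation, data category, and purpose all match an unexpired grant.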
Unlikely, but you never know
Despite proposing solutions, his vision is the underdog here. The chances of it prevailing in the face of big tech power are limited. But it is a powerful message that better alternatives are possible. This message can motivate citizens and leaders to push for a fairer internet that maximises social good.
The future of the internet and the future of humanity are interwoven. Will we actively engage to shape the future we want, or will we be helpless, passive consumers? The worsening, or “enshittification”, of services has become an almost inevitable part of the innovation cycle. Many of us now wonder when, not if, the service we receive will start to degrade dramatically once we are locked in.
There is dissatisfaction, but this has not yet led to people changing their habits, possibly because there have not been better alternatives. Berners-Lee made the World Wide Web a success because his solution was more decentralised than the alternatives. People are now seeing the results of an over-centralised internet, and they want to go back to those decentralised principles.
Berners-Lee has offered an alternative vision. To succeed, it would need support from both consumers and states. That may seem unlikely, but once, so did the idea that the world would be connected via a single online information-sharing platform.

Article published in The Conversation, republished under Creative Commons licence.

Reference
Zarifis A. (2025) ‘Tim Berners-Lee wants everyone to own their own data – his plan needs state and consumer support to work’, The Conversation. Available from: https://doi.org/10.64628/AB.yq5sssjr3

Financial technology, often referred to as Fintech, and sustainability are two of the biggest influences transforming many organizations. However, not all organizations move forward on both with the same enthusiasm. Leaders in Fintech do not always prioritize operating in a sustainable way. It is, therefore, important to find the synergies between Fintech and sustainability.

One important aspect of the transformation many organizations are going through is the consumers' perspective, particularly the trust they have, their personal information privacy concerns, and the vulnerability they feel. It is important to clarify whether leadership in Fintech combined with leadership in sustainability is more beneficial than leadership in Fintech on its own.

This research evaluates consumers’ trust, privacy concerns, and vulnerability in two scenarios separately and then compares them. Firstly, it validates whether leadership in Fintech influences trust in Fintech, concerns about the privacy of personal information when using Fintech, and the feeling of vulnerability when using Fintech. It then compares trust, privacy concerns and vulnerability across the two scenarios: one with leadership in both Fintech and sustainability, and one with leadership in Fintech alone.

Figure 1. Leadership in Fintech, trust, privacy and vulnerability, with and without sustainability

The findings show that, as expected, leadership in both Fintech and sustainability builds trust more, which in turn reduces vulnerability more. Privacy concerns are lower when sustainability leadership and Fintech leadership come together; however, the difference was not statistically significant. So, contrary to what was expected, privacy concerns are not reduced significantly more when there is leadership in both.
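For readers unfamiliar with this kind of comparison, the sketch below shows how a difference in privacy concern scores between the two scenarios might be tested for significance. The scores are invented placeholders, not the study’s data, and the generic independent-samples t-test shown is an illustration, not the paper’s actual analysis.

```python
# Minimal sketch: compare privacy concern scores (1-5 Likert scale)
# across the two leadership scenarios. The numbers are invented.
from scipy import stats

fintech_only = [3.8, 4.1, 3.5, 4.0, 3.9, 4.2, 3.7]
fintech_plus_sustainability = [3.6, 3.9, 3.4, 3.8, 3.7, 4.0, 3.5]

t, p = stats.ttest_ind(fintech_only, fintech_plus_sustainability)
print(f"t = {t:.2f}, p = {p:.3f}")
# A p-value above 0.05 would mirror the finding described above:
# privacy concerns look lower in the combined scenario, but the
# difference is not statistically significant.
```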

The findings support the link between sustainability in the processes of a Fintech and its success. While the limited research looking at Fintech and sustainability finds support for the link between them by taking a 'top-down' approach and evaluating Fintech companies against benchmarks such as economic value, this research takes a 'bottom-up' approach by looking at how Fintech services are received by consumers.

An important practical implication of this research is that even when there is sufficient trust to adopt and use Fintech, the consumer often still feels a sense of vulnerability. This means that leaders in Fintech must not just think about doing enough for the consumer to adopt their service; they should go beyond that and try to build trust and reduce privacy concerns to the degree that the consumer’s sense of being vulnerable is also reduced.

These findings can inform a Fintech’s business model and the services it offers consumers.

Reference

Zarifis A. (2024) ‘Leadership in Fintech builds trust and reduces vulnerability more when combined with leadership in sustainability’, Sustainability, 16, 5757, pp.1-13. https://doi.org/10.3390/su16135757 (open access)

Featured by FinTech Scotland: https://www.fintechscotland.com/leadership-in-fintech-builds-trust-and-reduces-vulnerability/

New research!

Central Bank Digital Currencies (CBDCs) are digital money issued, and backed, by a central bank. Consumer trust can encourage or discourage the adoption of this currency, which is also a payment system and a technology. CBDCs are an important part of the new Fintech solutions disrupting finance, and society more generally. This research attempts to understand consumer trust in CBDCs so that the development and adoption stages are more effective, and satisfying, for all the stakeholders. It verified the importance of trust in CBDC adoption and developed a model of how trust in a CBDC is built (Zarifis & Cheng 2023).

Figure 1. Model of how trust in a Central Bank Digital Currency (CBDC) is built in six ways

There are six ways to build trust in CBDCs. These are: (1) trust in the government and central bank issuing the CBDC, (2) expressed guarantees for the user, (3) the positive reputation of existing CBDCs active elsewhere, (4) the automation and reduced human involvement achieved by the CBDC technology, (5) the trust-building functionality of the CBDC wallet app, and (6) privacy features of the CBDC wallet app and back-end processes, such as anonymity. The first three trust-building methods relate to trust in the institutions involved, while the final three relate to trust in the technology used. Trust in the technology is like the walls of a new building, and institutional trust is like the buttresses that support it.
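As a rough illustration, the six mechanisms can be written down as a checklist grouped the way the chapter groups them. The coverage function and all names below are hypothetical, invented for this sketch rather than taken from the chapter.

```python
# The six trust-building mechanisms, grouped as in the chapter:
# the first three institutional, the last three technological.
CBDC_TRUST_MECHANISMS = {
    "institutional": [
        "trust in the issuing government and central bank",
        "expressed guarantees for the user",
        "positive reputation of existing CBDCs elsewhere",
    ],
    "technological": [
        "automation and reduced human involvement",
        "trust-building functionality of the wallet app",
        "privacy features such as anonymity",
    ],
}

def coverage(implemented: set[str]) -> dict[str, float]:
    """Share of mechanisms a CBDC project has addressed, per group."""
    return {group: sum(m in implemented for m in ms) / len(ms)
            for group, ms in CBDC_TRUST_MECHANISMS.items()}

print(coverage({"expressed guarantees for the user",
                "privacy features such as anonymity"}))
# {'institutional': 0.333..., 'technological': 0.333...}
```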

This research has practical implications for the various stakeholders involved in implementing and operating a CBDC, but also for the stakeholders in the ecosystem using CBDCs. The stakeholders involved in delivering and operating CBDCs, such as governments, central banks, regulators, retail banks and technology providers, can apply the six trust-building approaches so that the consumer trusts a CBDC and adopts it.

Dr Alex Zarifis

Reference

Zarifis A. & Cheng X. (2023) ‘The six ways to build trust and reduce privacy concern in a Central Bank Digital Currency (CBDC)’. In Zarifis A., Ktoridou D., Efthymiou L. & Cheng X. (ed.) Business digital transformation: Selected cases from industry leaders, London: Palgrave Macmillan, pp.115-138. https://doi.org/10.1007/978-3-031-33665-2_6

Mobile apps utilize the features of a mobile device to offer an ever-growing range of functionalities. These apps access our personal data, utilizing both the sensors on the device and big data from several sources. Nowadays, Artificial Intelligence (AI) is enhancing the ability to utilize more data and gain deeper insight. This increase in access to, and utilization of, personal information offers benefits, but also challenges to trust. The reason we are re-evaluating trust in this scenario is that we need to re-calibrate for the increasing role of AI.

This research explores the role of trust, from the consumer’s perspective, when purchasing mobile apps with enhanced AI. Models of trust from e-commerce are adapted to this specific context. The model developed was tested, and the results support it.

Figure 1. Consumer trust and privacy concerns in mobile apps with enhanced AI

The intention to use the mobile app is found to be impacted by four factors: (1) propensity to trust, (2) institution-based trust, (3) trust in the mobile app, and (4) the perceived sensitivity of personal information.

The first three of those four are broken down further into their constituent parts. (1) Propensity to trust is based on a person’s (1a) trusting stance in general and (1b) their general faith in technology. (2) Institution-based trust is strengthened by (2a) structural assurance and (2b) situational normality. Structural assurance of the internet includes guarantees, regulation, promises and related laws. The user’s evaluation of situational normality can be formed by app reviews. Out of the whole model, the institution-based factors are the weakest.

Trust in the mobile app (3) is more complex: it is based on five variables. These are (3a) trust in the vendor, (3b) trust in the app’s functionality, (3c) trust in the genuineness of the app, (3d) how human the technology appears to be, and (3e) trust in how personal data is used.
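To illustrate the shape of the model, here is a minimal sketch that scores intention to use as a weighted combination of the four factors, with trust in the app as a composite of its five sub-variables. All weights are invented for illustration; the paper estimates the actual relationships from survey data rather than asserting these numbers.

```python
# Hypothetical weighted-score sketch of the model's structure.
def trust_in_app(vendor, functionality, genuineness, humanness, data_use):
    # (3a)-(3e): trust in the mobile app as a composite of five variables
    return (vendor + functionality + genuineness + humanness + data_use) / 5

def intention_to_use(propensity, institution, app_trust, sensitivity):
    # (1)-(4): the three trust factors raise intention, perceived
    # sensitivity of personal information lowers it. The small weight
    # on institution-based trust echoes it being the weakest factor.
    return (0.25 * propensity + 0.15 * institution
            + 0.45 * app_trust - 0.15 * sensitivity)

app = trust_in_app(0.8, 0.9, 0.85, 0.6, 0.7)
print(intention_to_use(propensity=0.7, institution=0.5,
                       app_trust=app, sensitivity=0.6))
```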

Those are the main findings of this research. The model is helpful because it can guide the stakeholders involved in mobile apps in how to build trust. By using the model they can identify what they need to communicate better, and what they need to change in the apps or elsewhere in the ecosystem.

Reference

Zarifis A. & Fu S. (2023) ‘Re-evaluating trust and privacy concern when purchasing a mobile app: Re-calibrating for the increasing role of Artificial Intelligence’, Digital, vol.3, no.4, pp.286-299. https://doi.org/10.3390/digital3040018 (open access)

#trust #information privacy #artificial intelligence #mobile commerce #mobile apps #big data

Dr Alex Zarifis

Several countries’ economies have been disrupted by the sharing economy. However, each country and its consumers have different characteristics, including the language used. When the language is different, does it change the interaction? If we have a discussion in English and a similar discussion in German, will it have exactly the same meaning, or does language lead us down a different path? Is language a tool, or a companion holding our hand on our journey?

This research compares the text in the profiles of those offering their properties in England in English and in Germany in German, to explore whether trust is built, and privacy concerns are reduced, in the same way.

Figure 1. How landlords build trust in the sharing economy

The landlords make an effort to build trust in themselves and in the accuracy of the description they provide. They build trust with six methods: (1) The first is the level of formality in the description; more formality conveys professionalism. (2) The second is distance and proximity; some landlords want to keep a distance so it is clear that this is a formal relationship, while others try to be more friendly and approachable. (3) The third is ‘emotiveness’ and humour, which can create a sense of shared values. (4) The fourth is being assertive or passive-aggressive, which sends a message that the rules given in the description are expected to be followed. (5) The fifth is conformity to the platform’s language style and terminology, which suggests that the platform’s rules will be followed. (6) Lastly, the sixth is setting boundaries, which offers clarity and transparency.
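As a toy illustration of how such cues could be measured in listing text, the sketch below scores two of the six (formality and emotiveness) with invented word lists. This simple scoring is an illustration only, not the study’s method, and the word lists are hypothetical.

```python
# Crude sketch: score formality and emotiveness cues in a listing
# description. Word lists and scoring are invented for illustration.
import re

FORMAL = {"please", "kindly", "regards", "required"}
EMOTIVE = {"love", "cosy", "wonderful", "!", ":)"}

def cue_scores(description: str) -> dict[str, float]:
    tokens = re.findall(r"[\w']+|[!:)]+", description.lower())
    n = max(len(tokens), 1)
    return {
        "formality": sum(t in FORMAL for t in tokens) / n,
        "emotiveness": sum(t in EMOTIVE for t in tokens) / n,
    }

print(cue_scores("You will love this cosy flat! Kindly remove shoes."))
```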

Privacy concerns are not usually reduced directly by the landlord, as this is left to the platform. The findings indicate that language has a limited influence and that the platform’s norms and habits have the largest influence. We can say that the platform has choreographed this dance between the participants sufficiently well that different languages have a limited influence on the outcome.

Reference

Zarifis A., Ingham R. & Kroenung, J. (2019) ‘Exploring the language of the sharing economy: Building trust and reducing privacy concern on Airbnb in German and English’, Cogent Business & Management, vol.6, iss.1, pp.1-15. https://doi.org/10.1080/23311975.2019.1666641 (open access)

Dr Alex Zarifis

Ransomware attacks are not a new phenomenon, but their effectiveness has increased, causing far-reaching consequences that are not fully understood. The ability to disrupt core services, the global reach, the extended duration, and the repetition of these attacks have increased their ability to harm an organization.

One aspect that needs to be understood better is the effect on the consumer. The consumer in the current environment is exposed to new technologies that they are considering adopting, but they also have strong habits of using existing systems. Their habits have developed over time, with their trust increasing in the organization they deal with directly and in the institutions supporting it. The consumer now shares a significant amount of personal information with the systems they habitually use. These repeated positive experiences create an inertia that is hard for the consumer to break out of. This research explores whether global, extended, and repeated ransomware attacks reduce trust and inertia sufficiently to change long-held habits in using information systems. The model developed captures the cumulative effect of this form of attack and evaluates whether it is sufficiently harmful to overcome the e-loyalty and inertia built over time.

Figure 1. The steps of a typical ransomware attack

This research combines studies on inertia and resistance to switching systems with a more comprehensive set of variables that cover the current e-commerce status quo. Personal information disclosure is included along with inertia and trust as it is now integral to e-commerce functioning effectively.

As you can see in the figure, the model covers the seven factors that influence the consumer’s decision to stop using an organization’s system because of a ransomware attack. The factors are in two groups. The first group is the ransomware attack itself, which includes the (1) ransomware attack effect, (2) duration and (3) repetition. The second group is the e-commerce environment status quo, which includes (4) inertia, (5) institutional trust, (6) organizational trust and (7) information privacy.

Figure 2. Research model: The impact of ransomware attacks on the consumer’s intentions
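As an illustration of how the two groups of factors pull in opposite directions, here is a minimal sketch expressing switching intention as a logistic score. The coefficients and inputs are invented; the paper estimates the model from survey data rather than asserting these weights.

```python
# Hypothetical logistic-score sketch of the seven-factor model.
from math import exp

def switch_probability(attack_effect, duration, repetition,
                       inertia, institutional_trust,
                       organizational_trust, privacy_concern):
    # Attack-side factors (1)-(3) push towards switching; the status
    # quo factors (4)-(6) pull against it; privacy concern (7) pushes
    # towards switching. All coefficients are invented.
    z = (1.2 * attack_effect + 0.8 * duration + 0.9 * repetition
         - 1.0 * inertia - 0.7 * institutional_trust
         - 0.9 * organizational_trust + 0.6 * privacy_concern)
    return 1 / (1 + exp(-z))  # logistic link

# A severe, repeated attack against a customer with high inertia:
print(switch_probability(0.9, 0.7, 0.8, 0.9, 0.6, 0.7, 0.5))
```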

The implications of this research are both theoretical and practical. The theoretical contribution is highlighting the importance of this issue to Information Systems and business theory. This is not just a computer science and cybersecurity issue. We also linked the ransomware literature to user inertia in the model.

There are three practical implications. Firstly, by understanding the impact on the consumer better, we can develop a better strategy to reduce the effectiveness of ransomware attacks. Secondly, processes can be created to manage such disasters as they are happening and maintain a positive relationship with the consumer. Lastly, organizations can develop a buffer of goodwill and e-loyalty that would absorb the negative impact on the consumer from an attack and stop them reaching the point where they decide to switch systems.

Dr Alex Zarifis presenting research on ransomware

References

Zarifis A., Cheng X., Jayawickrama U. & Corsi S. (2022) ‘Can Global, Extended and Repeated Ransomware Attacks Overcome the User’s Status Quo Bias and Cause a Switch of System?’, International Journal of Information Systems in the Service Sector (IJISSS), vol.14, iss.1, pp.1-16. Available from (open access): https://doi.org/10.4018/IJISSS.289219

Zarifis A. & Cheng X. (2018) ‘The Impact of Extended Global Ransomware Attacks on Trust: How the Attacker’s Competence and Institutional Trust Influence the Decision to Pay’, Proceedings of the Americas Conference on Information Systems (AMCIS), pp.2-11. Available from: https://aisel.aisnet.org/amcis2018/Security/Presentations/31/

By Alex Zarifis and the TrustUpdate.com team

The capabilities of Artificial Intelligence are increasing dramatically, and it is disrupting insurance and healthcare. In insurance, AI is used to detect fraudulent claims, and natural language processing is used by chatbots to interact with the consumer. In healthcare, it is used to make a diagnosis and plan the treatment. The consumer is benefiting from customized health insurance offers and real-time adaptation of fees. Currently, however, the interface between the consumer purchasing health insurance and AI raises barriers such as insufficient trust and privacy concerns.

Consumers are not passive about the increasing role of AI. Many consumers have beliefs about what this technology should do. Furthermore, regulation is moving toward making it necessary for the use of AI to be explicitly revealed to the consumer (European Commission 2019). Therefore, the consumer is an important stakeholder, and their perspective should be understood and incorporated into future AI solutions in health insurance.

Dr Alex Zarifis discussing Artificial Intelligence at Loughborough University

Recent research at Loughborough University (Zarifis et al. 2020) identified two scenarios: one with limited AI that is not in the interface and whose presence is not explicitly revealed to the consumer, and a second with an AI interface and AI evaluation that is explicitly revealed to the consumer. The findings show that trust is lower when AI is used in the interactions and is visible to the consumer. Privacy concerns were also higher when the AI was visible, but the difference was smaller. The implications for practice relate to how the reduced trust and increased privacy concerns with visible AI can be mitigated.

Mitigate the lower trust with explicit AI

The causes are reduced transparency and explainability. Firstly, a statement at the start of the consumer journey about the role AI will play, and how it works, will increase transparency and reinforce trust. Secondly, the importance of trust increases as the perceived risk increases; therefore, the risks should be reduced. Thirdly, it should be illustrated that the increased use of AI does not reduce the inherent humanness. For example, it can be shown how humans train AI and how AI adopts human values.

Mitigate the higher privacy concerns with explicit AI

The consumer is concerned about how AI will utilize their financial, health and other personal information. Health insurance providers offer privacy assurances and privacy seals, but these do not explicitly refer to the role of AI. Assurances can be provided about how AI will use, share and securely store the information. These assurances can include some explanation of the role of AI and cover confidentiality, secrecy and anonymity. For example, while the consumer’s information may be used to train machine learning models, it can be made clear that it will be anonymized first. The consumer’s perceived privacy risk can also be mitigated by making clear the regulation that protects them.
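As a small illustration of the anonymization step mentioned above, the sketch below strips direct identifiers from records before they are used for training. The field names are hypothetical, and a real pipeline would also need to handle quasi-identifiers (for example, with k-anonymity), which this sketch omits.

```python
# Minimal sketch: remove direct identifiers before records are used
# to train a model. Field names are hypothetical illustrations.
DIRECT_IDENTIFIERS = {"name", "email", "policy_number", "address"}

def anonymize(record: dict) -> dict:
    return {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}

claims = [{"name": "A. Sample", "email": "a@example.com",
           "age_band": "30-39", "claim_amount": 1200.0}]
training_data = [anonymize(r) for r in claims]
print(training_data)  # identifiers removed before training
```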

References

European Commission (2019) ‘Ethics Guidelines for Trustworthy AI’. Available from: https://ec.europa.eu/digital

Zarifis A., Kawalek P. & Azadegan A. (2020). ‘Evaluating if Trust and Personal Information Privacy Concerns are Barriers to Using Health Insurance that Explicitly Utilizes AI’, Journal of Internet Commerce, pp.1-19. https://doi.org/10.1080/15332861.2020.1832817 (open access)

This article was first published on TrustUpdate.com: https://www.trustupdate.com/news/are-trust-and-privacy-concerns-barriers-to-using-health-insurance-that-explicitly-utilizes-ai/