Developing strong relationships is a basic human need; they are essential to our long-term wellbeing. Studies have shown that those of us lucky enough to have stable, continuous friendships are generally healthier and live longer. Unfortunately, in a world where we are more connected than ever, many of us feel more isolated and are turning for dialogue to something more accessible: technology. This second paper in our trust trilogy therefore focuses on the psychological issues emerging from recent developments in digital technology, issues that should shape our thinking around the design of more sustainable AI systems.
Many developers exploit the basic human need for friendship by anthropomorphising their products, because establishing personality traits with which the user is familiar accelerates the process of building trust. Indeed, people are more likely to trust a robot that has human characteristics than one that does not. For the developer this may seem an appealing approach, but it is a double-edged sword. Using ethopoeia to build a system ‘persona’ can create trust, yet it also risks losing it altogether. If trust in someone or something you have befriended is lost because it fails to meet expectations, the emotional ‘fall’ is far greater. This may not seem at all rational, but in the hands of an inexperienced user the emotional element of trust can be far stronger, precisely because they lack an understanding of the technology. This forms the basis of a major issue for developers: overtrust. Once that trust is compromised, it can breed adversarial feelings in the mind of the user, causing them to sideline the system or even block its implementation.
The recent Horizon scandal is a clear example of too much trust being placed in digital technology that has not undergone sufficient testing. Even when tragedies like this demonstrate the danger of stakeholder overtrust in technology, there are many more lessons we still need to learn. Designers must account for this human weakness in their approach, particularly when implementing services in which we are required to place our trust, such as those provided by government or the health service.
At the heart of this problem lies the human tendency to trust technology where our own knowledge is lacking, because we assume others have tested it on our behalf. Overtrust is most likely to occur at the edges of our own capabilities and in safety-critical scenarios, which are often the most dangerous. Whilst it is clear that AI systems cannot lie, their conclusions can be wrong, and so the widespread implementation of such products without careful consideration has the potential for disaster. In the rush to exploit AI, products are arriving on the market in an immature state, something clearly illustrated by the number withdrawn once their limitations were observed. Vendors need to be explicit about the capabilities of their products and, more importantly, about their limits; otherwise our trust in some large digital brands will be severely undermined.
One of the most concerning issues arising from the use of generative AI is hallucination. As humans, we learn from information and observation. If we are fed the wrong data, we may unwittingly fail to perceive the truth, creating a hallucination rather than a lie. AI is no different. Such systems are simply looking for patterns in data, so if that source of information is compromised it is unlikely that the intended purpose of the system will be achieved. These false perceptions of reality are a source of major concern for those observing how AI is used, particularly when decisions taken or influenced by such content could be life-threatening. The more critical those decisions, the greater the care we need to take in design, and that starts with the data.
The challenge for AI is the amount of data required to develop real confidence in the analysis and predictions that form the basis of decision making. AI systems such as large language models are trained primarily on data scraped from the internet, which is inherently biased by those populating the information source. If unchecked, this can lead to highly undesirable outcomes from any analysis performed, a situation made critically worse as the volume of disinformation continues to grow. Alternatively, if too little information exists, the AI system cannot have a full awareness of potential outcomes and, again, its outputs will be compromised. This situation is exacerbated within commercial organisations, which are highly sensitive to the public release of data that would compromise the protection of their intellectual property. Such precautions can force systems to train their algorithms on a far more restricted dataset of internal information, which can be trusted but exhibits proximity bias. These approaches will only amplify internal bias and reduce the reliability of decision making.
Spotting what is and isn’t real is challenging at the best of times, but with recent advances in AI-generated imagery the picture becomes even more confusing. Historically, art and photographs have been faked to mislead observers into believing the creator’s desired view of reality. Until now this ability was limited to those skilled in the field, but today it is in the hands of everyone; indeed, it is a marketing feature of most advanced mobile phones. Unlike the written word, where our senses find it hard to immediately separate fact from fiction, our visual senses are far better tuned to spotting fakes. Developers of immersive environments and humanoid robots are well aware of a phenomenon known as the ‘uncanny valley’, where the closer something comes to reality, the more our senses are heightened to detect that something is not quite right. This can cause observers discomfort or even revulsion towards an artefact, and, problematically for the AI designer, it is wired into our DNA and cannot be bypassed. So, seeing is not always believing.
If, however, we are fooled by such techniques and faked images do cause offence, or worse, then issues of liability will continue to be raised. As a consequence, social media companies are now investing heavily to address this challenge, though many are sceptical that this approach will work. One thing they do agree on is that spotting fake outcomes from generative AI tools such as ChatGPT is a bridge too far. That, it seems, is something we will have to solve with old-school methods: we will simply need to check, check and check again any information we receive. Our next article will look at how better design can help us address these issues.
In this series, Ian Risk, Non-Executive Director of Intellium AI, is joined by psychologist Charlotte Preen to assess the importance of the human approach to establishing trust in this emerging technology. This article was produced using solely human endeavour; you will have to believe us on that one.