January 23, 2025

Building patient trust in AI

Ian Wharton
Founder, Chief Executive Officer

Claudio Ciborra, a distinguished Professor of Information Systems at the London School of Economics, said that anytime we bring new technology into our lives, “it arrives as an ambiguous stranger.”

He conceptualised how these ‘strangers’ either integrate harmoniously with the host's environment or become unruly if they conflict with existing values and identities.

He could have been writing about AI in healthcare.

Over the past four years at Aide Health, we have placed structured, auditable conversational AI in the hands of patients to help them better manage multiple chronic conditions. We often reflect on Ciborra’s writing. At its core, this idea is about trust. There is a leap of faith in using novel technology, followed by an ongoing evaluation to see if it aligns with our values. If it doesn’t, it remains an outsider.

For patients, AI can fulfil its promise only if its relationship with the host is built on trust. Establishing that trust is full of contradictions in healthcare. Beyond the integrity of the infrastructure itself (e.g. data security), the following are two of the most important but less-discussed questions of trust.

1. Patient willingness to confide in AI

A study by the University of Southern California (USC)[1] explored patient willingness to disclose personal information to virtual interviewers compared to humans. The research found that participants were more forthcoming with sensitive information when interacting with a digital agent. People were more honest about their symptoms and even more likely to show intense signs of sadness when they believed a human wasn’t on the other side of the conversation.

The researchers concluded that the outcome favouring the virtual human was due to a lack of perceived judgment. This has been demonstrated outside the use of AI. Studies repeatedly show[2] that many patients withhold medically relevant information from their clinician due to fear of judgment or not wanting to let them down.

We see these signals at Aide Health. Built on Google technology, our platform has short, daily conversations with patients to help them adhere to their treatment and uncover the reasons why they might be struggling. In one real-world NHS evaluation of Aide across 200 people with asthma[3], 25.6% willingly told Aide they had stopped taking their chronic medication when they felt their condition was under control. This behaviour can significantly increase the risk of exacerbations in chronic disease, and when clinicians know about it they can engage in more meaningful discussions or provide education. It is a behaviour not so easily divulged face-to-face with a human.

The USC study suggests that bringing AI into the clinical setting could improve the accuracy and quality of patient-reported information. Trust born of a presumed lack of judgment from AI may help remove obstacles of embarrassment or fear and surface vital findings that are otherwise difficult to obtain.

2. Attitudes towards high-stakes decisions

Researchers at the University of Arizona present the other side of our hospitality towards AI[4]. Their work examined the factors that influence the preference for, and acceptance of, this technology in healthcare.

The study showed that people fear AI’s coldness more than the potential for human error. Respondents often described AI as "detached", and while accepting that its precision may exceed that of a fallible human, they raised concerns that AI cannot comprehend the emotional dimensions of care. This was more pronounced where empathy and rapport are critical, such as when delivering a diagnosis.

The coldness was further amplified in populations with distinct cultural expectations of care. The study found that certain minority groups expressed heightened scepticism towards AI-driven diagnostics, citing concerns about bias or a lack of personalisation. Contrary to most discourse on AI in health today, this was not solely a result of how the AI was trained. The research suggests these attitudes are shaped significantly by how the traditional healthcare system has supported each patient previously. These ‘fingerprints’ influence perceptions of trust and efficacy in the novel technology at the individual level.

Trust in AI cannot be divorced from the patient’s prior experience.

A way forward

Both the USC and Arizona studies point to ‘detachment’ as a critical characteristic of AI in healthcare. Detachment is not inherently a weakness. It is better understood as a form of neutrality. In the USC study, neutrality (lack of judgment) created psychological safety and increased patient openness. In the Arizona study, neutrality (coldness) left patients feeling excluded.

That exclusion is critical because shared decision-making (SDM) is a widely accepted cornerstone of good healthcare. It focuses on sharing accurate, comprehensible evidence, eliciting patient values, reaching mutual agreement, and supporting patient autonomy, all of which can lead to better health outcomes[5]. Patients are more likely to follow through with decisions they actively participate in. SDM also mitigates "preference misdiagnosis", where clinicians (or, in this case, systems) incorrectly assume or fail to learn what is important to the patient, leading to decisions that may not be in the patient’s best interests[6].

Most modern studies on AI in healthcare agree that the augmentation of clinicians, rather than their substitution, is the path forward. It is an approach we believe in at Aide Health, but it needs more depth. What we may require is a concept of ‘adaptive neutrality’ to shape that depth: neutrality deliberately designed into the system in some areas, such as the self-reporting of sensitive information, while ensuring it isn’t accidentally fostered in others, such as the moment of treatment choice, with the system switching gracefully between the two modes depending on the context. SDM could help ensure neutrality is intentionally applied.
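To make the idea concrete, here is a minimal, illustrative sketch of how such context-dependent switching might look. Everything in it (the InteractionMode and ConversationContext types, the select_mode function) is hypothetical and not part of Aide Health’s platform or any Google API; it simply shows a system choosing its tone deliberately rather than by accident.

```python
from dataclasses import dataclass
from enum import Enum, auto


class InteractionMode(Enum):
    NEUTRAL = auto()        # non-judgemental, fact-gathering tone for self-reporting
    COLLABORATIVE = auto()  # warmer, SDM-oriented tone for decision moments


@dataclass
class ConversationContext:
    topic: str                  # e.g. "medication_adherence" or "treatment_choice" (illustrative labels)
    involves_decision: bool     # is the patient being asked to choose between options?
    sensitive_disclosure: bool  # might the patient be reporting something they would withhold from a clinician?


def select_mode(ctx: ConversationContext) -> InteractionMode:
    """Choose the interaction mode for the next turn of the conversation."""
    if ctx.involves_decision:
        # Treatment choices call for empathy, value elicitation and shared decision-making,
        # so neutrality should not be accidentally fostered here.
        return InteractionMode.COLLABORATIVE
    if ctx.sensitive_disclosure:
        # Self-reporting benefits from perceived neutrality: no judgement, more openness.
        return InteractionMode.NEUTRAL
    # Default to neutral fact-gathering when neither flag applies.
    return InteractionMode.NEUTRAL


# Asking about missed doses stays neutral; discussing a change of treatment does not.
adherence = ConversationContext("medication_adherence", involves_decision=False, sensitive_disclosure=True)
choice = ConversationContext("treatment_choice", involves_decision=True, sensitive_disclosure=False)
assert select_mode(adherence) is InteractionMode.NEUTRAL
assert select_mode(choice) is InteractionMode.COLLABORATIVE
```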

Ciborra’s hospitality metaphor is a sound one for the use of AI in healthcare. It is a reminder of the simultaneous potential for enrichment or disruption, depending on the dynamics of hospitality. It helps us consider the messier, real-world use of new technology rather than the idealised one. For example, it takes cultural competence to recognise that even though the technology is novel, a patient’s history with the healthcare system likely isn’t, and that history may bring additional challenges to navigate.

The use of AI in healthcare is a bell that cannot be unrung. It is already patient-facing, prescribed or otherwise. Health outcomes therefore already depend, in part, not only on how good these systems are but also on how trusted they are. Acknowledging the arrival of Ciborra’s “ambiguous stranger” can help us design AI applications so they don’t become unruly but instead understand and respect the values and identity of the host.

References

  1. Gratch J, Lucas GM, King A, Morency L-P. It's only a computer: the impact of human-agent interaction in clinical interviews. In: Proceedings of the 2014 International Conference on Autonomous Agents and Multi-Agent Systems (AAMAS '14). International Foundation for Autonomous Agents and Multiagent Systems, Richland, SC; 2014. p. 85–92. https://ict.usc.edu/pubs...
  2. Ajayi, K.V., Olowolaju, S., Bolarinwa, O.A. et al. Association between patient-provider communication and withholding information due to privacy concerns among women in the United States: an analysis of the 2011 to 2018 Health Information National Trends Survey. BMC Health Serv Res 23, 1155 (2023). https://doi.org/10.1186/s12913-023-10112-7
  3. Aide Health, Suffolk Primary Care, 2024. Saving time and reducing hospitalisation risk with digital long-term condition platform. https://www.aide.health/case-studies/suffolk
  4. Robertson C, Woods A, Bergstrand K, Findley J, Balser C, Slepian MJ. Diverse patients’ attitudes towards Artificial Intelligence (AI) in diagnosis. PLOS Digit Health. 2023;2(5):e0000237. doi:10.1371/journal.pdig.0000237. Available from: https://journals.plos.org/digitalhealth...
  5. Carmona C, Crutwell J, Burnham M, Polak L. Shared decision-making: summary of NICE guidance. BMJ 2021;373:n1430. doi:10.1136/bmj.n1430. https://www.bmj.com/content/373/bmj.n1430
  6. Mulley AG, Trimble C, Elwyn G. Stop the silent misdiagnosis: patients’ preferences matter. BMJ 2012;345:e6572. doi:10.1136/bmj.e6572. https://www.bmj.com/content/345/bmj.e6572
