Dr Isabel Straw / UKRI Research Fellow in Artificial Intelligence / Emergency Medicine Doctor
While most articles for Adventure Medic come from far away places, this one concerns another frontier altogether – Artificial Intelligence in medicine and its implications for the patients we care for, at home and overseas. Isabel Straw is an Emergency Medicine doctor, studying for a PhD at University College London on the ‘Artificial Intelligence and Healthcare’ programme. Previously, she worked for the United Nations, and in Syria, with the Syrian American Medical Society. In this fascinating essay, Isabel gives us an insight into the world of medical AI, including some of its ethical dilemmas, as well as her incredible career so far.
The year is 2030 and you are the Intensive Care Registrar on call. Having received your third referral for a patient requiring high-level support on ITU, you look nervously at your patient list. In recent years, the hospital you work at has integrated an algorithmic system for predicting the likely mortality of patients, ascribing a survival likelihood score to each patient depending on their pathway of care. The three patients that have been referred to you differ very little in their physiological and clinical background, yet the machine learning model has identified Patient Three as the most likely survivor if offered the only remaining ITU bed. Over the past decade, Artificial Intelligence (AI) has proliferated across social sectors, and it is common knowledge that it often surpasses human ability to evaluate and predict events. You are also aware of the complex history of AI bias and recent news reports of AI algorithms discriminating against certain demographic groups (see glossary). If you challenge the AI decision you must be able to justify this to your senior. If you act on the AI’s decision you must be able to explain your reasoning to the families of the patients who will not receive an ITU bed.
In my current role, which sits at the intersection of Artificial Intelligence, healthcare inequalities and clinical medicine, my work is concerned with the ethics of these evolving systems. AI is rapidly disseminating throughout society, often with little public awareness, despite its increasing impact on our daily lives. In this article, I intend to give you an insight into the world of medical AI and highlight why medical AI ethics matters. I will share with you how I ended up in my role, through the prism of books, experiences, and the role models that led me here.
Stuart Maconie describes himself as a ‘Child of the Welfare state’, a term which rang true for me having grown up with the welfare state playing a central role in my life.1 For those who have been shaped by the state, the political is always personal. Both my medical career and subsequent journey into medical AI have been shaped by my passion for social justice. My interest in politics, economics, and philosophy has been driven by the same sentiments which drew me to medicine – to reduce suffering and address harm. The political philosophy of the incumbent government and their subsequent decisions on public policy shape the distribution of diseases within the population. Throughout medical history, doctors have played a role in socio-political discourse, and it was Rudolf Virchow who said – ‘Medicine is a social science and politics is nothing else but medicine on a large scale’. 2,3 Whilst we may learn of Virchow’s notable contributions to pathophysiology at medical school, he was also a political advocate for his patients and promoted the role of doctors in the wider social context.2
At first glance, the A&E (Accident and Emergency) department seems to be a setting of social levelling; for, no matter the patient’s background, income, privilege, or status, everyone is equal in the NHS. This notion, combined with the A&E’s flattened professional hierarchy, drew me to choose it as a discipline. However, you cannot be an A&E doctor without noticing the impact of poverty, environment, family and demographic factors on health trajectories. Michael Marmot’s influential writings (‘The Health Gap’, ‘Status Syndrome’) formed part of my early education regarding the social determinants of health. I learned how life expectancy depended on postcode, how education level influenced disease mortality and how the steepness of the social ladder negatively affects the health of all.4,5 Moreover, it was through the works of Angela Saini and Elinor Cleghorn that I discovered that healthcare inequalities cannot be ascribed to socioeconomic factors alone, but that the practice of medicine itself perpetuates historic harms that maintain social inequities.6,7 The books ‘Superior’, ‘Inferior’, and ‘Unwell Women’ describe the scientific sexism, racism and classism that are rooted in medical history and continue to harm patients from different demographic backgrounds.6,7,8
Thanks to funding from the Fulbright and Thouron scholarship programmes, I was able to gain a better understanding of healthcare inequalities and the history of medicine, by pursuing a Master’s in Public Health (MPH) in the USA. During this time, I was exposed to multidisciplinary scholars and topics that are less commonly covered in UK medical schools – critical race theory, the politics of women’s health, the role of geopolitical power structures in healthcare, and health and human rights. The anthropological lens through which I learned to view medicine increased my awareness of the role that medicine plays in exacerbating social divides, as opposed to providing equitable care as I once thought. To gain further experience in international policy and global health inequalities, I pursued clinical experience in humanitarian settings in Europe and the Middle East. Here, I observed the role of international organisations in providing the necessities for health and life when nation-states had failed. Following this, I had the opportunity to work at the United Nations and it was here that I first encountered the field of AI.
I arrived at the UN in 2019 when they were developing the first Recommendation on the Ethics of Artificial Intelligence.9 Philosophers from around the world were drafting the first document that detailed the dangers that AI posed for different population groups due to bias, discrimination, and exclusion. My role was to examine AI through the lens of medicine and contextualise issues from other social sectors into a biomedical setting. For example, various AI policing systems with high false-positive rates overly profile innocent citizens who tend to be black, resulting in the further criminalization of the black community. Recently, a similar case was reported in medicine, in which a healthcare algorithm had a high error rate for black patients and under-referred them for hospital care. My MPH studies had taught me that clinical medicine was already a conduit for historic power imbalances, and at the UN, it became apparent that without effective research and regulation, medical AI would further embed these harms into impenetrable digital systems.
After the UN, I returned to the USA and completed the last year of my MPH in the computer science school. I learned to code and published papers that demonstrated biases in psychiatric AI algorithms.10 Our research revealed discriminatory assumptions that existed within machine learning datasets and model architecture used to identify risks of suicide, self-harm, and homicide in people of concern.9 From emotion mining that disrupts our concepts of confidentiality, to the impact of neurotechnology on autonomy, and the implications of governmental AI systems on population inequalities, the ethical issues with medical AI were becoming increasingly apparent. For more information on these topics, our team published a series of papers on bias in medical AI algorithms and the evolving use of AI in psychiatry which are available in the bibliography.10,11,12
The danger of digital algorithms is that they embed an existing belief system into an invisible decision-making process.13,14 As Cathy O’Neil explains in her book ‘Weapons of Math Destruction’, ‘algorithms are opinions embedded in maths’.13 Throughout history, the biomedical sciences have been shaped by the social context, so much so that the remnants of power structures are built into the foundations of our institutions. The scientific racism that underlies different respiratory criteria for black and white children is rooted in fallacies created by plantation owners centuries ago.15,7 The medical sexism that excluded female bodies from medical education continues to result in delayed referrals, misdiagnosis, and increased mortality for women across a range of conditions.6,12,16,17 Further, the history of medical eugenics, scientific classism, the pathologizing of the queer community, and Eurocentric medical culture all disadvantage patients who have historically been excluded from social power.18,19,17,15,12,6,20,21,8,7,22 Algorithms developed in the present are built from our cultural history, and without intervention, these systems weave harms from the past into the fabric of our future.
The field of medical AI is a rapidly evolving space that requires diverse perspectives, especially those with an understanding of medical history and anthropology. At present, I am continuing my research into medical algorithmic bias at University College London (UCL) on the ‘Artificial Intelligence and Healthcare’ PhD programme. My PhD research focuses on the use of supervised and unsupervised machine learning methods for evaluating and mitigating biases in automated medical systems. Over the summer I returned to the United Nations to continue working on the Recommendation on the Ethics of AI, and as of November 2021, this document has been released as the first international instrument adopted by member states for the ethical regulation of AI.23 In addition to my AI research, I continue to work part-time as a clinician in A&E, where our safeguarding team are investigating the role of technology in cases of domestic violence in the hospital. The intersection of technology and violence is a separate topic to this essay, but for those interested, check out the ‘Gender and IoT’ team at UCL STEaPP, which investigates the impact of spyware, electronic surveillance, and smart home devices on victims of violence.
This article began by discussing an AI-based dilemma for an ITU junior doctor. The case is one of a series of scenarios included in a medical AI curriculum currently being developed into a training programme for junior doctors in North London. Other scenarios include a machine learning algorithm that can predict the likelihood (and likely date) of a miscarriage for pregnant mothers from the day of conception. Another example discusses the implications of bias in a child abuse prediction algorithm implemented in a local GP practice – based on an existing model described by Virginia Eubanks in her book ‘Automating Inequality’. Through these new educational programmes, we aim to equip healthcare practitioners with the tools to critique these models and advocate for their patients. At present, these systems are growing in power and scope, and few professionals outside of the data science community understand their inner workings well enough to challenge them.
A career in medicine offers a rich diversity of possible career pathways and AI ethics will apply to all these routes. The field of global health must address the challenges of drone-delivered aid, humanitarian aid forecasting, and AI predictive systems that regulate the international journeys taken by refugees.24,25,26,27 Further, the ‘Decolonise AI’ movement seeks to address the power imbalances that digital tech introduces between high- and low-income countries, with the Global South being largely underrepresented in the development of these consequential systems.36 Paediatric specialists will witness the impact of augmented realities, the metaverse and digital relationships on children, in addition to seeing new ethical issues associated with the digitisation of children’s bodies.28 Women’s health specialists will need to understand the rise of technology-facilitated abuse, community practitioners will need to get comfortable with medical smart home surveillance, and hospital specialists will need to understand the proliferating machine learning algorithms that create patient predictions from masses of clinical data.11,29,30,31 Beyond being medics, we will all have to reckon with the social shifts and personal challenges arising from the growth of neurotechnology, automated governmental systems and algorithms that can supposedly predict your character, potential and life trajectory.9,13
In medicine, all patients suffer a degree of error that is proportional to their distance from the defined medical average: the physiological average that underpins prognostic tools, the biochemical average that determines diagnostic thresholds, and the anatomical average which populates our textbooks. This medical average evolved from historical and social power constructs and the legacy of these political dynamics materialises in all aspects of medicine. For doctors who wish to address AI harms, but are less technologically inclined, the most powerful work begins by challenging the ground truth of medicine. Data scientists use current practice as the foundation of any healthcare algorithm. By evaluating biases and discrimination within the foundations of existing clinical care, we can all contribute to mitigating the harms of digital systems. For example, is it acceptable that we use unisex thresholds for cardiac biomarkers in managing heart disease? PubMed has some answers.32 Do our existing screening tools contribute to racist, sexist or classist healthcare disparities, for example when diagnosing autism? 33 Research into medical bias is an opportunity to be curious, to challenge our assumptions and to reflect on the limitations of our own education. Medical training offers very little time for the extra-curricular, but research such as this can form a part of continuing professional development and fits well within the quality improvement projects that we are expected to undertake. Beyond this, educating others and having an awareness of the history of our discipline will allow us to see how biases emerge in our own practice and how we can avoid integrating such harms into digital medicine.
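For readers curious about what evaluating an algorithm for bias can look like in practice, the sketch below audits a hypothetical referral model’s miss rate across two demographic groups. Everything here is invented for illustration – the group names, labels and model outputs are toy data, and a real audit would use genuine outcome data and validated fairness metrics – but the underlying idea (comparing error rates between groups) is the one used in studies of algorithms that under-referred black patients.

```python
# Minimal sketch of a subgroup error audit, using invented toy data.
# Each record: (group, true_label, model_prediction) for a hypothetical
# referral algorithm (1 = needs referral, 0 = does not).
records = [
    ("group_a", 1, 1), ("group_a", 1, 1), ("group_a", 1, 0), ("group_a", 0, 0),
    ("group_b", 1, 0), ("group_b", 1, 0), ("group_b", 1, 1), ("group_b", 0, 0),
]

def false_negative_rate(rows):
    """Share of truly positive cases the model missed."""
    positives = [r for r in rows if r[1] == 1]
    missed = [r for r in positives if r[2] == 0]
    return len(missed) / len(positives)

def audit_by_group(records):
    """Compute the miss rate separately for each demographic group."""
    groups = sorted({g for g, _, _ in records})
    return {g: false_negative_rate([r for r in records if r[0] == g])
            for g in groups}

print(audit_by_group(records))
# In this toy data the model misses 2 of 3 positives in group_b but only
# 1 of 3 in group_a - the kind of disparity an equity audit should flag.
```

The point is not the arithmetic but the habit: whenever a model’s overall accuracy is reported, ask how that accuracy is distributed across the patients it serves.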
Individuals who are interested in the more technical side have a wealth of resources at their disposal. To develop the coding skills that will equip you to evaluate medical algorithms, short courses, University Masters degrees and online platforms such as Codecademy can provide an introduction. At UCL we are developing a curriculum on medical AI for undergraduates; students could advocate for this at their own universities. There are many data ethics podcasts on Spotify, evolving academic platforms such as The Lancet Digital Health and the Artificial Intelligence in Medicine journals, and documentaries such as ‘Coded Bias’ and ‘The Great Hack’ which highlight the wider social issues of AI. In medicine, the patient-doctor relationship has always been based on trust. If AI is to be beneficent in healthcare, the doctor-AI relationship must share this foundational principle. Through education and awareness, self-reflection, skill-building, research, and knowledge sharing, we can all play a role in creating digital medicine that works for everyone.
Glossary of Terms
Artificial intelligence refers to computer systems that behave intelligently: that is, they solve tasks that normally require human intelligence, such as understanding and expressing language, recognising images, making decisions, or translating.34
Machine learning creates computer systems that use data to learn how to perform tasks. Instead of a developer specifying instructions line by line in programming code, the software adjusts its own parameters from the data it is given, optimising itself for better results.34
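As a toy illustration of this definition – not a real medical model – the snippet below ‘learns’ a decision threshold from labelled example values, rather than having the rule hard-coded by a developer. The numbers and labels are invented for illustration.

```python
# Toy illustration of "learning from data": the program picks the decision
# threshold that best fits labelled examples, instead of a hand-coded rule.
# Each example: (measured_value, label) where label 1 = positive case.
examples = [(1.2, 0), (2.0, 0), (2.8, 0), (6.1, 1), (7.4, 1), (8.0, 1)]

def accuracy(threshold, data):
    """Fraction of examples classified correctly by 'value >= threshold'."""
    return sum((x >= threshold) == bool(y) for x, y in data) / len(data)

def fit_threshold(data):
    """'Training': try each observed value as a threshold, keep the best."""
    candidates = [x for x, _ in data]
    return max(candidates, key=lambda t: accuracy(t, data))

model = fit_threshold(examples)
print(model, accuracy(model, examples))
```

Real machine learning systems optimise millions of parameters rather than a single threshold, but the principle is the same – and so is the hazard: the learned rule can only be as fair as the examples it was fitted to.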
An algorithm is an unambiguous specification of a process for solving a class of problems; it can perform calculations, process data and automate reasoning.35 An algorithm can be thought of like a cooking recipe: a series of instructions that tell a computer what to do with data.
- Maconie, S., The Nanny State Made Me: A Story of Britain and How to Save it. 2020: Ebury Publishing.
- Anis, J.w.a.t.S., Virchow misquoted, part-quoted, and the real McCoy. Journal of Epidemiology Community Health, 2006. 60: p. 671.
- Dunn, L., Rudolf Virchow – Now You Know His Name. 2012: Createspace Independent Publishing Platform.
- Marmot, M., Status Syndrome: How Your Social Standing Directly Affects Your Health. 2004, London: Bloomsbury Publishing.
- Marmot, M., The Health Gap. 2015, USA: Bloomsbury Publishing.
- Cleghorn, E., Unwell Women: Misdiagnosis and Myth in a Man-Made World. 2021: Dutton.
- Saini, A., Superior: The Return of Race Science. 2020: Harper Collins Publishers.
- Saini, A., Inferior: How Science Got Women Wrong-and the New Research That’s Rewriting the Story. 2017, Boston: Beacon Press.
- UNESCO, Artificial Intelligence: Recommendation on the ethics of artificial intelligence. 2020 [cited 2021]; Available from: https://en.unesco.org/artificial-intelligence/ethics.
- Straw, I. and Callison-Burch, C., Artificial Intelligence in mental health and the biases of language based models. PLoS ONE, 2020. 15(12).
- Straw, I., The automation of bias in medical Artificial Intelligence (AI): Decoding the past to create a better future. Artificial Intelligence in Medicine, 2020. 110.
- Straw, I., The automation of bias in medical Artificial Intelligence (AI): Decoding the past to create a better future. Artificial Intelligence in Medicine, 2020. 110.
- O’Neil, C., Weapons of math destruction. 2017, Harlow, England: Penguin Books.
- Noble, S.U., Algorithms of oppression. 2018, New York: NYU Press.
- Braun, L., Race Correction and Spirometry: Why History Matters. Chest, 2021. 159(4): p. 1670-1675.
- Curry, S., Inferior by Angela Saini – a powerful exploration of women’s ‘inferiority’, in The Guardian. 2017, Occam’s Corner.
- Hamberg, K., Gender bias in medicine. Women’s Health (London), 2008. 4(3): p. 237-243.
- Dahal, P., et al., Gender disparity in cases enrolled in clinical trials of visceral leishmaniasis: A systematic review and meta-analysis. PLoS Neglected Tropical Diseases.
- British Heart Foundation (BHF), Misdiagnosis of heart attacks in women. Heart Matters Magazine: Women and Heart Disease, 2021.
- Lock, M. and Nguyen, V.-K., The Normal Body, in An Anthropology of Biomedicine. 2018, Wiley & Sons, Inc.: Oxford, UK. p. 29-49.
- Inserro, A., Flawed Racial Assumptions in EGFR Have Care Implications in CKD. 2021.
- Krieger, N. and Fee, E., Man-made medicine and women’s health: the biopolitics of Sex/Gender and Race/Ethnicity. Int J Health Serv, 1994: p. 265-283.
- UNESCO, UNESCO member states adopt the first ever global agreement on the Ethics of Artificial Intelligence. 2021: Online.
- Molnar, P. and Gill, L., Bots at the Gate: A Human Rights Analysis of Automated Decision-Making in Canada’s Immigration and Refugee System. 2018, The Citizen Lab and University of Toronto: Online.
- Andres, J., et al., Scenario-based XAI for Humanitarian Aid Forecasting. ACM CHI Extended Abstracts, 2020: p. 1-8.
- USAID and The Rockefeller Foundation, Artificial intelligence in global health: defining a collective path forward. 2019: Online.
- Microsoft. AI for Humanitarian Action. 2021; Available from: https://www.microsoft.com/en-us/ai/ai-for-humanitarian-action.
- Sandvik, K.B., Wearables for something good: aid, dataveillance and the production of children’s digital bodies. Information, Communication & Society, 2020. 23(14): p. 2014-2029.
- Thiébaut, R. and Cossin, S., Artificial Intelligence for Surveillance in Public Health. Yearbook of Medical Informatics, 2019. 28: p. 232-234.
- Tanczer, L.M., López-Neira, I. and Parkin, S., ‘I Feel Like We’re Really Behind the Game’: Perspectives of the United Kingdom’s Intimate Partner Violence Support Sector on the Rise of Technology-Facilitated Abuse. Journal of Gender-Based Violence, 2021. 5: p. 431-450.
- Tanczer, L.M., et al. Emerging risks in the IoT ecosystem: Who’s afraid of the big bad smart fridge? in Living in the Internet of Things: Cybersecurity of the IoT – 2018. 2018.
- Sobhani, Kimia, et al. ‘Sex Differences in Ischemic Heart Disease and Heart Failure Biomarkers’. Biology of Sex Differences, vol. 9, no. 1, Sept. 2018, p. 43. PubMed, https://doi.org/10.1186/s13293-018-0201-y.
- Navarro-Pardo, Esperanza, et al. ‘Diagnostic Tools for Autism Spectrum Disorders by Gender: Analysis of Current Status and Future Lines’. Children, vol. 8, no. 4, Mar. 2021, p. 262. PubMed Central, https://doi.org/10.3390/children8040262.
- Olckers, Ayran. ‘Artificial Intelligence: AI Terms Simply Explained’. Medium, 15 Mar. 2020, https://towardsdatascience.com/artificial-intelligence-ai-terms-simply-explained-745c4734dc6c.
- ‘AI Glossary’. Appen, Available online at https://appen.com/ai-glossary/. Accessed 24 Feb. 2022.
- Roche, Cathy, et al. ‘Artificial Intelligence Ethics: An Inclusive Global Discourse?’ ArXiv:2108.09959 [Cs], 1, Aug. 2021. arXiv.org, http://arxiv.org/abs/2108.09959.