
Automated Disparities: AI in Healthcare

Written by Harshini Sasikumar | Edited by Alexander Alva

Photo by Google DeepMind

Imagine a world without artificial intelligence (AI). From virtual assistants to simple tools like spell check, AI quietly helps us navigate daily tasks that would otherwise be daunting. Given its increasing prevalence and versatility, it is not surprising to see such technology used in healthcare. AI-powered technologies, such as chatbots and predictive algorithms, have revolutionized the healthcare industry by providing fast and accurate diagnoses, enhancing documentation, and boosting efficiency [1]. However, despite these many benefits, there is growing concern about the potential for algorithmic bias.

Algorithmic bias refers to the tendency of algorithms, the instructions that drive AI, to generate results that are prejudiced against certain individuals or groups [2]. Bias can enter at every stage of development: data collection, processing, training, and implementation [4]. Algorithms are written by humans who carry their own biases, and those biases can shape the output. An even more common route is a poor training set: algorithms learn from large amounts of data, and if that data focuses only on specific populations, the results can be skewed [2].
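To make this concrete, here is a minimal sketch (in Python, using scikit-learn) of how an unrepresentative training set can skew a model. The two groups, the synthetic "risk factor," and all of the numbers are invented for illustration; they are not drawn from any of the studies cited here.

```python
# Hypothetical illustration: a classifier trained mostly on one group
# learns that group's pattern and performs poorly for the other group.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, slope):
    """Simulate one demographic group whose risk factor relates to the
    outcome with a group-specific strength (slope)."""
    x = rng.normal(size=(n, 1))
    p = 1 / (1 + np.exp(-slope * x[:, 0]))
    y = rng.binomial(1, p)
    return x, y

# Training data: 950 patients from group A, only 50 from group B,
# and the risk factor behaves differently in each group.
xa_train, ya_train = make_group(950, slope=2.0)
xb_train, yb_train = make_group(50, slope=-2.0)
model = LogisticRegression().fit(
    np.vstack([xa_train, xb_train]),
    np.concatenate([ya_train, yb_train]),
)

# Evaluate on equal numbers from both groups.
xa_test, ya_test = make_group(1000, slope=2.0)
xb_test, yb_test = make_group(1000, slope=-2.0)
print("Accuracy for group A:", model.score(xa_test, ya_test))
print("Accuracy for group B:", model.score(xb_test, yb_test))
# Because group B is barely represented in training, the model effectively
# learns group A's pattern and performs near or below chance for group B.
```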

Such biases are most often found in predictive and generative AI, programs that rely on machine learning. Both depend on training sets: predictive models use past data to forecast outcomes, while generative models produce novel data based on what the program has learned. Predictive models are common in healthcare because they are used to estimate “the probability of having or developing a particular disease or specific outcome” [4]. Outcomes such as falls in elderly patients, mortality in intensive care units, and cardiovascular disease can all be predicted with AI algorithms. Unfortunately, predictive AI is especially prone to bias, as training sets often underrepresent racial, gender, and sexual minorities. The inaccurate results these models produce can amplify existing biases and widen health disparities.

To illustrate, an algorithm used to predict patients’ healthcare needs assigned higher risk scores to white patients than to Black patients, so white patients went on to receive more personalized care [3]. When this disparity was corrected, the proportion of Black patients receiving extra care rose from 17.7% to 46.5% [4]. Algorithms can also perpetuate racial bias in identifying melanoma across skin tones: skin cancer is already harder to detect in patients with darker skin, and if the training data is not representative, these patients’ health needs can be overlooked [3]. Similar patterns of misrepresentation affect other underrepresented groups, such as women and LGBTQ+ populations, heightening health disparities over time.
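Disparities like the one above can be surfaced with a simple subgroup audit: at a fixed risk-score cutoff, compare how often each group is flagged for extra care and how sick the unflagged patients in each group actually are. The sketch below uses entirely made-up scores, groups, and numbers; it is not a reconstruction of the cited study's method, only an illustration of the auditing idea.

```python
# Hypothetical subgroup audit of a risk score that systematically
# understates need for one group. All values are synthetic.
import numpy as np

rng = np.random.default_rng(1)
n = 10_000
group = rng.choice(["A", "B"], size=n)
illness = rng.gamma(shape=2.0, scale=1.0, size=n)   # true health need
# Assume the score understates need for group B by a fixed amount.
score = illness - 0.8 * (group == "B")

cutoff = np.quantile(score, 0.97)   # top 3% of scores get extra care
flagged = score >= cutoff
for g in ["A", "B"]:
    in_g = group == g
    print(
        f"Group {g}: {flagged[in_g].mean():.1%} flagged, "
        f"mean illness of unflagged = {illness[in_g & ~flagged].mean():.2f}"
    )
# A gap like this (group B flagged less often despite similar or greater
# need) is the kind of disparity such an audit is meant to expose.
```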

Managing AI bias is essential, since both unchecked use and attempts at total elimination could cause great harm [2]. A certain level of bias may even be needed so that algorithms can become aware of it, learn why it is detrimental, and attempt to avoid it. It is also important to take precautions such as using diverse data, educating clinicians and patients, enhancing transparency, and learning to identify bias at every stage of development [3, 4]. By adequately balancing AI bias and promoting fairness, we can ensure that AI-powered healthcare is accessible and beneficial for all.

References:

1. Berry, Melissa. “Understanding the advantages and risks of AI usage in healthcare.” Thomson Reuters, 2023, www.thomsonreuters.com/en-us/posts/technology/ai-usage-healthcare/. Accessed 12 Apr. 2024.

2. Ferrara, Emilio. “Eliminating bias in AI may be impossible – a computer scientist explains how to tame it instead.” The Conversation, 2023, www.theconversation.com/eliminating-bias-in-ai-may-be-impossible-a-computer-scientist-explains-how-to-tame-it-instead-208611. Accessed 12 Apr. 2024.

3. Ueda, D., Kakinuma, T., Fujita, S., Kamagata, K., Fushimi, Y., Ito, R., Matsui, Y., Nozaki, T., Nakaura, T., Fujima, N., Tatsugami, F., Yanagawa, M., Hirata, K., Yamada, A., Tsuboyama, T., Kawamura, M., Fujioka, T., Naganawa, S. (2023). Fairness of artificial intelligence in healthcare: review and recommendations. Japanese Journal of Radiology, 42:3-15.

4. Nazer, L. H., Zatarah, R., Waldrip, S., Ke, J. X., Moukheiber, M., Khanna, A. K., Hicklen, R. S., Moukheiber, L., Moukheiber, D., Ma, H., Mathur, P. (2023). Bias in artificial intelligence algorithms and recommendations for mitigation. PLoS Digital Health, 2(6):e0000278. 

