Priority Medical

Biases in health-care algorithms | Philly Health Insider

  • Biases in healthcare algorithms perpetuate existing inequities, disproportionately harming underrepresented communities; key causes include non-diverse training data and a lack of transparency.
  • Real-world examples of AI bias in healthcare include inaccurate cardiovascular risk scoring, poorer skin cancer detection for darker skin tones, and skewed eligibility for chronic disease management programs, underscoring the need for more inclusive training datasets.
  • Experts and regulators are promoting healthcare equity, transparency, community engagement, and accountability throughout algorithm development to mitigate bias and improve patient outcomes.

Introduction

In the ever-evolving landscape of modern medicine, one of the most significant challenges facing healthcare providers today is the issue of biases in healthcare algorithms. These algorithms, which are increasingly relied upon to make critical decisions about patient care, are not immune to the same biases that affect human decision-making. From cardiovascular risk scoring to skin cancer detection, these biases can have profound and far-reaching consequences for patients, particularly those from underrepresented communities. In this article, we delve into the world of healthcare algorithms and explore the extent of the problem, its causes, and most importantly, the steps being taken to mitigate these biases.

What is AI Bias in Healthcare?

AI bias in healthcare refers to the application of algorithms that compound existing inequities in socioeconomic status, race, ethnic background, religion, gender, disability, or sexual orientation, thereby amplifying inequities in health systems. This bias is not just a technical issue but also deeply rooted in human perception and data generalizability.

Causes of AI Bias

  1. Data Generalizability Issues

    • The data used to train healthcare algorithms often lacks diversity, leading to biased outcomes. For instance, a cardiovascular risk scoring algorithm trained on predominantly Caucasian data was significantly less accurate when applied to African American patients.
  2. Built-in Human Biases

    • The developers of these algorithms carry their own biases, which are reflected in the problems they choose to solve and the data they select for training. This results in algorithms that may not reflect the actual incidence, urgency, or potential value of various health conditions across different populations.
  3. Lack of Transparency and Explainability

    • Many AI models are complex "black boxes" that make it difficult to understand how they arrive at their outputs. This lack of transparency makes it challenging to identify and correct biases that may be embedded in the algorithms.
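The data-generalizability problem described above can be made concrete with a small audit sketch. The function and data below are entirely made up for illustration: it computes a model's sensitivity (true-positive rate) separately for each demographic group, the kind of disaggregated check that surfaces a model performing well overall but poorly for an underrepresented group.

```python
# Hypothetical subgroup audit: compare sensitivity (true-positive rate)
# across demographic groups to surface the bias described above.
from collections import defaultdict

def sensitivity_by_group(records):
    """records: list of (group, y_true, y_pred) tuples."""
    tp = defaultdict(int)   # true positives per group
    pos = defaultdict(int)  # actual positives per group
    for group, y_true, y_pred in records:
        if y_true == 1:
            pos[group] += 1
            if y_pred == 1:
                tp[group] += 1
    return {g: tp[g] / pos[g] for g in pos}

# Toy data: the model misses far more true cases in group B --
# the typical signature of non-representative training data.
records = [
    ("A", 1, 1), ("A", 1, 1), ("A", 1, 1), ("A", 1, 0), ("A", 0, 0),
    ("B", 1, 1), ("B", 1, 0), ("B", 1, 0), ("B", 1, 0), ("B", 0, 0),
]
print(sensitivity_by_group(records))  # {'A': 0.75, 'B': 0.25}
```

An aggregate sensitivity of 0.5 would hide this disparity entirely, which is why per-group evaluation matters.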

Real-World Examples of AI Bias

  1. Cardiovascular Risk Scoring

    • A widely used cardiovascular risk scoring algorithm was shown to be much less accurate for African American patients compared to Caucasian patients. This disparity is largely attributed to the fact that approximately 80% of the training data represented Caucasians.
  2. Cancer Detection

    • Algorithms for detecting skin cancer, trained primarily on data from light-skinned individuals, are significantly less accurate in detecting skin cancer in patients with darker skin.
  3. Radiomics

    • Chest X-ray-reading algorithms trained primarily on male patient data were less accurate when applied to female patients. This highlights the need for more inclusive training datasets to ensure fairness in healthcare AI.
  4. Chronic Disease Management Eligibility

    • A commercial algorithm used for risk-stratifying patients for chronic disease management programs effectively required Black individuals to be sicker than White individuals to qualify for such services. This perpetuates existing racial disparities in healthcare.
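A plausible mechanism behind that last example is using past healthcare spending as a proxy for medical need. The sketch below is a simplified illustration, not the actual commercial algorithm: it assumes a linear cost model and an arbitrary cutoff, and shows that when one group's access to care (and therefore spending) is lower, its members must be sicker to clear the same cost threshold.

```python
# Sketch of the cost-as-proxy failure mode (illustrative linear model,
# arbitrary cutoff -- not the actual commercial algorithm).

def predicted_cost(illness_severity, access_factor):
    # Spending scales with both how sick a patient is and how much
    # care they can actually access.
    return illness_severity * access_factor

CUTOFF = 50.0  # predicted cost above which a patient qualifies for the program

def severity_needed(access_factor):
    # Minimum illness severity at which predicted cost reaches the cutoff.
    return CUTOFF / access_factor

print(severity_needed(access_factor=1.0))  # 50.0  (full access to care)
print(severity_needed(access_factor=0.7))  # ~71.4 (reduced access: must be sicker)
```

The proxy, not the arithmetic, is the problem: the model faithfully predicts cost, but cost systematically understates need for groups with less access to care.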

The Impact on Minoritized Communities

The use of biased healthcare algorithms can have devastating consequences for minoritized communities. These algorithms may result in:

  • Inappropriate Care: Biased algorithms can lead to incorrect diagnoses and inadequate treatments, which can be particularly harmful for patients whose conditions are already underdiagnosed or undertreated due to systemic biases.
  • Delayed Care: For instance, a biased algorithm used to estimate kidney function resulted in higher estimates for Black patients compared to White patients, leading to delays in organ transplant referrals for Black patients.
  • Disproportionate Resource Allocation: Biased algorithms can allocate resources unfairly, exacerbating existing health inequities. For example, a study found that algorithms used to determine eligibility for chronic disease management programs required Black individuals to be sicker than White individuals to qualify for services.
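The kidney function example can be sketched in a few lines. This is a deliberately simplified illustration, not a clinical formula: the 1.16 multiplier mirrors the race coefficient used by older eGFR equations for Black patients, and the referral threshold is illustrative. Two patients with identical measured kidney function end up on opposite sides of the referral cutoff.

```python
# Simplified illustration of how a race "correction" coefficient in kidney
# function estimates can delay care. The 1.16 multiplier mirrors the factor
# in older eGFR equations; the threshold is illustrative, not clinical.
REFERRAL_THRESHOLD = 20  # eGFR below which transplant referral is triggered

def adjusted_egfr(base_egfr, race_coefficient=1.0):
    return base_egfr * race_coefficient

base = 18.0  # identical measured kidney function for two patients
patient_without_coefficient = adjusted_egfr(base)        # 18.0  -> referred
patient_with_coefficient = adjusted_egfr(base, 1.16)     # 20.88 -> not referred

print(patient_without_coefficient < REFERRAL_THRESHOLD)  # True
print(patient_with_coefficient < REFERRAL_THRESHOLD)     # False
```

This is why race coefficients have since been removed from widely used eGFR equations.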

How Are Experts Addressing AI Bias?

To mitigate these biases, experts are advocating for several strategies across all stages of an algorithm’s life cycle:

  • Promoting Health and Healthcare Equity: Ensuring that algorithms are developed, trained, and deployed with the aim of promoting health and healthcare equity, rather than perpetuating existing inequities.
  • Transparency and Explainability: Making the inner workings of algorithms transparent and explainable to identify and address biases early on.
  • Authentic Community Engagement: Engaging patients and communities authentically throughout the development and deployment of algorithms to ensure that their needs are represented.
  • Explicit Identification of Fairness Issues: Explicitly identifying fairness issues and trade-offs in algorithm development to avoid perpetuating biases.
  • Accountability for Equity and Fairness: Ensuring accountability for equity and fairness in outcomes from healthcare algorithms, through mechanisms like regular performance evaluations and community feedback.

Guiding Principles for Mitigating Algorithmic Bias

In 2023, a diverse panel of experts convened by the Agency for Healthcare Research and Quality (AHRQ) and the National Institute on Minority Health and Health Disparities (NIMHD) identified five key principles for mitigating algorithmic bias:

  1. Promote Health and Healthcare Equity: This principle emphasizes that health care algorithms should be designed to promote health and healthcare equity during all phases of their life cycle.
  2. Ensure Transparency and Explainability: Health care algorithms and their use should be transparent and explainable to ensure that biases are identified and addressed.
  3. Engage Patients and Communities: Authentic engagement with patients and communities is crucial to earn trust and ensure that their needs are represented.
  4. Explicitly Identify Fairness Issues: Fairness issues and trade-offs should be explicitly identified during the development and deployment phases to avoid perpetuating biases.
  5. Accountability for Equity and Fairness: There should be accountability for equity and fairness in outcomes from health care algorithms, through regular performance evaluations and community feedback.

Regulatory Efforts

Regulatory bodies are also taking steps to address AI bias in healthcare:

  • Algorithmic Accountability Act: This proposed U.S. legislation would require companies to assess their AI systems for risks of unfair, biased, or discriminatory outputs; it has garnered significant support in Congress.
  • European and Chinese Regulations: Similar regulations have been proposed or are in development across Europe and China, highlighting a global recognition of the need to address algorithmic bias.

Conclusion

The integration of AI into healthcare systems holds tremendous potential for improving patient outcomes. However, it is essential to address the biases embedded in these algorithms to ensure equitable care for all patients. By following best practices for transparency, inclusivity, and accountability, the healthcare industry can counteract the implicit biases that shape human decision-making and create more equitable access to healthcare.
