Artificial intelligence (AI) is revolutionizing the way clinicians think about patient care. However, the healthcare algorithms that power AI can be biased against underrepresented communities and can magnify existing racial inequities in medicine.

In the Journal of Global Health, algorithmic bias has been described as the application of an algorithm that compounds existing inequities in socioeconomic status, race, ethnic background, religion, gender, disability, or sexual orientation.

In 2023, the National Institute on Minority Health and Health Disparities (NIMHD) and the Agency for Healthcare Research and Quality (AHRQ) published a conceptual framework in JAMA Network Open that outlines key guiding principles for eliminating algorithmic bias.

In this article, we explore how this bias arises in healthcare and potential ways to address it.

How Does Algorithmic Bias in Health Care Occur?

Many healthcare algorithms rely on data, and biases against underrepresented groups can arise when that data is not representative of the whole community. These biases can lead to inappropriate care because they rest on false assumptions about specific patient populations.

Healthcare algorithms and AI bias can exacerbate health disparities that already exist for some populations because of characteristics such as age, gender, race, or ethnicity.

Lack of Diversity in Representation

The lack of diversity in the data used to train computer systems is one factor contributing to bias in healthcare algorithms and AI. When developing AI computer systems, it’s critical to use patient data with a variety of demographic characteristics to make sure the algorithm functions properly for everyone.
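As a minimal illustration of what a representativeness check can look like in practice, the sketch below audits subgroup shares in a training set before any model is trained. The column names and the 5% threshold are hypothetical choices for this example, not a standard.

```python
import pandas as pd

def audit_representation(df: pd.DataFrame, demographic_cols, min_share=0.05):
    """Flag demographic subgroups that fall below a minimum share of
    the training data (the threshold is an illustrative choice)."""
    warnings = []
    for col in demographic_cols:
        shares = df[col].value_counts(normalize=True)
        for group, share in shares.items():
            if share < min_share:
                warnings.append(f"{col}={group}: only {share:.1%} of training data")
    return warnings

# Hypothetical training set with demographic columns.
train = pd.DataFrame({
    "race_ethnicity": ["White"] * 90 + ["Black"] * 6 + ["Hispanic"] * 4,
    "sex": ["F", "M"] * 50,
})
for w in audit_representation(train, ["race_ethnicity", "sex"]):
    print("WARNING:", w)
```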

Feedback Loops

If algorithms are trained on data that reflects biased decisions made by people, those biases may be reinforced, and the resulting models can perpetuate bias in future decisions.
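A toy simulation can make the mechanism concrete: if each retraining round treats the previous round's decisions as ground truth, an initial gap between groups never closes, even when underlying need is identical. This is a deliberately simplified sketch, not a model of any real clinical system.

```python
import numpy as np

rng = np.random.default_rng(0)

# True need for care is identical in both groups, but historical,
# biased decisions referred group B less often.
referral_rate = {"A": 0.60, "B": 0.40}

for round_ in range(1, 6):
    # "Retrain" on last round's decisions: the model learns each group's
    # past referral rate and treats it as ground truth.
    referral_rate = {
        group: rng.binomial(1, rate, size=1000).mean()
        for group, rate in referral_rate.items()
    }
    gap = referral_rate["A"] - referral_rate["B"]
    print(f"round {round_}: referral gap = {gap:+.2f}")
```

Because every round's labels come from the previous round's decisions, the historical disparity stays locked in (around 0.20 here) rather than being corrected.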

Implicit Human Biases

The presumptions made by those who develop AI and healthcare algorithms can also introduce bias. Developers might believe, for instance, that certain symptoms are more prevalent in non-Hispanic White women than in women of color. As a result, the algorithm may yield biased or erroneous findings for Black and African American women who experience those symptoms.

Historical Data

Algorithms learn from past data. If that data contains biases, the algorithm may reproduce and even magnify them.

An article published in Science revealed that algorithms commonly used by some prominent health systems today are racially biased.  

Healthcare professionals use these algorithms’ outputs to decide which patients to recommend for medical care, so biased outputs have direct and potentially harmful implications for patients.

Biased data sets for AI algorithms

Experts have found many discriminatory algorithms that require racial or ethnic minority patients to be significantly sicker than white patients before they receive the same diagnosis, treatment, or resources. These models span a variety of specialties, including kidney transplantation and cardiac surgery, among others.

Data is where biases most often enter the process inadvertently. One Canadian company, for example, created an auditory-testing algorithm for neurological disease: it detects speech patterns and analyzes them to identify Alzheimer’s disease in its early stages. The test’s accuracy was over 90%, but the data set included only samples from native English speakers, leading to false positives for non-native English speakers.

Similarly, an algorithm was built to diagnose malignant skin lesions from photos. Because skin cancer is more common in people with white skin, more training data was available for that skin type. The model was 11%–19% more accurate when identifying lesions in people with light skin, but 34% less accurate when diagnosing people with darker skin.

The above examples highlight biases that arise when the patient population is misrepresented in the data used to train an algorithm. Hence, when designing an algorithm, it is paramount to consider all known factors related to the patient population.
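Disparities like those above only surface when performance is reported per subgroup rather than as a single overall score. Below is a minimal sketch of such a breakdown, assuming a fitted binary classifier and a recorded skin-tone attribute (both hypothetical here):

```python
import numpy as np
from sklearn.metrics import accuracy_score

def accuracy_by_group(y_true, y_pred, groups):
    """Report accuracy separately for each subgroup so that a strong
    overall score cannot hide a weak one for any population."""
    y_true, y_pred, groups = map(np.asarray, (y_true, y_pred, groups))
    return {
        g: accuracy_score(y_true[groups == g], y_pred[groups == g])
        for g in np.unique(groups)
    }

# Hypothetical labels and predictions for a lesion classifier.
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 1, 0, 0, 0, 1, 0]
skin_tone = ["light", "light", "light", "light", "dark", "dark", "dark", "dark"]
print(accuracy_by_group(y_true, y_pred, skin_tone))  # {'dark': 0.5, 'light': 0.75}
```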

Key principles for mitigating algorithmic bias

When clinicians adopt more AI-based approaches, it is their responsibility to minimize biases that harm their underrepresented patients.

Guiding principles for preventing algorithmic bias:

  • Promote health and health care equity during all phases of the health care algorithm life cycle.
  • Make sure that the healthcare algorithm and its application are transparent and understandable. 
  • Authentically engage patients and communities throughout the health care algorithm life cycle and gain their trust. 
  • Identify issues related to the health care algorithm’s fairness and trade-offs (a minimal metric sketch follows this list).
  • Ensure accountability for equity and fairness in the outcomes of the health care algorithm. 
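To make the fairness principle above concrete, one simple quantity a team can track is the demographic-parity gap: the difference in positive-prediction rates across groups. The sketch below is a generic illustration, not part of the NIMHD/AHRQ framework itself.

```python
import numpy as np

def demographic_parity_gap(y_pred, groups):
    """Difference between the highest and lowest rate of positive
    predictions across groups (0.0 means parity on this metric)."""
    y_pred, groups = np.asarray(y_pred), np.asarray(groups)
    rates = {g: y_pred[groups == g].mean() for g in np.unique(groups)}
    return max(rates.values()) - min(rates.values()), rates

gap, rates = demographic_parity_gap(
    y_pred=[1, 1, 0, 1, 0, 0, 1, 0],
    groups=["A", "A", "A", "A", "B", "B", "B", "B"],
)
print(rates, f"gap = {gap:.2f}")  # A: 0.75, B: 0.25 -> gap = 0.50
```

Note that driving this particular gap to zero can conflict with other criteria, such as equal error rates across groups; surfacing exactly these trade-offs is what the principle calls for.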

Challenges in Addressing Algorithm Bias

Minimizing algorithm bias in healthcare AI is a complex and ongoing process. There are several challenges and considerations that must be addressed: 

  • Data security and privacy: Strict data security measures must be put in place, and patient privacy must be respected when compiling large, heterogeneous datasets.
  • Data governance: Strong data-governance frameworks must be developed to guarantee that data is gathered, stored, and used ethically and in accordance with the law.
  • Resource allocation: Significant funding, personnel, and technological infrastructure are needed to gather and maintain representative, varied data sets.
  • Regulatory environment: Because healthcare artificial intelligence is developing quickly, regulatory agencies must keep pace and offer clear guidance on the moral and responsible use of AI-based models and systems.
  • Interoperability: For models based on complete and accurate datasets to benefit the widest variety of demographic groups, data and coding from various healthcare systems and providers must be compatible.

Treatment Plan for Algorithmic Bias

There are best practices that healthcare data scientists and developers can incorporate to address the challenges of using algorithms and AI. These include:

  • Have a wider range of individuals oversee and evaluate the AI and algorithms.
  • Apply strategies, such as the use of synthetic or resampled data, to handle circumstances in which insufficient information is available (a minimal sketch follows this list).
  • Collaborate with various communities to make sure the algorithms are beneficial and safe.
  • Rather than introducing the algorithms all at once, do so gradually and carefully.
  • Provide channels for users to offer feedback so that the algorithms can be enhanced over time.
  • Involve employees from a variety of racial and ethnic origins in the development of the algorithms and the validation of patient data.
  • Use Explainable Artificial Intelligence (XAI) techniques to surface and mitigate bias.
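As a minimal sketch of the resampling idea mentioned above, each underrepresented subgroup can be naively oversampled to match the largest one. Real synthetic-data pipelines are far more careful about leakage and clinical plausibility; this only illustrates the shape of the approach.

```python
import numpy as np
import pandas as pd

def oversample_minority_groups(df: pd.DataFrame, group_col: str, seed: int = 0):
    """Resample each subgroup (with replacement) up to the size of the
    largest subgroup, so every group carries equal weight in training."""
    target = df[group_col].value_counts().max()
    parts = [
        g.sample(n=target, replace=True, random_state=seed)
        for _, g in df.groupby(group_col)
    ]
    return pd.concat(parts, ignore_index=True)

# Hypothetical imbalanced training set.
train = pd.DataFrame({
    "group": ["A"] * 95 + ["B"] * 5,
    "label": np.random.default_rng(0).integers(0, 2, 100),
})
balanced = oversample_minority_groups(train, "group")
print(balanced["group"].value_counts().to_dict())  # {'A': 95, 'B': 95}
```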

The Office of Minority Health (OMH) is focused on helping to reduce differences in health outcomes, known as health disparities, for racial and ethnic minority populations and American Indian and Alaska Native communities.

Summing it up

Minimizing algorithm bias in healthcare AI raises important ethical issues in addition to technical and practical ones. Algorithmic prejudice in healthcare has far-reaching potential repercussions, from misdiagnoses to discriminatory treatment.

Addressing bias in AI is essential to ensure fairness, equity, and trust in the healthcare system. This is where XAI steps in, not as a magic wand, but as a powerful tool for transparency and accountability.
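As a small, generic taste of what transparency tooling can reveal (this sketch uses scikit-learn's permutation importance on synthetic data and is not Codewave EIT's XAI product), one can measure how heavily a model leans on each input, including features that may proxy for sensitive attributes:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)

# Hypothetical tabular data; 'zip_income' stands in for a feature
# that can act as a proxy for a sensitive attribute.
X = rng.normal(size=(500, 3))
feature_names = ["lab_value", "age", "zip_income"]
# In this toy setup, the outcome leans heavily on the proxy feature.
y = (0.2 * X[:, 0] + 1.5 * X[:, 2] + rng.normal(scale=0.5, size=500)) > 0

model = RandomForestClassifier(random_state=0).fit(X, y)
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, score in zip(feature_names, result.importances_mean):
    print(f"{name}: {score:.3f}")  # a large 'zip_income' score is a red flag
```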

Codewave EIT’s XAI sheds light on the inner workings of algorithms, demystifying their decision-making processes and revealing the biases that can creep in. In essence, our XAI solutions can unlock the black box of AI, allowing us to peer inside and ensure fairness. By making AI systems transparent and understandable, we can hold them accountable for their decisions, identify and address biases, and ultimately build a future where AI serves as a force for good, not a perpetrator of injustice.

At Codewave EIT, we’re committed to being your guide on this journey, helping you harness the power of XAI to build responsible AI solutions that benefit everyone. Feel free to call us to explore the range of solutions we offer.