Biases in AI Diagnostic Algorithms: Promoting Equitable Healthcare 


Would you trust an AI to diagnose you?


Artificial Intelligence (AI) is making a big entrance into healthcare, promising to revolutionize everything from diagnosing diseases to recommending treatments. But alongside the growing excitement, there’s a serious issue we need to unpack: bias in AI diagnostic algorithms. These biases can worsen health disparities and prevent everyone from benefiting equally from these technological advancements. So, let’s dive into what these biases are, how they impact us, and what we can do to fix them.

What Are AI Diagnostic Algorithms?

AI diagnostic algorithms are like super-smart assistants for doctors. They analyze tons of medical data, spot patterns, and help diagnose diseases. The perks are clear: better accuracy, faster results, and even the potential for personalized treatments. However, these benefits only shine through if the algorithms work fairly for everyone.

Where Do These Biases Come From?

Bias in AI mainly comes from the data used to train these systems. If the training data comes mostly from white or male patients (much as Western anatomy textbooks long used a white, heterosexual man as the ‘universal model’), the AI might not perform well for everyone else. For example, an AI trained mostly on lighter skin tones might struggle to accurately detect skin conditions in people with darker skin. Pulse oximeters show a similar problem: readings can be less accurate for people with darker skin or more melanin, which in turn influences their future treatment.
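To make this concrete, one simple safeguard is to audit the demographic make-up of a training set before any model is trained on it. The sketch below is a minimal, hypothetical example in Python; the dataset, the skin_tone column, and the 20% threshold are all assumptions for illustration, not a description of any real system.

```python
import pandas as pd

# Hypothetical training set for a skin-condition classifier; the column
# name and the counts are invented purely for illustration.
train = pd.DataFrame({
    "skin_tone": ["light"] * 900 + ["dark"] * 100,
})

# Share of each group in the training data.
composition = train["skin_tone"].value_counts(normalize=True)
print(composition)  # light 0.90, dark 0.10

# Flag any group that falls below an (arbitrary) representation floor,
# so under-represented populations are noticed before training starts.
MIN_SHARE = 0.20
for group, share in composition.items():
    if share < MIN_SHARE:
        print(f"Warning: '{group}' is only {share:.0%} of the training data.")
```

A check like this won’t fix a skewed dataset by itself, but it makes the imbalance visible early, when it is still cheap to collect more representative data.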

The way algorithms are designed can also introduce biases. Choices made by developers, often without realizing it, can skew results. Plus, if the outcome data is biased, the whole system can end up reinforcing existing healthcare disparities.

Real-World Examples of Bias

Let’s look at some real-world examples. AI tools for detecting skin cancer have been found to be less accurate for people with darker skin tones. Why? Because they were trained mostly on images of lighter skin. This means people of color might not get accurate diagnoses, leading to delayed or inappropriate treatments. As Eman Rezk notes in an article by Andrea Lawson, “AI models will never be able to correctly diagnose the non-white population as accurately”.

Another example is Optum’s[1] healthcare algorithm, which was biased against Black patients. “The algorithm helps hospitals identify high-risk patients, such as those who have chronic conditions, to help providers know who may need additional resources to manage their health”.

In fact, Black patients were sicker than White patients who had been given the same health-need scores: their diabetes was more severe and their blood pressure was higher. Through its risk estimates, the algorithm was reinforcing existing prejudice. Because Black patients tend to spend less on healthcare, the algorithm had effectively been trained to recommend roughly half as much additional care for Black patients as for equally sick White patients.
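The mechanism here is often called proxy-label bias: the model is trained to predict spending because spending data is readily available, even though the thing we actually care about is health need. The sketch below is an invented, minimal illustration of that gap; the numbers and field names are made up and do not describe Optum’s actual model.

```python
# Two hypothetical patients with the same underlying health need but
# different past healthcare spending (for example, because of unequal
# access to care). All numbers are invented; this is not Optum's model.
patients = [
    {"id": "A", "chronic_conditions": 4, "past_spending": 12_000},
    {"id": "B", "chronic_conditions": 4, "past_spending": 6_000},
]

def risk_by_spending(patient):
    # Proxy target: predicted future cost stands in for "health need".
    return patient["past_spending"]

def risk_by_need(patient):
    # A target closer to what we actually care about: how sick the patient is.
    return patient["chronic_conditions"]

# Ranking by the spending proxy puts two equally sick patients far apart;
# ranking by need treats them the same.
for p in patients:
    print(p["id"],
          "spending-based risk:", risk_by_spending(p),
          "need-based risk:", risk_by_need(p))
```

The point of the toy example is simply that the choice of training target, not just the training data, can bake inequity into a system.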

As Tomas Weber points out, this case demonstrates why we shouldn’t blindly trust AI with our health; the reasons have more to do with the similarities between humans and AI models than with the differences between the two.

The Impact of Biased AI

Judging by the information above, we can draw a few conclusions about the impact of biased AI. Biased AI diagnostics can lead to serious health disparities. If certain groups don’t get accurate diagnoses, they can’t receive the right treatments, which widens the health gap. And this isn’t just a theoretical issue: looking back at the real-world examples above, it’s happening now, and it affects real lives.

Trust is another casualty of biased AI. Patients who feel they’re being treated unfairly by AI systems are less likely to trust and engage with healthcare providers, which further worsens health disparities. And let’s not forget the legal and ethical implications: discriminatory AI practices can violate patient rights and ethical standards of care.

How Do We Fix This?

The good news is there are ways to tackle these biases.

First, Differentiate Between Desirable[2] and Undesirable[3] Biases: Identify and ensure the inclusion of beneficial biases while eliminating harmful ones in AI development.

Second, Raise Awareness of Unintended Biases: Educate the scientific community, technology industry, policymakers, and the general public about unintended biases in AI.

Third, Implement Explainable Algorithms: Develop algorithms that provide clear, understandable explanations for users, incorporate integrated bias detection systems, and employ mitigation strategies validated through appropriate benchmarking (see the sketch after this list).

And lastly, Integrate Ethical Considerations: Embed key ethical considerations at every stage of technological development to ensure systems prioritize the wellbeing and health of the population.
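On the third point, one basic form of bias detection is to compare error rates across demographic groups before a model is deployed. The sketch below uses synthetic labels and invented group names purely for illustration; it is a minimal audit idea, not a validated benchmarking pipeline.

```python
from collections import defaultdict

# Synthetic evaluation records: (group, true_label, predicted_label),
# where 1 means the disease is present and 0 means it is absent.
records = [
    ("group_a", 1, 1), ("group_a", 1, 1), ("group_a", 1, 0), ("group_a", 0, 0),
    ("group_b", 1, 0), ("group_b", 1, 0), ("group_b", 1, 1), ("group_b", 0, 0),
]

# False negative rate per group: missed diagnoses among truly sick patients.
misses, sick = defaultdict(int), defaultdict(int)
for group, truth, prediction in records:
    if truth == 1:
        sick[group] += 1
        if prediction == 0:
            misses[group] += 1

for group in sick:
    fnr = misses[group] / sick[group]
    print(f"{group}: false negative rate = {fnr:.0%}")
# A large gap between groups is a signal to investigate before deployment.
```

In a real system this kind of check would be run on properly collected evaluation data and alongside other fairness metrics, but even a simple per-group error comparison can surface the disparities described above.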

Conclusion

AI has the power to transform healthcare, but we need to be mindful of its implementation to avoid perpetuating biases. By addressing the sources of bias in AI diagnostic algorithms and promoting strategies for equity, we can work towards a healthcare system that benefits everyone fairly. It’s up to all stakeholders, including developers, policymakers, and healthcare providers, to prioritize these efforts and ensure that AI advancements benefit everyone, regardless of their background or identity.

So, the next time you hear about a breakthrough in AI healthcare, remember: it’s not just about the tech. It’s about making sure that tech works for all of us. Let’s strive for an equitable healthcare future where AI serves everyone equally and fairly.


Footnotes

  1. “Optum is the business services arm of UnitedHealth Group.”
  2. “A desirable bias implies taking into account sex and gender differences to make a precise diagnosis and recommend a tailored and more effective treatment for each individual.”
  3. “An undesirable bias is that which exhibits unintended or unnecessary sex and gender discrimination.”
