As excitement grows around artificial intelligence (AI) and its potential benefits in healthcare, concerns remain about the technology's pitfalls. Chief among them is the possibility that the technology could perpetuate racial bias. Various facets of this issue, along with possible solutions, were covered in a recent feature in BMJ.

The author focuses on the possible use of AI in the UK health setting and what it could mean for patients of diverse racial and ethnic backgrounds. AI is already being developed, tested, and used in applications ranging from the diagnosis of dermatologic and retinal diseases to the investigation of transplant failures. The relative ease offered by automation is attractive to many people working in healthcare, especially in diagnostics. A total of 5 medical centers set to open in the UK in 2019 planned to use AI to speed diagnosis.

In a country where long patient wait times are putting a major strain on the health system, AI seems a promising tool for optimizing care delivery. Some even argue that it could help address existing biases in healthcare: AI can analyze huge amounts of data from entire populations, whereas past medical research has focused predominantly on patients who are older and white. Some AI initiatives use hospital inpatient data rather than clinical trial data, circumventing the sampling limits of traditional research. This could also pave the way for using global data to better understand certain diseases.

However, there are problems with the data analysis methods behind many AI systems. For one, there are no formal requirements for ensuring that the data used in AI research are representative of patient populations; as of now, it falls to the companies developing and deploying the technology to make their data racially and ethnically inclusive. According to the author, recent research using AI has failed in this regard. An AI algorithm created at Stanford University in Palo Alto, California, recognized potentially problematic moles as accurately as trained dermatologists, but only in white patients.

If more AI technologies are developed and implemented without transparency about their underlying data, the technology could become riddled with internalized biases unknown to practitioners or patients. This could lead to widespread underdiagnosis or misdiagnosis, especially among patients who are members of minority groups. The author argues that, moving forward, emphasis should be placed on the equitability of AI used in healthcare settings. That means ensuring that the data used to develop these tools are representative and, when they are not, making that fact clear to practitioners and patients.

Reference

Noor P. Can we trust AI not to further embed racial bias and prejudice? BMJ. 2020;368:m363. doi:10.1136/bmj.m363

This article originally appeared on Medical Bag