Avoiding AI Algorithmic Bias in Healthcare

‘Algorithm’ is a term that avid users of social media should be familiar with. While an algorithm does an efficient job of feeding us content it knows we are interested in, it is a double-edged sword: it can also produce algorithmic bias.

The prevalence of technology in our daily lives has allowed algorithmic bias to spread largely unchecked. According to the Center for Applied AI at Chicago Booth, it affects many parties, including but not limited to healthcare providers, insurers, technology companies, and regulators.

It is especially worrying that this phenomenon is growing in the healthcare industry. Essentially, algorithms are designed to assist with decision-making. Patients are assigned scores to determine who requires immediate attention: the higher the score, the more severe the condition, and the sooner healthcare workers should attend to that patient. Conversely, two people with the same score are assumed to have the same basic needs. However, the algorithm fails if it takes into account factors like the colour of a patient’s skin, which should play no part in determining care.
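As a rough illustration (the field names, weights, and attribute list below are hypothetical, not drawn from any real triage system), a scoring function of this kind can be sketched so that protected attributes are explicitly dropped before the score is computed:

```python
# Hypothetical sketch of a triage score that uses only clinical inputs
# and explicitly removes protected attributes before scoring.

PROTECTED_ATTRIBUTES = {"race", "skin_colour", "gender", "ethnicity"}

def priority_score(patient):
    """Return a triage score; a higher score means more urgent care."""
    # Drop protected attributes so they can never influence the score.
    clinical = {k: v for k, v in patient.items()
                if k not in PROTECTED_ATTRIBUTES}
    score = 0.0
    score += 2.0 * clinical.get("pain_level", 0)       # 0-10 scale
    score += 5.0 if clinical.get("chest_pain") else 0.0
    score += 3.0 * clinical.get("abnormal_vitals", 0)  # count of abnormal vitals
    return score

# Two patients with identical clinical needs receive identical scores,
# regardless of the protected attributes they differ on.
a = priority_score({"pain_level": 7, "chest_pain": True, "race": "A"})
b = priority_score({"pain_level": 7, "chest_pain": True, "race": "B"})
```

The point of the sketch is structural: equal clinical inputs must yield equal scores, which is exactly the property a biased model violates.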

Mitigating AI Bias

To mitigate the issue, here are four strategies for addressing algorithmic bias.

First, take inventory of the algorithms. Developers should examine how and when each algorithm is used to get a proper grasp of the framework. A team should then be appointed to monitor the data, especially when it involves a more diverse group of people.

Next, identify the algorithmic bias. To spot the issue effectively, you have to be familiar with the target audience, specifically the factors the system is weighing. To organise the information clearly, consider using a table for stakeholders to answer three questions about each algorithm: the ideal target, the actual target, and the risk of bias.
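One possible way to capture that audit table in code (a sketch; the algorithm name and example entries are invented, but the three fields mirror the questions above):

```python
# Hypothetical bias-audit table: one row per algorithm, with the three
# questions stakeholders are asked to answer.
audit_rows = [
    {
        "algorithm": "readmission_risk_model",
        "ideal_target": "patients most likely to benefit from follow-up care",
        "actual_target": "patients with the highest past healthcare spending",
        "risk_of_bias": "spending proxies under-represent under-served groups",
    },
]

def flag_mismatches(rows):
    """Return the algorithms whose actual target diverges from the ideal one,
    since such a mismatch is a common source of algorithmic bias."""
    return [r["algorithm"] for r in rows
            if r["ideal_target"] != r["actual_target"]]
```

A simple check like `flag_mismatches(audit_rows)` surfaces every algorithm whose stated goal and actual optimisation target have drifted apart.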

It is also important to filter out inefficient algorithms and purge them from the system to prevent any lag.

Lastly, algorithms should be monitored and audited regularly to ensure they have not deviated from their purpose, which can happen because of their self-learning behaviour.
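A minimal sketch of one such monitoring check (the function, groups, and tolerance are assumptions for illustration): compare the mean score a deployed model assigns to each demographic group, and flag the model for a manual audit when the gap grows too large.

```python
# Hypothetical drift check: a large gap between group-level mean scores
# is a signal that the model has deviated and warrants a manual audit.
from statistics import mean

def audit_score_gap(scores_by_group, tolerance=1.0):
    """Return True if the largest gap between any two groups' mean
    scores exceeds the tolerance, i.e. the model needs an audit."""
    means = [mean(scores) for scores in scores_by_group.values()]
    return max(means) - min(means) > tolerance

needs_audit = audit_score_gap({
    "group_a": [6.0, 7.0, 8.0],   # mean 7.0
    "group_b": [3.0, 4.0, 5.0],   # mean 4.0 -> gap of 3.0 exceeds tolerance
})
```

Run on a schedule against fresh production data, a check like this catches drift between audits rather than at the next annual review.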

While it may seem like algorithmic bias is a side effect of algorithms that we simply have to live with, Vox disagrees.

According to Christina Animashaun, Vox journalist, “these systems can be biased based on who builds them, how they’re developed and how they’re ultimately used.”

For example, Amazon’s résumé-screening tool was built on decades of résumés, which tended to come from men, reflecting the more prevalent sexism of past hiring. As a result, the system picked up the same discrimination against women.

Perhaps the best way to mitigate algorithmic bias is for developers to be more conscious of their own biases.
