How to counter bias when creating a model

 


“Essentially, all models are wrong, but some are useful” (1). These are the often-quoted words of George Box, a noted British statistician who has been described as one of the great statistical minds of the 20th century (2).

This doesn’t mean that models – which can be defined as simplified descriptions of a system or process, used to assist calculation and prediction – aren’t useful, though. Indeed, Box went on to add an important caveat: “the practical question is how wrong do they have to be to not be useful”.

When done well, models provide us with a useful approximation for the system they represent. They can help us to better understand how the systems they study function in the real world, allowing us to make more informed decisions on how to best respond to the situation at hand.

 

Models in healthcare

Models have an important role to play in improving healthcare. By harnessing the power of Artificial Intelligence (AI), they can be used to make scientific discoveries, monitor disease, and help us better understand risk factors for different communities.

Rather than accepting models at face value, it’s important to scrutinise the information they provide. In healthcare, particular focus is required to address bias – the influence of implicit stereotypes and prejudices.

Bias in data occurs when components of a dataset are over- or under-weighted, or when particular groups are over- or under-represented. When this happens, it can lead to skewed outcomes, systematic prejudice, and low accuracy (3).

AI can suffer from bias, and this has striking implications for healthcare (4). Bias in algorithms can compound existing inequities related to socioeconomic status, race, ethnic background, religion, gender, disability, or sexual orientation, and amplify inequities already present in health systems (5).

Models are only as accurate as the data they use. If a particular cohort is absent from, or misrepresented in, the data used to build an AI model, the model can end up reinforcing bias.

It’s widely accepted, for example, that people with lower incomes have higher health risks than those with higher incomes. A model built on data collected from an expensive private clinic is biased because it represents only that particular patient population, and its outputs will transfer poorly to patients outside it (6).

 

Addressing bias is critical to Māori health in Aotearoa New Zealand

As acknowledged by numerous studies, Māori suffer from worse health outcomes in Aotearoa New Zealand, even when compared with people of comparable economic and social context.

Some of this reflects bias in the data collected and used for decision making, and in the analysis that leads to care and policy decisions. Addressing this bias in our modelling and analysis is a critical step towards building a more equitable health system.

Steps to address bias

So how do we ensure that models remain, in the words of George Box, useful, and as free from bias as possible? There are a number of areas where bias can affect a model’s outputs – and there are also steps that can be taken to mitigate these.

1. Data

There’s no shortage of available data in New Zealand. However, when you’re considering how to source data, it’s important to remember that data can only be used for the purpose for which it was collected; any other use is a “secondary purpose” and requires additional consent.

Bias in data can come from different sources, including historical bias, data imbalance, missingness, and human prejudice. To counter its impact, it’s important to build datasets from diverse cohorts that represent the population your model will serve.
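
To make that concrete, here is a minimal sketch in Python of a representation audit that compares how groups appear in a dataset with their share of the population the model is meant to serve. The column name “ethnicity”, the group labels, and all proportions are illustrative placeholders rather than real figures.

```python
# A representation audit: compare the share of each group in a dataset with
# its share of the population the model is meant to serve. The column name
# "ethnicity", the groups "A"/"B"/"C", and all proportions are illustrative
# placeholders, not real figures.
import pandas as pd

def representation_audit(df: pd.DataFrame, column: str, reference: dict) -> pd.DataFrame:
    """Report, for each group, its share in the data, its share in the
    reference population, and the gap between the two."""
    observed = df[column].value_counts(normalize=True)
    rows = []
    for group, expected in reference.items():
        actual = float(observed.get(group, 0.0))
        rows.append({
            "group": group,
            "share_in_data": round(actual, 3),
            "share_in_population": expected,
            "gap": round(actual - expected, 3),
        })
    return pd.DataFrame(rows)

# Example usage with made-up records and made-up population shares.
cohort = pd.DataFrame({"ethnicity": ["A"] * 80 + ["B"] * 15 + ["C"] * 5})
reference_shares = {"A": 0.60, "B": 0.25, "C": 0.15}
print(representation_audit(cohort, "ethnicity", reference_shares))
```

Large gaps surfaced by a check like this are a prompt to seek more representative data or, at minimum, to document the limitation before the model is used.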

Machine learning models and systems can reinforce bias-related harms. When training models on historically collected data, or drawing any conclusion from data, be mindful of potential bias with respect to sensitive attributes such as age, ethnicity, and gender.
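
One practical way to stay mindful of this is to report a model’s performance separately for each group defined by a sensitive attribute. The sketch below assumes you already have true labels, model predictions, and a group label for each record; every name and value shown is a placeholder for illustration only.

```python
# A per-group performance check: report sensitivity (recall) and precision
# separately for each group defined by a sensitive attribute, so that gaps
# between groups are visible. All names and values are placeholders.
import numpy as np
from sklearn.metrics import precision_score, recall_score

def per_group_report(y_true, y_pred, groups):
    """Print sensitivity and precision for each group in `groups`."""
    y_true, y_pred, groups = map(np.asarray, (y_true, y_pred, groups))
    for group in np.unique(groups):
        mask = groups == group
        sensitivity = recall_score(y_true[mask], y_pred[mask], zero_division=0)
        precision = precision_score(y_true[mask], y_pred[mask], zero_division=0)
        print(f"{group}: n={mask.sum()}, sensitivity={sensitivity:.2f}, precision={precision:.2f}")

# Illustrative call with made-up labels, predictions, and group membership.
per_group_report(
    y_true=[1, 0, 1, 1, 0, 1, 0, 0],
    y_pred=[1, 0, 0, 1, 0, 1, 1, 0],
    groups=["A", "A", "A", "A", "B", "B", "B", "B"],
)
```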

2. Perspectives

It’s very important to consider a wide range of perspectives when developing a model, and to continually engage subject matter experts, such as clinicians, and end users throughout your project.

Co-designing from the start, and having a clinical champion or sponsor who can ensure that what you develop fits into existing workflows and is actually used, will set you up for success. This can involve clinicians advising on the project or taking part in the iterative development of models.

You should also engage end users (usually consumers and clinicians) early in co-design, to understand how outputs can be made meaningful for those who will use them or be affected by them.

3. Data scientists

A data scientist can explain what steps are taken, and at which points, to counter bias. Mitigating bias and improving fairness is mostly not a technical challenge, though, but a much broader systemic one. Including diverse voices and perspectives in data science work, for example by involving Māori researchers in your project, can help address this challenge.

Data scientists alone are unlikely to have the full context or cultural awareness needed to grasp what the data is telling them. This makes it particularly important to have a wide range of perspectives represented, including those of Māori and other ethnic groups, consumers, and people across age and ability ranges.

4. De-biasing

If you’re aware that your model contains bias, you can look to address this by introducing a weighting that minimises its impact. This method can make a model more equitable, in effect setting a standard within your algorithm that adjusts for the groups experiencing bias.

This effectively forces your model to account for under-represented groups and makes its results more applicable to them. De-biasing techniques are still relatively new in data science, though, and further research is needed to show that they reliably achieve their intended purpose (7).
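
As a rough illustration of the weighting idea described above, the sketch below assigns each training record a weight inversely proportional to the size of its group, then passes those weights to a standard scikit-learn classifier. The features, labels, and groups are simulated placeholders, and inverse-frequency weighting is only one simple option among several de-biasing approaches.

```python
# A sketch of the weighting idea above: weight each training record inversely
# to the size of its group, so under-represented groups carry more influence
# when the model is fitted. Groups, features, and labels are simulated
# placeholders, and inverse-frequency weighting is only one simple option.
import numpy as np
from sklearn.linear_model import LogisticRegression

def inverse_frequency_weights(groups):
    """Weight = total / (number_of_groups * group_count), so each group
    contributes roughly equally to the training loss regardless of its size."""
    groups = np.asarray(groups)
    values, counts = np.unique(groups, return_counts=True)
    weight_by_group = {v: len(groups) / (len(values) * c) for v, c in zip(values, counts)}
    return np.array([weight_by_group[g] for g in groups])

# Made-up training data: 90 records from group "A", only 10 from group "B".
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
y = rng.integers(0, 2, size=100)
groups = ["A"] * 90 + ["B"] * 10

model = LogisticRegression()
model.fit(X, y, sample_weight=inverse_frequency_weights(groups))
```

Whether a weighting like this actually improves outcomes for the groups concerned still has to be checked empirically, which is where the monitoring step below comes in.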

5. Monitoring

Deploying a model is never the end of the modelling work. Throughout its life cycle, a model and its operating environment need continuous oversight. After a model has been created and validated, you should monitor it to determine whether it continues to address the question it set out to answer.

It’s also important to monitor how the model performs on ‘real-life’ data, which may differ from the data used to train it, and whether it’s producing any unintended consequences that may lead to further bias.
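
A minimal sketch of what such monitoring could look like is below: per-group sensitivity is recomputed on recent ‘real-life’ data and compared with the figures recorded at validation time, flagging any group that has slipped by more than a chosen tolerance. The tolerance and all figures are illustrative assumptions, not recommendations.

```python
# A sketch of ongoing monitoring: recompute per-group sensitivity on recent
# 'real-life' data and flag any group that has slipped well below the level
# recorded at validation time. The tolerance and all figures are
# illustrative assumptions, not recommendations.
import numpy as np
from sklearn.metrics import recall_score

def monitor_drift(baseline, y_true, y_pred, groups, tolerance=0.10):
    """Return the groups whose current sensitivity has dropped more than
    `tolerance` below the validation baseline."""
    y_true, y_pred, groups = map(np.asarray, (y_true, y_pred, groups))
    flagged = {}
    for group, expected in baseline.items():
        mask = groups == group
        if mask.sum() == 0:
            continue  # no recent records for this group
        current = recall_score(y_true[mask], y_pred[mask], zero_division=0)
        if expected - current > tolerance:
            flagged[group] = {"validation": expected, "current": round(float(current), 2)}
    return flagged

# Illustrative check against made-up validation figures.
validation_sensitivity = {"A": 0.85, "B": 0.80}
print(monitor_drift(validation_sensitivity,
                    y_true=[1, 1, 0, 1, 1, 0],
                    y_pred=[1, 0, 0, 0, 1, 0],
                    groups=["A", "A", "A", "B", "B", "B"]))
```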

No model will be perfect, and there’s no silver bullet that can ensure models are free from bias. But it’s essential that every effort is made to address this potential issue.

By continuing to build and maintain models carefully, and by mitigating bias wherever possible, we can ensure that models remain a useful tool for building a more equitable health system and advancing healthcare for all.

 

(1). George E. P. Box and Norman R. Draper, Empirical Model-Building and Response Surfaces, Wiley (1987), p. 424

(2). https://www.significancemagazine.com/science/428-george-box-1919-2013-a-wit-a-kind-man-and-a-statistician

(3). https://www.statice.ai/post/data-bias-types

(4). https://www.hsph.harvard.edu/ecpe/how-to-prevent-algorithmic-bias-in-health-care/

(5). Artificial intelligence and algorithmic bias: implications for health systems, Trishan Panch, Heather Mattie and Rifat Atun, J Glob Health. 2019 Dec; 9(2): 020318.

(6). https://research.aimultiple.com/ai-bias-in-healthcare/

(7). https://www.nature.com/articles/s43856-021-00028-w