4 Ways to Mitigate Bias and Prioritize Patients

All data is collected by humans or human-designed systems, and humans have bias. Understanding, reducing, and disclosing bias is important for building and training any model, but the stakes are exceptionally high in healthcare, where ML systems drive real interventions, guide consequential medical decisions, and assist physicians and other healthcare practitioners in diagnosing and treating individual patients.

Prioritizing the patient means minimizing bias.

For any team building models that assist in patient care, there’s an obligation to manage bias thoroughly and professionally as a fundamental part of their process. At KUNGFU.AI, we take this responsibility seriously, and we are continually building on our approach to identifying and managing potential bias in our effort to prioritize patient outcomes. 

A Preventive Approach to Managing Bias 

Much as we think about preventive care in medicine, the KUNGFU.AI team treats managing bias as a critical preventive measure throughout the ML lifecycle.

Let’s consider oral health, for example. A routine that includes daily brushing, flossing, and regular dental check-ups is the best way to prevent major dental work down the road. You wouldn’t brush your teeth just once and expect a lifetime of fresh breath and healthy teeth. 

Similarly, preventing bias so that an ML system is effective for all patients takes a deliberate, consistent routine. There’s no single quick fix, but rather a series of preventive measures that must be applied routinely at each step of the ML lifecycle. A proactive approach to managing bias helps ensure the ongoing health of a model and prevents the need for major corrective action at deployment.

With this preventive approach in mind, our team has established some practical guidelines in an effort to minimize bias and prioritize meaningful patient outcomes across the life of a project. 

4 Ways to Mitigate Bias 

1. Expect bias at every step.

Start by recognizing what you don’t know. A critical first step for researchers, data scientists, and engineers working in this industry is to build and maintain an understanding of the most common biases in healthcare. Bias is not a bug. It’s the result of being human, and it should be expected throughout the lifecycle of a project—from design through data collection, training, deployment, and post-deployment monitoring. 

2. Build a diverse team.

Any population segment can be negatively affected by bias, but race and gender bias in the healthcare system have led to particularly critical inequalities in care. Because this bias reflects narrow human thinking, an important step in mitigating it is building a team with diverse backgrounds and perspectives. A diverse data science team, forming hypotheses and scouring the data with varied experiences and approaches, is more likely to find bias than a group of people all thinking the same way.

But a truly diverse team does not come together on its own. Leaders must understand why a diverse team is necessary and be fully committed to inclusive hiring. If you’re looking for a place to start, KUNGFU.AI co-founder Stephen Straus started The Diversity Pledge as a way to support startup founders, investors, and executives in their commitment to inclusive hiring practices.

3. Engage subject matter experts from the start.

Data science teams often bring on subject matter experts only after their model has been trained. But bias starts early, especially with data as complex and nuanced as patient health. Often, collected data doesn’t account for all of a patient’s symptoms. Sometimes a model “learns” from training data that differs from the data it will encounter in a real-world patient care setting.

These issues can arise easily if the team lacks deep medical expertise, and they can lead to disastrous results for real patients, such as errors in analyzing chest X-rays and brain imaging.

One way to avoid these unintended outcomes is to bring in subject matter experts early in the project. Starting at data collection, an expert should review datasets thoroughly to confirm that they are complete and fully representative of the patient population.
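
To make that review concrete, here is a minimal sketch of one such representativeness check in Python, comparing a dataset’s demographic mix against published population statistics with a chi-square goodness-of-fit test. The age bands, counts, and proportions below are hypothetical placeholders, not figures from any real project.

```python
import numpy as np
from scipy.stats import chisquare

# Hypothetical counts of one demographic attribute (age band) in the
# collected dataset: <30, 30-50, 50-70, 70+.
dataset_counts = np.array([1200, 3400, 2900, 1100])

# Hypothetical proportions of the same bands in the target patient
# population, e.g., from census or registry data vetted with the expert.
population_props = np.array([0.18, 0.34, 0.31, 0.17])

# Scale the population proportions to the dataset size, then test whether
# the observed counts plausibly came from that distribution.
expected_counts = population_props * dataset_counts.sum()
stat, p_value = chisquare(f_obs=dataset_counts, f_exp=expected_counts)

# A small p-value suggests the dataset's mix differs from the population's,
# flagging the attribute for expert review before training begins.
print(f"chi-square = {stat:.1f}, p = {p_value:.4g}")
```

In practice, the subject matter expert helps decide which attributes to test and which reference statistics can be trusted.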

4. Evaluate, monitor, and do both continuously.

A model that leads to meaningful patient outcomes needs more than good training. An ML solution in healthcare must perform well—without bias—for all relevant demographics using real patient data that is subject to the infinite, ever-changing variables of the healthcare landscape. Imagine training a model today with lung disease data gathered before the COVID-19 pandemic. The new medical realities of a global pandemic would require thorough testing and calibration to investigate and address how new data has drifted away from the original dataset. 
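
As a rough illustration of how a team might detect that kind of drift, the sketch below uses a two-sample Kolmogorov-Smirnov test to compare one feature’s distribution in the original training data against recent production data. The synthetic arrays are stand-ins, not real clinical data, and a production system would run such checks across many features.

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)

# Stand-ins for one continuous feature, sampled from the pre-pandemic
# training data and from recent production data respectively.
training_feature = rng.normal(loc=0.0, scale=1.0, size=5000)
production_feature = rng.normal(loc=0.4, scale=1.2, size=5000)

# The two-sample KS test asks whether both samples could have come from
# the same distribution; a small p-value signals drift, and with it the
# need to investigate, recalibrate, or retrain the model.
stat, p_value = ks_2samp(training_feature, production_feature)
print(f"KS statistic = {stat:.3f}, p = {p_value:.2e}")
```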

Ensuring that estimated probabilities from your model match actual population incidences requires rigorous, ongoing testing and monitoring. 
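
One common way to run that check, sketched below with scikit-learn, is to bin held-out predictions and compare the mean predicted probability in each bin against the observed incidence. The synthetic outcomes and probabilities here are placeholders for a real evaluation set.

```python
import numpy as np
from sklearn.calibration import calibration_curve
from sklearn.metrics import brier_score_loss

rng = np.random.default_rng(1)

# Placeholder model probabilities and outcomes; outcomes are drawn so
# this stand-in "model" is well calibrated by construction.
y_prob = rng.uniform(0, 1, size=2000)
y_true = (rng.uniform(0, 1, size=2000) < y_prob).astype(int)

# For a well-calibrated model, the observed incidence in each bin should
# track the mean predicted probability in that bin.
observed, predicted = calibration_curve(y_true, y_prob, n_bins=10)
for p, o in zip(predicted, observed):
    print(f"predicted {p:.2f} -> observed {o:.2f}")

# The Brier score summarizes probability quality in one number (lower is
# better); in practice you would also track it per demographic group.
print(f"Brier score = {brier_score_loss(y_true, y_prob):.4f}")
```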

This means testing against existing population data before training, at deployment, and regularly for the life of the model. Planning ahead for a regular testing and calibration schedule is key to reducing bias and maintaining a healthy, functional ML system.
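
As one example of what a recurring check on that schedule might look like, the sketch below computes sensitivity (recall) separately for each demographic group, since a gap between groups is an early warning that the model serves some patients worse than others. The group labels, metric choice, and synthetic data are all hypothetical.

```python
import numpy as np
from sklearn.metrics import recall_score

rng = np.random.default_rng(2)

# Placeholder evaluation data: group membership, true labels, predictions.
groups = rng.choice(["group_a", "group_b", "group_c"], size=3000)
y_true = rng.integers(0, 2, size=3000)
y_pred = rng.integers(0, 2, size=3000)

# Compute sensitivity per group; in a scheduled job, a gap beyond an
# agreed threshold would trigger investigation and possible recalibration.
for g in np.unique(groups):
    mask = groups == g
    print(f"{g}: sensitivity = {recall_score(y_true[mask], y_pred[mask]):.2f}")
```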

Read more

Learn more about prioritizing patient outcomes through responsible machine learning in the healthcare industry. Download our newest white paper.