
Data, Security, and Ethical Risks of AI Use in Healthcare


Artificial Intelligence (AI) has promised to revolutionize our modern world. And while we’re still barely scratching the surface of what AI will ultimately be able to do, providers across the healthcare industry are already leveraging AI to improve patient outcomes while strengthening their bottom lines.

And while most doctors and healthcare providers are known to follow some variation of the Hippocratic Oath, it’s not common for data scientists or developers to make a similar pledge. That said, more than a few have been known to quote Jeff Goldblum’s character, Ian Malcolm, from Jurassic Park and opine, “...your scientists were so preoccupied with whether or not they could that they didn't stop to think if they should.”

Movie quotes aside, new technology — especially tech as data-hungry as AI — brings with it renewed responsibilities around bias, data security, privacy, and ethical use.

While technology, especially AI, is often introduced to increase efficiency or lower costs, there are very real concerns that directly impact patient outcomes and lives. Because of this, patients must be prioritized at each step of developing and deploying AI and machine learning (ML) systems. This prioritization leads to improved patient outcomes, but it also adds complexity at every stage — requiring deeper expertise in both healthcare and data privacy and security.

The Sheer Amount of Data

Unlike many sectors, healthcare produces a constantly growing volume of data for each patient, and has for years. This includes clinical notes, diagnoses, medical imaging, financial records, and many other record types. This is great news for algorithms that glean insight and uncover new relationships from large datasets, and it will likely lead to incredible breakthroughs in the future. But it also creates ongoing and evolving privacy and ethical concerns.

And while healthcare is carefully regulated in most countries — including regulatory schemes like HIPAA in the United States — compliance requires an active commitment. Additionally, AI is evolving so fast that laws are often outdated or leave large gray areas in which AI may be operating. This is why it is critical that AI systems, the companies that build and deploy them, and the healthcare organizations that use them all take their obligation to act in patients’ best interests seriously.

The Eternal Vigilance of Cybersecurity

In addition to privacy concerns, keeping data secure is one of the most complex challenges facing any organization today. Where and how data is stored is only one part of it. Actively watching for and mitigating attacks on public-facing or high-value targets requires a highly specialized skill set and a constant state of alertness. How the data needs to be accessed and used, including by AI, shapes these efforts as well. One way to help keep data secure is to use a data federation model.

Federated Data

AI and ML can learn and accomplish more with larger datasets, but connecting to multiple data sources creates serious security and integrity concerns. One way to overcome these concerns is to allow federated access. This means that the AI and the data scientists running it never take possession of the data; instead, only the weights and biases of the model are transferred. Doing this not only allows each provider to maintain its own data security standards, but also lets the AI learn from data across providers — ultimately producing better patient outcomes from a broader dataset.
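The mechanics of that idea can be sketched in a few lines. The following is a minimal, hypothetical illustration of federated averaging — not any specific product or protocol — in which each simulated provider trains on its own records locally and shares only updated model weights, never the raw data; all names, data, and parameters here are invented for the example.

```python
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=20):
    """Simulates one provider training a simple linear model on-site.

    The raw records (X, y) never leave this function; only the
    updated weights are returned to the coordinator.
    """
    w = weights.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)  # mean-squared-error gradient
        w -= lr * grad
    return w

def federated_round(global_weights, providers):
    """One round: each site trains locally, the server averages the weights."""
    updates = [local_update(global_weights, X, y) for X, y in providers]
    return np.mean(updates, axis=0)

# Two hypothetical providers whose (synthetic) patient data stays on-site.
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])  # the relationship both sites' data reflects
providers = []
for _ in range(2):
    X = rng.normal(size=(50, 2))
    providers.append((X, X @ true_w))

# The shared model improves over rounds without any data changing hands.
w = np.zeros(2)
for _ in range(10):
    w = federated_round(w, providers)
```

In a real deployment the averaging step would run on a coordinating server, communication would be encrypted, and techniques such as secure aggregation or differential privacy would typically be layered on top; this sketch only shows the core exchange of parameters instead of records.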

The Human Factor

Perhaps the greatest concern in using AI in healthcare isn’t in the tactical deployments or the systems devised to maintain security. It’s making sure that patients and their data are treated ethically. Prioritizing the patient in healthcare AI means understanding that each choice made in deploying technology or leveraging patient data ultimately impacts their care. It’s important to make sure that introducing new tech doesn’t negatively impact care while a model is being built or a new breakthrough pursued.

But it’s more than that. Prioritizing patients means AI deployments come with user-friendly interfaces that doctors, nurses, and techs can understand and use quickly. It means that data, diagnoses, treatments, or other clinical conclusions aren’t delayed by a system running at max capacity or focused on crunching numbers for a secondary purpose. It means listening to both patient and healthcare provider feedback so issues are quickly identified and resolved.

It All Comes Down to Trust

AI promises to transform how we regard human health. At the same time, reducing cost and improving efficiency will always be a concern, especially for large healthcare organizations. But none of this happens without trust. Patients have to trust that their provider is acting in their best interest. Healthcare providers have to trust that these solutions are created with the same commitment to patient responsibility.

This is why our core values at KUNGFU.AI prioritize both sound data science and the people our solutions will ultimately benefit. To learn more about how we prioritize patients by creating responsible machine learning for the healthcare industry, read our latest whitepaper.