What it will take to weed out AI bias in healthcare

Experts on a panel hosted by the Digital Medicine Society and the Consumer Technology Association discuss how healthcare organizations can use artificial intelligence without amplifying inequities.
By Emily Olsen


Artificial intelligence is being used across the healthcare industry with the goal of delivering care more efficiently and improving outcomes for patients. But if health systems and vendors aren't careful, AI has the potential to support biased decision-making and make inequities even worse.

"Algorithmic bias really is the application of an algorithm that compounds existing inequity," Sarah Awan, equity fellow with CEO Action for Racial Equity and senior manager at PwC, said in a seminar hosted by the Digital Medicine Society and the Consumer Technology Association.

"And that might be in socioeconomic status, race and ethnic background, religion, gender, disability, sexual orientation, etc. And it amplifies inequities in health systems. So while AI can help identify bias and reduce human bias, it really also has the power for bias at scale in very sensitive applications."

Healthcare is behind other industries when it comes to using data analytics, said Milissa Campbell, managing director and health insights lead at NTT DATA Services. But it's important to figure out the basics before an organization rushes into AI. 

"Having a vision to move to AI should absolutely be your vision, you should already have your plan and your roadmap and be working on that. But address your foundational challenges first, right?" she said. "Because any of us who've done any work in analytics will say garbage in, garbage out. So address your foundational principles first with a vision towards moving to a very unbiased, ethically managed AI approach."

Carol McCall, chief health analytics officer at ClosedLoop.ai, said bias can creep in from the data itself, but it can also come from how the information is labeled. A common problem is that organizations use cost as a proxy for health status; the two may be correlated, but they are not the same measure.

"For example, the same procedure if you pay for it under Medicaid, versus Medicare, versus a commercial contract: the commercial contract may pay $1.30, Medicare will pay $1 and Medicaid pays 70 cents," she said. 

"And so machine learning works, right? It will learn that Medicaid people and the characteristics associated with people that are on Medicaid cost less. If you use future cost, even if it's accurately predicted as a proxy for illness, you will be biased." 

Another issue McCall sees is that healthcare organizations are often looking for negative outcomes like hospitalizations or readmissions, and not the positive health results they want to achieve.

"And what it does is it makes it harder for us to actually assess whether or not our innovations are working. Because we have to sit around and go through all the complicated math to measure whether the things didn't happen, as opposed to actively promoting if they do," she said.

McCall also notes that, for now, many organizations aren't looking for outcomes that might take years to manifest. Campbell works with health plans and said that, because members may move to a different insurer from one year to the next, it doesn't always make financial sense for plans to consider longer-term investments that could improve health for the entire population.

"That is probably one of the biggest challenges I face is trying to guide health plan organizations who, from a one standpoint, are committed to this concept, but [are] limited by the very hard and fast ROI near-term piece of it. We need to figure [this] out as an industry or it will continue to be our Achilles heel," Campbell said.

Healthcare organizations that are working to counteract bias in AI should know they're not alone, Awan said. Everyone involved in the process has a responsibility to promote ethical models, including vendors in the technology sector and regulatory authorities.

"I don't think anyone should leave this call feeling really overwhelmed that you have to have this problem figured out just yourself as a healthcare-based organization. There is an entire ecosystem happening in the background that involves everything from government regulation to if you're working with a technology vendor that's designing algorithms for you, they will have some sort of risk mitigation service," she said. 

It's also important to look for user feedback and make adjustments as circumstances change.

"I think that the frameworks need to be designed to be contextually relevant. And that's something to demand of your vendors. If they come and try to sell you a pre-trained model, or something that's kind of a black box, you should run, not walk, to the exit," McCall said.

"The odds that that thing is not going to be right for the context in which you are now, let alone the one that your business is going to be in a year from now, are pretty high. And you can do real damage by deploying algorithms that don't reflect the context of your data, your patients and your resources."
