
Why Most Fairness Metrics Don’t Work in Healthcare AI/ML

Selecting an appropriate definition of fairness is difficult for healthcare algorithms because they are applied to a wide range of problems. Read the paper to learn why different problems call for different definitions of fairness, and which fairness metric is best suited to population health AI/ML.

While most existing fairness metrics aren’t appropriate for assessing algorithms that inform population health decisions, healthcare organizations still have a responsibility to ensure their algorithms help distribute limited resources fairly. Learn about Group Benefit Equality (GBE), a new fairness metric developed to address the unique challenges of assessing fairness in a healthcare setting. With GBE, healthcare organizations now have a fairness metric for ensuring that their predictive models reduce health disparities rather than exacerbate them.
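The white paper gives the formal definition of GBE; purely as an illustration, the sketch below assumes a "group benefit" is measured as the ratio of people a model flags for an intervention to the people in that group who would actually benefit, compared across demographic groups. The function name, column names, and example data are hypothetical, and the paper's definition may differ in detail.

```python
import numpy as np
import pandas as pd

def group_benefit_ratios(df, group_col, y_true_col, y_pred_col):
    """For each group, compute an illustrative 'benefit ratio': the number of
    people the model flags for an intervention divided by the number who would
    actually benefit. A ratio near 1.0 suggests the group receives roughly its
    fair share of the resource; well below 1.0 suggests it is under-served.

    This is a hypothetical sketch, not the white paper's formal GBE definition.
    """
    ratios = {}
    for group, sub in df.groupby(group_col):
        flagged = sub[y_pred_col].sum()   # predicted to need the intervention
        in_need = sub[y_true_col].sum()   # would actually benefit
        ratios[group] = flagged / in_need if in_need else np.nan
    return ratios

# Hypothetical usage with a care-management model's output:
df = pd.DataFrame({
    "group":  ["A", "A", "A", "B", "B", "B"],
    "y_true": [1, 0, 1, 1, 1, 0],   # would benefit from outreach
    "y_pred": [1, 0, 1, 1, 0, 0],   # flagged by the model
})
print(group_benefit_ratios(df, "group", "y_true", "y_pred"))
# e.g. {'A': 1.0, 'B': 0.5} -- group B is flagged at half the rate of its need
```

Comparing these ratios across groups, rather than comparing raw error rates, keeps the focus on whether each group actually receives the resource in proportion to its need.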

Read the paper to learn: 

  • The problem with existing approaches to measuring fairness
  • What an appropriate fairness metric must address
  • Why Group Benefit Equality (GBE) is the fairness metric best suited to population health AI/ML

Read the white paper



Make AI/ML a core element of your care strategy.

Get in touch today to see the ClosedLoop platform in action.