Why AI in Healthcare Does Not Have to be a “Black Box”

April 24, 2019

It’s not enough for predictive models to be accurate — they must also be explainable. One of the main barriers to AI adoption for hospitals and clinicians is the concern that it’s a “black box,” making it difficult to trust the results. Andrew Eye joins the DataPoint podcast and explains how ClosedLoop unpacks the “black box” of AI by allowing data scientists and clinicians to understand why and how factors impact a model’s predictions, driving faster adoption and better clinical results.

Check it out and let us know what you think! http://bit.ly/2v4SABF

Interested in hearing more about the importance of explainability and ClosedLoop? Check out these other podcasts and videos featuring Andrew:

