The Ideal Model Characteristics for Healthcare Applications

Analytics | 10/12/2022

Objective:

Interpretable machine learning is the practice of explaining machine learning and deep learning models in natural language and easy-to-understand representations, so that the explanations make sense to individuals with domain knowledge, such as clinicians and other domain experts.

Need:

Healthcare is a unique field where patients have individual needs based on their physiology, genetics, social circumstances, and other characteristics. Therefore, any application or model built for healthcare must be interpretable enough to provide clear explanations from both the modeling and the clinical perspective.

Components of interpretability:

According to "A Governance Model for the Application of AI in Health Care" (GMAIH) by Sandeep Reddy, Sonia Allan, Simon Coghlan, and Paul Cooper, any model built using Electronic Health Records (EHR) should have four characteristics:

  1. Fairness
  2. Transparency
  3. Trustworthiness
  4. Accountability

Fairness:

Fairness in machine learning means that the algorithm's results are independent of sensitive variables, such as gender, ethnicity, sexual orientation, disability, and other traits that should not correlate with the outcome. In practice, models should not be biased and should account for confounding factors. Collaboration between AI, clinical, and legal experts is necessary to design models that are fair in every respect.
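As an illustration, the minimal sketch below checks one narrow aspect of fairness, demographic parity, by comparing positive prediction rates across a sensitive attribute. The data and column names are hypothetical stand-ins for real model outputs, not part of any actual pipeline.

```python
import pandas as pd

# Hypothetical model outputs: one row per patient, with the model's
# binary prediction and a sensitive attribute (here, gender).
predictions = pd.DataFrame({
    "gender": ["F", "M", "F", "M", "F", "M", "F", "M"],
    "predicted_high_risk": [1, 0, 1, 1, 0, 0, 1, 1],
})

# Positive prediction rate per group.
rates = predictions.groupby("gender")["predicted_high_risk"].mean()

# Demographic parity difference: a large gap suggests the model's
# outputs depend on the sensitive attribute and warrant review.
parity_gap = rates.max() - rates.min()
print(rates)
print(f"Demographic parity difference: {parity_gap:.2f}")
```

A single metric like this is not sufficient on its own; fairness assessments in healthcare typically combine several metrics with clinical and legal review.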

Transparency:

Transparency in modeling means that the models should be able to give end-users a clear explanation of how they work. This increases users' trust. Explainable AI (XAI) techniques, such as LIME and SHAP, can provide global and local explanations for both linear and black-box models.
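As a minimal sketch, the snippet below shows how SHAP might be applied to a tree-based model to obtain per-patient (local) explanations. The model choice, feature names (age, bmi, hba1c, etc.), and synthetic data are assumptions for illustration only, not an actual clinical pipeline.

```python
import pandas as pd
import shap
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier

# Hypothetical tabular EHR features and a binary outcome (e.g. readmission);
# real feature engineering would happen upstream.
X, y = make_classification(n_samples=500, n_features=6, random_state=0)
X = pd.DataFrame(X, columns=["age", "bmi", "hba1c", "sbp", "ldl", "egfr"])

model = GradientBoostingClassifier(random_state=0).fit(X, y)

# TreeExplainer produces per-patient (local) attributions that can also be
# aggregated into a global picture of feature importance.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)  # shape: (n_patients, n_features)

# Local explanation for the first patient: how each feature pushed the
# prediction up or down relative to the model's average output.
print(dict(zip(X.columns, shap_values[0].round(3))))
```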

Trustworthiness:

To increase trustworthiness, proper privacy and security norms are required, along with educating clinicians on the fundamentals of AI/ML. Similarly, AI professionals can build trust by learning the basics of the clinical domain. Together, these efforts make the models more trustworthy.

Accountability:

Governments should have clear regulations in place to audit and approve models against the criteria of fairness, transparency, and trustworthiness before they are deployed in healthcare settings.

Dr. Venugopala Rao Manneni
