Creating Confidence in Decision-Making: How Explainable AI is Augmenting Healthcare

Article by MathCo Team
June 6, 2022 · 9 minute read

Empowering healthcare with actionable and accessible insights

With AI-enabled healthcare operations, R&D, therapies, clinical trials, care planning, and everyday tasks steadily becoming the norm across the industry, a lack of trust in opaque models and unsubstantiated insights holds stakeholders back from fully embracing data-driven care delivery and optimized patient outcomes. Rushabh Padalia, Partner at MathCo, dives into how Explainable AI (XAI) is the natural answer to this problem of trust, given its ability to create transparency and instill confidence in everyday decision-making.

Q: From predictive analytics and context-aware applications to automation and machine vision, we’ve already seen pervasive AI adoption in healthcare. Why is Explainable AI now the need of the hour for healthcare organizations?

Rushabh: While AI spending across industries soared in 2020, healthcare was naturally expected to be at the forefront of this boom. The numbers more than bear this out: the AI in healthcare market alone is expected to grow at a CAGR of 46.2%, compared with a mere 20.1% for all other industries combined.[1]

With AI now instrumental to drug discovery, precision medicine, disease prediction, and many dimensions of healthcare operations, its relevance across global contexts is only set to grow. However, this expansive growth has also turned the spotlight on the opacity of black-box AI models and the questions of transparency and trust they raise. As decision-making in the healthcare space often has direct and critical implications for patients’ health outcomes, many stakeholders are, understandably, reluctant to rely on mathematical models for decision-making.

It is gaps like these – between numbers and patient outcomes – that Explainable AI can effectively bridge, pushing new frontiers for healthcare automation. For instance, when AI is used to identify high-risk patients for specific diseases, it is important to understand the factors that lead to specific individuals being flagged as high risk versus low risk, so that timely interventions can be designed. Typically, in such scenarios, XAI helps HCPs see the number of visits by a patient in the last 6-12 months, the combination of acute and/or chronic diseases they may be diagnosed with, the impact of certain treatments on similar at-risk populations based on past data, and so on, facilitating comprehensive patient overviews.
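To make this concrete, here is a minimal sketch of how such factor-level explanations might be produced, using the open-source shap library over a gradient-boosted risk classifier. The feature names, synthetic cohort, and model choice are illustrative assumptions, not a description of any particular production system:

```python
# Minimal sketch: surfacing per-patient risk factors with SHAP.
# All feature names and data below are hypothetical.
import numpy as np
import shap
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
features = ["visits_last_12m", "chronic_condition_count",
            "acute_admissions", "age"]

# Synthetic cohort of 500 patients; risk is loosely driven by visits and
# chronic-condition count so the explanation has real signal to find.
X = np.column_stack([
    rng.poisson(4, 500),        # visits in the last 12 months
    rng.integers(0, 4, 500),    # number of chronic conditions
    rng.poisson(1, 500),        # acute admissions
    rng.integers(20, 90, 500),  # age
])
y = ((X[:, 0] + 2 * X[:, 1] + rng.normal(0, 2, 500)) > 8).astype(int)

model = GradientBoostingClassifier().fit(X, y)

# TreeExplainer attributes each patient's risk score to individual
# features - the "why this patient was flagged" an HCP-facing view renders.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)

patient = 0
for name, contribution in sorted(zip(features, shap_values[patient]),
                                 key=lambda kv: -abs(kv[1])):
    print(f"{name:>24}: {contribution:+.3f}")
```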

The range of use cases for XAI in this space is immense: whether it is an insurance firm reviewing a potentially fraudulent claim to reduce operational costs, a drug manufacturer fast-tracking end-to-end drug discovery to help control a pandemic, or a care management team identifying the right patient populations for timely intervention, the explainable component of AI provides the confidence stakeholders need to leverage AI across a range of applications and healthcare sub-segments.

Q: Given that XAI is achieved by embedding ‘explainability’ into algorithms and ensuring high-quality data, could you touch upon how teams developing such models in a healthcare context should decide what kind of data to collect, capture, and capitalize on?

Rushabh: With the Centers for Medicare and Medicaid Services’ (CMS) Interoperability mandate requiring that the patient always come first when data is accessed within and across entities (Payers, Providers, and Pharmacies, for instance), and with Fast Healthcare Interoperability Resources (FHIR) defining data exchange protocols and content models, data availability across the healthcare sector has improved dramatically and continues to grow.

While XAI does not require any additional data collection or capture per se – it is always better to capture varied dimensions of data for any kind of AI model – the importance of data and feature engineering increases in the context of XAI. XAI requires the ability to slice and dice the data so that model output can be presented in a variety of ways. Business-facing design thinking, the ability to trace results back to underlying data patterns, and even having Explainable AI built into blueprints – as in the case of NucliOS, our proprietary AI-powered platform – have proven to help stakeholders gain confidence in models and their results, enabling faster and more effective decision-making.
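As one illustration of that slicing and dicing, a cohort-level view of feature attributions can be assembled with ordinary dataframe operations. The sketch below assumes per-patient attributions have already been computed (as in the earlier sketch); the cohort dimension and feature names are hypothetical:

```python
# Sketch: "slicing and dicing" model explanations along a business dimension.
# Assumes per-patient SHAP-style attributions are already available.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
attributions = pd.DataFrame({
    "visits_last_12m": rng.normal(0.3, 0.2, 6),
    "chronic_condition_count": rng.normal(0.5, 0.3, 6),
    "region": ["east", "east", "west", "west", "south", "south"],
})

# Mean absolute contribution per feature, per region: a view a business
# stakeholder can read directly ("what drives risk in each region?").
by_region = (attributions
             .groupby("region")[["visits_last_12m", "chronic_condition_count"]]
             .agg(lambda s: s.abs().mean()))
print(by_region)
```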

Similarly, in the context of healthcare, the aim of building an AI application should always be to reduce the cost of care, improve its quality, enhance patients’ health outcomes, and ensure that models earn the trust of the stakeholders using them. The following two use cases illustrate situations where XAI would be critical to the successful consumption of an AI model:

1. Automating prior authorization – where an AI model recommends the approval or denial of a patient’s treatment and payment coverage, based on historical data, to healthcare personnel such as nurses and medical directors – requires decisions to be supported by XAI components to be truly trustworthy. Such features can draw up a comparison with historical patient requests along similar dimensions, show actual approved vs. denied ratios, and visually represent the reasoning behind predicted decisions – enabling any consumer of the model to accept the predicted decision with greater ease, expedite decision-making through clear insights, or even refute the prediction with data-backed reasoning (see the sketch after this list).

2. As the emerging field of precision medicine brings together the complex and fascinating dimensions of patient demographics, clinical medicine, and genomics, HCPs will require a holistic view across all three dimensions to effectively deliver care. Understanding the impact of each of these three dimensions – for instance, getting a view of how lifestyle changes interact with genetic expression – is where XAI can augment HCPs most effectively.
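For the prior-authorization use case above, the ‘similar historical requests’ view could be as simple as a nearest-neighbour lookup over past requests, surfacing an approved-vs-denied ratio alongside the model’s recommendation. The sketch below uses synthetic data and hypothetical request features:

```python
# Sketch of the "similar historical requests" view for prior authorization.
# Data and features are synthetic stand-ins.
import numpy as np
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(7)
# Historical requests: [patient_age, procedure_cost, prior_denials] (scaled)
history = rng.normal(size=(1000, 3))
approved = (history[:, 1] < 0.5).astype(int)  # synthetic outcomes

nn = NearestNeighbors(n_neighbors=25).fit(history)

def similar_case_summary(request: np.ndarray) -> dict:
    """Approved-vs-denied breakdown among the 25 most similar past requests."""
    _, idx = nn.kneighbors(request.reshape(1, -1))
    approval_rate = approved[idx[0]].mean()
    return {"similar_cases": int(idx.shape[1]),
            "approval_rate": round(float(approval_rate), 2)}

print(similar_case_summary(np.array([0.1, -0.2, 0.4])))
```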

Q: With healthcare being a highly individualized sector, how do you think organizations can achieve and leverage AI systems that offer tailored explainability to meet unique patient needs at scale?

Rushabh: A significant problem most healthcare practitioners face today is that although no two patients are identical, from a clinical standpoint they are frequently given the same treatment for the same diagnosis. And this is precisely the kind of problem that AI can help resolve, with its ability to contextualize decision-making at speed and scale.

By analyzing large quantities of patient data – including demographic information, genomic profiles, environmental factors, prescription drugs, laboratory tests, and hospitalization history – AI models can recommend customized medication and help design focused therapies unique to each patient’s needs.

However, achieving this level of personalization requires an AI system to be able to explain why it has prescribed a particular drug or what factors were involved in recommending, for instance, a minimally invasive surgery over medication. As drug and treatment efficacy can differ based on patients’ genetic makeup and biomarkers, explainability will be vital to limiting risks for patients and translating complex ML insights into successful healthcare outcomes.
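One common way to answer the ‘why this treatment and not another’ question is a counterfactual explanation: the smallest change to a patient’s features that would flip the model’s recommendation. The brute-force sketch below uses a deliberately tiny, hypothetical two-feature model purely to illustrate the idea:

```python
# Sketch: brute-force counterfactual search answering "what would need to
# change for the model to recommend medication instead of surgery?".
# The model, features, and labels are hypothetical stand-ins.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
X = rng.normal(size=(300, 2))            # e.g., [biomarker_level, tumor_size]
y = (X[:, 0] + X[:, 1] > 0).astype(int)  # 1 = surgery recommended
model = LogisticRegression().fit(X, y)

def nearest_counterfactual(x, target=0, step=0.1, max_radius=3.0):
    """Smallest perturbation (on a coarse polar grid) that flips the prediction."""
    for radius in np.arange(step, max_radius, step):
        for angle in np.linspace(0, 2 * np.pi, 36, endpoint=False):
            candidate = x + radius * np.array([np.cos(angle), np.sin(angle)])
            if model.predict(candidate.reshape(1, -1))[0] == target:
                return candidate
    return None  # no flip found within the search radius

patient = np.array([0.8, 0.6])  # currently predicted: surgery
print("counterfactual:", nearest_counterfactual(patient))
```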

Explainability makes practitioners more confident in their decision-making and can reduce the time they spend analyzing scans, for instance. The right explainable system will enable doctors to reliably assess recommendations against a patient’s condition, identify anomalies, and make informed decisions in critical clinical situations.

Q: From drug developers and medical representatives to HCPs, insurers, and drug distributors, how does a more transparent and causal AI system support healthcare workforces on an operational level?

Rushabh: Besides elevating the patient experience and simplifying healthcare access, AI will be key to enhancing healthcare workforces’ efficiency and the quality of care delivered. On an operational level, explainability in AI systems enables more transparency across the healthcare value chain: organizations, providers, and systems benefit from universalized access to patient data and greater visibility into a system’s decision-making rationale, leading to improved workforce interoperability, diagnostic speed and accuracy, and patient outcomes.

With EHRs enabling more connected patient data for life science organizations, healthcare firms, and health insurers, healthcare workforces can now better equip themselves to offer proactive care. For instance, context-aware biomedical devices can retrieve context data from sensors and digital patient profiles to recognize the context in which hospital workers perform their tasks. For a nurse who has come in for her shift and has to check on a patient admitted in her absence, these contextual elements would include her location, the timing of care delivery, reliance on other staff members, and device location and state. Here, a context-sensitive device embedded in a hospital bed could make the bed aware of the patient, nurse, and diagnosis, displaying relevant patient information, prescription history, and next best actions for care. This improved awareness reduces dependency on manual patient records and allows for speedier, more targeted care, with personnel largely focusing on patient-critical tasks.
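A rough sketch of the context-resolution step described here might look like the following, where a bed-embedded device combines sensor-derived context with the patient record to decide what a nurse sees. All types, fields, and rules are hypothetical:

```python
# Hypothetical sketch of a bedside device's context-resolution logic.
from dataclasses import dataclass

@dataclass
class Context:
    nurse_id: str
    location: str          # e.g., ward/room from an indoor positioning sensor
    seen_this_shift: bool  # has this nurse already checked on the patient?

@dataclass
class PatientRecord:
    name: str
    diagnosis: str
    prescriptions: list
    next_best_actions: list

def display_payload(ctx: Context, record: PatientRecord) -> dict:
    """Choose what the bedside display should surface for this nurse."""
    payload = {"patient": record.name, "diagnosis": record.diagnosis}
    if not ctx.seen_this_shift:
        # Patient is new to this nurse: show history and suggested actions.
        payload["prescriptions"] = record.prescriptions
        payload["next_best_actions"] = record.next_best_actions
    return payload

ctx = Context(nurse_id="n-12", location="ward-3", seen_this_shift=False)
rec = PatientRecord("J. Doe", "pneumonia", ["amoxicillin"], ["check O2 sats"])
print(display_payload(ctx, rec))
```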

Add to this the enhanced ‘clarity of grounds’ unlocked by XAI, and the result is not only higher-quality patient engagements across touchpoints but also fewer hours spent by healthcare workforces on routine, administrative, and documentation-intensive jobs, reducing the risk of burnout.

Q: A lack of trust in typically complex AI systems has only grown in recent times. Help us understand how XAI can help organizations restore and strengthen patient trust to enable seamless care.

Rushabh: The key to scaling AI in healthcare is strengthening patient trust in these systems. Because healthcare delivery essentially involves a human element, increased reliance on digital systems often becomes a fundamental, yet decisive, sticking point in a patient’s trust in AI – compounded by differing levels of digital literacy, misconceptions, and more. Concerns that AI might make decisions in a biased manner naturally erode patient trust, with knock-on effects for AI adoption across healthcare systems.

Internally, healthcare organizations can introduce explainability into their systems by unlocking collaborative whitespaces between healthcare teams, practitioners, and solution developers – with stakeholders learning about the data going into a system, potential causes of bias in decision-making, and transparent mechanisms to calibrate trust. Practitioners can then convey this understanding to patients, explaining how AI-based solutions function and how they help make personalized care recommendations unique to each patient’s profile.

On a leadership level, clearly defining an AI system’s purpose, scope, and operation from an ethical and logical standpoint allows for well-directed efforts, sets benchmarks for responsibility and liability, and provides an unambiguous outlook that incentivizes further research into AI innovation and adoption. Constant communication with the general public on new research findings, and transparency about the technological solutions in use, will be vital to reducing patients’ resistance to AI systems.

Lastly, on a larger level, complying with protected health information (PHI) guidelines on patient data, its usage, interoperability, and confidentiality – stipulated by regulations such as HIPAA – would allow patients, practitioners, and healthcare systems to have the same level of trust in new AI solutions that they do in other drugs, devices, and systems that have received similar approvals. Organizations that regularly evaluate their operations against HIPAA regulations, provide certified services, and engage in consistent dialogue to keep the public informed can bolster patients’ trust in the promise of AI.

Bibliography

[1] “Artificial Intelligence (AI) in Healthcare Market Size, Growth Report Analysis 2031.” MarketsandMarkets, January 25, 2023. https://www.marketsandmarkets.com/Market-Reports/artificial-intelligence-healthcare-market-54679303.html