Architecting MLOps Solutions for Healthcare

By Anjali Iyer
May 2, 2023

Understanding MLOps

Machine learning operations (MLOps) is often defined as applying DevOps practices to machine learning models. A more nuanced, yet simpler, way to approach the definition is to view MLOps as everything that surrounds machine learning: data engineering, DevOps infrastructure, experiment management, and monitoring and observability for data and data pipelines across a wide range of projects, processes, and use cases.

A machine learning model is, at its core, a statistical model built by applying an algorithm to collected data. Data, ML models, and code are therefore the pillars of any ML-based software. The process starts with collecting and preparing the data to be analyzed, after which the machine learning algorithm is written and executed. Finally, after training, the model is deployed as part of business applications. The image below depicts the functions of a general MLOps model.
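To make this flow concrete, here is a minimal sketch in Python using scikit-learn: collect and prepare the data, train a simple model, and persist the artifact for deployment. The file name and column names are hypothetical placeholders, not part of any specific solution described in this article.

```python
# Minimal sketch of the data -> algorithm -> trained model -> deployable artifact flow.
# "features.csv" and its column names are hypothetical placeholders.
import pandas as pd
import joblib
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression

# 1. Collect and prepare the data
df = pd.read_csv("features.csv")
X, y = df.drop(columns=["outcome"]), df["outcome"]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# 2. Write and execute the machine learning algorithm
model = Pipeline([("scale", StandardScaler()), ("clf", LogisticRegression(max_iter=1000))])
model.fit(X_train, y_train)

# 3. Validate, then persist the trained model for deployment in a business application
print("Held-out accuracy:", model.score(X_test, y_test))
joblib.dump(model, "model.joblib")
```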

Machine learning is currently being used to great effect across industries, including retail, technology, and supply chain, where pattern detection, risk analysis, and recommendation systems are a few significant use cases. However, these advances have also been accompanied by data-related concerns around transparency, regulatory compliance, ethics, privacy, and developer/data bias.

Enter MLOps, where tools can automatically record and store information about how data is used, when models are deployed and recalibrated, by whom, and why changes were made, thus establishing transparency.
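One way such a record can be captured in practice is with an experiment-tracking tool. The sketch below uses MLflow purely as an illustration; the tag, parameter, and metric names are assumptions rather than a prescribed schema.

```python
# Illustrative sketch: recording how data was used, when a model was recalibrated,
# by whom, and why, using MLflow as one example of an experiment-tracking tool.
# All tag, parameter, and metric names below are assumptions, not a required schema.
import mlflow

with mlflow.start_run(run_name="model-recalibration"):
    mlflow.set_tag("performed_by", "data-science-team")        # who made the change
    mlflow.set_tag("change_reason", "quarterly data refresh")  # why it was made
    mlflow.log_param("training_data_version", "2023-04-01")    # how the data was used
    mlflow.log_metric("validation_auc", 0.87)                  # model quality at sign-off
```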

MLOps: Revolutionizing the healthcare industry

In the healthcare industry, where the Health Insurance Portability and Accountability Act of 1996 (HIPAA) and the Payment Card Industry Data Security Standard (PCI DSS) add more layers of complexity to patient privacy and regulatory compliance, development and deployment teams can rely on a robust MLOps framework to adhere to compliance protocols and privacy regulations while still leveraging data for numerous applications.

Let’s look at a specific use case within the healthcare industry: wound care in post-acute treatment, which is often overshadowed by more visible conditions such as heart disease and stroke. Because data in the post-acute space remains scarce compared to acute hospital and ambulatory care, data capture is often limited to standardized intake forms. Apart from lacking a structured care delivery model, this space also relies heavily on vendor and third-party relationships for intervention decisions, which are hampered by outdated and highly subjective wound care quality metrics.

A niche electronic health record (EHR) system would be the first step to resolving this challenge and creating advanced insight generation capabilities for the post-acute space. This system can leverage decades of unique experience to ensure that environmental context is considered. Specific attributes that change routinely between evaluations should also be grouped together within this system to increase the accuracy of data capture in keeping with the clinical workflow. Patient-specific attributes should then be captured at the initial consultation and updated during the treatment, where applicable. The following are some functionalities that an EHR system provides:

  1. Homogeneity: All clinicians use the same EHR system to maintain and analyze data.
  2. Auto-calculation: Fields derived from the initial entries are auto-calculated by an AI algorithm, which prevents incremental errors from propagating (a simple illustration follows this list).
  3. Consistency: Mandatory entry of the wound factors needed for wound evaluation ensures consistency in data capture.
  4. Data accuracy: A seamless user interface reduces erroneous data entry and helps maintain data accuracy.
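As a simple illustration of the auto-calculation idea in item 2, the sketch below derives a wound-area field from the clinician's initial length and width entries so the value never has to be re-keyed; a rule-based calculation stands in here for the AI algorithm mentioned above. The field names and the elliptical-area formula are illustrative assumptions, not the EHR's actual schema or logic.

```python
# Sketch of auto-calculating a derived field from initial entries so that
# clinicians never re-key it. Field names and the elliptical-area approximation
# are illustrative assumptions, not the EHR's actual schema or formula.
import math
from dataclasses import dataclass

@dataclass
class WoundEntry:
    length_cm: float
    width_cm: float

def auto_calculated_area_cm2(entry: WoundEntry) -> float:
    """Approximate wound surface area from length and width (ellipse model)."""
    return math.pi * (entry.length_cm / 2) * (entry.width_cm / 2)

entry = WoundEntry(length_cm=4.0, width_cm=2.5)
print(f"Auto-calculated area: {auto_calculated_area_cm2(entry):.1f} cm^2")
```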

Leveraging MLOps to simplify post-acute wound care.

The following image outlines the MLOps model that would streamline this process for the post-acute space.

  1. Develop a machine learning model that predicts wound healing time and is robust and dynamic while still maintaining a level of generality across defined categories of wound-level factors (a minimal sketch of such a model follows this list).
  2. Integrate patient- and visit-level factors into the model, as they significantly influence the outcome variable (healing time) across the defined wound-level categories.
  3. Run the complete machine learning model on live EHR data, monitoring, adjusting, and validating it, without feeding the results back into the EHR until appropriate efficacy is confirmed. The model then predicts wound healing time, creating a more accurate predictive tool.
  4. Use the wound healing time model to produce a dynamic and predictive healing trajectory.
  5. Concurrently develop a “progress” metric representing weighted wound-level variables that contribute to measuring and determining change over time, thus defining a solid metric for “outcome”.
  6. Measure the effect of treatment and intervention by tracking progress and noting the variables that change over time. This will determine the optimal treatment plan given any constellation of wound-level factors (e.g., wound type, severity, anatomic location), patient-level factors (e.g., comorbidities, medication), and visit-level factors (e.g., BMI determinations, lab results). Then, introduce a production pilot once acceptable efficacy is established through monitoring, analysis, and adjustment.
  7. Attach cost variables to the dressings and interventions used so that the EHR can determine clinical outcomes and cost-effectiveness concurrently. These factors can also be combined into a data-driven treatment recommendation engine that augments physician decision-making.
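As a rough sketch of steps 1 and 2 above, the example below combines wound-, patient-, and visit-level factors in a single regression model that predicts healing time. Every column name, the choice of gradient boosting, and the evaluation metric are assumptions for illustration only, not the actual model described here.

```python
# Rough sketch of steps 1-2: predicting wound healing time from wound-,
# patient-, and visit-level factors. All column names are hypothetical placeholders.
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import OneHotEncoder
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.pipeline import Pipeline
from sklearn.model_selection import cross_val_score

df = pd.read_csv("wound_ehr_extract.csv")  # hypothetical EHR extract

wound_level   = ["wound_type", "severity", "anatomic_location"]  # categorical
patient_level = ["age", "diabetes", "medication_count"]          # numeric / binary
visit_level   = ["bmi", "albumin_level"]                         # numeric
target        = "healing_days"

preprocess = ColumnTransformer(
    [("wound", OneHotEncoder(handle_unknown="ignore"), wound_level)],
    remainder="passthrough",  # pass numeric patient- and visit-level factors through
)

model = Pipeline([("prep", preprocess),
                  ("gbr", GradientBoostingRegressor(random_state=42))])

X, y = df[wound_level + patient_level + visit_level], df[target]
scores = cross_val_score(model, X, y, cv=5, scoring="neg_mean_absolute_error")
print("Mean absolute error (days):", -scores.mean())
```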

Streamlining patient outcomes: An MLOps framework for healthcare.

This model-building pipeline provides an end-to-end architecture, from ETL through developing and deploying the machine learning model in the AWS environment. The model-building pipeline helps achieve the following:

  1. Version your data effectively and kick off a new model training run.
  2. Validate the received data and check for data drift (a simple drift check is sketched after this list).
  3. Efficiently preprocess data for your model training and validation.
  4. Effectively train your machine learning models.
  5. Track your model training.
  6. Analyze and validate your trained and tuned models.
  7. Deploy the validated model.
  8. Scale the deployed model.
  9. Capture new training data and model performance metrics with feedback loops.
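Step 2, the drift check, can be as simple as a statistical comparison between the versioned training data and a newly captured batch. The sketch below uses a two-sample Kolmogorov-Smirnov test as one possible check; the file names, column names, and 0.05 threshold are assumptions.

```python
# Sketch of step 2: checking incoming data for drift against the training
# distribution with a two-sample Kolmogorov-Smirnov test. File names, column
# names, and the 0.05 threshold are illustrative assumptions.
import pandas as pd
from scipy.stats import ks_2samp

def detect_drift(train_df: pd.DataFrame, new_df: pd.DataFrame,
                 columns: list[str], alpha: float = 0.05) -> dict[str, bool]:
    """Return {column: True if drift is suspected} for each numeric column."""
    results = {}
    for col in columns:
        _, p_value = ks_2samp(train_df[col].dropna(), new_df[col].dropna())
        results[col] = p_value < alpha  # small p-value -> distributions differ
    return results

train = pd.read_csv("training_snapshot.csv")  # versioned training data
incoming = pd.read_csv("latest_batch.csv")    # newly captured EHR data
print(detect_drift(train, incoming, ["wound_area", "bmi"]))
```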

This inference pipeline provides an architecture for receiving data elements passed from the front-end EHR through Amazon API Gateway. The inference pipeline helps achieve the following:

  1. Process the data obtained and feed it to the ML model hosted on the API endpoint (a minimal handler is sketched after this list).
  2. Deploy the trained model behind the endpoint to perform inference.
  3. Test the model’s performance and use it to make predictions on new data points.
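One way to picture this flow is a Lambda-style handler that receives the API Gateway request, lightly processes the payload, and forwards it to a model hosted on a SageMaker endpoint via boto3. The endpoint name, payload fields, and response format below are assumptions, not the actual interface of the solution described above.

```python
# Sketch of the inference flow: data arrives from the front-end EHR via
# API Gateway, is lightly processed, and is forwarded to a model hosted on a
# SageMaker endpoint. The endpoint name and payload fields are assumptions.
import json
import boto3

runtime = boto3.client("sagemaker-runtime")
ENDPOINT_NAME = "wound-healing-time-endpoint"  # hypothetical endpoint name

def handler(event, context):
    # 1. Process the data elements passed from the front-end EHR
    body = json.loads(event["body"])
    features = [body["wound_area"], body["age"], body["bmi"]]

    # 2. Feed them to the ML model hosted on the endpoint
    response = runtime.invoke_endpoint(
        EndpointName=ENDPOINT_NAME,
        ContentType="application/json",
        Body=json.dumps({"instances": [features]}),
    )

    # 3. Return the prediction to the caller
    prediction = json.loads(response["Body"].read())
    return {"statusCode": 200, "body": json.dumps(prediction)}
```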

It must be noted here that deploying inference services is still a relatively new discipline with its own unique set of challenges. These include incorporating changing data patterns, identifying new patterns as they emerge, deploying the necessary changes, re-architecting to fit structured databases, and so on.

Challenges and recommendations: How to make the most of MLOps.

However, a few architectural patterns are now being put into practice to address these challenges, many of which attempt to abstract the mechanics of production away from data science teams. In my opinion, the following approaches can help solve these problems:

  1. Creating a cross-functional team of data scientists and data engineers.
  2. Using production-ready platforms right from the start of the project.
  3. Deploying monitoring solutions with optimization techniques to understand performance anomalies in ML models (a lightweight example follows this list).
  4. Adopting higher-level automation and abstractions wherever possible.
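For the third recommendation, even a lightweight check that compares recent prediction behavior against a training-time baseline can surface anomalies early. The sketch below flags a shift in the mean prediction; the baseline statistics and z-score threshold are assumptions for illustration.

```python
# Lightweight sketch of recommendation 3: flagging a performance anomaly when
# recent predictions drift away from a training-time baseline. Baseline values
# and the z-score threshold are illustrative assumptions.
import statistics

BASELINE_MEAN = 21.0  # mean predicted healing days at validation time (assumed)
BASELINE_STD = 6.0    # standard deviation at validation time (assumed)

def prediction_anomaly(recent_predictions: list[float], z_threshold: float = 3.0) -> bool:
    """Return True if the recent mean prediction deviates sharply from the baseline."""
    recent_mean = statistics.mean(recent_predictions)
    z_score = abs(recent_mean - BASELINE_MEAN) / BASELINE_STD
    return z_score > z_threshold

if prediction_anomaly([48.0, 52.5, 47.1, 50.3]):
    print("Alert: prediction distribution has shifted; investigate data or model.")
```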


Anjali Iyer
Associate Principal

A delivery excellence business leader with 11+ years of hands-on experience directing digital transformation implementations for Fortune 500 businesses, Anjali Iyer is currently part of the Delivery Leadership team at MathCo. When not working, Anjali can be found playing chess or trekking through steep valleys.
