An Event Driven Approach to MLOps

Tapas Das
3 min read · Jun 4, 2023


Pic Courtesy: https://blogs.nvidia.com/blog/2020/09/03/what-is-mlops/

The understanding of the machine learning lifecycle is constantly evolving. When I first came across graphics illustrating this “cycle”, the emphasis was on the usual suspects (data ingestion, cleansing, EDA, modeling, etc.). Far less attention was given to the more elusive and less tangible final stages: model deployment, model serving, model observability, and so on.

While the end-to-end ML lifecycle has always been pitched as an actual “cycle”, to date there has been limited success in managing this end-to-end process at enterprise scale.

Orchestration-based MLOps Architecture

Most of the MLOps architectures or implementations I’ve come across are “orchestration” based, with tight coupling between the different components. The data typically waits in a warehouse, and a workflow orchestration tool is used to schedule its extraction and processing, as well as the retraining of the model on fresh data.

This architecture is particularly useful for problems where users don’t need real-time scoring, like a content recommendation engine (for songs or articles) that serves pre-computed model recommendations when users log into their accounts.
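To make the coupling concrete, here is a minimal sketch of such a scheduler-driven pipeline, assuming Airflow as the orchestration tool; the DAG name, schedule, and task bodies are illustrative placeholders rather than a reference implementation.

```python
# A minimal sketch of a scheduler-driven retraining pipeline, assuming Airflow
# as the orchestration tool. Task bodies are placeholders for the actual
# warehouse extraction, feature engineering, training, and deployment code.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract_and_process():
    """Pull waiting data from the warehouse and prepare the training dataset."""
    ...


def retrain_model():
    """Retrain the model on the freshly prepared dataset."""
    ...


def deploy_model():
    """Push the retrained model to the registry / serving endpoint."""
    ...


with DAG(
    dag_id="batch_retraining",          # hypothetical DAG name
    schedule_interval="@daily",         # the whole lifecycle runs on a fixed schedule
    start_date=datetime(2023, 1, 1),
    catchup=False,
) as dag:
    extract = PythonOperator(task_id="extract_and_process", python_callable=extract_and_process)
    train = PythonOperator(task_id="retrain_model", python_callable=retrain_model)
    deploy = PythonOperator(task_id="deploy_model", python_callable=deploy_model)

    # Tight coupling: each step runs only after the previous one finishes,
    # and the chain is driven by the scheduler rather than by events.
    extract >> train >> deploy
```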

But this architecture falls short in the following scenarios.

  • When there are new data sources constantly getting added to the ML lifecycle
  • When the model needs to be retrained for real-time applications
  • When there’s a requirement for user-triggered, manual model retraining

Message-based MLOps Architecture

The message-based architecture follows a different approach, where a message broker (e.g. Kafka) acts as the middleman that coordinates processes between the different ML components.

Pic created by author

This is greatly helpful when we want the system to continuously train on real-time data ingested from IoT devices, whether for stream analytics or for online serving.
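As a rough illustration of the publish/subscribe mechanics, here is a minimal sketch assuming Kafka as the broker and the kafka-python client; the broker address, topic name, and message fields are my own placeholders.

```python
# A minimal sketch of the publish/subscribe mechanics, assuming Kafka as the
# broker and the kafka-python client. The broker address, topic name, and
# message fields are illustrative placeholders.
import json

from kafka import KafkaConsumer, KafkaProducer

BROKER = "localhost:9092"   # assumption: a locally running Kafka broker
TOPIC = "mlops-events"      # assumption: a single shared event topic

# Any pipeline can announce completion by publishing an event message.
producer = KafkaProducer(
    bootstrap_servers=BROKER,
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)
producer.send(TOPIC, {"event": "ads_ready", "dataset_version": "2023-06-04"})
producer.flush()

# Any other pipeline reacts by subscribing to the same topic.
consumer = KafkaConsumer(
    TOPIC,
    bootstrap_servers=BROKER,
    group_id="model-training-pipeline",
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
)
for record in consumer:
    if record.value.get("event") == "ads_ready":
        print("New analytical dataset available:", record.value)
```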

Below are the steps in this architecture (a sketch of the message flow follows the list):

  1. The “ADS Creation Pipeline” ingests and processes the source data to build the feature store and the final analytical dataset. Once the process is complete, it pushes a message to the message broker.
  2. The “Model Training Pipeline” subscribes to the message broker, so that when a new message comes in from the “ADS Creation Pipeline”, it starts the model training process and deploys the final model to a model registry and an API endpoint. Once the process is complete, it pushes another message to the message broker.
  3. The “Model Serving Pipeline” subscribes to the message broker and gets notified when a new message comes in from the “Model Training Pipeline”. In case of model decay or data drift, it pushes separate messages to the broker in order to either start model retraining (for model decay) or start fresh data ingestion (for data drift).
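Putting the three steps together, here is a rough sketch of how the pipelines could coordinate through broker messages, reusing the kafka-python setup from the earlier snippet. The event names, consumer group, and the training/serving stubs are illustrative assumptions, and in practice each pipeline would run its own consumer rather than the single loop shown here.

```python
# A rough sketch of the message flow across the three pipelines, reusing the
# kafka-python setup from the previous snippet. Event names, the consumer
# group, and the training/serving stubs are illustrative assumptions; in
# practice each pipeline would run its own consumer process.
import json

from kafka import KafkaConsumer, KafkaProducer

BROKER = "localhost:9092"
TOPIC = "mlops-events"

producer = KafkaProducer(
    bootstrap_servers=BROKER,
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)


def publish(event: str, **payload) -> None:
    """Push an event message to the broker for downstream pipelines."""
    producer.send(TOPIC, {"event": event, **payload})
    producer.flush()


def train_and_register_model(dataset_version: str) -> str:
    """Placeholder for the actual training and model-registry logic."""
    return f"model-{dataset_version}"


def serve_model(model_version: str) -> None:
    """Placeholder for refreshing the API endpoint with the new model."""
    print("Now serving", model_version)


def handle(message: dict) -> None:
    event = message.get("event")

    if event == "ads_ready":
        # Step 2: the Model Training Pipeline reacts to a fresh analytical dataset.
        model_version = train_and_register_model(message["dataset_version"])
        publish("model_deployed", model_version=model_version)

    elif event == "model_deployed":
        # Step 3: the Model Serving Pipeline picks up the newly registered model.
        serve_model(message["model_version"])

    elif event == "model_decay_detected":
        # Model decay -> ask the training pipeline to retrain on existing data.
        publish("retrain_requested")

    elif event == "data_drift_detected":
        # Data drift -> ask the ADS Creation Pipeline to ingest fresh data.
        publish("ingest_requested")


consumer = KafkaConsumer(
    TOPIC,
    bootstrap_servers=BROKER,
    group_id="mlops-coordinator",
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
)
for record in consumer:
    handle(record.value)
```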

Summary

In conclusion, a “message-based”, event-driven MLOps architecture greatly helps in decoupling the different ML components, while still orchestrating the entire ML lifecycle through broker messages.

This addresses the limitations mentioned above for the “Orchestration-based” MLOps architecture.

  1. Seamless addition of new data sources: every data source gets its own pipeline, which only has to subscribe to the message broker in order to push/pull messages.
  2. Real-time model retraining: since the model training pipeline is decoupled from the other components, it can be triggered independently through the message broker without impacting the rest of the process.

There will be many more evolutions in MLOps architectures, and I hope this article serves as a primer for future iterations.

Written by Tapas Das

Solutions Architect at The Math Company
