Weekly roundup of MLOps and DataOps - Issue #7



By Subbu Banerjee • Issue #7
I know it's been quite some time since my last newsletter, so without further ado, let's get started.
MLOps is still a growing field, with companies still trying to figure out what a platform really needs in order to enable a data science team.

Thoughtworks guide to MLOps platforms
GitHub - thoughtworks/mlops-platforms: Compare MLOps Platforms. Breakdowns of SageMaker, VertexAI, AzureML, Dataiku, Databricks, h2o, kubeflow, mlflow...
MLOps Antipatterns
We describe lessons learned from developing and deploying machine learning models at scale across the enterprise in a range of financial analytics applications. These lessons are presented in the form of antipatterns. Just as design patterns codify best software engineering practices, antipatterns provide a vocabulary to describe defective practices and methodologies. Here we catalog and document numerous antipatterns in financial ML operations (MLOps). Some antipatterns are due to technical errors, while others are due to not having sufficient knowledge of the surrounding context in which ML results are used. By providing a common vocabulary to discuss these situations, our intent is that antipatterns will support better documentation of issues, rapid communication between stakeholders, and faster resolution of problems. In addition to cataloging antipatterns, we describe solutions, best practices, and future directions toward MLOps maturity.
6 Lessons Learned at Booking.com deploying ML
TL;DR: Their main conclusion is that an iterative, hypothesis-driven process, integrated with other disciplines, was fundamental to building 150 successful products enabled by Machine Learning. Here is the paper
EuroPython: Production ML Monitoring
Alejandro presents an end-to-end example showcasing best practices, principles, patterns and techniques for monitoring machine learning models in production. He shows how to adapt standard microservice monitoring techniques to deployed machine learning models, as well as more advanced paradigms including concept drift, outlier detection and AI explainability. If you are interested, check out the video of the talk.
Production Machine Learning Monitoring: Outliers, Drift, Explainers & Statistical Performance | by Alejandro Saucedo | Towards Data Science
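To make the drift-monitoring idea concrete, here is a minimal, dependency-free sketch of one common drift check, the Population Stability Index (PSI), which compares a live feature's distribution against the training distribution. This is an illustrative assumption on my part, not code from the talk; the function names and the usual 0.1/0.25 alert thresholds are conventions, not anything Alejandro prescribes.

```python
import math
import random

def psi(reference, current, bins=10):
    """Population Stability Index between two 1-D samples.

    Bin edges come from the reference sample's quantiles; a small
    epsilon avoids log(0) for empty bins. Common rule of thumb:
    PSI < 0.1 stable, 0.1-0.25 moderate drift, > 0.25 significant.
    """
    ref = sorted(reference)
    # quantile-based bin edges taken from the reference distribution
    edges = [ref[int(i * (len(ref) - 1) / bins)] for i in range(1, bins)]

    def proportions(sample):
        counts = [0] * bins
        for x in sample:
            idx = sum(1 for e in edges if x > e)  # which bin x falls into
            counts[idx] += 1
        eps = 1e-6
        return [max(c / len(sample), eps) for c in counts]

    p = proportions(reference)
    q = proportions(current)
    return sum((pi - qi) * math.log(pi / qi) for pi, qi in zip(p, q))

random.seed(0)
train = [random.gauss(0.0, 1.0) for _ in range(5000)]
live_ok = [random.gauss(0.0, 1.0) for _ in range(5000)]      # same distribution
live_drift = [random.gauss(1.0, 1.0) for _ in range(5000)]   # mean shifted by 1 sigma

print(round(psi(train, live_ok), 3))     # small value: no drift
print(round(psi(train, live_drift), 3))  # large value: drift alarm
```

In a production setup a check like this would run on a schedule per feature, with the alert wired into the same alerting stack used for standard microservice metrics, which is exactly the "adapt existing monitoring techniques" point the talk makes.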
Subbu Banerjee