Introduction

In this video, I walk through the complete MLOps lifecycle for an Australian Rain Prediction system. We move beyond simple model training to build a system that tracks its own health and evolves automatically using MLflow, Prometheus, Grafana, and Evidently.

Key Concepts Covered:
• Automated Experiment Tracking: How every training run, parameter (XGBoost), and metric (F1 Score, Accuracy) is logged to MLflow.
• Smart Model Promotion: The logic that compares new models against the production baseline and promotes them to the Registry only if they perform better.
• Real-Time Monitoring: Using Prometheus to scrape metrics from FastAPI and visualizing Request Latency (477ns) and System Health in Grafana.
• Drift Detection: How we use Evidently AI to detect data drift and trigger conditional retraining to save compute resources.

Timestamps:
• 0:00 - Introduction: The Goal of Self-Tracking Systems
• 0:30 - Model Lifecycle: Logging to MLflow & Dagshub
• 1:30 - The "Smart" Pipeline: Automated Production Promotion
• 2:00 - API Monitoring: Grafana Dashboards & Prometheus Alerts
• 3:30 - Handling Data Drift: Conditional Retraining
• 4:30 - Conclusion: The Full MLOps Stack

Tech Stack:
• Model Tracking: MLflow & Dagshub
• API Framework: FastAPI
• Monitoring: Prometheus & Grafana
• Drift Detection: Evidently AI
• Interface: Streamlit

#MLOps #DataScience #Python #MLflow #Grafana #MachineLearning #DevOps
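The "Smart Model Promotion" idea from the video boils down to a gate in front of the Model Registry. A minimal pure-Python sketch of that comparison (the function name and the improvement threshold are illustrative; in the pipeline this decision drives the MLflow registry calls):

```python
from typing import Optional


def should_promote(candidate_f1: float, production_f1: Optional[float],
                   min_improvement: float = 0.0) -> bool:
    """Promote the candidate only if it beats the current production model.

    If no production model exists yet (first ever run), promote unconditionally.
    """
    if production_f1 is None:
        return True
    return candidate_f1 > production_f1 + min_improvement


# Example: comparing a freshly trained model against the production baseline
print(should_promote(0.87, 0.84))   # better than baseline -> promote
print(should_promote(0.82, 0.84))   # worse -> keep the production model
print(should_promote(0.85, None))   # first model ever -> promote
```

A non-zero `min_improvement` guards against promoting models whose gains are within evaluation noise.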
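On the monitoring side, the latency numbers shown in Grafana come from Prometheus histograms scraped off the FastAPI app. As a stand-in for what the real prometheus_client library does, here is a pure-Python sketch of the cumulative-bucket semantics behind a latency histogram (bucket bounds are illustrative):

```python
import bisect


class LatencyHistogram:
    """Pure-Python stand-in for a Prometheus latency histogram.

    Illustrative only; in the video the prometheus_client library
    records these buckets inside the FastAPI service.
    """

    def __init__(self, buckets=(0.005, 0.01, 0.05, 0.1, 0.5, 1.0)):
        self.buckets = list(buckets)                 # upper bounds ("le"), seconds
        self.counts = [0] * (len(self.buckets) + 1)  # last slot is the +Inf bucket
        self.total = 0.0                             # sum of all observations

    def observe(self, seconds: float) -> None:
        # A value lands in the first bucket whose upper bound is >= the value.
        idx = bisect.bisect_left(self.buckets, seconds)
        self.counts[idx] += 1
        self.total += seconds

    def cumulative_counts(self):
        # Prometheus exports buckets cumulatively: each "le" bucket counts
        # every observation at or below its bound.
        out, running = [], 0
        for c in self.counts:
            running += c
            out.append(running)
        return out


h = LatencyHistogram()
for latency in (0.004, 0.02, 0.02, 0.3):
    h.observe(latency)
print(h.cumulative_counts())  # [1, 1, 3, 3, 4, 4, 4]
```

Grafana's latency panels are built from exactly these cumulative bucket counts (e.g. via `histogram_quantile` in PromQL).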
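The conditional-retraining gate works the same way regardless of how drift is scored. Evidently AI produces full drift reports; as a self-contained stand-in, the gate can be illustrated with a population stability index (PSI) computed in plain Python (the 0.2 threshold and 10 bins are conventional but illustrative):

```python
import math
from typing import Sequence


def psi(reference: Sequence[float], current: Sequence[float], bins: int = 10) -> float:
    """Population Stability Index between two samples of one feature.

    Bin edges are taken from the reference sample's range.
    """
    lo, hi = min(reference), max(reference)
    width = (hi - lo) / bins or 1.0

    def hist(xs):
        counts = [0] * bins
        for x in xs:
            i = min(int((x - lo) / width), bins - 1)
            counts[max(i, 0)] += 1
        # small floor avoids log(0) on empty bins
        return [max(c / len(xs), 1e-6) for c in counts]

    p, q = hist(reference), hist(current)
    return sum((pi - qi) * math.log(pi / qi) for pi, qi in zip(p, q))


def needs_retraining(reference, current, threshold: float = 0.2) -> bool:
    # Retrain only when drift exceeds the threshold, saving compute otherwise.
    return psi(reference, current) > threshold


ref = [i / 100 for i in range(100)]            # roughly uniform on [0, 1)
same = [i / 100 for i in range(100)]
shifted = [0.5 + i / 200 for i in range(100)]  # mass moved to the upper half
print(needs_retraining(ref, same))     # False: no drift, skip retraining
print(needs_retraining(ref, shifted))  # True: distribution shifted, retrain
```

In the actual pipeline, Evidently's drift report plays the role of `psi` here; the decision logic around it is the same.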