Machine Learning Systems That Ship

I build reproducible ML pipelines, structured experiment workflows, and practical evaluation strategies that turn data into decisions.

Focus: pipeline architecture, MLflow experiment discipline, thresholding for real-world tradeoffs.

Flagship Project

CardioSentinel

ML • Evaluation • Reproducibility

End-to-end heart-attack risk modeling system comparing linear models and boosted trees, using structured feature engineering, MLflow tracking, and precision–recall-driven threshold tuning.

  • Config-driven pipelines (safe vs learned feature separation)
  • Experiment tracking and artifact logging with MLflow
  • Decision-focused evaluation (precision floors, recall tradeoffs)


Consulting

I help teams move from “we trained a model” to “we have a reliable ML system.”

Experiment Frameworks

Set up clean MLflow tracking of parameters, metrics, and artifacts, plus model comparisons that actually guide decisions.

Pipelines & Reproducibility

Turn notebooks into config-driven pipelines with disciplined preprocessing and feature governance.
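A tiny illustration of the safe-vs-learned split driven by config; all feature names, data, and the helper functions are hypothetical:

```python
# A minimal sketch of a config-driven split between "safe" features
# (computed row-wise, no fitting, cannot leak) and "learned" features
# (require statistics fit on the training split only).
CONFIG = {
    "safe_features": ["age", "chol_ratio"],
    "learned_features": ["resting_bp"],
}

def fit_scaler(train_rows, feature):
    """Learn mean/std from the training split only, so nothing leaks."""
    vals = [r[feature] for r in train_rows]
    mean = sum(vals) / len(vals)
    var = sum((v - mean) ** 2 for v in vals) / len(vals)
    return mean, var ** 0.5

def transform(row, config, scalers):
    # Safe features pass through; learned features use fitted statistics.
    out = {f: row[f] for f in config["safe_features"]}
    for f in config["learned_features"]:
        mean, std = scalers[f]
        out[f] = (row[f] - mean) / std if std else 0.0
    return out

train = [{"age": 50, "chol_ratio": 1.2, "resting_bp": 120},
         {"age": 60, "chol_ratio": 1.5, "resting_bp": 140}]
scalers = {f: fit_scaler(train, f) for f in CONFIG["learned_features"]}
vec = transform(train[0], CONFIG, scalers)
```

Because the feature lists live in config rather than code, changing what the model sees becomes a reviewable diff instead of a notebook edit.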

Evaluation Strategy

Precision/recall, thresholds, calibration, and cost-aware tradeoffs tailored to your business context.
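One way to make the tradeoff concrete is to pick the threshold that minimizes expected cost on held-out scores. Everything below (scores, labels, cost values) is invented for illustration:

```python
# Hedged sketch: choose a decision threshold by minimizing expected cost.
COST_FN = 10.0   # missing a true positive is expensive
COST_FP = 1.0    # a false alarm is cheap by comparison

scores = [0.1, 0.3, 0.45, 0.6, 0.8, 0.9]
labels = [0,   0,   1,    0,   1,   1]

def expected_cost(threshold):
    fn = sum(1 for s, y in zip(scores, labels) if y == 1 and s < threshold)
    fp = sum(1 for s, y in zip(scores, labels) if y == 0 and s >= threshold)
    return COST_FN * fn + COST_FP * fp

# Sweep candidate thresholds; the cheapest one wins.
best = min((t / 100 for t in range(1, 100)), key=expected_cost)
```

With these costs, the default 0.5 cutoff misses a positive and is far more expensive than the swept optimum, which is the whole argument for cost-aware tuning.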

Insights

Curated write-ups based on real build-and-debug work (short, practical, and reusable).

Thresholds: why “lower can be better”

How changing the decision threshold can improve recall when missing positives is expensive.
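A toy example of the effect: with the default 0.5 cutoff only the most confident positive is caught, while a lower cutoff recovers the rest. The scores and labels are invented:

```python
# Same model scores, two thresholds: recall changes, nothing else does.
scores = [0.2, 0.35, 0.4, 0.7, 0.9]
labels = [0,   1,    1,   0,   1]

def recall(threshold):
    tp = sum(1 for s, y in zip(scores, labels) if y == 1 and s >= threshold)
    return tp / sum(labels)

r_default = recall(0.5)   # only the 0.9 positive is caught
r_lower = recall(0.3)     # the 0.35 and 0.4 positives are caught too
```

The lower threshold trades some precision for recall, which is exactly the trade you want when missing positives is the expensive error.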

Pipelines vs notebooks

When to keep exploration flexible, and when to lock a workflow into a repeatable pipeline.

Safe vs learned features

How to avoid leakage and keep feature engineering honest during train/test splits.
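A minimal sketch of the failure mode: a statistic fit on the full dataset quietly absorbs test rows, while fitting on the train split keeps it honest. The numbers are invented:

```python
# Leakage in one line: fitting a "learned" statistic (here, a mean)
# on train + test lets the test set influence its own features.
train = [10.0, 12.0, 14.0]
test = [40.0]   # an outlier the fitting step should never see

leaky_mean = sum(train + test) / len(train + test)   # test leaked in
honest_mean = sum(train) / len(train)                # train only

# Centering the test point with each mean gives different features:
leaky_feature = test[0] - leaky_mean
honest_feature = test[0] - honest_mean
```

The leaky version looks better in offline evaluation precisely because it cheated, which is why learned statistics belong inside the train-only fitting step.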

Contact

Best way to reach me:
