Neuromorphic Inference Lab

Full-stack machine learning engineered for reality.

I build systems where modelling quality and engineering quality are inseparable: scalable ML pipelines, CI/CD for model deployment, inference serving, and monitoring-ready outputs. Where it adds leverage, I ship RAG/LLMOps with traceability and evaluation discipline.

Scalable ML Pipelines · CI/CD for ML · Model Serving · Inference Scaling · Monitoring · RAG / LLMOps

Serving & Latency Signals

Demonstrates production discipline: deployment-ready artefacts, serving endpoints, and latency-aware design choices.

Model Serving · p95 Latency Budget · Rollback Mindset
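
For illustration, a minimal sketch of what a p95 latency-budget check can look like; the predict() stub, request count, and 300 ms budget are assumptions for the example, not the live system's code.

    # Minimal sketch: measure request latencies and compare p95 against a budget.
    import random
    import statistics
    import time

    P95_BUDGET_MS = 300.0  # assumed latency budget for the serving endpoint

    def predict(payload: dict) -> dict:
        """Stand-in for a real model inference call."""
        time.sleep(random.uniform(0.005, 0.05))  # simulated inference work
        return {"risk_score": random.random()}

    def measure_p95(n_requests: int = 200) -> float:
        """Send n_requests through predict() and return the p95 latency in ms."""
        latencies_ms = []
        for _ in range(n_requests):
            start = time.perf_counter()
            predict({"feature": 1.0})
            latencies_ms.append((time.perf_counter() - start) * 1000.0)
        return statistics.quantiles(latencies_ms, n=100)[94]  # 95th percentile

    if __name__ == "__main__":
        p95 = measure_p95()
        status = "within" if p95 <= P95_BUDGET_MS else "over"
        print(f"p95 latency: {p95:.1f} ms ({status} the {P95_BUDGET_MS:.0f} ms budget)")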

Forecasting Evaluation Signals

Forecasting treated as a system: backtesting discipline, error reporting, and monitoring-ready outputs for planning decisions.

Backtesting · MAE / MAPE · Retraining Triggers
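
A minimal sketch of the backtesting and trigger pattern this refers to; the naive last-value forecaster, window sizes, demand series, and 15% MAPE threshold are illustrative assumptions.

    # Minimal sketch: rolling-origin backtest reporting MAE / MAPE, plus a simple
    # retraining trigger. Forecaster, windows, and threshold are assumptions.
    from statistics import mean

    RETRAIN_MAPE_THRESHOLD = 0.15  # assumed: flag retraining when MAPE exceeds 15%

    def naive_forecast(history: list[float], horizon: int) -> list[float]:
        """Placeholder model: repeat the last observed value."""
        return [history[-1]] * horizon

    def backtest(series: list[float], horizon: int = 3, min_train: int = 12) -> dict:
        """Rolling-origin evaluation: forecast each horizon from an expanding window."""
        abs_errors, pct_errors = [], []
        for split in range(min_train, len(series) - horizon + 1):
            forecast = naive_forecast(series[:split], horizon)
            actual = series[split:split + horizon]
            for f, a in zip(forecast, actual):
                abs_errors.append(abs(f - a))
                if a != 0:
                    pct_errors.append(abs(f - a) / abs(a))
        return {"mae": mean(abs_errors), "mape": mean(pct_errors)}

    if __name__ == "__main__":
        demand = [100, 104, 98, 110, 115, 120, 118, 125, 130, 128, 140, 138, 150, 155, 160]
        report = backtest(demand)
        print(f"MAE={report['mae']:.2f}  MAPE={report['mape']:.2%}")
        if report["mape"] > RETRAIN_MAPE_THRESHOLD:
            print("Retraining trigger fired: error budget exceeded.")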

RAG Reliability Signals

RAG with explicit traceability: retrieval trace, similarity signals, citation discipline, and refusal policies on weak evidence.

Retrieval Tracing · Guardrails · Evaluation Harness
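
To make the refusal policy concrete, a minimal sketch of retrieval with a similarity floor and a recorded trace; the toy corpus, hand-written embeddings, and 0.35 threshold are assumptions, not the copilot's actual retriever.

    # Minimal sketch: retrieval with a similarity floor and an explicit refusal path.
    import math

    MIN_SIMILARITY = 0.35  # assumed floor below which the assistant refuses to answer

    CORPUS = {
        "doc-001": ([0.9, 0.1, 0.0], "Substation maintenance intervals are 6 months."),
        "doc-002": ([0.1, 0.8, 0.1], "Fault alarms are escalated after 15 minutes."),
    }

    def cosine(a: list[float], b: list[float]) -> float:
        dot = sum(x * y for x, y in zip(a, b))
        return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

    def retrieve(query_embedding: list[float]) -> dict:
        """Score every document, keep the best hit, and record a retrieval trace."""
        scored = sorted(
            ((cosine(query_embedding, emb), doc_id, text)
             for doc_id, (emb, text) in CORPUS.items()),
            reverse=True,
        )
        best_score, best_id, best_text = scored[0]
        trace = [{"doc_id": d, "similarity": round(s, 3)} for s, d, _ in scored]
        if best_score < MIN_SIMILARITY:
            return {"answer": None, "refusal": "Insufficient evidence.", "trace": trace}
        return {"answer": best_text, "citation": best_id, "trace": trace}

    if __name__ == "__main__":
        print(retrieve([0.85, 0.15, 0.05]))  # strong match: answers with a citation
        print(retrieve([0.0, 0.0, 1.0]))     # weak match: refuses, trace still returned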

These are screening signals, not marketing claims: each card links to the live system and a proof anchor for fast verification.

Hero systems

Three end-to-end builds that demonstrate the same full-stack identity: data → features → training → artefacts → serving → monitoring.

NeuroGrid Fault Risk Scoring

Medium-voltage fault risk inference with versioned artefacts, CI-friendly training flow, and live API serving.

Tabular ML · Model Serving · CI Training · Artefact Versioning
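
As a small illustration of artefact versioning, a minimal sketch that records a trained model's content hash in a manifest; the manifest fields, artifacts/ directory, and model.pkl filename are illustrative assumptions rather than the project's actual registry.

    # Minimal sketch: register a model artefact by content hash plus a metadata manifest.
    import datetime
    import hashlib
    import json
    import pathlib

    def register_artifact(model_path: str, registry_dir: str = "artifacts") -> dict:
        """Hash the artefact and write a small manifest keyed by that hash."""
        path = pathlib.Path(model_path)
        digest = hashlib.sha256(path.read_bytes()).hexdigest()
        manifest = {
            "file": path.name,
            "sha256": digest,
            "size_bytes": path.stat().st_size,
            "registered_at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        }
        out_dir = pathlib.Path(registry_dir)
        out_dir.mkdir(exist_ok=True)
        (out_dir / f"{digest[:12]}.json").write_text(json.dumps(manifest, indent=2))
        return manifest

    if __name__ == "__main__":
        # Hypothetical usage: register a serialised model file after training.
        print(register_artifact("model.pkl"))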

Forecast Studio

Forecasting treated as engineering: feature pipelines, backtesting harness, and deployment-ready outputs.

Time-Series · Backtesting · MLOps Patterns · Monitoring-ready Outputs

RAG Knowledge Copilot

Retrieval traceability, grounded drafting, and guardrails — designed to signal production-minded LLMOps.

RAG · LLMOps · Evaluation Harness · Guardrails

Why this portfolio is structured this way

The goal is to remove ambiguity in technical screening: each system is paired with clickable proof and explicit keywords. This improves ATS keyword matching and increases reviewer confidence.

Engineering signals

  • Versioned artefacts and reproducible runs.
  • CI/CD mindset (quality gates, regression checks; see the gate sketch after this list).
  • Serving and latency budgets where relevant.
  • Monitoring hooks and retraining triggers.
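
A minimal sketch of the quality-gate idea referenced above; the AUC metric, JSON metric files, and 0.02 absolute tolerance are assumptions for the example, not the real pipeline configuration.

    # Minimal sketch: a CI quality gate that blocks promotion when a candidate model
    # regresses against the current baseline. Paths and tolerance are assumptions.
    import json
    import sys

    REGRESSION_TOLERANCE = 0.02  # assumed: allow at most a 0.02 absolute AUC drop

    def load_metrics(path: str) -> dict:
        with open(path) as fh:
            return json.load(fh)

    def gate(baseline_path: str, candidate_path: str) -> int:
        """Return 0 (pass) or 1 (fail) so CI can use the exit code as the gate."""
        baseline = load_metrics(baseline_path)
        candidate = load_metrics(candidate_path)
        drop = baseline["auc"] - candidate["auc"]
        if drop > REGRESSION_TOLERANCE:
            print(f"FAIL: AUC regressed by {drop:.3f} (> {REGRESSION_TOLERANCE}).")
            return 1
        print(f"PASS: AUC change {-drop:+.3f} is within tolerance.")
        return 0

    if __name__ == "__main__":
        # Hypothetical CI invocation: python quality_gate.py baseline.json candidate.json
        sys.exit(gate(sys.argv[1], sys.argv[2]))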

Modelling signals

  • Backtesting/evaluation discipline (not just a single score).
  • Feature engineering aligned with operational constraints.
  • Uncertainty-aware thinking when required (forecasting).
  • Grounding and faithfulness for LLM workflows (RAG).