A selection of production systems and applied ML work. Each project is written as a lightweight case study:
context, role, technical depth, and what “done” looks like in production (tests, monitoring, handoff).
Software systems
Microservices · Integrations
Quanoni Platform
Context:
Platform for on-demand and scheduled consultations with routing, notifications, and payments.
Role:
Built and integrated core backend services and production-ready workflows (auth, messaging, payments, async processing).
Technical depth:
OAuth2 + JWT authentication, event-driven flows with Kafka, external API integrations (WhatsApp + payments),
and operable services with predictable failure handling (retries, idempotency where needed).
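The retry and idempotency pattern mentioned above can be sketched as an at-most-once event consumer. This is a minimal illustration, not the project's actual implementation: the class name, the in-memory dedupe set, and the backoff parameters are all hypothetical (a production Kafka consumer would use a durable store for seen event ids).

```python
import time


class IdempotentConsumer:
    """Processes each event at most once by id; retries transient failures with backoff."""

    def __init__(self, handler, max_retries=3, base_delay=0.01):
        self.handler = handler
        self.max_retries = max_retries
        self.base_delay = base_delay
        self.seen = set()  # illustrative only: production would persist this durably

    def process(self, event):
        if event["id"] in self.seen:
            return "duplicate"  # redelivery is safe: the event is skipped
        for attempt in range(self.max_retries):
            try:
                self.handler(event)
                self.seen.add(event["id"])
                return "processed"
            except Exception:
                if attempt == self.max_retries - 1:
                    raise  # give up after the last retry; broker will redeliver
                time.sleep(self.base_delay * 2 ** attempt)  # exponential backoff
```

Marking the event id as seen only after the handler succeeds is what makes retries and redeliveries safe to combine.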
Outcome:
Production workflows designed for reliability, observability, and maintainable handoff (tests + operational considerations).
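The JWT side of the authentication flow above boils down to a stateless signature check. A minimal standard-library sketch of HS256 signing and verification, for illustration only (the platform's actual setup uses Spring's OAuth2 stack, and production deployments commonly use asymmetric algorithms rather than a shared secret):

```python
import base64
import hashlib
import hmac
import json


def _b64url(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()


def _b64url_decode(s: str) -> bytes:
    return base64.urlsafe_b64decode(s + "=" * (-len(s) % 4))


def sign_jwt_hs256(claims: dict, secret: str) -> str:
    # Header and payload are base64url-encoded JSON, signed with HMAC-SHA256.
    header = _b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    payload = _b64url(json.dumps(claims).encode())
    sig = hmac.new(secret.encode(), f"{header}.{payload}".encode(),
                   hashlib.sha256).digest()
    return f"{header}.{payload}.{_b64url(sig)}"


def verify_jwt_hs256(token: str, secret: str) -> dict:
    header, payload, sig = token.split(".")
    expected = hmac.new(secret.encode(), f"{header}.{payload}".encode(),
                        hashlib.sha256).digest()
    # Constant-time comparison avoids timing side channels.
    if not hmac.compare_digest(expected, _b64url_decode(sig)):
        raise ValueError("invalid signature")
    return json.loads(_b64url_decode(payload))
```

Because the token carries its own signed claims, the service can authorize requests without a session-store lookup.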
Spring Boot · Spring AI · OAuth2 · JWT · PostgreSQL · Angular · Kafka · WhatsApp API · Payment API
Context:
Tooling that accelerates application generation and feature delivery through AI-assisted workflows.
Role:
Integrated LLM capabilities and implemented core platform features around authentication and payments.
Technical depth:
Spring AI + LLM API integration, secure JWT-based sessions, MySQL persistence, Angular-based UI,
and payment provider integration to support monetized usage.
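One way to keep LLM calls "operationally safe" as described above is to put a hard timeout and a fallback around every call to the model. The sketch below is a hypothetical wrapper, not the project's Spring AI code: `client_call`, the timeout value, and the fallback string are all illustrative.

```python
import concurrent.futures


def guarded_completion(client_call, prompt, timeout_s=5.0, fallback="[unavailable]"):
    """Runs an LLM client call with a hard timeout; returns a fallback on error or timeout."""
    pool = concurrent.futures.ThreadPoolExecutor(max_workers=1)
    future = pool.submit(client_call, prompt)
    try:
        return future.result(timeout=timeout_s)
    except Exception:
        # Timeouts and provider errors both degrade to a controlled fallback
        # instead of propagating into the user-facing flow.
        return fallback
    finally:
        pool.shutdown(wait=False)
```

Bounding latency and failure at the integration boundary keeps a slow or flaky provider from stalling the rest of the request path.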
Outcome:
AI features integrated in a way that is operationally safe for production (clear interfaces, controlled flows, and maintainable code paths).
Spring Boot · Spring AI · JWT · MySQL · Angular · Payment API · LLM API integration
Context:
End-to-end ML system running under real deployment constraints (edge devices, sensor integration, resource limits).
Role:
Built the training/inference pipeline and distributed communication layer, including data ingestion from sensors.
Technical depth:
gRPC-based coordination, time-series storage in InfluxDB, PyTorch models (LSTM/GRU),
and robust ingestion from sensor APIs with attention to reliability and reproducibility.
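Feeding sensor time series into LSTM/GRU models starts with windowing: turning one long series into (input, target) pairs. A minimal sketch, with hypothetical function and parameter names (`lookback` and `horizon` are illustrative; the real pipeline works on tensors, not lists):

```python
def make_windows(series, lookback, horizon=1):
    """Builds (input, target) pairs for sequence models from one time series."""
    pairs = []
    for i in range(len(series) - lookback - horizon + 1):
        x = series[i:i + lookback]              # model input: last `lookback` points
        y = series[i + lookback:i + lookback + horizon]  # target: next `horizon` points
        pairs.append((x, y))
    return pairs
```

The same windowing must be applied identically at training and inference time, which is one reason to keep it in a single shared function for reproducibility.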
Outcome:
Demonstrates the full ML engineering loop: data → training → evaluation → inference, with distributed/edge constraints in mind.
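Writing sensor readings into InfluxDB means serializing points into its line protocol (`measurement,tags fields timestamp`). A minimal formatter, assuming numeric field values only (string fields need quoting, and the measurement/tag names here are hypothetical):

```python
def to_line_protocol(measurement, tags, fields, ts_ns):
    """Formats one point in InfluxDB line protocol (numeric fields only)."""
    tag_str = ",".join(f"{k}={v}" for k, v in sorted(tags.items()))
    field_str = ",".join(f"{k}={v}" for k, v in sorted(fields.items()))
    return f"{measurement},{tag_str} {field_str} {ts_ns}"
```

Batching many such lines into one write request is the usual way to keep ingestion cheap on resource-limited edge devices.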
Python · gRPC · PyTorch · InfluxDB · Sensors API integration · LSTM · GRU
Context:
Dataset and experimentation pipeline designed for trustworthy ML work on thermal/building data.
Role:
Implemented the preprocessing, quality checks, visualization, and modeling baselines for repeatable experimentation.
Technical depth:
Data visualization and automated quality checks, anomaly detection/removal, and model baselines using XGBoost and PyTorch (LSTM).
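One of the simplest automated quality checks named above is statistical outlier removal. A z-score filter sketched with the standard library (the threshold of 3.0 is a common default, not necessarily what this pipeline used):

```python
import statistics


def drop_outliers(values, z_max=3.0):
    """Removes points whose z-score exceeds z_max; keeps everything if variance is zero."""
    mean = statistics.fmean(values)
    stdev = statistics.pstdev(values)
    if stdev == 0:
        return list(values)  # constant series: nothing can be an outlier
    return [v for v in values if abs(v - mean) / stdev <= z_max]
```

Applying checks like this as a fixed, versioned preprocessing step is what makes downstream model comparisons repeatable.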
Outcome:
Cleaner datasets and reproducible experiments that make model performance comparisons more reliable.