Fullstack · AI Engineering

Saad Tachrimant

How I work

How I ship reliable fullstack platforms and AI features.

Four years delivering Spring Boot/Angular platforms, Kafka-backed data flows, and applied AI/LLM features. The focus is scoped engagements with acceptance tests, weekly slices, and an observable handoff so teams can run what we build.

1) Clarify scope and acceptance

We start with outcomes, constraints, and interfaces. I write down acceptance tests, integration points, and what “done” means so we avoid scope drift and overengineering.
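To make that concrete, here is a minimal sketch of what an acceptance test for a single slice can look like on a Spring Boot service. The endpoint, IDs, and class names are hypothetical; the point is that "done" is written down as an executable check before the work starts.

```java
// Purely illustrative: the endpoint, IDs, and seeded data are hypothetical.
// "Done" for this slice = the lookup endpoint exists, returns the order it knows,
// and answers 404 for anything else.
import org.junit.jupiter.api.Test;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.autoconfigure.web.servlet.AutoConfigureMockMvc;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.test.web.servlet.MockMvc;

import static org.springframework.test.web.servlet.request.MockMvcRequestBuilders.get;
import static org.springframework.test.web.servlet.result.MockMvcResultMatchers.jsonPath;
import static org.springframework.test.web.servlet.result.MockMvcResultMatchers.status;

@SpringBootTest
@AutoConfigureMockMvc
class OrderLookupAcceptanceTest {

    @Autowired
    private MockMvc mockMvc;

    @Test
    void knownOrderIsReturned() throws Exception {
        // Assumes order 42 is part of the seeded test data.
        mockMvc.perform(get("/api/orders/42"))
               .andExpect(status().isOk())
               .andExpect(jsonPath("$.id").value(42));
    }

    @Test
    void unknownOrderIsA404() throws Exception {
        mockMvc.perform(get("/api/orders/999999"))
               .andExpect(status().isNotFound());
    }
}
```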

2) Stack choices follow reliability

I pick tools based on reliability, maintainability, and handoff. Typical choices: Spring Boot for services, Angular for frontends, Kafka for streaming, Postgres/MySQL/MongoDB for storage, and Python/PyTorch/Spring AI for models and LLM features.

3) AI & ML treated as product

Neural features, classical models, and LLM flows are engineered like the rest of the system: evaluation, cost control, and guardrails from day one.
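As an illustration of what a guardrail can mean in practice, here is a minimal sketch that is independent of any specific LLM client: every model response passes a cheap, deterministic check before it reaches a user or a downstream system. The thresholds, patterns, and names are hypothetical.

```java
import java.util.List;
import java.util.Optional;
import java.util.regex.Pattern;

// Illustrative guardrail: thresholds, patterns, and names are hypothetical.
// The point is that every LLM response passes a cheap, deterministic check
// before it reaches a user or a downstream system.
final class LlmOutputGuardrail {

    private static final int MAX_CHARS = 4_000;             // hard cap on response size
    private static final List<Pattern> BLOCKED = List.of(
            Pattern.compile("(?i)api[_-]?key\\s*[:=]"),      // leaked credential markers
            Pattern.compile("\\b\\d{16}\\b"));               // card-number-like digit runs

    /** Returns the response if it passes all checks, otherwise empty so the caller can retry or fall back. */
    Optional<String> validate(String response) {
        if (response == null || response.isBlank() || response.length() > MAX_CHARS) {
            return Optional.empty();
        }
        boolean blocked = BLOCKED.stream().anyMatch(p -> p.matcher(response).find());
        return blocked ? Optional.empty() : Optional.of(response);
    }
}
```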

4) Delivery, observability, handoff

Weekly thin slices with demos, tests, and instrumentation. I keep assumptions explicit, document decisions, and hand off with runbooks so you are not dependent on me.
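For a sense of scale, instrumentation on a slice can be as small as one Micrometer timer that answers "is this behaving in production?" without reading logs. The metric and class names below are illustrative.

```java
import java.util.function.Supplier;

import io.micrometer.core.instrument.MeterRegistry;
import io.micrometer.core.instrument.Timer;

// Illustrative instrumentation: metric and class names are hypothetical.
// Every slice ships with at least one signal that answers
// "is this behaving in production?".
class OrderLookupMetrics {

    private final Timer lookupTimer;

    OrderLookupMetrics(MeterRegistry registry) {
        // Latency histogram for the order-lookup slice, exported by whatever
        // registry the platform already uses (Prometheus, Datadog, ...).
        this.lookupTimer = Timer.builder("orders.lookup.duration")
                .publishPercentileHistogram()
                .register(registry);
    }

    // Wrap the actual lookup so every call is timed.
    <T> T recordLookup(Supplier<T> lookup) {
        return lookupTimer.record(lookup);
    }
}
```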

5) Engagement modes

Freelance, delivery-focused, and scoped. I either own an engagement end-to-end or embed with your team to ship specific slices.

The goal is clear scope, visible progress, and production-ready outcomes — not open-ended advisory work.

What working together feels like

Structured, transparent, and calm. I surface trade-offs early, document decisions, keep stakeholders updated weekly, and make progress visible with demos and metrics.

Start a scoped project