My Journey
Professional Experience
A detailed look at my career path and achievements
AI Engineer - LLM Ops
Devoteam
Devoteam is a leading European consulting group that helps businesses leverage digital transformation through cloud, data, and AI solutions.
Delivered technical consulting for AWS-backed startups while contributing to the internal AI community through mentoring and knowledge sharing.
Key Achievements
- Took ownership of 4+ startup engagements, designing and implementing production-grade ML and LLM solutions
- Conducted technical interviews and mentored junior engineers
- Led sessions on emerging AI trends like LLMOps and multi-cloud architectures
- Established best practices for LLM evaluation and multi-provider flexibility that became reusable assets
Client Engagements
Kinetix
Audio AI Startup • 2025
Situation: Kinetix's GPU costs for Whisper inference were unsustainable at scale. Their NVIDIA-based infrastructure threatened unit economics as customer volume grew.
Task: Evaluate AWS Inferentia as a cost-effective alternative and migrate the Whisper inference stack without compromising transcription quality or latency.
Action: I ported the Whisper model to AWS Inferentia using the Neuron SDK, ran rigorous benchmarks against an NVIDIA Triton baseline across varied audio workloads, and implemented KV-cache and I/O optimizations.
Result: Achieved a 50% reduction in inference costs with a 10% performance improvement. Delivered a comprehensive benchmark report that informed Kinetix's long-term infrastructure strategy.
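The kind of benchmark harness behind a comparison like this can be sketched in plain Python; the `transcribe` callable and the pricing numbers here are hypothetical stand-ins, not Kinetix's actual tooling:

```python
import statistics
import time

def benchmark(transcribe, clips, warmup=2, percentile=0.95):
    """Time a transcription callable over a set of audio clips.

    `transcribe` stands in for either backend (Inferentia or Triton):
    it takes one clip and returns a transcript.
    """
    for clip in clips[:warmup]:  # warm up caches / compiled graphs
        transcribe(clip)
    latencies = []
    for clip in clips:
        start = time.perf_counter()
        transcribe(clip)
        latencies.append(time.perf_counter() - start)
    latencies.sort()
    idx = min(int(len(latencies) * percentile), len(latencies) - 1)
    return {"mean_s": statistics.mean(latencies), "p95_s": latencies[idx]}

def cost_per_hour_of_audio(mean_latency_s, clip_seconds, instance_usd_per_hour):
    """Rough unit-economics figure: USD to transcribe one hour of audio."""
    clips_per_audio_hour = 3600 / clip_seconds
    compute_hours = clips_per_audio_hour * mean_latency_s / 3600
    return compute_hours * instance_usd_per_hour
```

Running the same harness against both backends with identical clip sets is what makes a cost-per-hour comparison apples-to-apples.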
Start Catalog
Enterprise SaaS • 2025
Situation: The application was tightly coupled to OpenAI, creating vendor lock-in. Additionally, valuable enterprise knowledge was trapped in SharePoint repositories.
Task: Enable multi-provider LLM flexibility without disrupting existing functionality, and build a RAG pipeline to unlock SharePoint knowledge.
Action: I architected a feature-flag system for seamless switching between OpenAI and Bedrock. Built a SharePoint crawling mechanism, indexed documents in OpenSearch with semantic chunking, and integrated a Bedrock agent with a knowledge base.
Result: Delivered multi-provider LLM flexibility allowing runtime provider switching, plus a fully operational enterprise RAG system with secure, permission-aware document retrieval.
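The runtime-switching idea can be sketched as a flag-driven dispatch; the provider functions and flag names below are illustrative placeholders, with real API calls stubbed out:

```python
from typing import Callable, Dict

ProviderFn = Callable[[str], str]

def openai_complete(prompt: str) -> str:
    # In production this would call the OpenAI API.
    return f"[openai] {prompt}"

def bedrock_complete(prompt: str) -> str:
    # In production this would call Amazon Bedrock (e.g. via boto3).
    return f"[bedrock] {prompt}"

PROVIDERS: Dict[str, ProviderFn] = {
    "openai": openai_complete,
    "bedrock": bedrock_complete,
}

def complete(prompt: str, flags: Dict[str, str]) -> str:
    """Route a completion call based on a runtime feature flag.

    `flags` stands in for whatever flag store the app reads per
    request, so providers can be switched without a redeploy.
    """
    provider = flags.get("llm_provider", "openai")
    return PROVIDERS[provider](prompt)
```

Because the flag is read at request time rather than at startup, flipping providers requires no code change and no restart.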
Welcome Account
FinTech Translation • 2025
Situation: The team had no industrialized way to measure LLM translation quality. Manual spot-checks couldn't scale, and regressions were only discovered after impacting customers.
Task: Build an automated LLM evaluation pipeline integrated with CI/CD, and implement observability for real-time monitoring of usage, latency, and costs.
Action: I implemented a DeepEval-based evaluation framework directly in the CI/CD pipeline. Added Sentry tracing for LLM calls and integrated Comet translation quality metrics.
Result: Established automated LLM testing in CI/CD that catches regressions before deployment, and enabled live usage and cost monitoring, giving leadership full visibility into AI spend.
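The shape of a CI quality gate like this can be sketched as follows; the toy overlap metric is only an illustration standing in for a real scorer such as a DeepEval or COMET metric:

```python
def evaluate_translations(pairs, score_fn, threshold=0.8):
    """Score (source, reference, candidate) triples and gate on the mean.

    `score_fn` stands in for a real quality metric returning a value
    in [0, 1]; the gate passes only if the mean clears `threshold`.
    """
    scores = [score_fn(src, ref, cand) for src, ref, cand in pairs]
    mean = sum(scores) / len(scores)
    return mean, mean >= threshold

def token_overlap(src, ref, cand):
    """Toy metric for illustration: token overlap with the reference."""
    ref_tokens, cand_tokens = set(ref.split()), set(cand.split())
    return len(ref_tokens & cand_tokens) / max(len(ref_tokens), 1)
```

In CI, the job would run `evaluate_translations` over a fixed regression suite and exit non-zero when the gate fails, blocking the deploy.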
Bevolta
Energy Tech Startup • 2025
Situation: AWS deprecated their Forecast service, forcing an urgent migration. Bevolta's system provided per-client forecasting models critical for personalization.
Task: Migrate from AWS Forecast to SageMaker AutoML while preserving per-client model personalization and implementing intelligent retraining.
Action: I designed a migration architecture using SageMaker Autopilot with multiple candidate algorithms. Built a DynamoDB-based performance tracking system with conditional retraining triggers. Created two operational modes: cost-conscious and exploratory.
Result: Achieved functional migration with zero service disruption. Delivered intelligent performance monitoring that triggers retraining only when needed, plus FinOps optimization reducing ongoing ML infrastructure costs.
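The conditional-retraining idea can be sketched like this; the in-memory dict stands in for the DynamoDB table, and the window and tolerance values are illustrative:

```python
from collections import deque

class RetrainMonitor:
    """Per-client error tracker with a conditional retrain trigger.

    Each client keeps a rolling window of forecast errors; retraining
    fires only when the recent mean degrades past a tolerance over the
    baseline error recorded at the last training run.
    """

    def __init__(self, window=7, tolerance=0.2):
        self.window = window
        self.tolerance = tolerance
        # client_id -> {"errors": deque, "baseline": float}
        self.state = {}  # stand-in for the DynamoDB table

    def record(self, client_id, error):
        entry = self.state.setdefault(
            client_id,
            {"errors": deque(maxlen=self.window), "baseline": error},
        )
        entry["errors"].append(error)

    def should_retrain(self, client_id):
        entry = self.state.get(client_id)
        if not entry or len(entry["errors"]) < self.window:
            return False  # not enough evidence yet
        recent = sum(entry["errors"]) / len(entry["errors"])
        return recent > entry["baseline"] * (1 + self.tolerance)
```

Gating retraining on measured degradation, rather than retraining on a fixed schedule, is what keeps the FinOps side of the system in check.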
ML Engineer - MLOps
Solocal
Data Scientist - ML Engineer
Renault Digital
