FlashAcademy DevEnv: From "Works On My Machine" to Kubernetes-Native
After modernizing FlashAcademy's production infrastructure, rebuild their local development environment to match — ending years of onboarding friction and "works on my machine" debugging.
The Context
This project came immediately after the FlashAcademy EKS migration. We'd just finished building a modern, GitOps-driven production platform. But there was a problem: the local development environment was still stuck in 2019.
A senior developer asked if I could convert their Makefile to a Taskfile as a quick improvement. I took one look at the setup and decided a total rewrite was in order.
The Problem
When I tried to set up the local dev environment on my MacBook — with 20+ years of infrastructure experience — I hit wall after wall. The steps were confusing, the dependencies were fragile, and I found myself questioning whether things were working or silently failing.
The cert situation was a red flag
The setup required injecting locally generated certificates into your OS's trust store. Browsers still threw warnings. Every developer had gone through this ritual, and every developer had a slightly different result.
Onboarding was blocked by availability
New hires couldn't set up their environment without a senior developer jumping on a call to walk them through the process. If the seniors were busy? Onboarding waited.
The development workflow was... creative
Developers would work directly inside the running Docker containers, make changes there, then copy the generated files back to their host machines. This wasn't a workaround — it was the standard process.
M1/M2 Macs were second-class citizens
Three of us had Apple Silicon machines. The x86 images ran painfully slow under emulation. This had been raised multiple times, but fixing it meant changing the fragile Docker Compose setup that nobody wanted to touch.
The Unity team had given up
FlashAcademy has Unity developers building WebGL components. They had tried to get the local environment working. They couldn't. They'd been working around it ever since.
The Decision
The senior developer wanted a Makefile-to-Taskfile conversion. A reasonable ask — Taskfile is cleaner, cross-platform, and declarative.
But I'd just spent months building a proper Kubernetes platform for production. The local environment was Docker Compose with Traefik and Make scripts — completely different from what we'd deployed to EKS. Every "works on my machine" bug was a symptom of this mismatch.
I proposed something bigger: rebuild local development to mirror production. Same Kubernetes manifests. Same patterns. True environment parity.
They said yes.
The 7 Days
I worked solo, referencing the EKS infrastructure we'd just built.
Days 1-2: Tilt Foundation
Started with Tilt and the critical path — the services developers touch every day. Got the core backend and frontend running in k3d with live reload. Proved the concept worked.
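A minimal sketch of what that Tilt setup looks like; the image ref, paths, and sync directory below are illustrative, not the actual FlashAcademy config:

```python
# Tiltfile (sketch): service names and paths are illustrative

# Apply the Kubernetes manifests through the local Kustomize overlay
k8s_yaml(kustomize('k8s/overlays/local'))

# Build the backend image with live-update: source changes are synced
# straight into the running container instead of triggering a rebuild
docker_build(
    'flashacademy/backend',
    context='services/backend',
    live_update=[
        sync('services/backend/src', '/app/src'),
    ],
)

# Surface the workload in the Tilt UI and expose it on localhost
k8s_resource('backend', port_forwards='8080:80', labels=['core'])
```

The live_update rules are what make the reload fast: edits land in the running pod as file syncs rather than new image builds.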
Days 3-4: Service Migration
Brought over the remaining services one by one. 12 microservices across 6 different tech stacks: PHP/Laravel, React/TypeScript, Python/FastAPI, Astro.js, Unity WebGL, and supporting infrastructure.
Days 5-6: Makefile → Taskfile
Rewrote all the developer commands. task up to start everything. task db:seed to populate test data. 18 Taskfile modules for granular control.
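As a rough sketch of the shape this took (module names and paths are illustrative, not the real repo layout):

```yaml
# Taskfile.yml (sketch): module names and paths are illustrative
version: '3'

includes:
  db: ./taskfiles/db.yml            # provides db:seed, db:reset, ...
  cluster: ./taskfiles/cluster.yml  # provides cluster:create, cluster:delete, ...

tasks:
  up:
    desc: Create the local cluster (if needed) and start every service via Tilt
    cmds:
      - task: cluster:create
      - tilt up
```

Each module owns one concern, so developers can run a narrow command like task db:seed without learning the whole system.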
Day 7: Polish and Documentation
ARM64 detection for M1/M2 Macs. The wildcard DNS trick (*.xip.flashacademy.com auto-resolves to localhost). AWS ACM certificate sync so nobody ever generates local certs again.
It was exhausting. I focused exclusively on this for the full week and it drained me. But it was done.
The Solution
| Layer | Technology | Purpose |
|---|---|---|
| Cluster | k3d | Lightweight K8s-in-Docker (single-node, 15s startup) |
| Orchestration | Tilt | Live reload, unified UI, service dependencies |
| Manifests | Kustomize | Base/overlay pattern — same manifests local → prod |
| Ingress | ingress-nginx | Production-identical routing |
| SSL | AWS ACM sync | Real certificates, zero local generation |
| Registry | k3d local registry | Build once, cache locally |
| Task Runner | Taskfile | Declarative, cross-platform commands |
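The cluster itself bootstraps in roughly one command; the cluster and registry names here are illustrative:

```sh
# Sketch: single-node k3d cluster with a built-in local image registry
k3d cluster create flashacademy \
  --agents 0 \
  --registry-create flashacademy-registry:0.0.0.0:5050
```

Images built by Tilt get pushed to that local registry, so rebuilds hit a warm cache instead of pulling from the internet.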
The Results
| Metric | Before | After |
|---|---|---|
| Setup commands | 10+ manual steps | 4 commands |
| New engineer onboarding | 2-3 days (senior required) | 30 minutes (self-service) |
| Environment parity | Docker Compose vs K8s | 100% identical manifests |
| M1/M2 Mac support | Slow x86 emulation | Native ARM64 builds |
| Live reload latency | Rebuild container | ~500ms (no rebuild) |
| Unity team status | "Never got it working" | Fully onboarded |
Additional metrics:
- 12 microservices integrated across 6 tech stacks
- 56 Kubernetes manifests with proper Kustomize structure
- 18 Taskfile modules for granular control
- Delivered solo in 7 days (typically 4-6 weeks for a platform team)
The Reaction
A couple of jaw drops. Some loud praise.
A genuine "Thank you" from developers who'd been fighting this setup for years.
"At last!"
The Unity developers — the ones who had never gotten the old setup working — were finally onboarded. The resistance disappeared when the friction did.
What Made This Work
Environment parity was the real goal
This wasn't just "make local dev easier." It was "make local dev identical to production." Same manifests, same patterns, same debugging experience. When something works locally now, it works in staging. That confidence changes how teams ship.
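In practice the parity comes from the Kustomize base/overlay split: production and local both point at the same base, and only environment-specific details are patched. A minimal sketch of a local overlay (paths and patch names are illustrative):

```yaml
# k8s/overlays/local/kustomization.yaml (sketch): paths are illustrative
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

# The base is the exact same set of manifests production applies
resources:
  - ../../base

# Only environment-specific details are patched, e.g. the local hostname
patches:
  - path: ingress-host.yaml
    target:
      kind: Ingress
      name: backend
```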
ARM64 detection solved a real pain point
A custom build.tilt library detects the host architecture and builds native images. M1/M2 Macs went from painfully slow to fast. No configuration needed.
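A minimal sketch of the idea behind that detection; the helper name and arguments are illustrative, not the actual contents of build.tilt:

```python
# build.tilt (sketch): helper and image names are illustrative
def arch_aware_build(ref, context):
    # Ask the host which CPU it has: arm64/aarch64 on Apple Silicon, x86_64 elsewhere
    arch = str(local('uname -m', quiet=True)).strip()
    platform = 'linux/arm64' if arch in ('arm64', 'aarch64') else 'linux/amd64'

    # Build a native image so M1/M2 machines never fall back to x86 emulation
    docker_build(ref, context, platform=platform)
```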
Zero hosts-file editing
*.xip.flashacademy.com wildcard DNS auto-resolves to 127.0.0.1. Works instantly on any machine, any OS. One less manual step that could go wrong.
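Combined with the ingress-nginx routing, every service simply declares a host under the wildcard domain. A sketch, with the service name illustrative:

```yaml
# Sketch: an Ingress host under the wildcard domain (service name is illustrative).
# backend.xip.flashacademy.com resolves to 127.0.0.1 on any machine, no hosts-file edits.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: backend
spec:
  ingressClassName: nginx
  rules:
    - host: backend.xip.flashacademy.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: backend
                port:
                  number: 80
```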
AWS ACM certificate sync
Production SSL certificates pulled from Secrets Manager. No more local cert generation, no more browser warnings, no more injecting things into your OS's trust store.
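A hedged sketch of what such a sync task can look like, assuming the certificate and key live in Secrets Manager as the section above describes; the secret IDs and names are illustrative:

```yaml
# taskfiles/certs.yml (sketch): secret IDs and names are illustrative
version: '3'

tasks:
  sync:
    desc: Pull the shared wildcard certificate and install it as a TLS secret
    cmds:
      - aws secretsmanager get-secret-value --secret-id flashacademy/dev-tls-cert --query SecretString --output text > tls.crt
      - aws secretsmanager get-secret-value --secret-id flashacademy/dev-tls-key --query SecretString --output text > tls.key
      - kubectl create secret tls wildcard-tls --cert=tls.crt --key=tls.key --dry-run=client -o yaml | kubectl apply -f -
      - rm tls.crt tls.key
```

Wired in as an include, something like task certs:sync runs once during setup and the browser trusts the local environment immediately.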
The EKS migration paid dividends
We'd already containerized everything properly during the production migration. We'd already fought through the PHP/Apache legacy issues. This project inherited all that work — the recipe was proven.
Connection to the EKS Migration
This project only worked because of the production infrastructure migration we'd just completed. The containerization was done. The Kubernetes patterns were established. The manifests existed.
Building local dev to mirror production meant developers now work in the same environment they deploy to. The "works on my machine" category of bugs effectively disappeared.
Two projects, one coherent platform — from laptop to production.
This case study represents a complete developer environment transformation, delivered solo in 7 days, enabling a team that had struggled with local development for years to onboard new engineers in 30 minutes.
Have a Similar Challenge?
Whether it's developer experience, local environments, or platform engineering — I'd love to hear about it.