Barnett Studios
13 April 2026 · Guide · Lyubomir Bozhinov · 6 min read

Strangler Fig in Practice: Migrating Legacy Without the Big Bang

Everyone knows the strangler fig pattern. Few teams finish the migration. Here's what actually kills it — and the fixes that work.

Your legacy system has reached the point where every change is archaeology. The instinct is to rewrite — start fresh, do it properly this time. Every team that attempts it believes they’re the exception. Joel Spolsky called the full rewrite “the single worst strategic mistake that any software company can make.” He wrote that in 2000, about Netscape, and the lesson hasn’t aged: the few rewrites that succeed take two to five years and consume the entire engineering organisation. Still true, even in the age of LLMs. The strangler fig offers a different bet: incremental replacement, reversible at every step.

Martin Fowler named the pattern in 2004, after Australian strangler figs — epiphytes that seed in the upper branches of a host tree, grow downward over many years, and eventually envelop and kill the host. You don’t kill the old system. You grow the new one around it until the old one is empty.

The pattern is well understood. The execution is where teams get stuck. Here’s what actually kills strangler fig migrations in practice.

The shared database

The most common failure mode. You’ve strangled the service layer — new requests route to the new system, old ones still hit the monolith — but both systems still read and write the same database.

You’ve moved the coupling, not removed it. Schema changes break both systems. Deployments are still coordinated. Sam Newman calls this the Shared Database anti-pattern in Monolith to Microservices, and it’s the trap most teams fall into first. The database is deferred because it’s the hardest thing to split — and because you can demo a new microservice to stakeholders, but you can’t demo a database migration. Your services may speak HTTP to each other, but if they share tables, you’ve drawn a diagram, not an architecture.

One fix that works: Change Data Capture. Debezium streams changes from the old database into the new system’s data store via (for example) Kafka — no dual-writes, no schema coupling, no coordination at deploy time. If CDC is too heavy for your scale, a simpler event relay works: the old system publishes change events, the new system consumes them. Either way, the database boundary is the migration that matters most. Everything upstream is plumbing.
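In code, the relay variant is small. Here’s a minimal sketch of the consuming side, assuming the kafkajs client — the topic name, event shape, and data-access helpers are all illustrative, not prescribed:

```typescript
import { Kafka } from "kafkajs";

// Shape of the change events the old system publishes — an assumption;
// match whatever your legacy system can actually emit.
interface ChangeEvent {
  op: "upsert" | "delete";
  id: string;
  payload: Record<string, unknown>;
}

// Hypothetical data-access helpers for the new system's own store.
// Both are idempotent on purpose: redelivered events are harmless.
async function upsertCustomer(id: string, data: Record<string, unknown>): Promise<void> {
  /* write to the new store */
}
async function deleteCustomer(id: string): Promise<void> {
  /* remove from the new store */
}

const kafka = new Kafka({ brokers: ["localhost:9092"] });
const consumer = kafka.consumer({ groupId: "new-system-relay" });

async function run(): Promise<void> {
  await consumer.connect();
  // Topic name is illustrative; one topic per strangled aggregate works well.
  await consumer.subscribe({ topic: "legacy.customers.changes", fromBeginning: true });
  await consumer.run({
    eachMessage: async ({ message }) => {
      const event: ChangeEvent = JSON.parse(message.value!.toString());
      if (event.op === "delete") await deleteCustomer(event.id);
      else await upsertCustomer(event.id, event.payload);
    },
  });
}

run().catch(console.error);
```

The idempotent helpers are the important design choice: relays and CDC pipelines redeliver, and a consumer that can replay safely is what makes the cutover reversible.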

The fig that never finishes

You migrated the easy routes — high-traffic, well-understood, few dependencies. The remaining twenty percent handle eighty percent of the edge cases. So they stay. Now you’re operating two systems indefinitely: double the infrastructure cost, double the on-call burden, double the cognitive load. I’ve seen migrations that have been “almost done” for three years.

Ian Cartwright, Rob Horn, and James Lewis documented this in Patterns of Legacy Displacement on Fowler’s site: displacement efforts stall not because the team lacks capability, but because there’s no forcing function. The hard routes are hard. Nobody volunteers. The deadline extends. The old system becomes permanent.

Set a kill date. Not a target — a decision point. If routes aren’t migrated by that date, the decision escalates to leadership: finish the migration or consciously accept the dual-system cost. Don’t extend the deadline. Most teams drift. Drift is the default. A kill date forces the conversation that drift avoids. The conversation is uncomfortable. That’s the point.

The distributed monolith

You strangled the monolith into services that still deploy together, share a release train, and fail in cascade. Newman’s test is simple: can two services deploy independently, without coordinating? If not, they’re one service with a network boundary in the middle — worse than a monolith, because you’ve added latency and failure modes without gaining independence.

The fix feels like going backwards: merge services that can’t deploy independently back into one. A well-structured modular monolith is better than a distributed monolith. Shopify’s engineering team demonstrated this at scale — they evaluated microservices, decided the operational overhead was unacceptable, and evolved their monolith into a modular architecture instead. They shipped faster because of it. The goal was never microservices. It was the appropriate level of complexity for the deployment topology you actually run. The courage to merge services back together is underrated.

The missing seam

You want to strangle incrementally, but there’s no clean boundary to strangle through. No front controller, no API gateway, no single point where you can intercept and redirect requests.

Michael Feathers defined the concept of a seam in Working Effectively with Legacy Code — a place where you can alter behaviour without editing the code around it. If your monolith doesn’t have one, create it before you start strangling. Fowler’s Branch by Abstraction does exactly this: introduce an abstraction layer inside the monolith, route through it, then redirect the abstraction to the new system.
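What that abstraction looks like in practice — a minimal sketch with hypothetical names; callers depend on the interface and never on either implementation:

```typescript
// Branch by Abstraction: introduce the interface inside the monolith,
// route all callers through it, then flip the flag route by route.
interface InvoiceService {
  render(invoiceId: string): Promise<string>;
}

// Wraps the existing code path inside the monolith.
class LegacyInvoiceService implements InvoiceService {
  async render(invoiceId: string): Promise<string> {
    return legacyRenderInvoice(invoiceId); // the old, battle-tested logic
  }
}

// Calls the extracted service over HTTP.
class NewInvoiceService implements InvoiceService {
  async render(invoiceId: string): Promise<string> {
    const res = await fetch(`http://invoices.internal/render/${invoiceId}`);
    return res.text();
  }
}

// The seam: the single point that decides which side of the branch runs.
// Flip it per route, per tenant, or per percentage of traffic.
function invoiceService(useNewInvoices: boolean): InvoiceService {
  return useNewInvoices ? new NewInvoiceService() : new LegacyInvoiceService();
}

// Stand-in for the monolith's existing function.
async function legacyRenderInvoice(invoiceId: string): Promise<string> {
  return `<invoice id="${invoiceId}">…</invoice>`;
}
```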

The seam is the prerequisite. The strangler fig is the strategy you execute through it. Skip the first step and you’ll spend months trying to route traffic through a system that has no routing layer.

The tooling that changed everything

When Fowler wrote the original article in 2004, the tooling barely existed. Today the execution is dramatically easier — and most of the failure modes above have tooling-level solutions.

Proxies and gateways — Envoy, Kong, Traefik — give you the routing layer for traffic splitting at the infrastructure level. Feature flags let you canary new routes to a percentage of traffic before cutting over. CDC with Debezium and Kafka handles data migration without dual-writes or schema coupling. eBPF-based observability — Cilium, Pixie — lets you monitor traffic at both the old and new layers without instrumenting either codebase.
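The traffic-splitting idea is simple enough to sketch at application level — not Envoy config, but the same mechanism. This assumes the node-http-proxy package; the upstream URLs, route, and percentage are illustrative:

```typescript
import http from "http";
import httpProxy from "http-proxy"; // node-http-proxy

const LEGACY = "http://legacy.internal:8080";      // illustrative upstreams
const NEW = "http://new-service.internal:8080";
const CANARY_PERCENT = 10; // dial up as confidence grows; zero rolls back instantly

const proxy = httpProxy.createProxyServer({});

http.createServer((req, res) => {
  // Only the strangled route is split; everything else stays on the monolith.
  const strangled = req.url?.startsWith("/invoices") ?? false;
  const target = strangled && Math.random() * 100 < CANARY_PERCENT ? NEW : LEGACY;
  proxy.web(req, res, { target });
}).listen(3000);
```

A real canary would bucket by user or session rather than per request, so a given user sees one system consistently — but the reversibility is the point: set the percentage to zero and the rollback is done.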

And increasingly, LLMs. AI coding assistants are particularly effective at the grunt work that makes strangling painful — understanding undocumented legacy code, writing tests for behaviour you need to preserve, generating adapter layers between old and new interfaces. They don’t replace the architectural decisions. They make the migration labour less daunting.
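The characterization test is the canonical example of that grunt work. A sketch, with a stand-in for the legacy function — the point is pinning what the old code does today, quirks included, before anything moves:

```typescript
import { test } from "node:test";
import assert from "node:assert/strict";

// Stand-in for the undocumented legacy function under test. The double
// space before the parenthesis is the kind of quirk real legacy code has —
// characterization tests pin quirks like this on purpose.
function formatStatement(input: { customerId: string; period: string }): string {
  return `Statement for ${input.customerId}  (${input.period})`;
}

test("preserves legacy statement formatting, quirks included", () => {
  const statement = formatStatement({ customerId: "C-1042", period: "2025-12" });
  // The expected value is captured from a run of the old system, not
  // derived from a spec. If the new system can't reproduce it, either
  // the migration isn't done or you've found a bug worth a decision.
  assert.equal(statement, "Statement for C-1042  (2025-12)");
});
```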

The combination is what matters. The gateway routes traffic. Feature flags control the percentage. CDC keeps the databases in sync during transition. Observability confirms the new path behaves like the old one. Each tool handles one concern. Together, they make the incremental approach safe.

The pattern hasn’t changed much since 2004. The infrastructure around it has matured enough that the hard parts are no longer technical. They’re organisational.


The strangler fig works because every step is reversible — if the new route fails, traffic goes back to the old one. It fails when teams avoid the hard decisions: cutting the shared database, setting the kill date, merging services that shouldn’t have been split. The technical strategy is sound. Finishing it is a leadership problem.