Context Engineering: The Skill That Replaced Prompt Engineering
Everyone's talking about context engineering as if it's a prompting technique. It's not. It's the skill that determines whether your AI investment produces strategy or noise — and the part that matters most has nothing to do with code.
The term gets thrown around as if it were a better way to write prompts. It’s not. Or rather — it is, but that framing misses the point so completely that it’s almost worse than not knowing the term at all.
Andrej Karpathy defined it as “the delicate art and science of filling the context window with just the right information for the next step.” Harrison Chase, the CEO of LangChain, put it more bluntly: “When agents mess up, they mess up because they don’t have the right context; when they succeed, they succeed because they have the right context.” Both are right. Both are talking about code.
The part nobody’s writing about is that the same discipline — curating the right information, in the right format, at the right time — applies to every decision an organisation makes. And the decisions that matter most aren’t happening in the code editor.
The bottleneck has moved
The inner loop is accelerating fast — AI handles more of the code generation, testing, and implementation every month. But the outer loop — what to build, for whom, why now — hasn’t sped up nearly as much. That’s where the bottleneck has moved.
Andrew Ng frames it precisely: “Writing software, especially prototypes, is becoming cheaper. This will lead to increased demand for people who can decide what to build.” McKinsey’s research tells the same story from the other direction — adoption in marketing and operations dwarfs adoption in strategy and planning. The gap is enormous, and it’s not because the tools aren’t capable. It’s because most teams treat AI as an execution engine, not a thinking partner.
Prompt engineering trained us to ask better questions. Context engineering asks a harder one: does the model have everything it needs to give you an answer worth acting on?
Context engineering where it actually matters
The coding use case is well-documented — RAG pipelines, token budgets, tool descriptions, memory systems. I’ve built tooling for this myself. But the use cases that change what gets built, not just how fast, look different.
Requirements. Most teams hand an LLM a feature request and ask it to generate user stories. That’s transcription, not analysis. Context engineering for requirements means giving the model the existing product context, the customer feedback, the competitive landscape, the technical constraints — and then asking it to challenge your assumptions. Where are the gaps in this PRD? What customer problem are we not solving? What have we assumed that we haven’t validated? The output quality is entirely dependent on the input context. Garbage in, polished garbage out.
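The contrast between transcription and analysis shows up in how the prompt is assembled. A minimal sketch in Python, assuming a plain string-based prompt pipeline; the section headings, field names, and challenge questions are illustrative, not a fixed schema:

```python
def build_requirements_context(
    feature_request: str,
    product_context: str,
    customer_feedback: list[str],
    constraints: list[str],
) -> str:
    """Assemble engineered context for a requirements review, then ask
    the model to challenge assumptions rather than transcribe them."""
    sections = [
        "## Feature request\n" + feature_request,
        "## Existing product context\n" + product_context,
        "## Customer feedback\n" + "\n".join(f"- {f}" for f in customer_feedback),
        "## Technical constraints\n" + "\n".join(f"- {c}" for c in constraints),
        # The ask is analysis, not transcription.
        "## Task\n"
        "Challenge this request. Where are the gaps? "
        "What customer problem is not addressed? "
        "Which assumptions are unvalidated?",
    ]
    return "\n\n".join(sections)

# Hypothetical example inputs, for illustration only.
prompt = build_requirements_context(
    "Add CSV export to the reporting dashboard",
    "B2B analytics product; reports are already shareable via links",
    ["'I need to get data into my own spreadsheets' (3 enterprise accounts)"],
    ["Exports must respect row-level permissions"],
)
```

The transcription approach would send only the first section. Everything after it is the engineered context — and the final section is what turns generation into analysis.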
Architecture. Generating a system diagram is trivial. Evaluating whether an event-sourced architecture is worth its complexity for your specific regulatory requirements, team size, and operational maturity — that’s a context problem. The model needs to know your constraints, your compliance obligations, your team’s experience, your existing infrastructure. Without that context, you get textbook answers. With it, you get trade-off analysis that’s actually useful.
Business analysis. Competitive intelligence, market sizing, positioning strategy, pricing — all context-dependent. The difference between a generic SWOT analysis and one that surfaces an insight you hadn’t considered is the difference between feeding the model a company name and feeding it your customer interviews, your churn data, your pricing experiments, and your competitors’ recent moves.
Marketing and content strategy. This is not “AI writes blog posts.” This is using engineered context — audience analysis, competitive content landscape, topic saturation research, the author’s voice and expertise — to make strategic decisions about what to publish, when, and why. The drafting is the easy part — call me old-fashioned, but I still write things myself, sometimes even longhand. The context curation that precedes the writing is where the value lives.
The handoff problem
In a multi-agent workflow I described in an earlier post, the pipeline looks like this: a Business Analyst agent challenges assumptions. A PM structures the findings into a PRD. An Architect proposes the system design. Engineers implement. A human reviews at every gate between phases.
Every one of those handoffs is a context engineering problem. What does the PM agent need to know about the BA’s findings? What does the Architect need from the PRD that the PM produced? What context gets lost in the handoff, and what irrelevant context accumulates? The agents have full autonomy within each phase — but the quality of each phase depends entirely on the context it receives from the one before.
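One way to make those handoffs deliberate rather than accidental is to give each gate a typed artifact, so “what the next agent receives” is an explicit decision. A sketch using Python dataclasses — the pipeline, field names, and example values are hypothetical, not the structure from the earlier post:

```python
from dataclasses import dataclass


@dataclass
class BAFindings:
    """What the PM agent receives from the Business Analyst."""
    validated_assumptions: list[str]
    open_questions: list[str]
    rejected_ideas: list[str]  # kept for the human reviewer at the gate


@dataclass
class PRD:
    """What the Architect receives from the PM."""
    problem_statement: str
    requirements: list[str]
    non_goals: list[str]  # explicit exclusions survive the handoff
    constraints: list[str]


def pm_handoff(findings: BAFindings) -> PRD:
    """The context-engineering step: decide what crosses the gate.
    Rejected ideas become explicit non-goals instead of silently vanishing."""
    return PRD(
        problem_statement="; ".join(findings.validated_assumptions),
        requirements=[],  # to be filled in by the PM agent
        non_goals=findings.rejected_ideas,
        constraints=[],
    )


# Hypothetical handoff, for illustration.
prd = pm_handoff(BAFindings(
    validated_assumptions=["Users need raw data access"],
    open_questions=["Which formats beyond CSV?"],
    rejected_ideas=["Build a full BI tool"],
))
```

The point of the types is not validation; it is that every field is a decision about what the downstream agent needs — and anything not in the schema is, deliberately, left behind.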
This is the same problem that plagues human teams, by the way. Requirements get lost between product and engineering. Architectural constraints don’t reach the people writing the code. Context degrades at every handoff. The discipline of context engineering — deciding what to include, what to exclude, and how to structure the handoff — is not new. What’s new is that we now have a name for it, and a set of tools that make the cost of getting it wrong immediately visible.
Less is harder than more
Ned Lowe, CTO and co-founder of MISSION+, describes the discipline of being “uncomfortably simple” — stripping away complication until what remains is the actual problem. Context engineering demands the same restraint. The temptation is always to include more: more documents, more history, more data. But Anthropic’s own guidance is clear — treat context as a finite resource. Every token must justify its inclusion. Too much context degrades output as reliably as too little. The signal drowns in the noise.
It’s the same principle that makes a great film editor — what you leave on the cutting room floor matters more than what you keep. The audience never sees the discarded footage, but they feel its absence in the pacing, the clarity, the impact. Context engineering is a curation skill, not a generation skill. Knowing what to leave out is harder than knowing what to put in. A well-engineered context for an architecture decision might be three paragraphs — the constraints, the options, the evaluation criteria. A poorly engineered one is a 50-page dump of every document tangentially related to the system. The model will process both. Only one will produce a useful answer.
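The “finite resource” framing can be made mechanical: score candidate items for relevance, sort, and stop including them when the budget runs out. A sketch, assuming relevance scores come from upstream (say, an embedding model) and using a rough characters-per-token heuristic as a stand-in for a real tokenizer:

```python
def estimate_tokens(text: str) -> int:
    # Rough heuristic: ~4 characters per token for English prose.
    # A real pipeline would use the model's actual tokenizer.
    return max(1, len(text) // 4)


def curate_context(items: list[tuple[float, str]], budget_tokens: int) -> list[str]:
    """Keep the highest-relevance items that fit the budget; drop the rest.
    `items` is (relevance_score, text), scored upstream."""
    selected, used = [], 0
    for score, text in sorted(items, key=lambda it: it[0], reverse=True):
        cost = estimate_tokens(text)
        if used + cost > budget_tokens:
            continue  # leave it on the cutting room floor
        selected.append(text)
        used += cost
    return selected


# Hypothetical candidates for an architecture decision.
context = curate_context(
    [(0.9, "Constraint: must pass SOC 2 audit"),
     (0.2, "Full meeting transcript from 2023 kickoff..." * 50),
     (0.8, "Team has no prior event-sourcing experience")],
    budget_tokens=50,
)
```

Greedy selection by relevance is the crudest possible policy — but even this sketch enforces the discipline the paragraph describes: every item must justify its token cost, and the long, tangential transcript never makes the cut.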
The fractional CTO’s discovery phase — which I wrote about in the previous post — is context engineering by another name. You’re building a mental model of the organisation: its technology, its team, its culture, its constraints. That mental model becomes the context for every subsequent decision. The quality of your recommendations depends on the quality of that model. Garbage context, garbage strategy.
From talking to thinking
Prompt engineering was learning to talk to the machine. Context engineering is learning to think with it.
The people who treat it as a coding skill will optimise the inner loop — faster code generation, better test coverage, smoother deployments. That’s valuable. But the people who apply it to requirements, architecture, strategy, and vision will change what gets built, not just how fast it ships.
The term is almost a year old. The discipline is as old as decision-making itself — curate the right information, present it clearly, and the quality of the decision follows. What’s changed is that we now have systems where the cost of bad context is immediate and measurable, and the reward for good context is an AI that thinks with you instead of past you.
The outer loop is where a large proportion of the value lives. Engineer the context accordingly.