Every Dependency Is a Decision You Didn't Make
Your lockfile is hundreds of trust relationships nobody negotiated. The highest-profile supply chain attacks of the last decade exploited trust, not code — and no scanner caught any of them in time.
The most sophisticated supply chain attack in open source history was discovered because one developer thought a login felt slow.
In March 2024, Andres Freund — a Microsoft engineer working on PostgreSQL — noticed 500 milliseconds of unexplained SSH latency on a Debian test machine. He traced it to a backdoor in XZ Utils, planted by a contributor named Jia Tan who had spent over two years earning maintainer trust before inserting malicious code through modified build scripts and test fixtures disguised as harmless binary data. CVSS 10.0. Had it reached stable Linux distributions, it would have compromised OpenSSH globally. It hadn’t reached them yet — because Freund noticed the latency. By accident. Not by any tool, process, or audit.
I’ve spent over a decade watching teams treat dependencies as a solved problem — run the scanner, tick the compliance box, move on. The scanner catches known vulnerabilities after they’re public. The attacks that matter exploit trust before anyone knows to look.
How trust fails
The xz backdoor was social engineering — a long game played against a single maintainer. But trust fails in other ways too.
Infrastructure acquisition. The Polyfill.io CDN domain was sold to a new owner in early 2024. By June, it was injecting malicious redirects into the script served to end users. Over 380,000 hosts were embedding the compromised script. The trust here wasn’t in a package — it was in a URL loaded at runtime. A <script> tag pointing to a domain you don’t control looks like an obvious mistake in retrospect. It still gets made.
Maintainer handoff. In 2018, the maintainer of event-stream — a popular npm package — handed ownership to a stranger who asked politely. The new maintainer added a targeted payload that stole cryptocurrency wallet credentials from Copay users. It ran undetected for over two months. The trust model was simple: someone offered to help, and the exhausted maintainer said yes.
Namespace confusion. In 2021, Alex Birsan demonstrated that he could compromise internal builds at Apple, Microsoft, and PayPal by publishing public packages that matched their private package names. The build system resolved the public version. No exploit, no vulnerability — just a trust assumption baked into the package manager’s resolution logic.
Typosquatting. A developer mistyping crossenv instead of cross-env gets a malicious package. No sophistication required. The trust failure is banal.
Five incidents, five distinct failure modes. npm audit catches none of them. What it catches — known CVEs in published advisories — is the least dangerous category: public, named, patchable. The attacks that matter are invisible until someone notices a login is 500 milliseconds too slow.
Your attack surface is an architecture decision
The average npm project pulls in around 80 transitive dependencies. Each one is a trust relationship — entered on your behalf, by someone else, without your review.
The choice to reach for a framework that brings 200 transitive packages vs. writing 50 lines yourself is a security decision wearing productivity clothes. Nobody frames it that way. Dependency count doesn’t appear in architecture reviews, sprint planning, or tech debt discussions. It should. Over 2,000 malicious packages were identified on npm alone in 2024 — a roughly 57-fold increase from 2018. The attack surface is expanding faster than the tooling that’s supposed to protect it.
This is harder to address in existing codebases — frameworks like Next.js and Rails bring deep dependency trees by design. But every new dependency is a decision point. The question is whether that decision is deliberate or invisible.
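Making the count visible is cheap. In npm lockfile v2/v3, every installed package, direct or transitive, appears under the top-level "packages" key, so a few lines can report how many trust relationships a project has actually entered. A rough sketch (the trimmed sample lockfile is invented for illustration):

```python
import json

def count_locked_packages(lockfile_text: str) -> int:
    """Count distinct installed packages recorded in an npm lockfile (v2/v3).

    Every installed package, direct or transitive, appears as a key
    under "packages" prefixed with "node_modules/".
    """
    lock = json.loads(lockfile_text)
    return sum(1 for key in lock.get("packages", {}) if key.startswith("node_modules/"))

# Hypothetical, heavily trimmed lockfile: one direct dependency plus two transitive ones.
sample = json.dumps({
    "name": "app",
    "lockfileVersion": 3,
    "packages": {
        "": {"dependencies": {"left-pad": "^1.3.0"}},
        "node_modules/left-pad": {"version": "1.3.0"},
        "node_modules/a": {"version": "2.0.0"},
        "node_modules/b": {"version": "0.1.0"},
    },
})
print(count_locked_packages(sample))  # → 3
```

Run it against a real project and put the number in the architecture review. A count you can see is a count you can argue about.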
Security that fights the developer loses
Every security control that creates friction creates a workaround. The friction dynamic is the same as in human-AI oversight: the safe path has to be easier than the unsafe path, or people route around it. Not out of malice. Out of pragmatism.
At one company, the security team mandated npm audit and Dependabot on every PR. Within a month, the output was so noisy — dozens of warnings in dev dependencies that never touched production — that every developer on the team had muscle-memorised the --force flag. The tool was technically running. The security posture was worse than before it was introduced, because the ticked box created a false sense of compliance while the real signal went unread.
The approval board that takes a week to vet a new dependency teaches engineers to pin old versions or copy the code in directly. The secure internal library that requires three days of onboarding will always lose to the npm package that takes three minutes to integrate. Every friction point is a workaround waiting to happen.
And now the newest vector: AI-generated code accepted without reviewing the trust implications. When an LLM suggests pip install requests or npm install axios, the developer accepts a dependency decision made by a model trained on popularity, not security. Worse — open-source code generation models hallucinate non-existent package names 21.7% of the time, according to a study testing 16 models across 576,000 code samples. Attackers register these hallucinated names and wait. The practice has a name: slopsquatting. A security researcher registered one hallucinated package — huggingface-cli — as a proof of concept and watched it accumulate over 30,000 downloads in three months, including from a major tech company that had copy-pasted the AI’s recommendation into their documentation.
AI-assisted development increases the rate at which dependencies enter a codebase. If your review process was already strained at human speed, it won’t survive at AI-augmented speed. Every person is a vector — not because they’re careless, but because the system made the careful choice expensive and the careless choice instant.
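One cheap guard covers both typosquats and hallucinated names: check every proposed dependency against a list of packages your team already trusts before installing. An exact hit passes, a near-miss is probably a squat, and anything else gets a human look. A minimal sketch, where the allowlist and the 0.8 similarity threshold are placeholders you would tune:

```python
from difflib import SequenceMatcher

TRUSTED = {"cross-env", "express", "axios", "lodash"}  # placeholder allowlist

def vet_name(candidate: str) -> str:
    """Classify a proposed dependency name against a trusted allowlist."""
    if candidate in TRUSTED:
        return "known"
    # A near-identical name is more likely a typosquat than a genuinely new package.
    closest = max(TRUSTED, key=lambda t: SequenceMatcher(None, candidate, t).ratio())
    if SequenceMatcher(None, candidate, closest).ratio() > 0.8:
        return f"suspicious: did you mean {closest}?"
    return "unknown: review before installing"

print(vet_name("cross-env"))  # known
print(vet_name("crossenv"))   # flags the near-miss against cross-env
```

The point is not this particular heuristic; it is that the check runs in milliseconds, which keeps the careful path as fast as the careless one.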
Trust infrastructure
The fix isn’t a better scanner. It’s making trust decisions deliberate — and making the deliberate path the easy path.
Inventory the trust you’ve extended. You cannot manage trust relationships you haven’t mapped. A Software Bill of Materials (SBOM) is the dependency equivalent of an asset register. Regulatory frameworks worldwide are converging on this — the EU’s Digital Operational Resilience Act, effective since January 2025, mandates ICT third-party risk management for financial institutions and doesn’t care whether your dependency came from npm or a vendor. The direction of travel is clear even where mandates haven’t landed yet: you can’t make trust decisions about relationships you haven’t inventoried.
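Recent npm versions can emit an SBOM directly (npm sbom, with CycloneDX or SPDX output). To show what the inventory actually contains, here is a sketch that flattens lockfile entries into CycloneDX-style components; the field names follow CycloneDX conventions, but treat this as illustrative rather than a spec-complete generator, and the sample lockfile is invented:

```python
import json

def lockfile_to_components(lock: dict) -> list:
    """Flatten npm lockfile (v2/v3) entries into CycloneDX-style components."""
    components = []
    for path, meta in lock.get("packages", {}).items():
        if not path.startswith("node_modules/"):
            continue  # skip the root project entry
        name = path.rsplit("node_modules/", 1)[-1]
        version = meta.get("version", "unknown")
        components.append({
            "type": "library",
            "name": name,
            "version": version,
            "purl": f"pkg:npm/{name}@{version}",  # package URL: one stable ID per component
        })
    return components

lock = {"packages": {
    "": {},
    "node_modules/axios": {"version": "1.6.0"},
    "node_modules/follow-redirects": {"version": "1.15.0"},
}}
print(json.dumps({"bomFormat": "CycloneDX", "components": lockfile_to_components(lock)}, indent=2))
```

An asset register is only useful if it stays current, so generate this in CI on every lockfile change, not quarterly by hand.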
Verify provenance, not just version. npm now supports provenance attestations via Sigstore — cryptographic proof that a published package was built from a specific source repository via a specific CI pipeline. It answers the question no scanner asks: “was this package built from the code I can read?” Adoption is still limited. It should be the default.
Make the dependency decision visible. Not a review board — it creates friction that defeats the purpose. A lightweight gate: who maintains this? How many transitive deps does it bring? Is there a smaller alternative? Three questions, answered in the PR description, before the merge. Not thirty pages of policy. A shared definition of what “acceptable trust” means for your team — and the discipline to apply it when the deadline is tomorrow.
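To show how lightweight that gate can be: a CI step can simply refuse to merge a lockfile-changing PR unless the description answers the three questions. The marker strings below are an invented convention for this sketch, not a standard; your team would pick its own:

```python
# Markers the gate looks for in the PR description (hypothetical convention).
REQUIRED_ANSWERS = ("Maintainer:", "Transitive deps:", "Smaller alternative considered:")

def gate_passes(pr_description: str, lockfile_changed: bool) -> bool:
    """Allow a lockfile-changing PR only if the trust questions are answered."""
    if not lockfile_changed:
        return True  # no new trust relationships, nothing to gate
    return all(marker in pr_description for marker in REQUIRED_ANSWERS)

desc = """Adds CSV export.
Maintainer: single maintainer, active since 2019
Transitive deps: 3
Smaller alternative considered: yes, ~40 lines, but quoting edge cases are nasty
"""
print(gate_passes(desc, lockfile_changed=True))              # True
print(gate_passes("Adds CSV export.", lockfile_changed=True))  # False
```

The check itself is trivial by design: the value is that the three answers become part of the reviewable record, not that a script enforces them cleverly.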
Watch behaviour, not advisories. Tools like Socket.dev analyse what packages do — network calls, filesystem access, shell execution — rather than waiting for a CVE to be filed. Socket reports detecting over a hundred zero-day supply chain attacks per week through behavioural analysis. This is the difference between checking a guest list and watching what the guests do after they walk in.
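The guest-list/behaviour distinction fits in a few lines: instead of looking a package up in an advisory database, inspect what it declares and does. This toy check flags npm lifecycle scripts (which run automatically on install) and a few capability markers in source; the pattern lists are deliberately crude stand-ins for the much deeper static and dynamic analysis a tool like Socket.dev performs:

```python
RISKY_SCRIPTS = ("preinstall", "install", "postinstall")  # run automatically on npm install
RISKY_PATTERNS = ("child_process", "eval(", "http.request", "fs.writeFile")  # crude markers

def behavioural_flags(manifest: dict, source: str) -> list:
    """Flag behaviours worth a human look, regardless of any CVE database."""
    flags = [f"lifecycle script: {s}" for s in RISKY_SCRIPTS if s in manifest.get("scripts", {})]
    flags += [f"capability: {p}" for p in RISKY_PATTERNS if p in source]
    return flags

# Hypothetical package: an install hook plus shell execution in its source.
manifest = {"scripts": {"postinstall": "node setup.js"}}
source = "const cp = require('child_process'); cp.exec(cmd);"
print(behavioural_flags(manifest, source))
```

String matching this naive produces false positives, which is exactly why the output should be a prompt for review, never an automatic block.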
The supply chain doesn’t start at the registry. It starts at the keyboard — every time someone types npm install, accepts an AI suggestion, or merges a PR without checking what it pulled in. Most of those decisions are made by someone who just wants the feature to work. If your security model depends on every developer making the careful choice every time — at 4pm on a Friday, with looming deadlines, with an LLM offering the fast answer — it has already failed. The model that works is the one where the careful choice is the easy choice, the trust decisions are visible, and the cost of getting it right is lower than the cost of getting it wrong.