Question for tanepiper: what would you have Microsoft do to improve things here?
My read of your article is that you don't like postinstall scripts and npx.
I'm not convinced that removing those would have a particularly major impact on supply chain attacks. The nature of npm is that it distributes code that is then executed. Even without npx, an attacker could still release an updated package which, when executed as a dependency, steals environment variables or similar.
And in the meantime, discarding both would break the existing workflows of practically every JavaScript-using developer in the world!
You mention 2FA. npm has required that for maintainers of the top 100 packages since 2022; would you like to see that policy extended to the top 1,000, the top 10,000, or everyone? https://github.blog/security/supply-chain-security/top-100-n...
You also mention code signing. I like that too, but do you think that would have a material impact on supply chain attacks given they start with compromised accounts?
The investment I most want to see around this topic is in sandboxing: I think it should be the default that all code runs in a robust sandbox unless there is a very convincing reason not to. That requires investment that goes beyond a single language and package-management platform - it's something that needs to be available and trustworthy across multiple operating systems.
It's a stretch to pin blame on Microsoft. They're probably the reason the service is still up at all (TFA admits as much). In hindsight it's likely that all they wanted from the purchase was AI training material. At worst they're guilty of apathy, but that's no worse than the majority of npm ecosystem participants.
"No Way To Prevent This" Says Only Package Manager Where This Regularly Happens
My non-solution years ago was to use as few dependencies as possible, vendor node_modules, and then review every line of changed code whenever I update dependencies.
Not every project and team can do that. But when feasible, it's a strong mitigation layer.
What worked was splitting dependency-diff review among the team so it's less of a burden for any one person. We pin exact versions and update judiciously.
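For anyone wanting to try this, the mechanical part is cheap to set up. A rough sketch of the pin-and-review loop (the dependency name is a placeholder):

    # record exact versions instead of ^ranges on every future install
    npm config set save-exact true

    # with node_modules vendored into git, an update becomes a reviewable diff
    npm update some-dep                      # placeholder package name
    git diff --stat -- node_modules/
    git diff -- node_modules/some-dep | less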
The title does not do justice to the public's actual course of thinking, which is more like: "Oh no, not again! Anyway, look at this shiny new tool."
Anyone have a good solution to scan all code in our Github org for uses of the affected packages? Many of the methods we've tried have dead ended. Inability to reliably search branches is quite annoying here.
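A partial workaround we've sketched uses the GitHub CLI, with caveats: GitHub code search only indexes default branches (the very limitation complained about above), the package names below are placeholders, and the JSON field names may need adjusting for your gh version:

    for pkg in bad-package-one bad-package-two; do
      # search each repo in the org for references to an affected package
      gh search code --owner your-org "\"$pkg\"" --json repository,path \
        --jq '.[] | [.repository.nameWithOwner, .path] | @tsv'
    done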
Here's a one-liner for node devs on macOS that pins your versions; manually update your supply chain until your tooling supports supply-chain vetting, or at least some level of protection against instantly-updated malicious upstream packages.
Would love to see some default-secure package-management / repo options. Even a 24-hour delayed mirror would be better than what we have today.
    # Rewrite every package.json so dependency ranges become the exact
    # versions recorded in the sibling package-lock.json (requires jq).
    find . -name package.json -not -path "*/node_modules/*" -exec sh -c '
      for pkg; do
        lock="$(dirname "$pkg")/package-lock.json"
        [ -f "$lock" ] || continue
        tmp="$(mktemp)"
        jq --argfile lock "$lock" "
          .dependencies    |= ((. // {}) | with_entries(.value = \$lock.dependencies[.key].version)) |
          .devDependencies |= ((. // {}) | with_entries(.value = \$lock.dependencies[.key].version // \$lock.devDependencies[.key].version))
        " "$pkg" > "$tmp" && mv "$tmp" "$pkg"
      done
    ' sh {} +
I think the cooldown approach would render this type of attack practically harmless. If nobody updates to a newly published package version until, say, 2-3 days have gone by, there will surely be enough time for the owner of the package to notice they got pwned.
pnpm has already implemented a minimum-age policy.
Here's a short recap of what you can do right now, because changing the ecosystem will take years, even if "we" bother to try doing it.
1. Switch to pnpm: it's not only faster and more space-efficient, it also disables post-install scripts by default. Very few packages actually need those to function; most use them for spam and analytics. When you install packages into a project for the first time, it tells you which post-install scripts were skipped and how to whitelist only the ones you need. In most projects I don't enable any, and everything works fine. The "worst" projects required allowing two scripts, out of a couple dozen or so.
They also added this recently, which lets you introduce delays for new versions when updating packages. Combined with `pnpm audit`, I think it can replace the last suggestion of setting up a helper dependency bot, with zero reliance on additional services, commercial or not (a config sketch follows the link below):
https://pnpm.io/settings#minimumreleaseage
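To make that concrete, a minimal sketch of both knobs in pnpm-workspace.yaml; the field names come from the pnpm docs linked above, but the values and the whitelisted package are assumptions to adapt:

    # pnpm-workspace.yaml (assumes pnpm v10+; see the docs link above)
    minimumReleaseAge: 4320      # minutes, i.e. wait ~3 days before a new version is eligible
    onlyBuiltDependencies:
      - esbuild                  # hypothetical: the only package allowed to run install scripts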
2. If you're on Linux, wrap your package managers into bubblewrap, which is a lightweight sandbox that will block access to almost all of your system, including sensitive files like ~/.ssh, and prevent anything running under it from escalating privileges. It's used by flatpak and Steam. A fully working & slightly improved version was posted here:
https://news.ycombinator.com/item?id=45271988
I posted the original here, but it was somewhat broken because some flags were sorted incorrectly (mea culpa). I still prefer using a separate cache directory instead of sharing the "global" ~/.cache because sensitive information might also end up there.
https://news.ycombinator.com/item?id=45041798
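To show the shape of it, here's a stripped-down sketch; the fully working version is in the first link above, so treat this as illustrative only (the cache path and the pnpm command are placeholders):

    # one-time: a dedicated cache dir so nothing sensitive from ~/.cache leaks in
    mkdir -p "$HOME/.cache/npm-sandbox"

    # run the package manager with $HOME hidden except the project dir and that cache
    bwrap \
      --ro-bind / / \
      --dev /dev \
      --proc /proc \
      --tmpfs "$HOME" \
      --bind "$PWD" "$PWD" \
      --bind "$HOME/.cache/npm-sandbox" "$HOME/.cache" \
      --unshare-all \
      --share-net \
      --die-with-parent \
      --new-session \
      pnpm install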
3. Set up renovate or any similar bot to introduce artificial delays into your supply chain, but also to fast-track fixes for publicly known vulnerabilities (a sample config follows the links below). This suggestion caused some unhappiness in the previous discussion for some reason; I really don't care which service you use, this is not an ad, just set up something to track your dependencies, because you will forget. You can fully self-host it. I don't use their commercial offering, never have, don't plan to.
https://docs.renovatebot.com/configuration-options/#minimumr...
https://docs.renovatebot.com/presets-default/#enablevulnerab...
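A minimal renovate.json along those lines might look like this; the three-day window is an arbitrary choice of mine, and the field names are per the docs linked above:

    {
      "extends": ["config:recommended"],
      "minimumReleaseAge": "3 days",
      "vulnerabilityAlerts": {
        "enabled": true
      }
    }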
4. For those truly paranoid or working on very juicy targets, you can always stick your work into a virtual machine, keeping secrets out of there, maybe with one virtual machine per project.
It seems to me like one obvious improvement is for npm to require 2fa to submit packages. The fact that malware can just automatically publish packages without a human having to go through an MFA step is crazy.
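For what it's worth, individual maintainers don't have to wait for npm to mandate it; you can already require 2FA for both login and publish on your own account:

    npm profile enable-2fa auth-and-writes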
Here is an issue from 2013 where developers asked npm to fix the package-signing problem. It was fully ignored because doing so was "too hard": https://github.com/npm/npm/pull/4016
I think that if somebody wants to see library distribution channels tightened up, they need to be very specific about what they would like changed and why it would be better, since the status quo appears to serve what people actually want: being able to create and upload packages, and update them whenever you want.
> But right now there are still no signed dependencies and nothing stopping people using AI agents, or just plain old scripts, from creating thousands of junk or namesquatting repositories.
This is as close as we get in this particular piece. So what's the alternative here exactly - do we want uploaders to sign up with Microsoft accounts? Some sort of developer vetting process? A curated lib store? I'm sure everybody will be thrilled if Microsoft does that to the JS ecosystem. (/s) I'm not seeing a great deal of difference between having someone's NPM creds and having someone's signing key. Let's make things better but let's also be precise, please.
Funny how npm uses the exact same model as Maven, gopkg, CPAN, pip, mix, cargo, and a million others.
But only npm started with a desire to monetize it (well, npm and Docker Hub), and in its desire for control it didn't implement (or allow the community to implement) basic hygiene.
I see two ways to fight supply chain attacks:
* The endless arms race.
But never mind. It's been two years since Jia Tan, and the number of such 'occurrences' in the npm ecosystem over the past 10 years is bordering on uncountable at this point.
And yet this hack got through? This amateurish and extremely obvious attempt? The injected function was literally named something like 'raidBitcoinEthWallet' or whatnot.
npm has clearly done absolutely nothing in this regard.
We haven't even gotten to the argument of '... but then hackers will simply use the automated tools themselves and only release stuff that doesn't get flagged'. There's nothing to talk about; npm has done nothing.
Which gets us to:
* Web of trust
This seems to me to be a near perfect way for the big companies that have earned so, so much money using the free FOSS they rely on, to contribute.
They spend the cash to hire a team that reviews FOSS stuff. Entire libraries, sure, but also updates. I think most of them _already do this_, and many will even openly publish issues they found. But they do not publish negative results (they do not publish 'our internal team had a quick look at update XYZ of project ABC and didn't see anything immediately suspicious').
They should start doing that. And repos like npm, Maven, CPAN, etcetera should allow either the official maintainer of a library, or anybody, to attach 'attestations of no malicious intent'.
Imagine that npm hosts the following blob of text for npm-hosted projects, in addition to the JavaScript artefacts:
> "I, google dev/security team, hereby vouch for this update in the senses: not-malicious. We have looked at it and did not see anything that we think is suspicious or worse. We make absolutely no promises whatsoever that this library is any good or that this update's changelog accurately represents the changes in it; we merely vouch for the fact that we do not think it was written with malicious intent. We explicitly disavow any legal or financial liability with this statement; we merely stake our good name. We have done this analysis on 2025-09-17 and did it for the artefact with SHA512 hash 1238498123abac. Signed, [public/private key infrastructure based signature, google.com].
And a general rule that google.com/.well-known/vouch-public-key or whatever contains the public key so anybody can check.
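None of this infrastructure exists today, so purely as a sketch of how cheap verification could be on the consumer side (the URL layout and file names are hypothetical, following the proposal above):

    # fetch the voucher's published public key, then check their signature
    # over the vouch statement for the artefact
    curl -sO https://google.com/.well-known/vouch-public-key
    openssl dgst -sha512 -verify vouch-public-key \
      -signature update.vouch.sig update.vouch.json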
Aside from Jia Tan/xz (the counterexample that stops every such claim; xz was so legendary that exactly how the fuck THIS still happens, given that massive wakeup call, boggles my mind!), every supply chain attack was pretty dang easy to spot. The problem was that nobody was looking, and everybody configures their build scripts to pick up point updates immediately.
We should update those scripts to 'pick updates up only after a certain vouch score is reached', where everybody can tune their own scoring tables (don't trust Google? Set the value of their vouch to 0).
I think security-crucial 0day fixing updates will not be significantly hampered by this; generally big 0-days are big news and any update that comes out gets analysed to pieces. The vouch signatures would roll in within the hour after posting them.
The beatings will continue until JS dev culture reforms.
Yes
> The tools we use to build software are not secure by default, and almost all of the time, the companies that provide them are not held to account for the security of their products.
The companies? More like the unpaid open-source community volunteers whom the Fortune 500 leech off, contributing nothing in return except demands for free support, fixes, and more features.