ashishb 11 hours ago

Here's my `npm` command these days. It reduces the attack surface drastically.

  alias npm='docker run --rm -it -v ${PWD}:${PWD} --net=host --workdir=${PWD} node:25-bookworm-slim npm'

  - No access to my env vars
  - No access to anything outside my current directory (usually a JS project).
  - No access to my .bashrc or other files.
Ref: https://ashishb.net/programming/run-tools-inside-docker/
  • phiresky 10 hours ago

    That seems a bit excessive to sandbox a command that really just downloads arbitrary code you are going to execute immediately afterwards anyways?

    Also I can recommend pnpm, it has stopped executing lifecycle scripts by default so you can whitelist which ones to run.
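
    For reference, the allow-list lives in package.json under the `pnpm` key (a minimal sketch, assuming pnpm v10+; the package names are just placeholders):

      {
        "pnpm": {
          "onlyBuiltDependencies": ["esbuild", "sharp"]
        }
      }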

    • tetha 2 hours ago

      At work, we're currently looking into firejail and bubblewrap a lot though, and within the ops-team we're looking at ways to run as much as possible, if not everything, through these tools tbh.

      Because the counter-question could be: Why would anything but ssh or ansible need access to my ssh keys? Why would anything but firefox need access to the local firefox profiles? All of those can be mapped out with mount namespaces from the execution environment of most applications.

      And sure, this is a blacklist approach, and a whitelist approach would be even stronger, but the blacklist approach to secure at least the keys to the kingdom is quicker to get off the ground.
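
      As a rough sketch of the mount-namespace idea (not our actual setup; the paths and the on-host node install are assumptions), a bubblewrap invocation can give a tool nothing but /usr and the project directory:

        # nothing from $HOME (ssh keys, .bashrc, browser profiles) is mapped in
        bwrap \
          --ro-bind /usr /usr \
          --symlink usr/bin /bin \
          --symlink usr/lib /lib \
          --ro-bind /etc/resolv.conf /etc/resolv.conf \
          --proc /proc --dev /dev --tmpfs /tmp \
          --bind "$PWD" "$PWD" --chdir "$PWD" \
          --setenv HOME "$PWD" \
          --unshare-all --share-net \
          --die-with-parent \
          npm install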

    • ashishb 9 hours ago

      > Also I can recommend pnpm, it has stopped executing lifecycle scripts by default so you can whitelist which ones to run.

      Imagine you are in a 50-person team that maintains 10 JavaScript projects. Which one is easier?

        - Switch all projects to `pnpm`? That means switching CI, and deployment processes as well
        - Change the way *you* run `npm` on your machine and let your colleagues know to do the same
      
      I find the second to be a lot easier.
      • larusso 4 hours ago

        I don’t get your argument here. 10 isn’t a huge number in my book, but of course I don’t know what else that entails. I would opt for a secure process change over a soft local workflow restriction that may or may not be followed by all individuals. And I would definitely protect my CI system in the same way as local machines. Depending on the nature of CI, these machines can have broad access rights. This really depends on how you do CI and how lax security is.

      • azangru 17 minutes ago

        > which one is easier?

        > Switch all projects to `pnpm`?

        Sorry; I am out of touch. Does pnpm not have these security problems? Do they only exist for npm?

      • jve an hour ago

        Your logic is backwards here. I would have a single person deal with the pnpm migration and CI rather than instruct the other 10 and hope everyone does the right thing. And think about it when the next person comes in... so I'd go for the first option for sure.

        And npm can be configured to prevent install scripts to be run anyways:

        > Consider adding ignore-scripts to your .npmrc project file, or to your global npm configuration.

        But I do like your option to isolate npm for local development purposes.
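
        For reference, that's a one-line change per project (or a global setting). A sketch; note it only covers the lifecycle-script vector, not malicious code you later import:

          # in the project's .npmrc
          echo "ignore-scripts=true" >> .npmrc

          # or for every project on the machine
          npm config set ignore-scripts true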

      • afavour 9 hours ago

        There are a great many extra perks to switching to pnpm though. We switched on our projects a while back and haven’t looked back.

      • fragmede 7 hours ago

        Am I missing something? Don't you also need to change how CI and deployment processes call npm? If my CI server and then also my deployment scripts are calling npm the old insecure way, and running infected install scripts/whatever, haven't I just still fucked myself, just on my CI server and whatever deployment system(s) are involved? That seems bad.

        • ashishb 6 hours ago

          Your machine has more projects, data, and credentials than your CI machine, as you normally don't log into Gmail on your CI. So, just protecting your machine is great.

          Further, you are welcome to use this alias on your CI as well to enhance the protection.

          • arghwhat 2 hours ago

            Attacking your CI machines means poisoning the artifacts you ship and the systems they get deployed to, and getting access to all the source it builds and can access (often more than you have locally) and all the infrastructure it can reach.

            CI machines are very much high-value targets of interest.

          • fragmede an hour ago

            > Further, you are welcome to use this alias on your CI as well to enhance the protection.

            Yes, but if I've got to configure that across the CI fleet as well as in my deploy system(s) in order to not get, and also not distribute, malware, what's the difference between having to do that vs switching to pnpm in all the same places?

            Or more explicitly, your first point is invalid. Whether you ultimately choose to use docker to run npm or switch to pnpm, it doesn't count to half-ass the fix and only tell your one friend on the team to switch. You have to get all developers to switch AND fix your CI system, AND also your deployment system(s) (if they are exposed).

            This comment offers no opinion on which of the two solutions should be preferred, just that the fix needs to be made everywhere.

    • ashishb 9 hours ago

      > That seems a bit excessive to sandbox a command that really just downloads arbitrary code you are going to execute immediately afterwards anyways?

      I won't execute that code directly on my machine. I will always execute it inside the Docker container. Why do you want to run commands like `vite` or `eslint` directly on your machine? Why do they need access to anything outside the current directory?
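
      For example, the same pattern as my npm alias above covers those too (a sketch; adjust the image tag to whatever you use):

        alias npx='docker run --rm -it -v ${PWD}:${PWD} --net=host --workdir=${PWD} node:25-bookworm-slim npx'
        alias node='docker run --rm -it -v ${PWD}:${PWD} --net=host --workdir=${PWD} node:25-bookworm-slim node'

      Then `npx eslint .` or `npx vite` runs inside the container and only ever sees the project directory.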

      • bandrami 8 hours ago

        I get this but then in practice the only actually valuable stuff on my computer is... the code and data in my dev containers. Everything else I can download off the Internet for free at any time.

        • ashishb 8 hours ago

          No.

          The most valuable data on your system, for a malware author, is the login cookies and saved auth tokens of various services.

          • hinkley 7 hours ago

            Maybe keylogging for online services.

            But it is true that work and personal machines have different threat vectors.

            • spicybright 5 hours ago

              Yes, but I'm willing to bet most workers don't follow strict digital life hygiene and cross contaminate all the time.

      • apsurd 8 hours ago

        It annoys me that people fully automate things like type checkers and linting into post-commit hooks, or worse, outsource them entirely to CI.

        Because it means the hygiene is thrown over the fence, in a post-commit manner.

        AI makes this worse because it also runs them "over the fence".

        However you run it, I want a human to hold accountability for the mainline committed code.

        • ashishb 8 hours ago

          I run linters like eslint on my machine inside a container. This reduces attack surface.

          How does this throw hygiene over the fence?

          • apsurd 8 hours ago

            Yes, in a sibling reply I was able to better understand your comment to mean "run stuff on my machine in a container"

      • throwaway290 8 hours ago

        It's weird that it's downvoted because this is the way

        • apsurd 8 hours ago

          Maybe I'm misunderstanding the "why run anything on my machine" part. Is the container on the machine? Isn't that running things on your machine?

          Is he just saying always run your code in a container?

          • minitech 8 hours ago

            > is the container on the machine?

            > is he just saying always run your code in a container?

            yes

            > isn't that running things on your machine?

            in this context where they're explicitly contrasted, it isn't running things "directly on my machine"

    • simpaticoder 10 hours ago

      pnpm has lots of other good attributes: it is much faster, and also keeps a central store of your dependencies, reducing disk usage and download time, similar to what java/mvn does.

    • Kholin 7 hours ago

      I've tried using pnpm to replace npm in my project. It really speeds things up when installing dependencies on the host machine, but it's much slower in the CI containers, even after configuring the cache volume. That made me come back to npm.

    • worthless-trash 4 hours ago

      > That seems a bit excessive to sandbox a command that really just downloads arbitrary code you are going to execute immediately afterwards anyways?

      I don't want to stereotype, but this logic is exactly why the JavaScript supply chain is in the mess it's in.

  • dns_snek 32 minutes ago

    This folk-wisdom scapegoating of post-install scripts needs to stop or people are going to get really hurt by the false sense of security it's creating. I can see the reasoning behind this, I really do, it sounds convincing but it's only half the story.

    If you want to protect your machine from malicious dependencies you must run everything in a sandbox all the time, not just during the installation phase. If you follow that advice then disabling post-install scripts is pointless.

    The supply chain world is getting more dangerous by the minute and it feels like I'm watching a train derail in slow motion with more and more people buying into the idea that they're safe if they just disable post-install scripts. It's all going to blow up in our collective faces sooner or later.

  • kernc 7 hours ago

    > alias npm=...

    I use sandbox-run: https://github.com/sandbox-utils/sandbox-run

    The above simple alias may work for node/npm, but it doesn't generalize to many other programs available on the local system, with resources that would need to be mounted into the container ...

    • ashishb 7 hours ago

      > The above simple alias may work for node/npm, but it doesn't generalize for many other programs that are available on the local system, with resources that would somehow have to get mounted into the container ...

      Thanks. You are right, running inside Docker won't always work for local commands. But I am not even using local commands.

      In fact, I have removed `yarn`, `npm`, and several similar tools already from my machine.

      It is best to run them inside Docker.

      > I use sandbox-run: https://github.com/sandbox-utils/sandbox-run

      How does this work if my local command is a macOS binary? How will it run inside a Docker container?

    • fingerlocks 5 hours ago

      Or use ‘chroot’. Or run it as a restricted owner with ‘chown’. Your grandparents’ solutions to these problems still work.

  • bitbasher 8 hours ago

    There are so many vectors for this attack to piggyback off of.

    If I had malicious intentions, I would probably typosquat popular plugins/LSPs that will execute code automatically when their editor runs. A compromised neovim or vscode gives you plenty of user permissions, a full scripting language, ability to do http calls, system calls, etc. Most LSPs are installed globally, doesn't matter if you downloaded it via a docker command.

    • ashishb 7 hours ago

      > A compromised neovim or vscode gives you plenty of user permissions, a full scripting language, ability to do http calls, system calls, etc. Most LSPs are installed globally, doesn't matter if you downloaded it via a docker command.

      Running `npm` inside Docker does not solve this problem. However, running `npm` inside Docker does not make this problem worse either.

      That's why I said running `npm` inside Docker reduces the attack surface of the malicious NPM packages.

      • dns_snek 12 minutes ago

        I think this approach is harmful because it gives people a false sense of security and makes them complacent by making them feel like they're "doing something about it even if it's not perfect". It's like putting on 5 different sets of locks and bolts on your front door while leaving the back door unlocked and wide open during the night.

  • lelanthran 3 hours ago

    Won't that still download malicious packages that are deps?

  • silverwind 3 hours ago

    This will break native dependencies when the host platform is not the same as the container platform.

  • sthuck 10 hours ago

    That definitely helps and is worth doing. On Mac though I guess you need to move the entire development workflow into containers due to native dependencies.

    • chuckadams 10 hours ago

      My primary dev environment is containers, but you can do a hell of a lot with nix on a mac.

  • genpfault 10 hours ago

    > triple-backtick code blocks

    If only :(

bytefish 4 hours ago

I feel super uneasy developing Software with Angular, Vue or any framework using npm. The amount of dependencies these frameworks take is absolutely staggering. And just by looking at the dependency tree and thousands of packages in my node_modules folder, it is a disaster waiting to happen. You are basically one phishing attack on a poor open source developer away from getting compromised.

To me the entire JavaScript ecosystem is broken. And a typo in your “npm i” is sufficient to open yourself up to a supply-chain attack. Could the same happen with NuGet or Maven? Sure!

But at least in these languages and environments I have a huge Standard Library and very few dependencies to take. It makes me feel much more in control.

ab_testing 8 hours ago

Given the recent npm attacks, is it even safe to develop using npm? Whenever I start a React project, it downloads hundreds of additional packages which I have no idea about what they do. As a developer who has learnt programming as a hobby, is it better to stick to some other, safer ways to develop front end, like Thymeleaf or plain JS or something else?

When I build a backend in Flask or Django, I specifically type the Python packages that I need. But front end development seems like a Pandora's box of vulnerabilities.

  • azangru 12 minutes ago

    > As a developer who has learnt programming as a hobby, is it better to stick to some other safe ways to develop front end like thyme leaf or plain js or something else.

    Oh, absolutely, there is no question about it. Fewer dependencies means less headache; and if you can get the number of your dependencies to zero, then you have won the internet.

  • silverwind 3 hours ago

    All package ecosystems that allow unvetted code to be published are affected; it just happens that npm is by far the most popular one, so it gets all the news.

  • maxloh 2 hours ago

    I come from a JavaScript background, and I've got to admit that the ecosystem is designed in a way that is really prone to attack.

    It is like the xz incident, except that each dependency you pull is maintained by a random guy on the internet. You have to trust every one of them to be genuine and that they won't fall into any social engineering attacks.

  • socalgal2 7 hours ago

    It's no different anywhere else. I just downloaded jj (rust), it installed 470+ packages

    When I downloaded wan2gp (python) it installed 211 packages.

    • brabel 2 hours ago

      Oh man, you picked the one other language that followed the JavaScript model?! How about C, Java, Go, Lisp, C#, C++, D… and new ones like Odin that are explicitly against package managers for this very reason.

    • klabb3 3 hours ago

      > It's no different anywhere else.

      But it is. Both C/C++ and Go are not at all like this.

      I don’t know about Python, but the Rust ecosystem tends to attract enthusiasts who make good single-purpose packages that end up abandoned because maintainers move on, or sometimes forked due to minor disagreements, similar to how Linux/unix is fragmented with tribal feuds.

      • immibis an hour ago

        Go is like this...

    • BrouteMinou 7 hours ago

      M'yea, good luck finding such an occurrence with NuGet or Maven, for example. I would rephrase your "anywhere else".

      NPM is a terrible ecosystem, and trying to defend its current state is a lost cause. The energy should be focused on how to fix that ecosystem instead of playing dumb telling people "it's all ok, look at other, also poorly designed, systems".

      Don't forget that Rust's Cargo got heavily inspired by NPM, which is not something to brag about.[0]

      > "Rust has absolutely stunning dependency management," one engineer enthused, noting that Rust's strategy took inspiration from npm's.

      [0]https://rust-lang.org/static/pdfs/Rust-npm-Whitepaper.pdf

    • scuff3d 5 hours ago

      One of the biggest things that pushes me away from Rust is the reliance on micro dependencies. It's a terrible model.

      • codedokode 3 hours ago

        What's wrong with micro dependencies? Isn't it better to download only the code you need? Also it makes refactoring easier, and enforces better architecture.

        • CraigJPerry 2 hours ago

          Larger attack surface - you just need one of those N dependencies to fall for a spear phishing attack and you're cooked. Larger N is necessarily worse.

          It depends on the software being written, but if it's a product your business sells or otherwise has an essential dependency on, then the best model available right now is vendoring dependencies.

          You still get all the benefits of standing on top of libraries and frameworks of choice, but you've introduced a single point of entry for externally authored code - there are many ways you can leverage that to good effect (vuln scans, licence audits, adding patch overlays etc etc) and you improved the developer experience - when they check out the code, ALL of the code to build and run the project is already present, no separate npm install step.
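
          One crude way to get there with nothing but git (a sketch; it assumes the lockfile is already pinned, and a private registry or dedicated vendoring tooling is the more polished version of the same idea):

            # install exactly what the lockfile pins, then check the result in
            npm ci
            git add -f node_modules
            git commit -m "vendor third-party code"

            # later upgrades become ordinary, reviewable diffs
            npm update some-package   # placeholder name
            git diff -- node_modules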

          You can take this model quite a bit further and buy some really useful capabilities for a development team, like dependency upgrades because they're a very deliberate thing now, you can treat them like any other PR to your code base - you can review the changes of each upgrade easily.

          There's challenges too - maybe your npm dep builds a native binary as part of it, you now need to provide that build infra / tooling, and very likely you also want robust build artifact and test caching to save wasting lots of time.

        • jgtrosh 3 hours ago

          Is this bait? The whole context is malicious software being installed en masse via NPM micro dependencies.

        • jraph 3 hours ago

          Dependency management is work. And almost nobody does this work seriously because it has become unrealistic to do, which is the big concern here.

          You now have to audit the hundreds of dependencies. Each time you upgrade them.

          Rust is compiled and source code doesn't weigh that much, you could have the compiler remove dead code.

          And sometimes it's just better to review and then copy paste small utility functions once.

          • procaryote 2 hours ago

            > Rust is compiled and source code doesn't weigh that much, you could have the compiler remove dead code.

            I get the impression that one driver to make microdependencies in rust is that code does weigh a lot because the rust compiler is so slow.

            For a language with a focus on safety, it's a pretty bad choice

        • rwmj 25 minutes ago

          They should just be part of the stdlib of the language.

          • wongarsu 5 minutes ago

            Rust has a really big and comprehensive stdlib, especially compared to languages like C or JavaScript. It just decided that certain things won't be solved in the standard lib because there is no obviously-right solution and evolving towards a good solution is much easier in packages than in the stdlib, because the stdlib isn't versioned.

            Some of the gaps feel huge, like no random, no time/date handling, and no async runtime. But for most of them there are canonical packages that 95% of the ecosystem uses, with a huge number of eyeballs on them. And sometimes a better solution does emerge, like jiff slowly replacing chrono and time for time/date handling.

            Obviously this isn't the best solution from a security perspective. There would be less potential for supply chain attacks if everything was in the standard library. But that has to be weighed against the long-term usability of the language

  • fragmede 7 hours ago

    Just a heads up that PyPI isn't immune from the same attack, with a Google search for "PyPI supply chain attack" revealing a (much smaller) number of packages that turned out to be malware. Some were not misspellings either, with one being a legitimate package that got hacked via GitHub Actions and had a malicious payload added to the otherwise legitimate package.

  • nektro 7 hours ago

    this is one of the less talked about benefits of using bun

    • Defletter 6 hours ago

      How does Bun avoid this? Or is it more that Bun provides things that you'd otherwise need a dependency for (eg: websockets)?

      • lioeters an hour ago

        From a link mentioned elsewhere in the thread:

        > Unlike other npm clients, Bun does not execute arbitrary lifecycle scripts for installed dependencies, such as `postinstall` and `node-gyp` builds. These scripts represent a potential security risk, as they can execute arbitrary code on your machine.

        https://bun.com/docs/guides/install/trusted

        I've also found the Bun standard library is a nice curated set of features that reduces dependencies.

crtasm 14 hours ago

>When you run npm install, npm doesn't just download packages. It executes code. Specifically, it runs lifecycle scripts defined in package.json - preinstall, install, and postinstall hooks.

What's the legitimate use case for a package install being allowed to run arbitrary commands on your computer?

Quote is from the researchers report https://www.koi.ai/blog/phantomraven-npm-malware-hidden-in-i...

edit: I was thinking of this other case that spawned terminals, but the question stands: https://socket.dev/blog/10-npm-typosquatted-packages-deploy-...

  • j1elo 13 hours ago

    Easy example that I know of: the Mediasoup project is a library written in C++ for streaming video over the internet. It is published as a Node package and offers a JS API. Upon installing, it would just download the appropriate C++ sources and compile them on the spot. The project maintainers wanted to write code, not manage precompiled builds, so that was the most logical way of installing it. Note that a while ago they ended up adding downloadable builds for the most common platforms, but for anything else the expectation still was (and is, I guess) to build sources at install time.

    • lenkite 3 hours ago

      I believe such build tools and processes should be run inside a container environment. Maybe once all OSes have native, cheap and lean containers and permit dead-simple container execution of scripts, this will be possible.

      • codedokode 3 hours ago

        Google's OS, Android, has sandboxes. It's a Linux problem that they do not want to backport them.

        • immibis an hour ago

          It just has Linux user IDs. They're in every Linux.

          • codedokode 40 minutes ago

            I think that was a long time ago, because that doesn't allow requesting permissions at runtime.

    • exe34 13 hours ago

      how hard would it be to say "upon first install, run do_sketchy_shit.sh to install requirements"?

      • SoftTalker 10 hours ago

        But most users would do that without inspecting it at all, and a fair number would prefix it with “sudo” out of habit.

        • nkrisc 9 hours ago

          But that’s at least a conscious and explicit action the user chooses to make and is explicitly aware of making.

        • hombre_fatal 8 hours ago

          That's fine, and it's still better than doing it on install.

        • exe34 2 hours ago

          you can always add --am-an-idiot as a switch to npm install.

      • IgorPartola 5 hours ago

        Hard. In npm land you install React and 900 other dependencies come with it. And how OK are you with reviewing every single one of those scripts and manually running them? Not that it is good that this happens, but realistically most people would just say “run all” and let it run instead of running each lifecycle script by hand.

        • jacquesm 4 hours ago

          My solution is to bypass React entirely. I'd much rather have a smaller, possibly less functional or pretty front end than to have to worry about this stuff continuously. I would not get any work done. There is no way I'm going to take on board 900 dependencies which I will then inflict on the visitors to my website.

        • exe34 2 hours ago

          the only way I'd use react in a project is to download the react.js build. I don't see why people want to download 900 dependencies.

      • cyphar 8 hours ago

        rpm and dpkg both provide mechanisms to run scripts on user machines (usually used to configure users and groups on the user machine), so this aspect is not NPM-specific. Rust has the same thing with build.rs (which is necessary to find shared C libraries for crates that link with them), so there is a legitimate need for this that would be hard to eliminate.

        Personally, I think the issue is that it is too easy to create packages that people can then pull too easily. rpm and dpkg are annoying to write for most people and require some kind of (at least cursory) review before they can be installed on users' systems from the default repos. Both of these act as barriers against the kinds of lazy attacks we've seen in the past few months. Of course, no language package registry has the bandwidth to do that work, so Wild West it is!

        • scheme271 7 hours ago

          rpm and dpkg generally install packages from established repos that vet maintainers. It's not much but having to get one or two other established package authors to vouch for you and having to have some community involvement before you can publish to distro repos is something.

          • cyphar 7 hours ago

            I agree, that is what I talk about in the second paragraph! ;)

      • lelandbatey 13 hours ago

        People want package managers to do that for them. As much as I think it's often a mistake (if your stuff requires more than expanding archives into different folders to install, then somewhere in the stack something has gone quite wrong), I will concede that because we live in an imperfect world, other folks will want the possibility to "just run the thing automatically to get it done." I hope we can get to a world where such hooks are no longer required one day.

        • exe34 12 hours ago

          yes that's why npm is for them. I'd rather download the libraries that I need one by one.

      • ares623 6 hours ago

        You see, when you treat everything as a "product", this is what you end up with.

  • squidsoup 13 hours ago

    pnpm v10 disables all lifecycle scripts by default and requires the user to whitelist packages.

    https://github.com/orgs/pnpm/discussions/8945

    • sroussey 12 hours ago

      It’s just security theater in the end. You can just as easily put all that stuff in the package files, since a package is installed to run code. You then have that code do all the sketchy stuff.

      What’s needed is an entitlements system so a package you install doesn’t do runtime stuff like install crypto mining software. Even then…

      • Mogzol 11 hours ago

        A package, especially a javascript package, is not necessarily installed to run code, at least not on the machine installing the package. Many packages will only be run in the browser, which is already a fairly safe environment compared to running directly on the machine like lifecycle scripts would.

        So preventing lifecycle scripts certainly limits the number of packages that could be exploited to get access to the installing machine. It's common for javascript apps to have hundreds of dependencies, but only a handful of them will ever actually run as code on the machine that installed them.

        • sroussey 2 hours ago

          True… I do a lot of server or universal code. But don’t trust browser code either. Could be connecting to MetaMask and stealing crypto, running mining software, or injecting ads.

          And with node you get files and the ability to run arbitrary code on arbitrary processes.

      • theodorejb 11 hours ago

        I would expect to be able to download a package and then inspect the code before I decide to import/run any of the package files. But npm by default will run arbitrary code in the package before developers have a chance to inspect it, which can be very surprising and dangerous.

        • sroussey 2 hours ago

          npm used to do that. bun never did. No idea about the past for pnpm or yarn.

    • codedokode 3 hours ago

      Finally, a sane solution. When I was thinking about package manager design, I also thought that there should be no scripts; the package manager should just download files and that's all.

    • chrisweekly 12 hours ago

      One of the many reasons there is no good reason to use npm; pnpm is better in every way.

    • chuckadams 10 hours ago

      PHP composer does the same, in config.allow-plugins.<package> in composer.json. The default behavior is to prompt, with an "always" option to add the entry to composer.json. It's baffling that npm and yarn just let the scripts run with nary a peep.

    • theodorejb 11 hours ago

      Bun also doesn't execute lifecycle scripts by default, except for a customizable whitelist of trusted dependencies:

      https://bun.com/docs/guides/install/trusted
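
      For reference, that whitelist is just a top-level package.json field (a sketch; "sharp" here is only a placeholder):

        {
          "trustedDependencies": ["sharp"]
        }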

      • codedokode 3 hours ago

        "Trusted" dependencies are poor solution, the good solution is either never run scripts, or run them inside qemu.

    • ehutch79 10 hours ago

      Also, you can now pin versions in that whitelist

  • zahlman 11 hours ago

    > doesn't just download packages. It executes code. Specifically, it

    It pains me to remember that the reason LLMs write like this is because many humans did in the training data.

    • marcus_holmes 8 hours ago

      Is the objection the small sentence that could have been a clause?

      • zahlman 4 hours ago

        The objection is to the redundant, flowery prose overall, and the overall inaccuracy. (Of course the installer "doesn't just download packages"; installation at minimum would also involve unpacking the archive and putting the files in the right place....)

        In about as much text, we could explain far better why and how NPM's behaviour is risky:

        > When you install a package using `npm install`, NPM may also run arbitrary code from the package, from multiple hook scripts specified in `package.json`, before you can even audit the code.

    • jsrozner 10 hours ago

      That whole koi blog post is sloppy AI garbage, even if it's accurate. So obnoxious.

  • maxloh 2 hours ago

    Many front end tools are written in faster languages: for example, the next version of the TypeScript compiler, SASS, SWC (minifier), esbuild (bundler used by Vite), Biome (formatter and linter), Oxc (linter, formatter and minifier), Turbopack (bundler), dprint (formatter), etc.

    They use install scripts to fetch pre-built binaries, or compile from source if your environment isn't directly supported.

  • bandrami 8 hours ago

    OK but have you seen how many projects' official installation instructions are some form of curl | bash?

    • dns_snek 3 hours ago

      Both post-install scripts and curl|bash scripts are just scapegoats of supply chain security, singled out with similarly faulty reasoning.

    • codedokode 3 hours ago

      In most cases to use a library you just need to download files and place them into a directory.

  • Timshel 3 hours ago

    There is the "--ignore-scripts" option, and I've had no issue using it so far.

  • vorticalbox 13 hours ago

    One use case is downloading of binaries. For example mongo-memory-server [0] will download the mongoDB binary after you have installed it.

    [0] https://www.npmjs.com/package/mongodb-memory-server

    • 8note 13 hours ago

      Why would I want that though, compared to downloading that binary in the install download?

      The npm version is decoupled from the binary version, when I want them locked together.

      • jonhohle 12 hours ago

        I think it falls into a few buckets:

        A) maintainers don’t know any better and connect things with string and gum until it mostly works and ship it

        B) people who are smart, but naive and think it will be different this time

        C) package manager creators who think they’re creating something that hasn’t been done before, don’t look at prior art or failures, and fall into all of the same holes literally every other package manager has fallen into and will continue to fall into because no one in this industry learns anything.

  • DangitBobby 13 hours ago

    I seem to recall Husky at one point using lifecycle hooks to install the git hooks configured in your repository when running NPM install.

    • mock-possum 4 hours ago

      Playwright does it too. Or at least, I’ve worked a place that used npm install hooks to setup husky and install playwright browser binaries

  • interstice 12 hours ago

    Notable times this has bitten me include compiling image compression tools for gulp and older versions of sass, oh and a memorable one with openssl. Downloading a npm package should ideally not also require messing around with c compilation tools.

jtokoph 9 hours ago

Keep in mind that the vast majority of the 86,000 downloads are probably automated downloads by tools looking for malicious code, or other malicious tools pulling every new package version looking for leaked credentials.

When I iterate with new versions of a package that I’ve never promoted anywhere, each version gets hundreds of downloads in the first day or two of being published.

86,000 people did not get pwnd, possibly even zero.

  • hinkley 3 hours ago

    When I published a library it got about 300 downloads a week for the first few weeks and then dropped down to about 100. That would be a lot of weeks.

    > Many of the dependencies used names that are known to be “hallucinated” by AI chatbots.

    There’s more here than that.

  • userbinator 8 hours ago

    Or it's some poor idiot's CI repeatedly downloading them, and for a zombie project that no one will ever use.

  • marcus_holmes 8 hours ago

    As TFA says, they're targeting package names that are somewhere in LLM training data but don't actually exist, so are being hallucinated by LLMs. And there's now a large number of folks with zero clue busy vibe-coding their killer app with no idea that bad things can happen.

    I would not be surprised to find that 80%+ of those 86,000 people got pwned.

2d8a875f-39a2-4 an hour ago

The npm ecosystem's approach to supply chain security is criminally negligent. For the critical infrastructure that underpins the largest attack surface on the Internet you would think that this stuff would be priority zero. But nope, it's failing in ways that are predictable and were indeed predicted years ago. I'm not closely involved enough with the npm community to suggest what the next steps should be but something has to change, and soon.

650REDHAIR 13 hours ago

As a hobbyist how do I stay protected and in the loop for breaches like this? I often follow guides that are popular and written by well-respected authors and I might be too flippant with installing dependencies trying to solve a pain point that has derailed my original project.

Somewhat related, I also have a small homelab running local services and every now and then I try a new technology. occasionally I’ll build a little thing that is neat and could be useful to someone else, but then I worry that I’m just a target for some bot to infiltrate because I’m not sophisticated enough to stop it.

Where do I start?

  • Etheryte 13 hours ago

    Use dependencies that are fairly popular and pick a release that's at least a year old. Done. If there was something wrong with it, someone would've found it by now. For a hobbyist, that's more than sufficient.

  • jonhohle 12 hours ago

    There are some operating systems, like FreeBSD, where you use the system’s package manager and not a million language specific package managers.

    I still maintain pushing this back to library authors is the right thing to do instead of making this painful for literally millions of end-users. The friction of getting a package accepted into a critical mass of distributions is the point.

  • marcus_holmes 8 hours ago

    Somewhat controversial these days, but treat every single dependency as a potential security nightmare, source of bugs, problem that you will have to solve in the future. Use dependencies carefully and as a last resort.

    Vendoring dependencies (copying the package code into your project rather than using the package manager to manage it) can help - it won't stop a malicious package, but it will stop a package from turning malicious.

    You can also copy the code you need from a dependency into your code (with a comment giving credit and a link to the source package). This is really useful if you just need some of the stuff that the package offers, and also forces you to read and understand the package code; great practice if you're learning.

    • devsda 7 hours ago

      Inspecting 10 layers of dependencies individually to install a popular tool or an lsp server is going to work once or twice. Eventually either complacency or fatigue sets in and the attacker wins.

      I think we need a different solution that fixes the dependency bloat or puts more safeguards around package publishing.

      The same goes for any other language with excessive third-party dependency requirements.

      • marcus_holmes 5 hours ago

        Agree.

        It's going to take a lot of people getting pwned to change these attitudes though

    • hinkley 3 hours ago

      Local proxies that can work offline also help. Though not as much as vendoring.

  • pier25 10 hours ago

    Avoid dependencies with less than 1M downloads per week. Prefer dependencies that have zero dependencies like Hono or Zod.

    https://npmgraph.js.org/?q=hono

    https://npmgraph.js.org/?q=zod

    Recently I switched to Bun in part because many dependencies are already included (db driver, s3 client, etc) that you'd need to download with Node or Deno.

  • socalgal2 7 hours ago

    (1) Start by not using packages that have stupid dependencies

    Any package that includes a CLI version in the library should have its dev shamed. Usually that adds 10-20 packages. Those 2 things, a library that provides some functionality, and a CLI command that lets you use the library from the command line, SHOULD NEVER BE MIXED.

    The library should be its own package without the bloat of the command line crap

    (2) Choose low dependency packages

    Example: commander has no dependencies, minimist now has no dependencies. Some other command line parsers used to have 10-20 dependencies.

    (3) Stop using packages when you can do it yourself in 1-2 lines of JS

    You don't need a package to copy files. `fs.copyFileSync` will copy a file for you. `fs.cpSync` will copy a tree, and `child_process.spawn` will spawn a process. You don't need some package to do these things. There's plenty of other examples where you don't need a package.
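
    A few sketches of that, runnable straight from a shell (the file names are placeholders):

      # copy one file with the built-in fs module
      node -e "require('fs').copyFileSync('src.txt', 'dest.txt')"

      # copy a directory tree
      node -e "require('fs').cpSync('assets', 'dist/assets', { recursive: true })"

      # spawn a process
      node -e "require('child_process').spawnSync('ls', ['-la'], { stdio: 'inherit' })"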

    • hinkley 3 hours ago

      > Any package that includes a CLI version in the library should have it's dev shamed. Usually that adds 10-20 packages.

      After your little rant you point out Commander has zero dependencies. So I don’t know what’s up with you.

      If the library you’re building has anything with application lifecycle, particularly bootstrapping, then having a CLI with one dependency is quite handy for triage. Most especially for talking someone else through triage when for instance I was out and there was a production issue.

      Which is why half the modules I worked on at my last place ended up with a CLI. They are, as a rule, read mostly. Which generally doesn’t require an all caps warning.

      Does every module need one of those? No. But if your module is meant as a devDependency, odds are good it might. And if it’s bootstrapping code, then it might as well.

      > should have it's dev shamed

      Oh I feel embarrassed right now. But not for me.

  • numbsafari 12 hours ago

    Don't do development on your local machine. Full stop. Just don't.

    Do development, all of it, inside VMs or containers, either local or remote.

    Use ephemeral credentials within said VMs, or use no credentials. For example, do all your git pulls on your laptop directly, or in a separate VM with a mounted volume that is then shared with the VM/containers where you are running dev tooling.

    This has the added benefit of not only sandboxing your code, but also making your dev environments repeatable.

    If you are using GitHub, use codespaces. If you are using gitlab, workspaces. If you are using neither, check out tools like UTM or Vagrant.

    • bigstrat2003 8 hours ago

      That's not a realistic solution. Nobody is going to stop using their machine for development just to get some security gains, it's way too much of a pain to do that.

      • socalgal2 7 hours ago

        You are right, if it's a pain no one is going to do it. So the thing that needs to happen is to make it not a pain.

      • fragmede 6 hours ago

        The way to sell it isn't vague security somethings, but in making it easier to reproduce the build environment "from scratch". If you build the Dockerfile as you go, then you don't waste hours at the end trying to figure out what you did to get it to build and run in the first place.

    • hinkley 3 hours ago

      I used to have a separate account on my box for doing code for other people, one for myself and another for surfing the web. Since I have an Apple TV hooked up to one of my monitors I don’t have a ton of reasons for hopping credentials between accounts so I think I’ll be going back to at least that.

      The fact I use nvm means a global install won’t cross accounts.

    • suck-my-spez 11 hours ago

      Are people actually using UTM to do local development?

      I'm genuinely curious because I casually looked into it so that I could work on some hobby stuff over lunch on my work machine.

      However I just assumed the performance wouldn't be too great.

      Would love to hear how people are set up…

      • rickstanley 9 hours ago

        When I had a Macbook from work, I set up an Arch Linux VM using their basic VM image [1], and followed these steps (it may differ, since it's quite old): https://www.youtube.com/watch?v=enF3zbyiNZA

        Then, I removed the graphical settings, as I was aiming to use SSH instead of emulated TTY that comes ON by default with UTM (at that time).

        Finally, I set up some basic scripting to turn the machine on and SSH into it as soon as sshd.service was available, which I don't have now, but the script finished with this:

        (fish shell)

            while not ssh -p 2222 arch@localhost; sleep 2; end;
        
        Later it evolved in something like this:

            virsh start arch-linux_testing && virsh qemu-monitor-command --hmp arch-linux_testing 'hostfwd_add ::2222-:22' && while not ssh -p 2222 arch@localhost; sleep 2; end;
        
        I also removed some unnecessary services for local development:

            arch@archlinux ~> sudo systemctl mask systemd-time-wait-sync.service 
            arch@archlinux ~> sudo systemctl disable systemd-time-wait-sync.service
        
        
        And done, performance was really good and I could develop on it seamlessly.

        [1]: https://gitlab.archlinux.org/archlinux/arch-boxes/-/packages...

      • hombre_fatal 8 hours ago

        I started using UTM last week on my Macbook just to try out NixOS + sway and see if I could make an environment that I liked using (inspired by the hype around Omarchy).

        Pretty soon I liked using the environment so much that I got my work running on it. And when I change the environment, I can sync it to my other machine.

        Though NixOS is particularly magical as a dev environment since you have a record of everything you've done. Every time I mess with postgres pg_hba.conf or nginx or pcap on my local machine, I think "welp, I'll never remember that I did that".

      • suchar 10 hours ago

        With remote development (vscode and remote extension in jetbrains with ssh to VM) performance is good with headless VM in UTM. Although it always (?) uses performance cores on Apple Silicon Macs, so battery drain is a problem

  • uyzstvqs 11 hours ago

    I'm not sure about NPM specifically, but in general: Pick a specific version and have your build system verify the known good checksum for that version. Give new packages at least 4 weeks before using them, and look at the git commits of the project, especially for lesser-known packages.

  • jhancock 9 hours ago

    As 'numbsafari said below, you should no longer use your host for dev. This includes all those cool AI assistant tools. You need to containerize all the things with runpod or docker.

  • ajross 13 hours ago

    > As a hobbyist how do I stay protected and in the loop for breaches like this?

    For the case of general software, "Don't use node" would be my advice, and by extension any packaging backend without external audit and validation. PyPI has its oopses too, Cargo is theoretically just as bad but in practice has been safe.

    The gold standard is Use The Software Debian Ships (Fedora is great too, arch is a bit down the ladder but not nearly as bad as the user-submitted madness outside Linux).

    But it seems like your question is about front end web development, and that's not my world and I have no advice beyond sympathy.

    > occasionally I’ll build a little thing that is neat and could be useful to someone else, but then I worry that I’m just a target for some bot

    Pretty much that's the problem exactly. Distributing software is hard. It's a lot of work at a bunch of different levels of the process, and someone needs to commit to doing it. If you aren't willing to commit your time and resources, don't distribute it in a consumable way (obviously you can distribute what you built with it, and if it's appropriately licensed maybe someone else will come along and productize it).

    NPM thought they could hack that overhead and do better, but it turns out to have been a moved-too-fast-and-broke-things situation in hindsight.

    • zahlman 11 hours ago

      > PyPI has its oopses too, Cargo is theoretically just as bad but in practice has been safe.

      One obvious further mitigation for Python is to configure your package installer to require pre-built wheels, and inspect the resulting environment prior to use. Of course, wheels can contain all sorts of compiled binary blobs and even the Python code can be obfuscated (or even missing, with just a compiled .pyc file in its place); but at least this way you are protected from arbitrary code running at install time.
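
      For example, a sketch of that with pip (the package name is a placeholder):

        # refuse source distributions, so no setup.py / build backend runs at install time
        pip install --only-binary :all: somepackage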

    • squidsoup 13 hours ago

      Having spent a year trying to develop against dependencies only provided by a debian release, it is really painful in practice. At some point you're going to need something that is not packaged, or newer than the packaged version in your release.

      • LtWorf 11 hours ago

        That's when you join debian :)

      • ajross 13 hours ago

        It really depends on what you're doing. But yes, if you want to develop in "The NPM Style" where you suck down tiny things to do little pieces of what you need (and those things suck down tiny things, ad infinitum) then you're naturally exposed to the security risks inherent with depending on an unaudited soup of tiny things.

        You don't get secure things for free, you have to pay for that by doing things like "import and audit software yourself" or even "write simple utilities from scratch" on occasion.

    • paulryanrogers 11 hours ago

      Didn't Debian ship a uniquely weak version of OpenSSL for years? HeartBleed perhaps?

      IME Debian is falling behind on security fixes.

      • ajross 11 hours ago

        They did, and no one is perfect. But Debian is the best.

        FWIW, the subject at hand here isn't accidentally introduced security bugs (which affect all software and aren't well treated by auditing and testing). It's deliberately malicious malware appearing as a dependency to legitimate software.

        So the use case here isn't Heartbleed, it's something like the xz-utils trojan. I'll give you one guess as to who caught that.

    • megous 11 hours ago

      As a hobbyist (or professionally) you can also write code without dependencies outside of node itself.

gbransgrove 11 hours ago

Because these are fetching dependencies in the lifecycle hooks, even if they are legitimate at the moment there is no guarantee that it will stay that way. The owner of those dependencies could get compromised, or themselves be malicious, or be the package owner waiting to flip the switch to make existing versions become malicious. It's hard to see how the lifecycle hooks on install can stay in their current form.

creativeSlumber 5 hours ago

> Many of the dependencies used names that are known to be “hallucinated” by AI chatbots. Developers frequently query these bots for the names of dependencies they need. LLM developers and researchers have yet to understand the precise cause of hallucinations or how to build models that don’t make mistakes. After discovering hallucinated dependency names, PhantomRaven uses them in the malicious packages downloaded from their site.

I found it very interesting that they used common AI hallucinated package names.

codedokode 3 hours ago

I am surprised that anyone, in this day and age, runs scripts from random people from GitHub without sandboxing. As a wise proverb says, a peasant won't cross himself until the thunder bursts out. Spend a couple of hours setting up a sandbox and be safer.

severino 11 hours ago

I wonder what could one do if he wants to use NPM for programming with a very popular framework (like Angular or Vue) and stay safe. Is just picking a not very recent version of the top level framework (Angular, etc.) enough? Is it possible to somehow isolate NPM so the code it runs, like those postinstall hooks, doesn't mess with your system, while at the same time allowing you to use it normally?

  • theodorejb 11 hours ago

    One option to make it a little safer is to add ignore-scripts=true to a .npmrc file in your project root. Lifecycle scripts then won't run automatically. It's not as nice as pnpm or Bun, though, since this also prevents your own postinstall scripts from running (not just those of dependencies), and there's no way to whitelist trusted packages.

edoceo 14 hours ago

Happy I keep a mirror of my deps that I have to "manually" update. But also, the download numbers are not really accurate for actual install count - for example, each test run could increment it.

cxr 13 hours ago

Imagine if we had a system where you could just deposit the source code for a program you work on into a "depository". You could set it up so your team could "admit" the changes that have your approval, but it doesn't allow third parties to modify what's in your depository (even if it's a library that you're using that they wrote). When you build/deploy your program, you only compile/run third-party versions that have been admitted to the depository, and you never just eagerly fetch other versions that purport to be updates right before build time. If there is an update, you can download a copy and admit it to your repo at the normal time that you verify that your program actually needs the update. Even if it sounds far-fetched, I imagine we could get by with a system like this.

  • chrisweekly 13 hours ago

    You're describing a custom registry. These exist IRL (eg jFrog Artifactory). Useful for managing allow-listed packages which have met whatever criteria you might have (eg CVE-free based on your security tool of choice). Use of a custom registry, and a sane package manager (pnpm, not npm), and its lockfile, will significantly enhance your supply-chain security.

    • cxr 12 hours ago

      No. I am literally describing bog standard use of an ordinary VCS/SCM where the code for e.g. Skia, sqlite, libpng, etc. is placed in a "third-party/" subdirectory. Except I'm deliberately using the words "admit" and "depository" here instead of "commit" and "repository" in keeping with the theme—of the widespread failure of people to use SCMs to manage the corresponding source code required to build their product/project.

      Overlay version control systems like NPM, Cargo, etc. and their harebrained schemes involving "lockfiles" to paper over their deficiencies have evidently totally destroyed not just folks' ability to conceive of just using an SCM like Git or Mercurial to manage source the way that they're made for without introducing a second, half-assed, "registry"-dependent VCS into the mix, but also destroyed the ability to recognize when a comment on the subject is dripping in the most obvious, easily detectable irony.

      • minitech 8 hours ago

        Yeah, people invented the concept of packages and package management because they couldn’t conceive of vendoring (which is weird considering basically all package managers make use of it themselves) and surely not because package management has actual benefits.

        Maybe in a perfect world, we’d all use a better VCS whose equivalent of submodules actually could do that job. We are not in that world yet.

        • cxr 7 hours ago

          Do you understand the reasons, and are you able to clearly articulate them? Are you able to describe the tangible benefits in the form of a set of falsifiable claims—without resorting to hand-waving or appeals to the perceived status quo or scoffing as if the reasons are self-evident and not in question or subject to scrutiny?

      • willtemperley 3 hours ago

        This is exactly what Swift Package Manager does. No drama in the Swift Package world AFAIK.

      • morshu9001 11 hours ago

        Does the lockfile not solve this?

        • socalgal2 7 hours ago

          not really, because you can't easily see what changed when you get a new version. When you check the third_party repo into your VCS, then when you get a new version, everything that changed is easily visible with `git diff` before you commit the new changes. With a lockfile, the only diff is the hash changed.

          • cyphar 7 hours ago

            Not if you use git submodules, which is how most people would end up using such a scheme in practice (and the handful of people that do this have ended up using submodules).

            Go-style vendoring does dump everything into a directory but that has other downsides. I also question how effectively you can audit dependencies this way -- C developers don't have to do this unless there's a problem they're debugging, and at least for C it is maybe a tractable problem to audit your entire dependency graph for every release (of which there are relatively few).

            Unfortunately IMHO the core issue is that making the packaging and shipping of libraries easy necessarily leads to an explosion of libraries with no mechanism to review them -- you cannot solve the latter without sacrificing the former. There were some attempts to crowd-source auditing as plugins for these package managers but none of them bore fruit AFAIK (there is cargo-audit but that only solves one part of the puzzle -- there really needs to be a way to mark packages as "probably trustworthy" and "really untrustworthy" based on ratings in a hard-to-gamify way).

          • minitech 7 hours ago

            The problem is that not enough people care about reviewing dependencies’ code. Adding what they consider noise to the diff doesn’t help much (especially if what you end up diffing is actually build output).

        • cxr 10 hours ago

          What is "this"?

      • chrisweekly 10 hours ago

        Huh? "Just use git" is kind of nonsensical in the context of this discussion.

        • cxr 10 hours ago

          Oh, okay.

  • kej 13 hours ago

    Now you have the opposite problem, where a vulnerability could be found in one of your dependencies but you don't get the fix until the next "normal time that you verify that your program actually needs the update".

    • edoceo 13 hours ago

      If a security issue is found that creates the "normal time".

      That is, when a security issue is found, regardless of supply chain tooling one would update.

      That there is a little cache/mirror thing in the middle is of little consequence in that case.

      And for all other cases the blessed versions in your mirror are better even if not latest.

  • zahlman 11 hours ago

    So, vendoring?

  • lenkite 13 hours ago

    Well, in the Java world, Maven has had custom repositories which have done this for the last 20+ years.

  • anthk 13 hours ago

    You are describing BSD ports from the 90's. FreeBSD ports date back to 1993.

    • ok123456 13 hours ago

      Also, Gentoo dating back to 2003.

  • edoceo 13 hours ago

    That is exactly what I do.

akagusu 8 hours ago

Unpopular opinion: why not reduce the dependency on 3rd party packages? Why not reduce the number of dependencies so you can know what code you are using?

  • gavmor 5 hours ago

    Because then I would have to test, write, and maintain that code—and it becomes susceptible to leaky abstractions!

  • BobbyTables2 8 hours ago

    I’ve wondered this for so long, I questioned my own sanity.

worik 11 hours ago

This has been going on for years now.

I have used Node, I would not go near the NPM auto install Spyware service.

How is it possible that people keep this service going, when it has been compromised so regularly?

How's it possible that people keep using it?

noosphr 10 hours ago

A day ago I got down voted to hell for saying that the JavaScript ecosystem has rotted the minds of developers and any tools that emulate npm should be shunned as much as possible - they are not solutions, they are problems.

I don't usually get to say 'I told you so' within 24 hours of a warning, but JS is special like that.

  • cogman10 9 hours ago

    There's nothing really special about the JS ecosystem that creates this problem. Plenty of others could fall in the same way, including C++ (see xz).

    The problem is we've been coasting on an era where blind trust was good enough and programming was niche enough.

    • procaryote 2 hours ago

      There's a culture of micro-dependencies.

      In c++, most people wouldn't publish a library that does the equivalent of (1 == value % 2). Even if they did, almost no one would use it. For npm, that library will not only exist, it will have several dependencies and millions of downloads

Uptrenda 9 hours ago

I dub thee "node payload manager."

xaxaxa123 3 hours ago

js is a fucking disaster

ghusto 14 hours ago

When people ask me what's so wrong with lowering the bar of entry for engineering, I point to things like this.