The tragedy of running an old Node project

abdisalan | 257 points

> time to run it after not touching it for 4 years

> Two hours of my life gone...

Two hours of work after 4 years sounds ... perfectly acceptable?

And it would have run perfectly right away if the node version had been specified, so there's a good lesson in it, too.

This feels like making a mountain out of a molehill.

mvkel | 3 days ago

> Two hours of my life gone, just to pick up where I left off.

If I had only wasted two hours every time I had to use npm for some reason I'd be significantly ahead of where I am now.

__MatrixMan__ | 2 days ago

I call this phenomenon "node rot". Judging by the comments here, it seems like a universal experience.

My favorite is the way that Python projects rot. Not only does Python's setuptools give you all the fun that node-gyp does, the common practice of versioning packages with packagename>=1.25.5 means you're almost guaranteed breakages as pip installs newer versions of packages than what the project was built with.
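For example, the difference between a requirements file that rots and one that stays reproducible (the package name and versions are just illustrative):

```
# rots: pip happily installs whatever newer release exists today
somepackage>=1.25.5

# reproducible: pins the exact version the project was built with
# (a `pip freeze > requirements.txt` snapshot gives you this form)
somepackage==1.25.5
```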

RadiozRadioz | 2 days ago

The worst part isn't just that it's nearly impossible to run/update an outdated JS project, but that this process will repeat itself ad infinitum.

On the flip side, anything that uses vanilla JS without a build will most likely run just fine, probably till the end of human civilization.

uludag | 3 days ago

This will always be an issue for the node community - it’s endemic to the JavaScript shipping / speed culture and the package management philosophy.

Go is much, much better on these terms, although not perfect.

I’d venture a guess that Perl 5 is outstanding here, although it’s been a few years since I tried to run an old Perl project. CPAN was dog slow, but other than that, everything worked first try.

I’d also bet Tcl is nearly perfect on the ‘try this 10 year old repo’ test

vessenes | 3 days ago

I recently migrated a project from Node.js 8 (!) to Node.js 14 (hopefully just the beginning), and I can relate to this post.

In the JS ecosystem, I'm aware that Meteor is one major framework that takes backwards compatibility seriously. Updating a project from an ancient version to a less-ancient one is usually not too hard. They try to keep APIs the same and introduce compatibility packages where possible.

Meteor 2.16 to Meteor 3 introduced major breaking changes due to an underlying technical issue that had no workaround. They had to refactor the whole project from using Fibers-based concurrency to typical async/await.

node-gyp in general has also been a source of issues for me in the past.

More recently, ESLint changed their configuration file format and all existing tutorials suddenly became outdated.
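For reference, that change was the move from the old .eslintrc.* files to the new "flat config"; roughly this, though the details may vary by ESLint version and the plugin setup is omitted:

```
// old: .eslintrc.json
// { "extends": "eslint:recommended" }

// new: eslint.config.js ("flat config")
import js from "@eslint/js";

export default [js.configs.recommended];
```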

I firmly believe the ecosystem does not have to be like this, and we would save a lot of man-hours by being more committed to API stability where possible.

gr4vityWall | 2 days ago

I would heavily recommend avoiding NodeJS packages that depend on node-gyp. Node-gyp-powered dependencies are very seldom worth the hassle.

If you must depend on node-gyp, perhaps use dev containers so that at least every developer on your team can work most of the time.
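A minimal devcontainer.json for that is small; something like this (the image tag is just an example):

```
{
  "name": "legacy-node-app",
  "image": "mcr.microsoft.com/devcontainers/javascript-node:18",
  "postCreateCommand": "npm ci"
}
```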

speedgoose | 3 days ago

I've actually had a node project go bad in a mere 4 months. It must be a new record. That was about 4-5 years ago though.

Hopefully the ecosystem has improved since then, but it was nearly impossible to get going.

Some packages had been changed, with their version numbers overwritten by incompatible releases, and there were plenty of conflicts.

sgt | 3 days ago

All of the problems here ultimately came down to packages that used the native Node API. You don't need Python or C++ to run JavaScript.

Node is an active project. If you build against the native API and don't pin your version to avoid breaking changes between versions, this is what happens. In my experience, JS very rarely breaks between major Node versions, but almost every native package requires a new major update.

This isn't a Node specific problem. Go ahead and upgrade your Go or Python version.

bastawhiz | 2 days ago

You know, I ran into something similar recently with a static site engine (Zola). Was moving to a new host and figured I'd just copy and run the binary, only to have it fail due to linking OpenSSL. I had customized the internals years ago and stupidly never committed it anywhere, and attempting to build it fresh ran into issues with yanked crates.

Since it's just a binary though, I wound up grabbing the OpenSSL from the old box and patching the binary to just point to that instead. Thing runs fine after that.

This is all, of course, still totally stupid - but I did find myself thinking how much worse comparable events in JS have been for me over the years. What would have been easily an entire afternoon ended up taking 15 minutes - and a chunk of that was just double checking commands I'd long forgotten.
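For anyone wanting to do the same kind of surgery, it's roughly this (the paths are illustrative; patchelf is one tool that can do it):

```
ldd ./zola                                        # see which libssl/libcrypto the binary wants
patchelf --set-rpath /opt/old-openssl/lib ./zola  # point library lookup at the copied libs
ldd ./zola                                        # confirm they resolve now
```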

Klonoar | 3 days ago

Not defending node here (I had OP's experience almost verbatim), but I had a much worse experience with trying to compile PaulStretch (a C++ project). The dependencies were specified as a bunch of random third party URLs, half of which had gone offline. I ended up giving up after a few hours, and then finding a fork that Just Works.

andai | 2 days ago

This is one reason why whenever I build a new project, I build it inside of a Docker container.

That way, the project has just the dependencies it needs, and I know I can rebuild it at some point in the future and will be unlikely to run into problems when I do.
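The Dockerfile doesn't need to be fancy; something along these lines is usually enough (the tag and commands are illustrative):

```
FROM node:20-alpine
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
# entrypoint is illustrative
CMD ["node", "index.js"]
```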

dmuth | 2 days ago

I always try to remember to put the node version in my package.json - but I do agree that the dependency chain on node-gyp has been a blight on node packages for a while. I really wonder how that wart became such a critical tool used by so many packages.
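For the package.json part, it's just the engines field (the range here is illustrative), plus engine-strict=true in .npmrc if you want npm to error instead of just warn:

```
{
  "engines": {
    "node": ">=18 <19"
  }
}
```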

binarymax | 3 days ago

Good old node-gyp. I have absolutely no idea what it even is, but it has been giving me errors for what feels like a decade, mostly via front-end build stuff from various projects I have worked on.

nasmorn | 3 days ago

I'm pretty sanguine about languages and frameworks, but I draw the line at node. I have seen so many horrors visited by dependencies, often to do just one thing where 2 or 3 lines of code would do the job anyway.

When I was managing teams, whatever the language, I would ban any new dependencies which I didn't personally agree with. A lack of control just creates a nightmare.

beardyw | 3 days ago

This is literally every "hot new thing" since 2000.

It is systemic. Part of it is due to too many people creating systems on the fly with too little forethought, but also because there aren't enough "really smart people" working on long term solutions. Just hacks, done by hacks. What did you expect when the people writing the systems don't have long term experience?

everythingishax | 3 days ago

I joined a node project that was stuck on 0.12 while 7.0 was being developed. It was a shit show to get us to 6. As I recall, 10 was a little tricky, 12 and 16 had a lot of head scratchers. I finished the 16 upgrade more than a year after the last person tried, and it was a dumb luck epiphany that kept it that short.

I had a similar experience with emberJS when it was still young. Every time I picked the project up I had one to two hours of upgrade work to get it to run again, and I just had a couple hours to work on it. So half my time went to maintenance and it wasn’t sustainable.

I’m trying a related idea now in elixir and José may be a saint. Though I fear a Java 5 moment in their future, where the levee breaks and a flood of changes come at once.

hinkley | 3 days ago

I’ve started to adopt Nix devShells to help keep a record of each project’s dependencies.

If Nix is too heavy, the learning curve for tools like asdf-vm and mise is much lower and offers similar benefits.
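With asdf or mise it's just a .tool-versions file checked into the repo, something like this (versions illustrative):

```
nodejs 20.11.1
python 3.11.8
```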

I really wish there was a good equivalent for Windows.

noplacelikehome | 2 days ago

We run node code that's 10 years old. No one dares to touch it; we just run it in docker and hope nothing goes wrong.

anonzzzies | 2 days ago

I had this exact problem with multiple Node blog engines in the past. Constant version breakage was incredibly frustrating. I eventually moved to Hugo. A single binary which I committed with the blog files. Zero issues even years later. I can build the blog on any new machine within seconds. Which was the other revelation of Hugo. 10 seconds to build an 800+ post blog vs minutes using Hexo or similar.

conoro | 2 days ago

Made a site using nanogen ( https://github.com/doug2k1/nanogen ) about 7 years ago ... Anytime I set up a new machine I do an npm install on whatever version of node I end up on, and it..... just works.

Best SSG I've found, and all from a Medium or dev article on an SSG in 40 lines or less.

sn0n | a day ago

I think many of these issues come down to bad versioning or, in some edge cases, to not vendoring dependencies. I've had good and bad experiences across multiple programming languages. Some bad examples:

- bumping the patch or minor version of a React package, only to find the maintainer had rewritten the entire project and broken a lot of things; under semver you shouldn't expect breakage like that from such a version bump
- a Ruby gem removed/yanked from rubygems.org, leaving you to find an available fork

In the end, we need to stick to good software engineering practices around testing and release management; the latter, by the way, is decades old.

brunoarueira | 2 days ago

node-sass is to blame for like 95% of these node-gyp issues in my experience. It's not that much grief to deal with, but it's hard to grasp how it was allowed to hang around in such a sorry state for so long.

JansjoFromIkea | 2 days ago

First thing I would have done is upgrade the version of Gatsby to latest. Did the author try that?

If upgrading is difficult because of 4 years of breaking changes, blame Gatsby for not being backwards compatible. Also blame your original choice of going with a hokey framework.

Speaking of hokey frameworks: 167 dependencies and 3000 versions of Gatsby on npm.

melbourne_mat | 3 days ago

My take is that one should be especially wary of packages that depend on C libraries and need compilation. You end up extremely bound to what the OS distribution has to offer. If that's the case, Docker is probably the safest solution.

tacone | 16 hours ago

I think this was the biggest mistake that Java made. Breaking backward compatibility after Java 8 means that thousands of organizations will never leave that version. There is an entire industry based around maintaining Java 8. Eventually there will be two versions of Java, just like Python 2 and 3. There will be Java 8 and Java 698.

foxyv | 21 hours ago

2020 is not "old".

"Not Invented Here" is what's going on. Developers of this age need to learn this.

A recent example is the RPI Foundation nullifying thousands of internet tutorials by renaming "/boot" to "/bootfs". Ask yourself a serious question: did that actually improve anything? No, it did not.

exabrial | 2 days ago

Cool that you managed to get it running after just 2 hours. The same thing applies to Python projects; a little note in the README saves so much time in the future. I always try to use virtual environments and specify a specific Python version so I can just nuke and reinstall everything.
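The note plus setup can be as small as this (the Python version is illustrative):

```
# README: requires Python 3.9
python3.9 -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt   # with versions pinned via ==
```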

aronhegedus | 2 days ago

node-gyp was a mistake, building of native addons should have been an explicit separate step all along.

alganet | 3 days ago

i could not tell from the article whether this was a site with a backend using node.js or if it was just a frontend depending on node.js for the build tools.

for the latter i get around the problem by avoiding build tools altogether. i use a frontend framework that i can load directly into the browser, and use without needing any tools to manage dependencies. the benefit from that is that it will ensure that my site will keep running for years to come, even if i leave it dormant for some time. the downside is that it is probably less optimized. but for smaller sites that aren't under continuous maintenance this is a reasonable tradeoff. i built all my recent sites that way using a prebuilt version of the aurelia framework.

incidentally just today i tried to research if i could build a site with svelte that way. well, it turns out that although it should theoretically be possible, i was unable to find a prebuilt version to do so after a few hours of searching. for vuejs i found one within minutes. i'll be learning vuejs now.
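for reference, the buildless pattern is essentially just a script tag and a global build, roughly like this (the cdn url and version are illustrative):

```
<script src="https://unpkg.com/vue@3/dist/vue.global.js"></script>
<div id="app">{{ message }}</div>
<script>
  Vue.createApp({ data() { return { message: "hello" }; } }).mount("#app");
</script>
```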

see this thread for a discussion on going buildless: https://news.ycombinator.com/item?id=41479365

em-bee | 3 days ago

> node-gyp

We're in ... let's call it a transitional period at work. I've got something like a dozen versions of node being managed by asdf. And in half of the projects I work on regularly, I consistently get warnings about this particular project failing to build.

One day, I'll actually look up what it is, what it does, and why it's being built when it's apparently optional.

pavel_lishin | 3 days ago

OP was trying to install an old dep tree of gatsby on a different node target. These kinds of massive libraries break all the time. Look how big its dependency tree is: https://npmgraph.js.org/?q=gatsby

Fortunately this mindset has been changing in the node ecosystem with projects like https://hono.dev/ (koa/express successor) and https://github.com/porsager/postgres having zero deps.

hombre_fatal | 2 days ago

Dealing with node-gyp cost me at least 5 hours a month in the 2010s. I'm so very happy to not see those errors in my console anymore.

kylehotchkiss | 3 days ago

Native code in an npm module should be regarded as a massive red flag.

philipwhiuk | 3 days ago

Node/JS seems particularly fragile in this regard thanks to the complicated maze of dependencies and sub dependencies and flavour of the month framework syndrome

Havoc | 2 days ago

Acknowledging this is absolutely awful, and also commenting that a project .nvmrc file is your friend!
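It's a one-line file and nvm picks it up automatically (the version is illustrative):

```
echo "20.11.1" > .nvmrc
nvm install   # with no argument, reads the version from .nvmrc
nvm use
```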

adamtaylor_13 | 3 days ago

At first I thought it would be a decade-old project, but 4 years isn't old by any standard, is it?

Anyway, npm ci should have been the first attempt, not npm install, so that it installs the exact package versions recorded in package-lock.json. Then, as others have mentioned, pin your node versions. If you're afraid of this happening again, npm pack is your friend.
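For reference, the two commands in question:

```
npm ci     # installs exactly what package-lock.json records, and fails fast if it's out of sync
npm pack   # bundles the package into a tarball you can keep as a fallback
```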

In the end, OP could have done a bit more. BUT I'll give it to him that when native bindings are involved, these things take more time than they should.

iamsaitam | 2 days ago

I had to build some project that uses some Ruby package manager. I forgot already what the package manager is called. I got some error about "you don't have all the dev tools". So I installed what Google told me "dev tools" was. Then it still told me that I needed more dev tools. Stackoverflow had some question about this package manager. For Windows (Linux here). 20+ answers, mostly for Mac. All in the style of "this random thing worked for me". All with at least one upvote. Some answer about "I needed to symlink this system library".

Gave up.

Then I ran `devbox init` and installed whatever it told me that was needed. `devbox shell`.
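For anyone unfamiliar, the whole flow was roughly this (the package name is illustrative):

```
devbox init      # creates devbox.json in the project
devbox add ruby  # record the tools the project needs
devbox shell     # drop into a shell with exactly those tools on PATH
```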

keybored | 3 days ago

This is the reason I ripped out Gatsby from every project where I could. Every six months I’d spend an entire evening fixing obscure problems that shouldn’t even exist just to get things running. And that’s not even considering actual breaking changes of which there were plenty.

YuukiRey | 2 days ago

I still have to maintain many old projects that are running Node 7, but many dependencies are no longer available. Every update means patching the Docker image manually, building on the last runnable image.

kiettv | a day ago
[deleted]
| a day ago

Node.js (or more accurately, the entire Javascript ecosystem) changes, but the tropes don't.

https://medium.com/hackernoon/how-it-feels-to-learn-javascri... (beware the green background, I recommend reader mode.)

theandrewbailey | 3 days ago

Lately I have been revisiting some older golang tools I wrote before go modules were introduced.

"go mod init" plus identifying a working dependency version was all I had to do on any of those 10+ year old projects (5 minutes of work, tops).

0points | 2 days ago

You should save your deps in your SCM! Microsoft is giving away ownership of existing packages if you tell them you will use them for a TypeScript project.

z3t4 | 2 days ago

Can't help but feel that this is a massive nothing-burger. You wouldn't generally expect your Java project to run if you use an incompatible version of the JVM, nor would you generally expect your C++ project to build if you swap one compiler for a different one. Etc. Always specify what your project relies on, whether it's in the readme or in the dependency tree.

Etheryte | 2 days ago

This could equally be written about old Android projects.

stuaxo | a day ago

We use package-lock.json and a Docker image with a local folder bind mount to run legacy node projects. E.g. docker run -v local:inner node:12 command
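Spelled out a bit more (the paths and commands are illustrative):

```
docker run --rm -it \
  -v "$(pwd)":/app -w /app \
  node:12 \
  sh -c "npm ci && npm start"
```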

SeriousM | 2 days ago

For sure. This is the number one reason I am switching as many projects as I can to HTMX.

https://dubroy.com/blog/cold-blooded-software/

Sibling comments say in so many words, it's no big deal bro, just update. But it is a big deal over time if you have dozens of cold-blooded projects to deal with.

jollyllama | 2 days ago

OP is tired after 2 hours of work.

betimsl | 2 days ago

The tragedy of running an̶ ̶o̶l̶d̶ Node project.

cynicalsecurity | 3 days ago

How is that pretty hard?

nwhnwh | 2 days ago
[deleted]
| 3 days ago

You spent only two hours on this and you think it’s too much?

Also, do not run shit on a node version that is years out of date and out of service. Also, update your damn packages. I know I sound cranky, but running anything internet-facing with god knows how many vulnerabilities in it is an exceedingly bad idea.

Aeolun | 2 days ago

Having CI would have avoided this problem.

bsuvc | 2 days ago

It's not just personal blogs, as many of y'all know. This is a daily struggle for any SDE mainly working in web. I am constantly lamenting the fact that I spend like 70% of my time at my main job (a large financial company) just trying to get my environment, or the application's environment, into a position to actually develop on the application itself. It's insane. I f'n love JavaScript; it's allowed a lot of us a doorway into software engineering, where a lot of us realize how... 'special' JS and related web development are ;) But man, it can make you really want to smash the computer some days.

stall84 | 2 days ago

Have you tried DevContainers before?

Hayabusaa | 3 days ago

yeah? now try running a 4-year-old React project, it's hell on earth.

nenadg | 2 days ago

DHH has said this experience is a big reason Rails is pursuing a no-build approach.

mostlysimilar | 2 days ago

Running a new Node project is nearly as problematic… The ecosystem is broken

turnsout | 2 days ago

This goes for both node and python: Avoid native extensions. For python this is less feasible due to its inherently poor performance, so limit yourself to the crucial ones like numpy. For node, there are few good reasons why you would need a native extension. Unless you have your node version pinned, it will try to find the binary for your node version, fail, then attempt to build it on your system, which will most likely fail as well.

incrudible | 2 days ago

FWIW, I've mostly maintained long term PHP projects, and I've had nearly unaltered codebases running for ~25 years since php3. No frameworks, just core PHP. People dump on PHP, but it's a very good tool if you're focused on maintainable output and pick the right functional APIs to cede to mature unix tools. You can expect decades of solid, maintainable output.

Experienced programmers will not pick up a "built on shifting sand" stack, because they can acutely perceive the pain and suffering before it happens, generally from past experience. With fast-crumbling stacks, you need to execute quickly and move on, and treat the whole codebase as an expiring entity. Stacks I personally try to avoid: anything node/javascripty, anything Androidy, anything iDevicey.

Those who don't understand Unix are condemned to reinvent it, poorly. - Henry Spencer

... via https://github.com/globalcitizen/taoup

contingencies | 2 days ago