I really don't struggle that much with cold starts on Node.js/Lambda, and I don't do anything special; my build command looks like:
esbuild src/handler.ts --bundle --external:@aws-lambda-powertools --external:@aws-sdk --minify --outfile=dist/handler.js --platform=node --sourcemap --target=es2022 --tree-shaking=true
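For reference, a hypothetical minimal handler that bundles cleanly with that command (names are made up):

    // src/handler.ts: the @aws-sdk import is marked external above, so the
    // Lambda runtime's built-in SDK is used instead of being bundled in.
    import { S3Client, ListBucketsCommand } from "@aws-sdk/client-s3";

    const s3 = new S3Client({});

    export const handler = async () => {
      const { Buckets } = await s3.send(new ListBucketsCommand({}));
      return { count: Buckets?.length ?? 0 };
    };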
Maybe I'm not doing as much as others in my functions, and I tend to stick within the AWS ecosystem, so I save some space and, I presume, cold-start time by not including the AWS SDK/Powertools in the output. My functions tend to cold start and complete in ~100ms.

It's a TS/JS-to-WASM-to-C toolchain that runs the same JS a dozen times faster than on Node. Very cool approach, and Lambda cold starts are definitely where it ought to shine.
That said, I wonder if it could ever go mainstream. JS is not a trivial language anymore; matching all of its quirks to the point of being stable seems like a monstrous task. And then Node and all of its APIs are the 800-pound gorilla in the room. Even Deno had to acquiesce and replicate those, bugs and all, and it's based on V8 too.
I seriously dislike this kind of comparison.
We're faster! (please disregard the fact that we're barely more than a demo)
Everyone knows about 80/20; the slowdowns will come after you start doing everything your competition does.
Look at Biome: "We're 15x as fast as ESLint!" (but disregard the fact that we don't do type-aware linting). Then type-aware linting arrives, and suddenly they have huge performance issues that kill the project (I'm unable to use Biome 2).
This happens over and over and over. The exceptions are very, very few (Bun is one example).
Lots of negativity in this thread; let me offer a bit of positivity to contrast!
The project homepage is awesome; it's a mix between a throwback to retro documentation (with ASCII charts) and a console straight out of Godbolt: https://porffor.dev/
The hang-up on the lack of GC is probably overwrought: WasmGC is pretty much here, and there will be an entire ecosystem of libraries providing JS GC semantics for WASM compilers that this compiler can tap into (actually implementing the backend/runtime GC support is fairly trivial for baseline support).
It looks similar to GraalVM for Java, right?
It would be amazing if they pull this off. Being able to compile JS to produce minimal binaries for CLIs or just to make slim containers would be nice.
I don't want to take away from the appreciation of this awesome technical achievement, but in practice I have noticed that:
- Cold starts are kinda rare. Sure, it sucks that your request takes 600ms, but that means you are the first user. If you had been served by a container that was just scaled up, you'd have been waiting much longer.
- Microservices and AWS Lambda are inherently stateless, and they do a ton of things to make themselves useful: get credentials, establish DB connections, query configuration endpoints. All of that takes time, usually more than your runtime spends booting up (the usual mitigation is sketched below).
As much as I like Lambdas for their deployment and operational simplicity, they have inherent technical limitations that make them the wrong choice if you want the best UX.
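The standard mitigation for that init cost is to hoist expensive setup into module scope so it runs once per cold start; a minimal sketch (client choice and table name are hypothetical):

    // Module scope runs once per cold start; warm invocations reuse it.
    import { DynamoDBClient, GetItemCommand } from "@aws-sdk/client-dynamodb";

    const db = new DynamoDBClient({}); // created once at init, reused while warm

    export const handler = async (event: { id: string }) => {
      // Only the query itself is on the per-request path.
      const out = await db.send(new GetItemCommand({
        TableName: "my-table", // hypothetical
        Key: { pk: { S: event.id } },
      }));
      return out.Item ?? null;
    };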
Awesome work, but I am genuinely curious: what are the use cases where a 200ms init time is a problem?
It'd be good if AWS Lambda provided a wasm runtime option. Cold start times for WebAssembly can be sub-millisecond.
It'd also be interesting to see comparisons to the Java and .NET runtimes on AWS Lambda.
I've historically approached this by committing to persistent/reserved instances so you always have a few running. This is nice on paper, but it feels like you're omitting the more production-appropriate solution. "Cold starts" aren't just slow because of init; they're also slow because that's when lots of database connections, state setup, etc. happen, and managing init speed won't solve that.
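On Lambda, the managed version of "a few instances always running" is provisioned concurrency; one example of the knob (function name, alias, and count are placeholders):

    aws lambda put-provisioned-concurrency-config \
      --function-name my-func \
      --qualifier live \
      --provisioned-concurrent-executions 5

Note this pre-runs the init phase, so module-scope setup (connections, config fetches) happens before traffic arrives.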
This is the first time that I've heard of LLRT, so here's a link for anyone else interested: https://github.com/awslabs/llrt
I still like Porffor's approach, because compiling to WASM means we could have latency-optimized WASM runtimes (though I'm unsure what that might entail) that would benefit other languages as well.
I wonder if the author is aware of the Node-native features that improve startup times, like the V8 code cache and startup snapshots. An overview of integrating them into native single-executable applications is here:
https://nodejs.org/api/single-executable-applications.html#s...
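For reference, Node's SEA config exposes the code cache as a flag; a hypothetical sea-config.json (paths are placeholders; useSnapshot is the companion option, but it requires the main script to register a deserialize-main function via v8.startupSnapshot):

    {
      "main": "dist/handler.js",
      "output": "sea-prep.blob",
      "useCodeCache": true
    }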
Nice; we haven't faced this cold-start problem. We like the idea of Lambdas being offered as a simple runtime platform where you can store and run code as needed.
And you can chain them with other stuff as well, which is where workflow engines like n8n or Unmeshed.io work better. You can mix Lambdas in different languages, too.
This is exciting to see. I run some latency-sensitive code on Lambda with the Node runtime, so cold starts are troublesome. I hope I'll be able to use this once it's in beta or fully released.
Title had me excited before I read past the first two words
Pairing this with Fil-C would give it a garbage collector for free?
This is a cool attempt to make a lot of JavaScript run faster in Lambda. I personally got a significant decrease in cold start and runtime by switching from JS to Go and would recommend that as well.
This is seriously snappy, and impressive work.
Tl;dr
Use an experimental (as in, 60% of ECMA tests passing, "currently no good I/O or Node compat") AOT compiler for JS. You remove the cold start by removing the runtime, at the cost of your JavaScript maybe not working and of not having a garbage collector.
No big corporation will ever use this; they'd be too worried about the compiler being compromised in some way. LLRT will never go prime time either, so we're stuck with the full Node runtime for a while.
Oliver is doing awesome work here. A few interesting points:
- Porffor can use TypeScript types to significantly improve the compilation. It's in many ways more exciting as a TS compiler.
- There's no GC yet, and it will likely be a while before it gets one. But you can get very far with no GC, particularly if you are doing something like serving web requests. You can fork a process per request and throw it away each time, reclaiming all memory, or use a very simple arena allocator that works at the request level (see the sketch after this list). It would be incredibly performant without the overhead of a full GC implementation.
- Many of the restrictions that people associate with JS are due to VMs being designed to run untrusted code. If you compile your trusted TS/JS to native, you can do many new things, such as using traditional threads, forking, and having proper low-level memory access. Separating the concept of TS/JS from the runtime is long overdue.
- Using WASM as the IR (intermediate representation) is inspired. It is unlikely that many people would run something compiled with Porffor in a WASM runtime, but the portability it brings is very compelling.
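To make the fork-per-request bullet concrete, a sketch in plain Node terms (file names are hypothetical; a compiled binary would do the same with real processes):

    // server.ts: one process per request; process exit reclaims all
    // memory at once, so the handler code itself never needs a GC.
    import { createServer } from "node:http";
    import { fork } from "node:child_process";

    createServer((req, res) => {
      // handler.js is expected to process.send() its response body and exit
      const child = fork("./handler.js", [req.url ?? "/"]);
      child.once("message", (body) => res.end(String(body)));
    }).listen(8080);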
This experiment from Oliver doesn't show that Porffor is ready for production, but it does validate that he is on the right track and that the ideas he is exploring are sound. That's the important takeaway. Give it 12 months and exciting things will be happening.