Analyzing the Performance of WebAssembly vs. Native Code

liminal | 73 points

45% slower seems pretty decent considering they use a wasm kernel they developed to mimic the Unix kernel, so they can run unmodified Unix programs inside the browser. It's actually pretty impressive that they did this, and even more impressive that it works and, as another commenter said, is not even an order of magnitude slower.

I'm more interested in 1) uses of wasm in the browser that don't involve running unmodified Unix programs, and 2) wasm outside the browser for compile-once-run-anywhere use cases with sandboxing / security guarantees. Could it be the future for writing native applications?

Languages like Kotlin, C#, and Rust, as well as C/C++ etc., support wasm quite well. Could we see it become a legitimate target for applications in the future, if the performance gap were closer to 10%-ish? I would personally prefer running wasm binaries with guaranteed (as much as possible ofc) sandboxing over raw binaries.

edit: it's from 2019; there have been significant improvements made to wasm since then.

b_e_n_t_o_n | 14 hours ago

That it’s not even an order of magnitude slower actually sounds pretty good!

rlili | 14 hours ago

(2019) Popular in:

2019 (250 points, 172 comments) https://news.ycombinator.com/item?id=20458173

2020 (174 points, 205 comments) https://news.ycombinator.com/item?id=19023413

gnabgib | 14 hours ago

45% slower to run everywhere from a single binary...

I'll take that deal any day!

icsa | 14 hours ago

45% slower means..?

Suppose native code takes 2 units of time to execute.

“45% slower” is???

Would it be 45% _more time?_

What would “45% _faster_” mean?
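Working through the arithmetic (reading "X% slower" as X% more wall-clock time, which is the usual convention, and noting that "X% faster" is genuinely ambiguous):

```rust
fn main() {
    let native = 2.0_f64; // units of time for the native run

    // "45% slower": 45% more time than native.
    let slower = native * 1.45; // 2.9 units

    // "45% faster", read as 45% less time than native:
    let faster_time = native * (1.0 - 0.45); // 1.1 units

    // ...which clashes with the speedup-ratio reading,
    // where the faster run finishes in native / 1.45 units:
    let faster_ratio = native / 1.45; // ~1.38 units

    println!("{slower} {faster_time} {faster_ratio:.2}");
}
```

Note that the two readings of "faster" don't agree, which is exactly why "X% slower" (more time) is the less ambiguous phrasing.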

PantaloonFlames | 14 hours ago

The data here is interesting, but bear in mind it is from 2019, and a lot has improved since.

azakai | 14 hours ago

I have built a Fibonacci wasm/WASI executable in Rust. When I execute it in https://exaequos.com (with the wex runtime, which is under development), it is faster than the native app on my MacBook
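For reference, a minimal sketch of that kind of micro-benchmark (hypothetical source file; the commenter's actual code isn't shown). It compiles to a WASI target with something like `rustc --target wasm32-wasip1 fib.rs` (the target name varies by toolchain version; older toolchains call it `wasm32-wasi`) and then runs under any WASI runtime:

```rust
// fib.rs — naive recursive Fibonacci, a common CPU-bound micro-benchmark.
fn fib(n: u64) -> u64 {
    if n < 2 { n } else { fib(n - 1) + fib(n - 2) }
}

fn main() {
    // The deep recursion keeps the workload purely CPU-bound,
    // which is where wasm JIT/AOT output can get close to native.
    println!("fib(30) = {}", fib(30));
}
```

Tiny call-heavy kernels like this are also where results can swing either way, since they stress the compiler's call and branch codegen more than memory or syscall overhead.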

baudaux | 9 hours ago

This is pretty good actually, considering the low-hanging optimizer fruit still left, and that the alternative is JS, which generally performs 2-10x slower.

I think vectorization support will narrow the aggregate difference here, as a lot of SPEC benefits from auto-vectorization, if I recall correctly.
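The kind of loop auto-vectorization helps with looks like this (a generic illustration, not code from the benchmark suite):

```rust
// Element-wise y[i] += a * x[i]: optimizers can turn this loop into
// SIMD instructions on native targets; wasm needed its own 128-bit
// SIMD proposal (now widely shipped) before engines could do the same.
fn saxpy(a: f32, x: &[f32], y: &mut [f32]) {
    for (yi, xi) in y.iter_mut().zip(x.iter()) {
        *yi += a * *xi;
    }
}

fn main() {
    let x = [1.0_f32, 2.0, 3.0];
    let mut y = [0.0_f32; 3];
    saxpy(2.0, &x, &mut y);
    println!("{:?}", y); // [2.0, 4.0, 6.0]
}
```

When the 2019 measurements were taken, wasm had no SIMD at all, so loops like this ran scalar while the native build got vectorized, inflating the gap on vectorization-friendly benchmarks.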

vlovich123 | 14 hours ago

(2019)

fanf2 | 14 hours ago

... in browsers, which at best JIT-compile. There are several wasm runtimes that AOT-compile and have significantly better performance (e.g. ~5-10% slower than native).

The title is highly misleading.

turbolent | 14 hours ago

Yeah, I've seen this when testing Rust code compiled to native and to wasm. I don't know about 45% though; I haven't measured it.

ModernMech | 14 hours ago