Ask HN: event loops vs. greenthreading in modern languages

hot_gril | 6 points

> It's purely to avoid making OS threads wait on I/O in a concurrent application

This isn't true. That is, it's an important part of it, but it's not the only part. Thread APIs don't support things like cancellation natively, and async/await lets you write a state machine in a way that doesn't read like one.
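As an illustration of the cancellation point, here is a minimal sketch in Python's asyncio (standing in for any async/await system; the function names are made up for the example). Cancelling a task is a first-class, cooperative operation, with no clean equivalent in most OS-thread APIs:

```python
import asyncio

async def slow_io():
    # Hypothetical stand-in for a long I/O wait.
    await asyncio.sleep(10)
    return "done"

async def main():
    task = asyncio.create_task(slow_io())
    await asyncio.sleep(0)   # yield once so the task gets started
    task.cancel()            # cooperative cancellation: delivered at the task's next await point
    try:
        await task
    except asyncio.CancelledError:
        print("task cancelled cleanly")

asyncio.run(main())
# prints "task cancelled cleanly"
```

The cancellation lands at a well-defined suspension point (`await`), which is exactly the property that thread APIs, where a thread can be preempted anywhere, cannot offer natively.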

> If OS threads were cheap enough to just spawn one for each request, this wouldn't be a thing.

These are both advantages regardless of the overhead of spawning OS threads.

> So now I'm wondering, if Golang and now Java can task-switch without the user having to tell it when, is there any point of doing it explicitly like in JS or Rust?

Yes. I actually gave two talks on this a while back; there are transcriptions on these pages:

* https://www.infoq.com/presentations/rust-2019/

* https://www.infoq.com/presentations/rust-async-await/

The first one is more of what you're asking about, and the second one is how Rust's design here works.

One short way to answer the question, though, concerns this part:

> because the runtime automatically decides what is blocking

Yes: in Rust specifically, there is no runtime, and so the language cannot make these guarantees.

I hope the first link answers things more thoroughly than that, but that's one simple way into thinking about this.

steveklabnik | a day ago

The reason for avoiding OS threads is not only the cost of creating them, but also the overhead of scheduling them. The kernel uses preemptive multitasking, so it does work to determine when it should switch control between threads and which threads should be running, and there is a cost to performing each context switch. The OS can also tell from syscalls when a thread is waiting on I/O.

Java virtual threads are more similar to async/await than they first appear, since they are also a form of cooperative multitasking: a virtual thread must tell the runtime when it is ready to yield execution to other tasks. In practice, the programmer does not need to do this manually; the low-level 'blocking' operations in the JDK have been modified to signal this state. This is why you do not need an explicit syntax like async/await.

Under the bonnet, virtual threads are mounted on and unmounted from platform threads (JVM wrappers for OS threads) in a pool. All of this involves the overhead of copying virtual thread contexts to and from the heap, and of managing the scheduling of virtual threads.
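The mount/unmount mechanics can be sketched with Python generators. This is only an analogy, not how the JVM implements it, and unlike real virtual threads the yield point here is visible as `yield from`; the point is that the "blocking" primitive itself parks the task and hands control back to a scheduler, while the task's own code reads like a plain sequential call:

```python
import collections

def blocking_read(label):
    # Stand-in for a low-level "blocking" primitive that has been
    # modified to park the task instead of blocking an OS thread.
    yield f"{label} parked on I/O"   # unmount point: control returns to the scheduler
    return f"{label} data"           # resumed later with the result

def task(name):
    # To the task author this reads like an ordinary blocking call.
    data = yield from blocking_read(name)
    print(f"{name} got {data}")

# A minimal round-robin scheduler, standing in for the virtual-thread scheduler.
ready = collections.deque([task("A"), task("B")])
while ready:
    t = ready.popleft()
    try:
        next(t)          # run the task until it parks itself
        ready.append(t)  # it parked on I/O; resume it on a later turn
    except StopIteration:
        pass             # task finished
# prints:
#   A got A data
#   B got B data
```

Both tasks park before either completes, so a single carrier of execution interleaves them; the heap-allocated generator frame is the analogue of the virtual thread context that gets copied on unmount.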

Why don't other languages use this implicit approach? I am sure there are various reasons, but I cannot really speak to them in specifics; I would be interested to know as well. My guess is that the overhead of the implementation is one major reason.

oftenwrong | a day ago