Just watched it. It’s soo good! King’s talks are always a gold mine of insights sprinkled with fun and thoughtful drawings and delivered in such a chill way. Also, they keep getting better every time. Always makes you excited for the next one. Keep it up, King!
imho the animated illustration of “concurrency” (at 1:35) is conceptually wrong.
hand = CPU core
coin = process/thread/fiber…
Using this hand-coin model I would describe parallelism and concurrency like this:
Parallelism (doing two things really simultaneously):
- initial state - both coins are on the ground
- pick up both coins simultaneously
- throw both coins up simultaneously
- catch both coins simultaneously
- put both coins back to the ground simultaneously
Concurrency (doing two things alternately, in chunks, with one hand/CPU core):
- initial state - both coins are on the ground
- pick up coin-1 from the ground
- throw coin-1 up
- coin-1 is up in the air now
- pick up coin-2 from the ground
- throw coin-2 up
- coin-2 is falling, catch it
- put coin-2 back to the ground
- coin-1 is falling, catch it
- put coin-1 back to the ground
This IS concurrency: doing many things in small chunks with one hand (CPU core).
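The single-hand interleaving above can be sketched in code. This is a hypothetical sketch (not code from the talk): each coin is a tiny state machine, and one loop (the “hand”, i.e. one CPU core) advances them one chunk at a time — concurrency with zero parallelism.

```go
package main

import "fmt"

// coin is one task, modeled as a small state machine.
type coin struct {
	name  string
	state int // 0 = on the ground, 1 = in the air, 2 = done
}

// step advances a coin by one chunk and describes what the hand did.
func (c *coin) step() string {
	defer func() { c.state++ }()
	if c.state == 0 {
		return "pick up and throw " + c.name
	}
	return "catch and put down " + c.name
}

// juggle runs all coins to completion with one "hand",
// interleaving one chunk of work per coin per pass.
func juggle(coins []*coin) []string {
	var log []string
	for done := false; !done; {
		done = true
		for _, c := range coins {
			if c.state < 2 {
				log = append(log, c.step())
				done = false
			}
		}
	}
	return log
}

func main() {
	for _, line := range juggle([]*coin{{name: "coin-1"}, {name: "coin-2"}}) {
		fmt.Println(line)
	}
}
```

Both coins make progress “at the same time” even though only one chunk of work ever happens at any instant.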
That’s interesting. I hadn’t thought too deeply about it, but I guess my interpretation of the concurrency animation would be that each coin is a thread, catching a coin is grabbing a lock on a shared resource, and Zero is the shared resource.
@tensorush I see your way of thinking (multi-threading, mutexes and whatnot), but think about concurrency within a single thread. Popular approaches are coroutines/fibers/whatever-they-call-them, but my way is much simpler: event handling + state machines.
Fibers and the like are also known as “green threads” or “software threads”. I didn’t specify which threads (software or hardware) because concurrency can occur with either.
I guess “hardware threads” are processes/lightweight processes (aka threads) that are driven by the operating system kernel? Did you mean those? Or what on earth are “hardware threads”?!
I can’t speak for @tensorush, but hardware threads often refer to an actual physical resource.
Multiprogramming is full of examples regarding concurrent work where the appearance of multiple things happening at once is useful. In this sense, the one dude flipping two coins one after another isn’t a bad representation.
Concurrency, in this context, can be thought of as the appearance of simultaneous program execution. That is, concurrency through multiprogramming is what makes it possible to use a web browser while listening to music with an MP3 player at the same time. Throughout most of the history of computing, these applications were not actually executing code at the same time; rather, the kernel was just switching back and forth between them so quickly that users could not notice.
So yeah, one guy alternating between flipping one coin after another is not a bad analogy here. It’s understandably difficult to draw someone handling small chunks of a coinflip in a way that would translate to an audience, but I get your point.
Yes, sorry for the confusion. Hardware threads, aka OS threads, are scheduled by the OS scheduler, which happens in kernel space, while software threads are scheduled by some user space scheduler, for example, the goroutine scheduler in the Go runtime.
They are both software.
Hyper-threading inside a CPU core is a bit special, but imo that is the only thing that could be called a “hardware thread”.
How many hands does that dude have? Maybe he’s a six-handed शिव?
Both schedulers are software, but, conceptually, the threads they schedule aren’t. The OS scheduler schedules “hardware” threads to be executed on CPU cores, while the Go scheduler schedules “software” threads to be executed by “hardware” threads.
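That two-level scheduling is easy to poke at from Go itself. A hedged sketch (my own example, assuming nothing beyond the standard library): `runtime.GOMAXPROCS(1)` limits the runtime to one OS thread executing Go code, yet many goroutines (“software” threads) still all make progress — concurrently, but never in parallel.

```go
package main

import (
	"fmt"
	"runtime"
	"sync"
)

// runOnOneThread launches n goroutines while only one OS thread may
// execute Go code, and returns how many of them completed. The Go
// runtime multiplexes the goroutines onto that single OS thread,
// which the kernel in turn schedules onto a CPU core.
func runOnOneThread(n int) int {
	runtime.GOMAXPROCS(1) // one OS thread running Go code at a time

	var (
		wg    sync.WaitGroup
		mu    sync.Mutex
		count int
	)
	for i := 0; i < n; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			mu.Lock()
			count++
			mu.Unlock()
		}()
	}
	wg.Wait()
	return count
}

func main() {
	fmt.Println(runOnOneThread(3), "goroutines completed on a single OS thread")
}
```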
In any case, these terms exist just to draw a high-level distinction between the two kinds of threads, so that they’re easier to refer to. There’s not much more to them than that; they’re not meant too literally, just tech slang I guess.
I guess it’s marketing-guy slang ;-)
Of course, especially when you have 24+ CPU cores but only 2 Ethernet interfaces ;-)
Is there updated source code for this somewhere?
The last thread pool code I can find from him hasn’t been touched in almost two years:
I’d really like to not reinvent thread pool stuff if there is already something much better out there.
The two really aren’t even remotely equivalent. You can see that from the fact that one requires an allocator while the other uses intrusive lists with fixed allocation for an SPMC queue.
Given that Bun and Ghostty (through libxev) both use some variant of zap’s Thread.Pool, it would be nice if it landed in std.
@buzmeg - the two linked locations contain somewhat updated versions of the code that should at least run on modern Zig versions.
Thanks @scheibo. The fact that those versions of ThreadPool are being used in projects that are building right now is quite helpful.