Removing Mutex/Condition/Futex from std.Thread was an explicit decision, and it’s a decision I’m unhappy about; that was the scope of the thread. It forces me to reimplement them in my code (which I did), and other people will reimplement them too, which creates fragmentation and duplication in the ecosystem. In my view, if std has an API for spawning threads, it should also have APIs for synchronizing threads. These APIs are the base layer for implementing other abstractions, e.g. implementing the std.Io interface.
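For context, this is roughly the kind of primitive everyone ends up hand-rolling once it leaves std.Thread. A minimal spin-based sketch (a real replacement would park the thread on a futex instead of spinning):

```zig
const std = @import("std");

// Minimal sketch of a hand-rolled mutex. A production version would
// fall back to a futex wait after a few spins rather than busy-looping.
const SpinMutex = struct {
    locked: std.atomic.Value(bool) = .init(false),

    fn lock(self: *SpinMutex) void {
        // swap returns the previous value; keep spinning while it was held.
        while (self.locked.swap(true, .acquire)) {
            std.atomic.spinLoopHint();
        }
    }

    fn unlock(self: *SpinMutex) void {
        self.locked.store(false, .release);
    }
};
```

Nothing fancy, but it is exactly the duplication being described: every project grows its own slightly different copy of this.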
Is there a reason you can’t use the low-level std.Io.Threaded.mutexLock/std.Io.Threaded.mutexUnlock?
I have yet to convert over to the Io module, and I don’t want to knock it until I try it. I suspect I’ll actually like it, as a way to do what it does.
On first principles, I didn’t think of Zig as the kind of language which makes you pay for what you’re not using. With Io, it is. But the reason isn’t Io itself, it’s because you chose to remove all of the primitives which do what Io does now: just individually, instead of all compiled into a bunch of function pointers needed to fulfill the very extensive interface.
I do see it as boldly tackling some hard problems, and I know it’s here to stay. I look forward to building something cool with it. It’s just… one of my programs needs exactly one thing: to write to stdout. Ok, no: it spawns some threads now, and takes a timer, but those are build-configurable options.
It’s never going to netBindIp, and it doesn’t need secureRandom. But when I update, it’s going to have those things, written into memory which will never be reached. I’m not using the optionality that provides, and I won’t be: that program is done, and I’m only updating it because approximately no one is going to keep a 0.15-era Zig compiler around, myself included.
It’s not the last version of Zig you’ll ever publish, and I’m not describing a big problem here. But I hope you’ll consider it anyway.
can you be more specific
Sometimes I feel a bit alone in believing that this dead code elimination problem will be tackled… It is very frequently presented as the main issue with std.Io.
I bet that it will be just a distant memory in at most one year!
Unless I’m missing something, all of these functions have to be implemented in order to create an Io instance, and because it’s data, they’ll all be analyzed and end up in the code. netBindIp and secureRandom are among them.
Like how, in this thread, an enormous .rodata table was created because Zig doesn’t ‘lazy’ a struct: ask for a struct, get the whole struct.
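A contrived sketch of the mechanism (all names invented to echo the ones above): once a struct of function pointers is constructed, every function it references is analyzed and emitted, whether or not the caller ever reaches it.

```zig
// Hypothetical vtable shape; the real std.Io interface is much larger.
const VTable = struct {
    netBindIp: *const fn () void,
    secureRandom: *const fn () void,
};

fn netBindIpImpl() void {}
fn secureRandomImpl() void {}

// Constructing the struct references every implementation, so lazy
// analysis can't skip any of them: ask for the struct, get the struct.
const vtable: VTable = .{
    .netBindIp = netBindIpImpl,
    .secureRandom = secureRandomImpl,
};

pub fn main() void {
    _ = vtable; // never calls netBindIp, yet it still lands in the binary
}
```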
If that’s already not true, I encourage you to communicate that, because from what I’ve seen here it’s perhaps the only sticking point in the whole business. I’m only bothered in the abstract, it will make no practical difference to the code I write personally.
Measuring the binary size of ZTAP before and after building under 0.16 is one of the first tasks I’ve set myself, but I’m not downloading it until Sunday, so I don’t want to jump the gun. We had a long thread on this topic in January.
Seems entirely practical to me. I mentioned “not the last release of Zig” for several reasons, including that. Many things could be done here, and this is just a size optimization, that would be a silly thing to blow out of proportion.
tracking issue is https://codeberg.org/ziglang/zig/issues/31421
Still, as far as I understand, that solves the problem for Io users.
But we are talking about working with OS “objects”/primitives without Io.
After the hard work and the release of 0.16, it’s time to get “input from the field”.
These are my two cents (one: mutex, the second: posix).
Your feedback is not helpful; the entire point of Io is to abstract away the OS primitives.
The reason you want the OS primitives is that what Io provides doesn’t fulfil your use case.
Going into detail about your use case, and potential ways Io could fulfil it, would be helpful.
That’s the problem: Io could not.
With the current approach, Io replaces os.
Suppose I am working with different threads and don’t need any io (not Io).
Which Io should I use, Io.Threaded or Io.Evented, in order to start a non-io worker thread and send information to it?
What is the added value of Io in this case? An additional layer of hard abstractions?
Io is also a “client” of the os, but it forces me to use its abstractions.
The value is the abstraction over the method of concurrency. It allows you to seamlessly change the units of concurrency by changing only the implementation of Io you instantiate and use.
And all the higher- and lower-level tools for managing that concurrency that the API provides for you.
This would allow your code to run in environments where traditional threading is not available, provided that there is an Io implementation for that environment.
Io is not tied to working under an OS; it is a platform abstraction over I/O and concurrency/asynchrony.
With it you can write code that is compatible with synchronous execution but is still able to take advantage of concurrency, if it’s available, to speed things up.
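If I understand the pitch, it looks something like this (a rough sketch only; the exact 0.16 init/deinit signatures are assumptions, and std.Io.Threaded is the name mentioned upthread):

```zig
const std = @import("std");

// Hypothetical: doWork only sees the Io interface, so the caller
// decides whether concurrency means OS threads, an event loop, or
// plain synchronous execution.
fn doWork(io: std.Io) void {
    _ = io; // spawn tasks, do I/O, sleep, etc., through the interface
}

pub fn main() void {
    // Assumed construction pattern for the threaded implementation.
    var threaded: std.Io.Threaded = .init(std.heap.page_allocator);
    defer threaded.deinit();
    // Swapping in an evented Io here would leave doWork unchanged.
    doWork(threaded.io());
}
```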
This line from the changelog is important:
When upgrading code, if you find yourself without access to an Io instance, you can get one like this…
It hints at what is within reach: ideally the easy stuff is easy (e.g. mutex, stdout, init.gpa) and the hard stuff is possible (io/alloc customization, or avoidance entirely). The language is well positioned for many different use cases. Juicy main is a confident step in the right direction, IMO.
That’s exactly what I’m afraid of.
Why are you afraid of that?
Platform-specific APIs will still exist; there just won’t be, in std, an abstraction over multiple platforms’ concurrency/IO primitives.
I hope what I am asking here isn’t extremely obvious - I am very inexperienced with Io - but I have asked about it in the past and I didn’t get a counterexample.
Why are we making these interfaces runtime-known?
Allocators were what I had in mind when I thought about this; I have not personally used allocators in a context where the allocator is not comptime-known. If the implementation is known at comptime, then the compiler can easily know which functions are and aren’t called (for example, functions that take an anytype parameter where that anytype needs to adhere to a specific interface).
As far as I know, this would also not stop runtime polymorphism (a Vtable allocator would still fulfill that “anytype”).
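A minimal sketch of the comptime-known shape I mean, using anytype duck typing:

```zig
// Sketch: `sink` can be any type with a writeAll method. The compiler
// instantiates greet per concrete type, so only the methods actually
// called here are analyzed and emitted; the rest of the type's API is
// dead by construction. A vtable-backed writer still satisfies the
// same parameter, so runtime polymorphism is not ruled out.
fn greet(sink: anytype) !void {
    try sink.writeAll("hello\n");
}
```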
I’ll try to argue against myself here. When I personally used comptime interfaces, the downsides I noticed were:
- The lack of support from zls (can’t help autocomplete)
- You don’t know information about the type when you read the function declaration.
- You have to not forget to write the code that checks the type for compile errors.
But it seems to me that these are problems that could be solved and even make comptime more powerful.
What am I missing here?
I can think of at least a few more:
- Binary and compile-time bloat: the compiler has to analyze and emit code for each instantiation of a generic function. It’s possible that LLVM may be smart and dedupe some instantiations before they make it into the final binary, but if nothing else, it makes compile times worse depending on how many types you use in the generic parameter.
- It makes it no longer possible to use pointers to such functions at runtime. You can’t use a *const fn(anytype) void, etc., unless it’s known at compile time which function it points to, since otherwise the compiler has no way of knowing which function it needs to instantiate or even call (functions with different types in that parameter might need to be called differently depending on the type).
- There would no longer be a good way to store an unknown type of Allocator, Io, etc. in a struct without making it also generic (it becomes a viral property) or using something like the GenericWriter.any() function that used to bridge a generic writer into a type-erased one, but came with a lot of footguns: Complicated ownership using AnyReader · Issue #17458 · ziglang/zig · GitHub
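To make the viral property concrete, a sketch of what storing a comptime-known writer forces on the containing type:

```zig
// Sketch: to store a comptime-known writer, the container must itself
// become generic, and every user of Logger now needs the concrete
// Writer type as well. This is how genericity spreads through a codebase.
fn Logger(comptime Writer: type) type {
    return struct {
        writer: Writer,

        pub fn log(self: *@This(), msg: []const u8) !void {
            try self.writer.writeAll(msg);
        }
    };
}
```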
@vulpesx is right: if you have any concern with the new Io interface, you are more than welcome to explain exactly what, where, and how it is problematic for you, and they will most likely listen and try to find a solution that works for everyone. I can tell you from experience: when the Io interface was still a WIP, I went on Zulip and told @andrewrk that the API was lacking a way to integrate with a control-plane API and readiness in the broader scope. He asked for examples and started thinking about an API, and now I’m quite satisfied with operation/operate. I’m not saying it wouldn’t have made its way into the interface without me sharing my concern, but at the very least it probably helped make sure my use case was covered. So try to formulate exactly what’s wrong, and why, and with specifics; I’m sure they will consider a solution that might satisfy your use case.
I’m also a bit concerned about the impact of the vtable on runtime performance.
For the writer/reader, the choice was made to put the buffer above the vtable, to avoid the vtable-jump overhead most of the time.
For an allocator, if it’s a base allocator then it’s always expensive, so the vtable hit is minor. But for an arena I generally use ctx.arena.allocator().alloc(u8, 512), so the compiler can easily devirtualize the function call.
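The arena case can be sketched like this (the devirtualization itself is up to the optimizer, not guaranteed):

```zig
const std = @import("std");

pub fn main() !void {
    var arena = std.heap.ArenaAllocator.init(std.heap.page_allocator);
    defer arena.deinit();
    // allocator() constructs the Allocator interface right at the call
    // site, so the optimizer can see which concrete function .alloc
    // resolves to and may inline through the vtable.
    const buf = try arena.allocator().alloc(u8, 512);
    _ = buf;
}
```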
For Io I don’t have the necessary information yet. At least the team is aware of the issue, because it’s part of the reason why “restricted types” are being explored.
I believe the easy fix would be for you to copy-paste the code you need into a new repo. It’s hard to comment on your benchmark without understanding it. My hunch is that atomic-based queues may work better than futex-based ones.
I’d also say the Io stuff is pretty groundbreaking, and I love how it’s essentially a better POSIX. I recently had to work on a niche target with a closed-source OS, and by wrapping a few syscalls in an Io trenchcoat I could reuse a lot of code, whereas std.posix would just fail to compile.
undefineduser/innellc-queue - Codeberg.org
Use zig build benchmark --release=fast to run a simple benchmark.
The lockfree wait queue is a lock-free queue based on atomic variables; it waits via futex.
I’m not against Io; it’s great.