I hope you're not insulted by my answer, but I was kind of shocked at how error-prone Zig is. It's one thing to need to create a buffer, quite another to have to flush it manually. I expected that something like a destructor would flush the buffer.
This is not what I expected in 2024.
I expect that even in a (larger) code review, the missing flush would not have been noticed, and neither would the missing buffer.
Hey @StefanoD, welcome to Ziggit.
For context, this topic was split from the previous help topic: Why is my Zig program so much slower than my C program?
The subject was buffered IO and whether or not users would know to use buffers and remember to flush them. Constructive criticism is important to discuss.
I’ll start off by saying that we can split your critique into two sections:
- Providing sensible defaults that have anticipated behavior
- Language features and what Zig offers
Let’s start on the language feature portion of your concern:
I think this is a misunderstanding of what Zig is all about. There are no destructors in Zig. Many people, including myself, are very glad this is the case and your post actually gives a good reason as to why.
A destructor (by its name) should be a function that runs at the end of an object's lifetime and cleans up its resources. In this case, though, the destructor is doing more than that: it is making a syscall to print output. Destructors can be abused and extended beyond their advertised purpose, which greatly complicates reasoning about code by making functionality non-local in an invisible way.
Remember that Zig is a C-like language, not a C++- or Rust-like language.
To your first point about providing proper defaults, there is an ongoing discussion about that. I believe that the IO model for Zig is still an open question and there will be revisions to this as time goes forward. I don’t work with IO as much as some people here, so I’ll let others comment on this portion of the topic.
This is a crucial point to keep in mind when programming with Zig, and is a relevant response to:
As far as I've observed, in 2024 you have plenty of languages that bend over backwards to "protect" the programmer and avoid being error-prone. And yet C, a language more than 50 years old, is still the dominant player in the lowest-level programming fields (think operating systems, embedded devices, compilers, virtual machines, etc.), and C is orders of magnitude more error-prone than all those other languages, including Zig.
So with all these safe, guard-rails-included languages that we have today, why are so many crucial low-level projects still reaching for C? In my opinion, because it gives you full control over the hardware while staying relatively simple to learn and use. And here's where Zig comes in. It strives to give you that same total control and simplicity, while removing a whole lot of the error-prone parts of C.
As I see it, there’s no other language that has taken this approach.
What's error-prone is flushing in a destructor, where you cannot handle the failure.
Welcome to the forum, @StefanoD. Have you written any Zig code yet?
Maybe I have to completely shift my mental model of programming.
I have not yet written Zig. I was actually looking for a new language to learn.
I use C++ as a daily driver, and it has become a nuisance to work with: it is error-prone, it's a very complex and big language where you need to know all the traps, and the compile times are just insane…
Not to speak of build systems like CMake which are overly complex…
You can do something like
var buffered_writer = std.io.bufferedWriter(my_writer);
defer try buffered_writer.flush();
to get functionality along the lines of an explicit “destructor” that runs at the end of the scope that the buffered writer was declared in. This also lets you explicitly handle what happens if the flush encounters an error, e.g. just return the error in what I wrote above.
That code won’t compile - you can’t return from a defer expression.
Both destructors and defer are the wrong tool for the job when it comes to flushing. You don’t want to flush no matter what. You want to flush at the end, only if control flow has proceeded there without any errors occurring. Plus, you want to handle flush errors at that point in the function.
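To make that concrete, here is a sketch of the pattern being recommended, written against the 2024-era `std.io` API: the buffered writer is flushed explicitly with `try` at the end of the function, so a flush failure propagates like any other error instead of being hidden in a `defer`.

```zig
const std = @import("std");

pub fn main() !void {
    // Wrap stdout's writer in an in-memory buffer to avoid a syscall per write.
    var bw = std.io.bufferedWriter(std.io.getStdOut().writer());
    const stdout = bw.writer();

    try stdout.print("Hello, {s}!\n", .{"world"});

    // Flush explicitly at the end, only reached if no earlier error occurred.
    // A failure here propagates to the caller via the error return.
    try bw.flush();
}
```

Placing the `try bw.flush()` as the last statement gives exactly the behavior described above: flush only on the success path, with the error handled (here, propagated) at that point in the function.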
Syscalls don't always succeed. Python, for instance, will flush a buffer for you at the end of a `with` block, or when you call `f.close()`. But if this fails, it will throw an `OSError`. Mostly, code does not in fact deal with this gracefully. So it mostly works, until all of a sudden it doesn't. Using a C++ dtor to flush a buffer is the same thing, worse actually, but for the same sort of reason.
Zig doesn’t let errors happen silently. If you wanted the behavior which most Python and C++ programs provide if you don’t use a try block, namely, blowing up if the flush fails, you could do this:
defer writer.flush() catch @panic("flush write failed");
And that’s fine if the only sensible thing to do is crash. If there’s something better to do, then you handle the error, and now your program works all the time, not just sometimes. If it’s a GUI, you can pop up the error as an alert, and the user can try and save again, for example.
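A sketch of what handling the error (rather than panicking) might look like; the function and its names are illustrative, not from the thread:

```zig
const std = @import("std");

// Hypothetical save routine, assuming the 2024-era std.io API.
fn saveDocument(file: std.fs.File, data: []const u8) !void {
    var bw = std.io.bufferedWriter(file.writer());
    try bw.writer().writeAll(data);

    // Handle the flush error in place instead of crashing:
    bw.flush() catch |err| {
        std.log.err("saving failed: {s}", .{@errorName(err)});
        return err; // a GUI could instead show an alert and let the user retry
    };
}
```

The `catch |err|` block is where "something better than crashing" goes: log it, surface it to the user, retry, or propagate it, as the application requires.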
I prefer that. I would rather work with a language which insists I cope with the fact that writing a file doesn’t always work, even if that means crashing. Rather than one which will throw an exception from a destructor, and makes me do extra work if I want to handle it correctly.
“Code doesn’t flush buffer, file is incomplete” is a shallow bug, which good practice will quickly catch with a test. “Code randomly crashes for mysterious reasons, in production, every few days / weeks / a few times a year” is not a shallow bug. That kind of bug can stay on the issue board for a very long time.
So no, I wouldn’t say Zig is error prone. Software, and software systems, are error prone. Zig makes you deal with that.
We actually have such a case. Sometimes a file doesn't get written (in C++) and the file is just empty. What causes did you experience in your case?
This makes me want to share an anecdote. About 10 years ago, I wrote a piece of LabVIEW code. From LabVIEW, you can control hardware. And just ignore errors. Some crash the program, some don’t. You’ll be wiser after they happen. Back then, that didn’t bother me much, I was inexperienced. And so I ignored an error, one from another piece of code that essentially retrieved temperature readings from hardware. The program was controlling a heater (a rather powerful one) and it ran unattended for days. Years later, a former colleague called me to tell me that they had a major “problem” with this system. The unhandled error caused the program to stall while the heater was active. Fortunately, nothing too serious happened, just some components that needed to be replaced. I was sane enough to install a temperature fuse which eventually blew. But proper error handling could have prevented this incident in the first place.
That was my lesson in error handling. There are nuances to this of course; controlling hardware is a whole different category than a one-shot Python script that processes some data and fails to write to disk. Nothing is stopping you from controlling hardware with Python though… but back to the topic. I'd emphasize that some languages push you towards error handling and some don't. Zig is in the first category, with an excellent error model in my opinion, and I think this is a very reasonable approach. You can still write code that crashes where it shouldn't with Zig ("`try` too hard"), or forget to flush buffers, but it pushes you towards being aware of errors, and that to me is a great feature.
This is what I was alluding to by describing the C++ situation as worse. A C++ destructor, e.g. `std::ofstream`'s, will flush a write when it triggers, but again, syscalls fail. The problem there is that a destructor can't usefully report this: destructors are `noexcept` by default, so an exception escaping one terminates the program.
So you have to check some internal flag bits to determine what happened. But if the destructor is triggered at the end of a function call, then the calling site doesn't necessarily even have the stream in scope anymore, which it would need in order to check the flags.
So a C++ auto-flush isn’t “convenient” unless silent failure is ok (how often is that true? ever?), it involves a bunch of defensive coding, all of which is actually easier if you call the flush manually. To have a correct program, you have to do all the same things Zig requires, it’s just that C++ makes it easy for your program to be incorrect.
There are a lot of reasons fsync can fail; most of them are rare, but on a long enough timescale, rare things happen. And if, say, the failure is that the process doesn't have write permissions, it's still going to fail silently, only now it will do so every time. You won't know it happened unless you check.
The TL;DR here is that the error-prone system is the one which flushes from a destructor, it trades temporary convenience for long-term flakiness.
The Zen of Zig says: resource allocation may fail. A flush allocates disk resources, therefore it can fail, and your code has to decide what to do about that. It also says "resource deallocation must succeed", which is part of why you can't use `try` in a `defer`. This is also loosely related to why Zig doesn't have destructors, although "No hidden control flow" is probably more central there.
This is arguably less convenient, yet definitely more robust. If your code is missing a flush, it will consistently fail and you will notice and fix it. When it’s automatically flushing at some hard-to-determine point in the code, implicitly, when the resource goes out of scope, that’s just going to break occasionally, and those bugs are awful. Personally I consider not having that class of bug to be very convenient!
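The two halves of that Zen line suggest a division of labor, sketched below with illustrative names (not code from the thread): infallible deallocation goes in a `defer`, while the fallible flush gets an explicit `try` on the success path.

```zig
const std = @import("std");

// Hypothetical report writer, assuming the 2024-era std API.
fn writeReport(allocator: std.mem.Allocator, file: std.fs.File) !void {
    var line = std.ArrayList(u8).init(allocator);
    // Deallocation must succeed, so deinit belongs in a defer:
    // it runs on every exit path and cannot fail.
    defer line.deinit();

    try line.appendSlice("report\n");

    var bw = std.io.bufferedWriter(file.writer());
    try bw.writer().writeAll(line.items);

    // Flushing allocates disk resources and may fail, so it gets an
    // explicit `try` at the end, not a defer.
    try bw.flush();
}
```

If the flush is missing, the bug is consistent and shallow; if the deinit is missing, it's a leak a test allocator will report. Neither failure mode hides behind implicit control flow.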
Thanks! This was an eye-opening reply!