Friction in programming language design

LOOK AT YOUR OWN EXAMPLE. This syntax means something, and not using it means something; what you are suggesting is to make the two indistinguishable without external tools. How the hell is that not harder to read?

This is extremely frustrating. You do not seem to understand what we are trying to convey; we have explained multiple reasons why this is a bad idea, and it seems to be going over your head.

At this point I think this conversation is ridiculous.

It seems as though you are used to languages with exceptions. If you prefer that way of doing things, that’s fine, but it has its issues, and Zig doesn’t want those issues. End of discussion.

I’ll explain it again.
Explicit syntax for error handling makes the code clearer to read, especially outside an IDE or editor with LSP support, which is very important.
Not everyone uses an IDE.
Not everyone uses an LSP.
Even people who often do don’t like having to rely on them for something that can be so trivially expressed with syntax.
Most importantly, many developers, especially maintainers, read code outside of an IDE/LSP, such as in pull requests; in those situations, understanding the full extent of what code is doing is very important.
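A minimal sketch (made-up names, not from this thread) of what that explicit syntax conveys even in a plain diff or grep: the ! says the function can fail, and each try marks exactly where a failure can propagate from.

```zig
const std = @import("std");

// Hypothetical app-layer helper: the signature and the `try`s carry the
// error story on their own, no LSP needed.
fn parsePort(text: []const u8) !u16 {
    const trimmed = std.mem.trim(u8, text, " \n");
    const port = try std.fmt.parseInt(u16, trimmed, 10); // can fail here
    if (port == 0) return error.ReservedPort; // or here
    return port;
}
```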

Furthermore, having explicit syntax allows more control over what the code is actually doing. This kind of explicitness is something that Zig is built around; it is fundamental to the language, and it’s not going to change at this point. The syntax might change, semantics might change, but it will still be explicit.

ZLS is not an official part of the Zig project

Ah. Zig has a grammar, and the compiler requires a language that implements that grammar.

However, the compiler can already infer things - variables, parameters, etc. - and error handling often produces errors because people haven’t grokked the error system.

The error system, at the lower levels of a system, is often explicit and discrete. As we build layer upon layer, from low-level library to middle integration library that abstracts the lower levels, we eventually bubble up to the application layer.

At the application layer, we have a whole stack of underlying dependencies, which to the average person is totally hidden from view–requirements, errors, everything but the functional endpoints they need to use.

The unique combination of error sets is a total unknown. People, on average, do not know what the possible errors are, where they come from, nor how to handle them…typically, on average, at the application layer. And they don’t care, because how can they learn all of it and still be productive?
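To make the layering concrete, here is a small sketch (all names made up) of how error sets combine as calls bubble up; the application layer ultimately faces the union of everything below it:

```zig
// Low-level library: explicit, discrete error set.
const ReadError = error{ FileNotFound, AccessDenied };

fn readRaw() ReadError![]const u8 {
    return error.FileNotFound; // stand-in for real I/O
}

// Middle layer: its inferred error set is ReadError plus its own
// InvalidSyntax, and the application layer above it sees that whole
// combination without ever having spelled it out.
fn loadParsed() !usize {
    const raw = try readRaw();
    if (raw.len == 0) return error.InvalidSyntax;
    return raw.len;
}
```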

So while I completely understand the need for explicit error handling, a lack of knowledge at the app layer means a developer is more comfortable letting the compiler handle more. There is just no value in knowing what is going on in the constantly shifting sands of frameworks and very deep dependency trees.

Enter the language server. The language server has perfect knowledge. We can separate the Zig language that a person is required to key in from the language that is rendered by an IDE, because the LS can fill in the gaps.

To me, this is a very powerful concept…there is no loss of clarity (LS+IDE, or for the explicit gestapo, zig fmt --explicit_err), and yet compiler errors due to a person miskeying repetitive instructions - explicitly typing what the compiler already knows, a form of matching game, “I know what you know, see?” - would evaporate.

The code itself is thinned. The app layer may not care about a bunch of try. Very likely a high degree of the logic is going to be try, big whoop. The app layer may not care about !. Again, don’t know why it could fail, guess it could.

But people do care: “hey! This function you called could return an error, go put ! in the return type!” … it’s like, why do that to people? !void, at the app layer, says “something bad could happen here.”

This is true for just about every statement at the app layer.

In the application layer of a system, failure is everywhere.

1 Like

You can clearly see when that inference is happening,
as you can with !void; the point is that it’s explicit that it’s inferred.
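A tiny sketch (hypothetical functions) of that visible marker: even when the error set itself is inferred, the ! still tells the reader the function is fallible.

```zig
// The error set is spelled out:
fn explicitSet() error{Overflow}!void {}

// The error set is inferred, but the `!` still marks the function as fallible:
fn inferredSet() !void {}

// No `!`: this function cannot return an error.
fn cannotFail() void {}
```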

Zig explicitly targets these layers.

That’s why inferred error sets exist.

It does NOT have perfect knowledge; to expect it to is silly. That doesn’t mean it can’t be good, really good, but to expect perfection is ridiculous.

You are still ignoring the fact that not everyone uses an IDE or LSP, and even those that do won’t always have access to such tooling. Forcing people to rely on such tooling when it’s realistically common for people to be without it is a terrible idea.

!void → void: some real meaningful thinning right there.
try foo() → foo(): so much thinning, so much less code, it’s incredible. Not only that, it also has less meaning; after all, no one will ever need to read code, why bother making it easy for them.

This is, again, against the philosophy of Zig. Zig wants explicit, readable and MAINTAINABLE code, and knowing what can fail and how is incredibly important to achieving that. Software should care about this; ignoring it makes the resulting software objectively worse. That might not matter for small tools or silly side-quest projects, but if you’re making something that will be used by others, ignoring failure is unacceptable. This is the philosophy of Zig.
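A small sketch (made-up names) of what knowing the failure modes buys you: when the error set is explicit, the caller can handle each case deliberately instead of waving every failure through.

```zig
const std = @import("std");

const Config = struct {
    verbose: bool = false,
};

// Hypothetical loader with a known, explicit error set.
fn loadConfig() error{ FileNotFound, AccessDenied }!Config {
    return error.FileNotFound; // stand-in for real I/O
}

fn run() void {
    const config = loadConfig() catch |err| switch (err) {
        // The set is known, so handling can be exhaustive and deliberate.
        error.FileNotFound => Config{}, // fall back to defaults
        error.AccessDenied => {
            std.log.err("config exists but is not readable", .{});
            return;
        },
    };
    _ = config;
}
```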

Again, this is ridiculous; this is not productive anymore. You are refusing to learn from this, and I don’t see anything for anyone else to gain from this.

The next reply had better be about the actual topic, and not continue this annoying conversation.

1 Like

It’s ok to disagree; let’s tone down the snarkiness.

4 Likes

all good…give it a rest and let the woes of the app layer percolate.

I am not a fan of exceptions. At all.

And…adding try and ! to every function at the app layer isn’t great - it drives needless compiler errors. The compiler already knows; the person does not. This friction isn’t obvious, but it does impact adoption.

knowing what can fail

People don’t. This is a hard fact. So let it swirl around. Give it time. A system built by people is influenced by the reality of the people building the system. Different levels of the system have different standards. When the top two levels of a Zig system have every line “try” and returning !, what have we accomplished?

peace…

1 Like

The only difference between exceptions and what you are suggesting is that exceptions aren’t return values.

That is correct, and while it would be nice to not need it, it is useful; it is unfortunate that it is the easier path for handling errors, so people are guided towards it. But it is common for it to be the correct solution, which is why Zig has try even though you can do catch |err| return err.
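For reference, the two spellings are equivalent (foo here is a hypothetical fallible function); try is just the short form of “propagate this error to my caller”.

```zig
fn foo() error{Boom}!void {
    return error.Boom;
}

// These two functions do the same thing:
fn withTry() !void {
    try foo();
}

fn withCatch() !void {
    foo() catch |err| return err;
}
```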

Just because they aren’t necessary doesn’t mean they aren’t useful. You have been suggesting tooling to get information; these errors are information.

This is true; the language should make possible failures clear, such as by making them part of the return type and requiring that propagation be explicit.

Yes, this is why Zig can infer error sets.

visible error propagation without the use of external tools

I apologise if I was rude, but this is frustrating; at this point we are both repeating ourselves, and you have stopped addressing my actual points.

It seems like neither of us is willing to just let it go, though.

4 Likes

I was at the same point about a year ago. In my game I would always bubble up the errors into the main function, and I was annoyed by having to write try and change function signatures all the time.

But it’s a trap. If you bubble up the error, you are not handling the error. And if you are not handling the error, it could have been a panic in the first place.
Good error handling requires you to think about the error and fix it locally, because otherwise you have no clue what even caused it, let alone how to fix it.
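A sketch of that difference (file name and fallback are made up, assuming the current std file API): in the first version the caller inherits a failure it may know nothing about; in the second, the failure has a concrete local meaning, “no high score yet”.

```zig
const std = @import("std");

// Bubbling: the caller inherits the whole error set of file I/O and parsing.
fn loadHighScoreBubbled() !u32 {
    var buf: [32]u8 = undefined;
    const bytes = try std.fs.cwd().readFile("highscore.txt", &buf);
    return try std.fmt.parseInt(u32, std.mem.trim(u8, bytes, " \n"), 10);
}

// Handling locally: a missing or corrupt file simply means "no high score yet".
fn loadHighScoreLocal() u32 {
    var buf: [32]u8 = undefined;
    const bytes = std.fs.cwd().readFile("highscore.txt", &buf) catch return 0;
    return std.fmt.parseInt(u32, std.mem.trim(u8, bytes, " \n"), 10) catch 0;
}
```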

With these 2 things in mind I reworked my error handling entirely, and nowadays out of my ~2000 functions only 105 return an error union. And only 1% of my lines contain try.

I think this trap is one of the biggest flaws in Zig’s design. Every beginner is taught to handle errors by bubbling them up into main. The biggest contributor to this is the OutOfMemory error. It’s the first error beginners encounter, and the first thing they are taught is to just use try to get rid of it. And it’s only natural that they continue to use this pattern for other errors as well.
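One common alternative (a sketch, not from the post): if the program cannot meaningfully recover from OutOfMemory, handling it as fatal at the call site keeps the error out of every signature above.

```zig
const std = @import("std");

// If the program cannot do anything useful without memory, failing fast here
// is simpler than threading OutOfMemory through every caller.
fn makeGreeting(allocator: std.mem.Allocator, name: []const u8) []u8 {
    return std.fmt.allocPrint(allocator, "hello, {s}!", .{name}) catch
        @panic("out of memory");
}
```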

13 Likes

Hey, it’s all good, I am sorry too. No apologies needed, it is all me.

You, and others, make excellent and fair points that I didn’t explicitly acknowledge, and I should have, and I do now: very, very good points (ZLS is not part of Zig; it would impair non-IDE viewing, code reviews, grepping, etc.).

We were in a recursive loop, perhaps because we are missing context cues and it’s hard to know…I see the Zig community as a friendly place, all smiles and grins, but reflecting, I see that there was more pointedness in my posting than there should have been.

So I am sorry, no apologies needed from you or others!

I would never want to cause discomfort. There is too much in this grand world of ours.

Take Away
For me, the aha moment is that a language server can supplement the language a person needs to know. Inference made explicit, which is relatively novel…ish.

I have always considered a singular compiler grammar with a singular compiler language (implementing said grammar) as a 1:1 pair.

However, with a compiler grammar, it is possible to have an explicit, read-only, system-generated language layered on top of an implicit, read-write language provided by a person…or a generative artificial intelligence…feeding the compiler.

A potential best of many worlds: what does the compiler know, that you need to see, that you don’t want to type?

Whether or not Zig is an appropriate use case, I suspect we will see more of this holy trinity - IDE, language server, generative AI* - an enhancement (?) of the output of the person sitting at the keyboard.

It is an opportunity space…

*the non-deterministic element of LLM/generative AI is still … problematic.

3 Likes