I say this because achieving that level of memory safety is a significant engineering challenge for Zig. Without a borrow checker like Rust’s, Zig relies heavily on developer discipline and runtime checks. How can we ensure it reaches the same level of reliability for mission-critical systems without adding too much complexity?
To be fair, adding a borrow checker is not the solution to memory safety; it's just a way to impose a strong philosophical opinion about ownership, which incidentally prevents some types of memory bugs in some common situations. It's a subtle difference.
Pony's capability restrictions are a different approach again: they prevent some memory safety issues, with the added bonus of not requiring locks to avoid race conditions.
Erlang achieves very strong memory safety by having every object operate in its own hermetically sealed environment. No borrow checker needed.
INTERCAL achieves perfect memory safety by making it almost impossible for the programmer to write a program that compiles, let alone runs.
None of these memory-safe tools solve memory safety. They all provide a restricted subset of ways to build applications, and fence off the more dangerous areas that are needed for a true general-purpose tool aimed at supplementing C.
On the contrary. In my short Rust life, it invited me to write insanely unsafe code!
Maybe this is pedantic, but it seems you've adopted your own definition of "memory safety" that is not a common one. I know of no other definition that says a solution providing memory safety may not impose any restrictions.
I guess you’re saying that you don’t consider any solution to memory safety acceptable, unless there are no restrictions added. Is that right?
Writing correct programs is a significant engineering challenge, yes. Zig is a very capable tool for engineering of significance.
I don't like Rust's ownership-checking mechanism because of how it isolates code into unsafe blocks. To me, it doesn't make much sense: the unsafe parts involve very commonly used data-structure designs, not advanced programming techniques that only a few people should touch. This design makes the experience of writing Rust quite inconsistent, and the mental burden of writing code in the unsafe sections is even heavier.
But I still hope there could be some kind of ownership annotation mechanism, similar to C++'s smart pointers. I often feel uncomfortable when using the C wrapper of RocksDB: its C API seems to always pass pointers, yet some functions take references while others transfer ownership, and I have to read its C++ implementation to find out. Although this could be attributed to documentation maintenance, overall I find it hard to trust that the documentation is up to date.
Another direction I prefer is algebraic effects. This relatively flexible annotation can not only apply to pointers, but can even track the lifetimes of any values, including indices and handles.
They never said anything close to that.
The issue is that none of those languages solve memory safety: you can still do unsafe things, even without a bug or edge case in their safety mechanisms.
Memory safety is a simple concept, but actually preventing or catching all unsafe things in practice is an absurdly difficult problem. A problem that has never been solved.
That is not to say Rust, or any other language, has not made steps in that direction.
What they did say is that the way those languages improve memory safety is by restricting what you can do, and that such restrictions are not conducive to a general-purpose language.
A general-purpose language is not defined by memory safety, so I'm not sure how you arrived at the interpretation that their opinion is "real memory safety doesn't restrict you".
It's more that a truly memory-safe language is not useful if it can't be used to implement most, or even some, things.
But I also think that there won’t be a true general purpose language for a long, long time. So I am not fussed if a memory safe language is not always useful, so long as there is a language that can be used for the task.
One of Zig's goals is to be as widely useful as C, which means it can't be too restrictive. This is much of the reason many avid users of Rust and other languages, e.g. myself and even @matklad, have come to prefer Zig.
Andrew does want to improve memory safety in Zig, but not at the cost of being too restrictive.
Now the thread has moved: what is actually the definition of memory safety?
I would like to clarify this a touch! It’s not that I prefer one over the other, I love them both (Zig And Rust), and I would pick one or the other depending on the context. Rust is great at negative space, disallowing bad things from happening. So, if I have a team-of-teams that needs to ship fast with little coordination, baseline security, and good performance ceiling, I’d pick Rust and its interfaces and borrow checker. Zig is great at positive space, allowing good things to happen. So, if I am in a tightly-knit team that uses static allocation, deterministic simulation testing, and doesn’t need to solve coordination problems via compiler-checked interfaces, I’d pick Zig.
That being said, on pure aesthetic feel, Zig is certainly significantly more elegant than Rust. This alone would push me towards Zig for cases where I don't really care, if Zig were stable :0)
But then again, relative to its time, Rust was a major increase in elegance! It's not a coincidence that Zig's syntax is more or less a subset of Rust's.
Sorry, I shouldn't have put an opinion in your mouth. I intended to mention you as someone who likes Zig, not as someone who likes it more than Rust or dislikes Rust.
Adding another tangent on this, there’s going to be a point in the future where memory safety will start getting more fine-grained hardware checks. For example RISC-V CHERI can be viewed as adding a bunch of CPU instructions that make slices enforced by hardware.
Languages like Zig that have those kinds of structures as first-class citizens could benefit quite a lot, gaining run-time hardware checks for many of the things that are currently software checks or that Rust may already enforce. It'll need a little compiler work, but much less than for pointer-only languages like C.
I have to disagree. If you stick to safe Rust, or you avoid FFI in many managed languages like Java, C#, OCaml, etc, then the language is memory safe.
That depends on how you define a general-purpose language. Many languages, such as those I mentioned with the restrictions I mentioned, are general purpose by most definitions I know of. I know this forum has many people who consider a language not good enough if you can't do everything you can do in C. But many other people and projects do not have that requirement, or do not consider it necessary for a language to be general purpose.
You mean the program is memory safe.
You just conceded that the languages are not: a subset of a language is not a language.
Memory safety is, in fact, a property of programs. Talking about memory-safe languages is, at best, a sort of handwavey shorthand for “languages which make it easy to write memory-safe programs”.
Which is useful, until it isn’t: and it’s exactly in this kind of conversation where it stops being useful.
That's true; I will try to be more precise: languages that support (by design) creating memory-safe programs have solved memory safety in the sense that if you use them within the stated restrictions, the program is memory safe. That may sound like a partial solution, but it is very useful for those who need to ensure their program is memory safe: it provides a solution to that problem. That is normally the problem people are referring to when they talk about memory safety.
The point is that Rust only allows you to write a very small and very opinionated subset of the entire space of memory-safe programs (the subset the Rust memory safety system can prove correct). The downside is that this language design philosophy requires a vast 'semantic surface' and a very strict type system, which makes Rust programs overly 'rigid' and change-resistant (clashing with the ever-changing requirements of actively developed real-world applications). For life-or-death critical systems, or sandboxes that need to run untrusted code, such restrictions are fine and a good thing, but arguably not for regular applications running inside those sandboxes.
Everything you said is true, and those restrictions in Rust may be why we’re using Zig instead (this is true for me). But I think the OP is not asking whether Rust or Zig is currently better, but whether memory safety can be improved in Zig.
I don't have links to the issues, but I believe there are Zig plans to prevent returning the address of local vars, starting with the simple case as a compile-time check, and eventually detecting this in Debug mode for all cases. There are already Debug-mode checks for detecting use-after-free and double-free, as well as leaks. Others may know of additional checks that are planned or being considered.
In general memory safety problems can be detected statically by the compiler (as with a borrow checker), or dynamically at runtime. Zig does some safety checks at compile time, some at runtime in Debug mode only, and some at runtime in Debug or ReleaseSafe mode. My perception is that Zig is mainly focused on the runtime checks, because the limitations imposed by compile time checks add restrictions that conflict with the requirement to allow complete freedom to use the hardware as the programmer sees fit. (Note that this is not a requirement for all programming languages, in spite of them being considered “general purpose”. EDIT: In the case of Rust, it is only a requirement for unsafe Rust.)
Perhaps there will be compile time ways of doing additional checks without adding such restrictions in the future, but doing all desirable checks at compile time without restrictions is still an unsolved problem and whether it can be done in the future remains to be seen.
And it covers only Rust's definition of memory safety, which may not even be what people generally agree on. It protects generally against corruption, but especially with async Rust, getting code that does not leak memory seems to be a real issue. Valgrind to the rescue, like with C or C++ …
That's actually already implemented!
It’s quite easy to trick though, but still better than nothing.
… well, that escalated quickly
Gotta love the internet
I don’t have a crystal ball either, but I think the fix for memory safety is likely to come from future hardware solutions, not software.
I agree that “memory safety” (however you define it) is a property of systems and programs - not languages.
I don't agree that mission-critical implies either safe or reliable. They often get packaged together, but they are orthogonal requirements. Plenty of mission-critical things have extremely short life expectancy and low safety requirements; they just need to do their job better than the next guy's.
Since I was one of the people who stirred this thing up by mentioning mission critical computer programs, I think I should step in and say something.
In my understanding, memory safety itself is already a very broad and complicated topic to study and to approach. It covers a lot of areas: memory leaks, overflows, out-of-range access, data races, etc. Some programming languages impose more restrictions in some areas than others to get closer to memory safety; some use fewer restrictions but provide better tools to help programmers achieve the same goal. That concerns compile-time checks only, though.

Now for runtime checks. With the technology we have today, it is still almost impossible to achieve even a small part of memory safety without runtime checks, for example out-of-range detection. Neither Rust nor Zig can do that statically. In Rust, `my_list[i]` is always a runtime check, even in --release mode; to get rid of the check you need `unsafe { my_list.get_unchecked(i) }`. Similarly in Zig, to drop the runtime check you may need `-O ReleaseFast`.
Computers also face a lot of hardware reliability issues, which are not memory safety issues caused by programming languages or programmers. Sometimes people spend days or even weeks debugging what they think is a memory safety issue, only to find out the problem is caused by unreliable hardware. The only solution is reliable hardware, such as a motherboard and memory with ECC.
error: could not compile `serde_core` (lib)
Caused by: process didn't exit successfully: `/home/qs/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/bin/rustc ... ...
...
(signal: 11, SIGSEGV: invalid memory reference)
This is an example of perfectly good code failing to compile due to memory corruption. It has been discussed several times in the Rust community; replacing a memory stick solved the problem. Zig, any other programming language, and any program can have similar issues. When a program crashes, there could be a hardware reliability issue instead of a bug in the source code.
Sometimes, especially in mission-critical computers, reverting to a previous version of the software can temporarily work around a hardware reliability issue too. I am not kidding. The recent event of Airbus computer systems being affected by solar radiation is one example. While waiting for Airbus to provide a permanent solution, many airlines reverted the software to a previous version. The reason is that mission-critical computers, especially lightweight and embedded systems, often use additional specific chips to process and store certain data when implementing new features. (Remember those good old days when the Japanese-market NES had more chips than the American-market version, for better sound and graphics? Same idea…) So the old version, which simply does not use that specific chip and does not require the data affected by solar radiation, works normally, although the new features do not.
In sum, the safety of a mission-critical computer is a very broad topic; memory safety is just a part of it, and memory safety itself is already very difficult. As I already mentioned in the "Future dreams" post ( Future dreams - #14 by IceGuye ), the Zig community attracts a lot of people who want to learn more about computer systems programming, computer hardware, embedded development, etc. Zig is a pretty good gateway into those different low-level fields. When people actually go to work in a mission-critical computing field, that knowledge transfers easily, which will help them write better programs and reduce bugs, no matter which programming language they use. Andrew also said something similar in his post here: Zig vs Rust vs Odin - #7 by andrewrk .