Since I was one of the people who stirred this thing up by mentioning mission critical computer programs, I think I should step in and say something.
In my understanding, memory safety is already a very broad and complicated topic on its own. It covers many areas: memory leaks, buffer overflows, out-of-range access, data races, etc. Some programming languages impose more restrictions in some of these areas than others to get closer to memory safety; some impose fewer restrictions but provide better tools to help programmers reach the same goal. That covers compile-time checks only, though. Now, about runtime checks: with the technologies we have today, it is still almost impossible to achieve even a small part of memory safety, such as out-of-range detection, without runtime checks. Neither Rust nor Zig can do that. For example, in Rust, `my_list[i]` is always bounds-checked at runtime, even in a `--release` build. To get rid of the runtime check, you need `unsafe { my_list.get_unchecked(i) }`. Similarly, in Zig, to skip the runtime safety checks you need to build with `-O ReleaseFast`.
Computers also face a lot of hardware reliability issues, which are not memory safety issues caused by programming languages or programmers. Sometimes people spend days or even weeks debugging what they believe is a memory safety issue, only to find out that the problem is caused by unreliable hardware. The only solution is reliable hardware, such as a motherboard and memory with ECC.
```
error: could not compile `serde_core` (lib)
Caused by: process didn't exit successfully: `/home/qs/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/bin/rustc ... ...
...
(signal: 11, SIGSEGV: invalid memory reference)
```
This is an example of perfectly valid code failing to compile due to memory corruption. It has been discussed several times in the Rust community. Replacing a memory stick solved the problem. Zig, any other programming language, and indeed any program can run into similar issues. When a program crashes, the cause could be a hardware reliability issue rather than a bug in the source code.
Sometimes, especially with mission critical computers, reverting to a previous version of the software can temporarily work around a hardware reliability issue too. I am not kidding. The recent event of Airbus computer systems being affected by solar radiation is one example. While waiting for Airbus to provide a permanent solution, many airlines reverted the software to a previous version. The reason is that mission critical computers, especially lightweight and embedded systems, often add specific chips to process and store certain data when implementing new features. (Remember the good old days when the Japanese market NES had more chips than the American market version, for better sound and graphics? Same idea…) So the old version, which simply does not use that specific chip and does not depend on the data affected by the solar radiation, works normally, although the new features do not.
In sum, the safety of a mission critical computer is a very broad topic; memory safety is just one part of it, and memory safety by itself is already very difficult. As I already mentioned in the "Future dreams" post ( Future dreams - #14 by IceGuye ), the Zig community attracts a lot of people who want to learn more about computer systems programming, computer hardware, embedded development, etc. Zig is a pretty good gateway into those different low-level fields. When people actually go to work in a mission critical computing field, that knowledge transfers easily and helps them write better programs with fewer bugs, no matter which programming language they use. Andrew also said something similar in his post here: Zig vs Rust vs Odin - #7 by andrewrk .