White House says “Future Software Should Be Memory Safe”

In this report, the White House says that future software should be written in memory-safe languages. Rust is mentioned, but Zig is not. It is uncommon to see official government statements about programming.


Is unsafe Rust against the law now? :grinning:


I’ve already looked into this. You have to read the report carefully:

“While in some distinct situations, using a memory safe language may not be feasible – this report examines space systems as a unique edge case and identifies memory safe hardware and formal methods as complementary ways to achieve a similar outcome”

This includes techniques like destructors and RAII. The C++ community held two conference talks about this, which I watched after the NSA published guidelines following the pipeline hack. The wording is very inclusive because a hard requirement would be entirely unfeasible; you’d have to abandon assembly code by definition. Thankfully, they aren’t that incompetent.

Slight correction here - they talk about software all the time. It may be uncommon to see mainstream information about this, but there’s an entire industry surrounding this sort of thing (such as the intelligence community).


I’d also like to throw in an additional two cents here. A big part of the reason we’re seeing all this conversation is years of neglectful coding practices and design choices, even when the utilities have existed. Take stack-overflow errors and the use of shadow stacks, for example.

When I google “shadow stack flag”, everything I get on the subject is purely explanatory and has no direct relationship to the flag I am looking for. After adding the term GCC, I get an article on the subject from Red Hat that finally reveals that -mshstk works for GCC and Clang: Use compiler flags for stack protection in GCC and Clang | Red Hat Developer

Why does it take that much googling to find a fundamental safety feature? Furthermore, why am I not linked immediately to official documentation? This is absurd, frankly. If you think I’m overstressing the importance of this, try searching for flags that are even slightly off the typical O1/O2/O3 variety and see how easy it is for yourself. Make no mistake - it’s not an accident that we’ve ended up in this situation.

Zig, on the other hand, bounds-checks your indexing in debug builds by default. This is just one example of the many defaults that Zig gets right.

What we’re really talking about here is the legacy of C/C++ and the unyielding commitment to not break existing code. Thousands of hours have gone into building utilities like shadow stacks over the past 40 years, and you can hardly find the thing even if you know what it’s called. Essentially, low-level developers are getting pulled into this conversation when it’s really a C and C++ issue that needs to be addressed. That’s just my opinion, and I’m sticking to it.

Furthermore, issues like alloc/free are not the core of the problem here… if they were, this wouldn’t be an issue. Memory usage becomes more complicated when you involve things like threads. I watched a talk by Sean Parent where he described an invisible memory leak in Photoshop that couldn’t be located for (I believe he said) years because a second thread was involved. If we want to handle these issues more seriously, we need to tackle topics like developing for multi-threaded environments to actually make progress on these important problems.


To be fair, UAF is fairly dangerous.


Sure, no doubt - but what causes that? Is it because someone forgot to write the word free? Hardly - especially not in languages that have destructors. It’s other factors that make the issue more complicated, such as shared pointers with atomic reference counting, or threads that don’t know how to coordinate on tasks. We’re really talking about a systems problem at that level.

In terms of these more basic checks, however, you can have sensible defaults that will handle most of your issues right out of the gate or by working with different memory models.

Take Python, for instance - is it memory safe? It tries to do all the cleanup for you, but I’ve segfaulted in Python code plenty of times even though it’s garbage collected. It’s really a broader conversation about design - that’s my point.


No, it’s because someone wrote the word free and forgot that there was still a reference to that object, which is a very easy mistake to make. To be fair, Zig reduces the likelihood of this somewhat with the = undefined pattern, but it is not enough.


Right, and that’s a fair point; I was talking about memory leaks more than UAF in that example. I still consider even the example you provided to be a matching problem: pairing each allocation with its free. That’s not hard in a linear program that allocates things, uses them, and then frees them.

Most user-visible programs are not that, and in these cases, you are usually better off using an arena allocator.

Exactly - and that’s my point. We’re talking about vastly more complicated programs where other factors are actually causing the confusion (such as multi-threading) and your example perfectly illustrates what I am saying when I mention that different memory models are often neglected.


Not quite: they say that software should be memory safe. Writing software in a memory-safe language is but one way to achieve memory safety. The report also lists a couple of other approaches:

  • using formal methods to prove software correct (and hence memory safe)
  • using memory-safe hardware (e.g., CHERI)

So there are different avenues to memory safety. For example, here’s how we at TigerBeetle tackle this:

  • Using Zig mostly solves spatial memory safety issues.
  • To solve most temporal memory safety issues, TigerBeetle:
    • Allocates all data statically: you can’t mess up malloc & free if there’s no malloc implementation in your address space :point_up:
    • Uses a single thread, so there’s no worrying about data races.
  • On top of that, there’s randomized simulation testing (which is a form of formal method). For TigerBeetle, it’s not enough to be merely memory safe; correctness is also table stakes, and simulation testing is what allows us to achieve a reasonable degree of confidence in correctness.
  • Finally, all of the above is multiplied by a huge number of assertions, which reliably crash the process if something somewhere goes terribly wrong despite our efforts (e.g., miscompiles have historically been caught by assertions down the line). While we intentionally don’t try to protect against random RAM errors, and they can cause us to lose data (so ECC RAM is required), chances are that in practice some assertion will catch the nasty thing.

Do you run multiple copies of the process?

No; we solve a problem where the fastest solution is actually single-threaded. For highly contended workloads (like ours), synchronizing multiple threads quickly ends up being slower than running on a single thread.


I guess even safe Rust (until it gets fixed) :wink:
But yeah, the clickbait aside, it is an example of how difficult safety is.


(that’s google’s problem. google search is going downhill so fast, it’s freefalling.

on kagi, “shadow stack flag” produces your redhat link as its second result. its summary even says “GCC and Clang add shadow stack through this flag: -mshstk.” it then gives this and this as its next results.

it’s really just google being google.)


You are forgetting that Google search results are based on user context.

In my case, using Chromium, “shadow stack flag” returned this and this, along with other relevant links.

I looked at your links, but they’re just more proof of what I was saying. Recall that the flag to enable shadow stacks in GCC and Clang is -mshstk. You won’t find a single reference to it in either document you posted. That’s exactly my point - you can easily find explanatory information that doesn’t actually tell you how to compile code with shadow stacks. For a feature that addresses a massive security concern (buffer-overflow attacks), there’s still too much digging to be done.

That’s great news, but it doesn’t address the issue either, in my opinion. Kagi is a paid search engine that gives you a limited number of free searches per month. For a large-scale security problem, why is this the best option? It still seems absurd to me.

With all due respect, I’m going to bow out of this subject after one last comment, because my time here is focused on helping people move to Zig rather than on addressing what’s wrong with C/C++. If you don’t agree that this is an issue, then I will respectfully disagree and let you have the last word. I’ll conclude with the following…

There’s a lot that could have been done for a very long time to make these languages more secure (especially given that debug builds exist). I’ll give two straightforward examples about memory safety:

  • Smart pointers have unchecked dereferences - so it’s only smart until you want to use it. Recall that std::unique_ptr’s default constructor provides nullptr.

  • std::optional has the same problem with its arrow and dereference operators - so why bother making it optional if there’s no enforcement that the value is actually set?

There’s plenty more, but I honestly don’t feel like going down this road any further. My point is that security and sensible debugging are not top priorities, and we can see the result of that.


An explicit reference to -mshstk is shown in the sixth link, but I agree with you.


A possible solution to the problem is to have “live” books, like the Computer Networks: A Systems Approach series on GitHub.

Computer engineering evolves too fast, so traditional books are not enough.

I have a few books that are already obsolete. They are still useful (and well written) but they are missing modern improvements.


oh, I do agree wholeheartedly. i think there are topics which should, no, must have strong and deep reference works filled with good and bad examples and techniques and patterns and their histories. safety is definitely a top candidate for such work. though, i’m not sure we have even discovered a satisfactory format for presenting such works.

i’m just saying that google is a proprietary service with its own agendas which now, sadly, do exclude providing a good search service. so while it is the most obvious example, i don’t feel it works well anymore – it has become difficult to separate the effects of the low quality of google’s engine from the actual availability of material.