Bun is being ported from Zig to Rust

That’s the point, so that the compiler can optimize it. If you remove the unreachable and replace it with a panic, all those assertions still run in the final release build, dragging performance down, and it signals you’re not very confident your program doesn’t have undefined behavior.

yes but afaik it defaults to this behavior, allow_assert is defined here

In my opinion the only reasonable approach when you have falsifiable asserts that the test suite doesn’t cover, and that you can’t figure out how to trip, is to make ReleaseSafe builds and use those to get actionable bug reports.

And, more generally, software meant to be exposed to the internet (such as a Node replacement) should probably not ship as ReleaseFast, especially when it’s so full of bugs.


This needs more than a pinch of “it depends” and “know what you are doing”, but “always” asserts also have their place: The Use Of assert() In SQLite

(to clarify, this statement has zero bits of information related to Bun, and deals exclusively with design space around assertions :stuck_out_tongue: )


I mean, if users are able to trip those asserts, but you can’t, what other solution do you suggest other than giving them a build that can then provide you with a repro?

There are two problems here:

  • How you figure out what’s wrong and fix the bug
  • How you damage control existing deployments of software with the bug (including future or not yet discovered bugs)

My thrust here is that the second problem exists, and that “make falsified assertions not actually crash the default build of the program delivered to users” can sometimes be a legitimate part of the solution there.

For the first one, my ideal normal-style approach is actually telemetry on pre-releases — as much as people like to hate on telemetry (for different and valid reasons), getting a database of backtraces when stuff crashes is invaluable. (tiger style approach is to treat any “user hit a bug” situation as a “bug in the fuzzer”).

I can see the point in general terms, but I’m having trouble coming up with a concrete example.

That said, I will 100% accept the criticism that I made a universal statement while thinking mainly about the Bun usecase.

Actually, as I’m typing this I’m realizing that an example could be a videogame where a falsified assert means that some secondary thing (a non-key item, for example) stops working, and that’s it. As you try to figure out the bug, it would make sense to temporarily turn off the assert so that people don’t lose game progress if they try to interact with that thing. Although in this context it would probably still be better to cut a bit above the rot (i.e. remove or replace the item), because even if the disabled assert won’t induce UB, by construction your understanding is not complete enough to really know what else that issue could enable (e.g. in a game it could turn out that the bug can be leveraged to duplicate other items).

In Bun’s case specifically (feel free to skip this part :^)) this is not what happened, though: the setting still defaults to silencing asserts in ReleaseFast, and afaik it’s been like this for a long time.


I think this is more of a difference in philosophy.
While, yes, you should have a test suite which covers pretty much everything (which applies to you), it’s also very hard to actually test every possible edge case.
After all, weird and unexpected cases exist, and thinking of every possible case in software you ship to users is imo impractical.

So one has to instead look at the possible outcomes of the assertion being wrong and either crashing or not crashing, and depending on the problem domain one or the other can be the right answer.

This discussion essentially boils down to the same one the C++ community had about contracts: whether they should be recoverable or not, and when they should be on or off. Different industries all answered differently, which is why you can choose the strategy for every compilation unit independently of the optimisation level.

In my opinion that does go for every kind of software which has network access, even ones which provably have no bugs.

Yeah! I think the example that SQLite devs have in mind is similar, but worse: at some point, somewhere, there will be (an extremely poorly and irresponsibly designed) self-driving car whose self-driving component will live in the same process as the component that uses an SQLite database to serve the ads to the “driver”, and it would be bad for a crash in this secondary SQLite database to bring down the whole safety-critical process. SQLite is embeddable, which means that people might embed it into places where such things shouldn’t be embedded, but that’s not something SQLite devs can control.

A direct-experience example is rust-analyzer. It’s not a problem at all if the completion list misses some particularly crazy item, or a niche refactor isn’t shown in the menu for a tricky file. In contrast, if the whole thing crashes on you, it can be pretty painful. So we have three modes in rust-analyzer:

  • By default, programming errors in parts that can’t corrupt the data are just silently logged (including things like out of bounds access, taking advantage of Rust’s support for unwinding on panic), and there’s a relatively fine-grained granularity of recovery.
  • In the nightly release, a dialog box is additionally shown, explaining how to submit an issue.
  • In from-source builds, the thing crashes outright (this is primarily aimed at rust-analyzer developers, to motivate them to go and fix bugs rather than work on the newest shiny feature).

Though, this setup is a product of me treating rust-analyzer as an experiment for the eventual goal of refactoring rustc in place. If I knew that it’d be a long-lived project, I’d invest way more time into building a smith and making sure there are no bugs.

It’s not that Bun put Zig on the map — it’s Zig that built the foundation for Bun.