How to manage mutex locks during the writer's writes?

Hi folks! It’s nice to write here again!

I’m trying to learn about managing locks for the following “logging writer”: The Zig Pastebin (a bit contrived version)

My “inspiration” for using un/lockStdErr() locks is due to std.log here: zig/lib/std/log.zig at 3e9ab6aa7b2d90c25cb906d425a148abf9da3dcb · ziglang/zig · GitHub
I have a few questions (as a complete newbie to mutex stuff):

  1. Why should I even bother with locks?
  2. What is the use and meaning of the nosuspend keyword?
  3. Should I consider doing clearWrittenWithEscapeCodes() of std.Progress right after the lock as does the std.debug.lockStdErr()?
  4. Why do Zig libraries call it lockStdErr(), e.g. in std.debug or std.Progress, when it seemingly has nothing to do with stderr and is rather a generic std.Thread.Mutex.lock()?

Because it’s a lock that protects the stderr. In order to access stderr, you have to acquire this lock.

If multiple threads try to write to a stream without acquiring the lock, their messages get mangled.
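To make the mangling concrete, here is a minimal sketch (my own example, assuming the Zig 0.13-era `std.io` and `std.Thread` APIs): several threads print to stderr, and each one takes the std-wide lock via `std.debug.lockStdErr()` so its line comes out whole. Remove the lock/unlock pair and the lines from different threads can interleave mid-write.

```zig
const std = @import("std");

fn worker(id: usize) void {
    // Take the std-wide stderr lock so this line doesn't
    // interleave with other threads (or with std.log).
    std.debug.lockStdErr();
    defer std.debug.unlockStdErr();
    std.io.getStdErr().writer().print("worker {d} says hi\n", .{id}) catch {};
}

pub fn main() !void {
    var threads: [4]std.Thread = undefined;
    for (&threads, 0..) |*t, i| {
        t.* = try std.Thread.spawn(.{}, worker, .{i});
    }
    for (threads) |t| t.join();
}
```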

It has to do with async, which is currently disabled, and it’s unclear whether it will return to the language, so don’t worry about it.


Because it’s a lock that protects the stderr.

So here the stderr in the lock names serves a purely informative purpose?

What about the third question:

  1. Should I consider doing clearWrittenWithEscapeCodes() of std.Progress right after the lock as does the std.debug.lockStdErr()?

Also, is there any difference between doing:

var mutex: std.Thread.Mutex = .{};

// somewhere inside the struct's method:
    mutex.lock();
    // ...stuff
    mutex.unlock();

vs

// somewhere inside the struct's method:
    std.Progress.lockStdErr();
    // ...stuff
    std.Progress.unlockStdErr();

? (with the caveat that Progress’s lockStdErr() doesn’t have the clearWrittenWithEscapeCodes() catch {}; line)

Correct.

Everyone that is trying to access the stream needs to compete for the same lock, otherwise the lock is meaningless. If only your code is using the stream, and you consistently use the same lock, then it’s fine. But the standard library doesn’t know about your custom lock, so you can’t use the std’s logging functions concurrently with your own code.

Any mutation to the stream needs to be protected by a lock. If you’re asking which function to call to clear the stream, I don’t know.
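To illustrate the point about competing for the same lock, here is a sketch (my own example, assuming the Zig 0.13-era APIs): a private mutex only serializes the code that actually uses it, while `std.log` keeps using the std-internal lock.

```zig
const std = @import("std");

// A private mutex only serializes the callers that use it.
var my_mutex: std.Thread.Mutex = .{};

fn logMine(msg: []const u8) void {
    my_mutex.lock();
    defer my_mutex.unlock();
    std.io.getStdErr().writeAll(msg) catch {};
}

// std.log knows nothing about my_mutex, so a call like this from
// another thread can still interleave with logMine(). Using
// std.debug.lockStdErr()/unlockStdErr() in logMine instead would make
// both sides contend on the same std-wide lock.
fn logViaStd() void {
    std.log.info("hello from std.log", .{});
}

pub fn main() void {
    logMine("hello from my own lock\n");
    logViaStd();
}
```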


In addition to @LucasSantos91’s answers, there is a really informative video by @kristoff discussing printing and logging.


Also, don’t forget that you can defer std.Progress.unlockStdErr() - this is helpful because you’ll always unlock as you exit scope.


Does it mean that the only reason I need this:

std.Progress.lockStdErr();

is to be sure that if I’m going to use std.Progress as part of my process, I won’t get a race condition? (i.e. I don’t write to stderr at the same time as Progress when it is active in another thread.)

Thank you guys @AndrewCodeDev @dimdin for your suggestions.


Yes. Well, the point of any lock is to prevent race conditions, but std.Progress.lockStdErr just locks the normal lock that the whole library is using, so you’re preventing a race condition with the whole std library, not just std.Progress.


Can I take an address of that mutex the std lib is using? So that I can do something like:

fn safeWrite(writer: anytype, mutex_ptr: *std.Thread.Mutex, bytes: []const u8) !void {
    mutex_ptr.lock();
    defer mutex_ptr.unlock();
    try writer.writeAll(bytes);
}

pub fn main() !void {
    try safeWrite(std.io.getStdErr().writer(), &std.default_stderr_mutex, "Hello world!");
}

It seems it is at the end of Progress.zig:

var stderr_mutex: std.Thread.Mutex = .{};

but it is not public.

Also, I discovered this in std.debug:

pub fn getStderrMutex() *std.Thread.Mutex {
    @compileError("deprecated. call std.debug.lockStdErr() and std.debug.unlockStdErr() instead which will integrate properly with std.Progress");
}

But I honestly didn’t get the message :grimacing: :slight_smile:.


If you go to the log source file in the standard library, you’ll see examples of how they lock and prepare std err.

If you look at my original post, I pointed out the place in the log which you’re probably referring to. However, I’m interested specifically in the mutex address, and if it isn’t available (i.e. intentionally not exposed), I wonder why…


Yup, I glossed over that line when I read your post - I see that now. We are talking about the same thing.

From a design standpoint, I can understand why they would choose not to expose the lock itself. One potential reason is related to proper use of getters/setters: they can change the implementation and details of the lock (or the locking procedure) in future versions without people depending on the exact type of lock. They probably have other reasons, but if you’re looking for at least one plausible reason, I’d say that’s a possibility.


Wasn’t there a flag to force single-threaded mode? Maybe the lock would then just default to doing nothing. But that is just an idea; I haven’t checked.


I was thinking of Mutex’s SingleThreadedImpl.
Because Mutex already has a SingleThreadedImpl, you can use it in single-threaded builds without it causing overhead, so that is not really a reason to avoid the mutex or to make its implementation switchable.
But hiding it could still be useful if future versions need to lock other things.


Ok, thank you guys. Your answers helped a lot. Indeed, as of Zig 0.13.0, the stderr_mutex was defined as:

var stderr_mutex: std.Thread.Mutex = .{};

and as of now it is a std.Thread.Mutex.Recursive, so the implementation has indeed changed. Also, running unlockStdErr() at the end of the fn instead of using defer unlockStdErr() is the kind of mistake that led me to a deadlock, since I returned from somewhere before reaching the end of the fn.
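The early-return deadlock described above can be sketched like this (my own hypothetical example, not code from the thread): the buggy version returns before reaching unlock(), so the next lock() on the same mutex blocks forever; the defer version unlocks on every exit path.

```zig
const std = @import("std");

var mutex: std.Thread.Mutex = .{};

// Buggy: the early return skips unlock(), so the mutex stays held
// and the next caller's lock() deadlocks.
fn writeBuggy(msg: []const u8) void {
    mutex.lock();
    if (msg.len == 0) return; // oops: mutex is still locked here!
    std.io.getStdErr().writeAll(msg) catch {};
    mutex.unlock();
}

// Fixed: defer runs on every exit path, including early returns.
fn writeSafe(msg: []const u8) void {
    mutex.lock();
    defer mutex.unlock();
    if (msg.len == 0) return; // defer still unlocks on this path
    std.io.getStdErr().writeAll(msg) catch {};
}

pub fn main() void {
    writeSafe(""); // early return, but the mutex is released
    writeSafe("no deadlock\n"); // so this second call can still lock
}
```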