Working with Bit Flags

I’m working with the Windows Console API and need to call SetConsoleMode using bit flags. After scraping together some example code around the GetConsoleMode API, I managed to cobble together a working solution:

const windows_c = @cImport(@cInclude("windows.h"));
const h_in = std.io.getStdIn().handle;
var original_input_mode: windows_c.DWORD = 0;
_ = windows_c.GetConsoleMode(h_in, &original_input_mode);

var requested_in_mode = original_input_mode;
requested_in_mode &= ~@as(windows_c.DWORD, @bitCast(windows_c.ENABLE_LINE_INPUT));
requested_in_mode &= ~@as(windows_c.DWORD, @bitCast(windows_c.ENABLE_ECHO_INPUT));
requested_in_mode &= ~@as(windows_c.DWORD, @bitCast(windows_c.ENABLE_PROCESSED_INPUT));
requested_in_mode &= ~@as(windows_c.DWORD, @bitCast(windows_c.ENABLE_QUICK_EDIT_MODE));
requested_in_mode &= ~@as(windows_c.DWORD, @bitCast(windows_c.ENABLE_MOUSE_INPUT));
requested_in_mode |= windows_c.ENABLE_VIRTUAL_TERMINAL_INPUT;

However, I did some digging on the forums and found folks recommending std.bit_set.IntegerBitSet. Here’s a second working example using it:

const windows_c = @cImport(@cInclude("windows.h"));
const h_in = std.io.getStdIn().handle;
var original_input_mode: windows_c.DWORD = 0;
_ = windows_c.GetConsoleMode(h_in, &original_input_mode);

var bitset = std.bit_set.IntegerBitSet(@sizeOf(c_ulong) * 8){ .mask = original_input_mode };
bitset.unset(@ctz(windows_c.ENABLE_LINE_INPUT));
bitset.unset(@ctz(windows_c.ENABLE_ECHO_INPUT));
bitset.unset(@ctz(windows_c.ENABLE_PROCESSED_INPUT));
bitset.unset(@ctz(windows_c.ENABLE_QUICK_EDIT_MODE));
bitset.unset(@ctz(windows_c.ENABLE_MOUSE_INPUT));
bitset.set(@ctz(windows_c.ENABLE_VIRTUAL_TERMINAL_INPUT));
const requested_in_mode = bitset.mask;

As you can tell, I’m quite rusty working with bit flags and I could use some help. What is Zig’s idiomatic approach to working with bit flags like these?

The idiomatic solution is to just use a packed struct:

const ConsoleMode = packed struct(windows_c.DWORD) {
    ENABLE_PROCESSED_INPUT: bool, // least significant bit 0x0001
    ENABLE_LINE_INPUT: bool, // 0x0002
    ENABLE_ECHO_INPUT: bool, // 0x0004
    ...
    ifYouNeedSomePaddingBits: u2 = undefined,
    ...

    pub const empty: ConsoleMode = @bitCast(@as(windows_c.DWORD, 0)); // Might be handy
};

Then using it becomes trivial:

var requested_in_mode: ConsoleMode = @bitCast(original_input_mode);
requested_in_mode.ENABLE_LINE_INPUT = false;
requested_in_mode.ENABLE_ECHO_INPUT = false;
...
requested_in_mode.ENABLE_VIRTUAL_TERMINAL_INPUT = true;

SetConsoleMode(..., @bitCast(requested_in_mode)); // On usage

Thank you! Here’s what I came up with:

const ConsoleInputMode = packed struct(windows_c.DWORD) {
    ENABLE_PROCESSED_INPUT: bool, // 0x0001
    ENABLE_LINE_INPUT: bool, // 0x0002
    ENABLE_ECHO_INPUT: bool, // 0x0004
    ENABLE_WINDOW_INPUT: bool, // 0x0008
    ENABLE_MOUSE_INPUT: bool, // 0x0010
    ENABLE_INSERT_MODE: bool, // 0x0020
    ENABLE_QUICK_EDIT_MODE: bool, // 0x0040
    ENABLE_EXTENDED_FLAGS: bool, // 0x0080
    ENABLE_AUTO_POSITION: bool, // 0x0100
    ENABLE_VIRTUAL_TERMINAL_INPUT: bool, // 0x0200
    padding_bits: u22 = undefined,
};

var requested_in_mode: ConsoleInputMode = @bitCast(original_input_mode);
requested_in_mode.ENABLE_PROCESSED_INPUT = false;
requested_in_mode.ENABLE_LINE_INPUT = false;
requested_in_mode.ENABLE_ECHO_INPUT = false;
requested_in_mode.ENABLE_MOUSE_INPUT = false;
requested_in_mode.ENABLE_QUICK_EDIT_MODE = false;
requested_in_mode.ENABLE_VIRTUAL_TERMINAL_INPUT = true;

I have a follow-up question about the padding bits. I attempted to leave them off, but that creates a compile error. Is there any way to deduce the number of padding bits necessary to satisfy the compiler?


There isn’t a really easy way. You might think you could do metaprogramming like this:

const std = @import("std");

const windows_c = struct {
    const DWORD = u32;
};

const ConsoleInputMode = packed struct(windows_c.DWORD) {
    const info = @typeInfo(ConsoleInputMode).@"struct";
    const used_fields = info.fields.len - 1;
    const needed_padding = @bitSizeOf(info.backing_integer.?) - used_fields;
    const PaddingInt = std.meta.Int(.unsigned, needed_padding);

    ENABLE_PROCESSED_INPUT: bool, // 0x0001
    ENABLE_LINE_INPUT: bool, // 0x0002
    ENABLE_ECHO_INPUT: bool, // 0x0004
    ENABLE_WINDOW_INPUT: bool, // 0x0008
    ENABLE_MOUSE_INPUT: bool, // 0x0010
    ENABLE_INSERT_MODE: bool, // 0x0020
    ENABLE_QUICK_EDIT_MODE: bool, // 0x0040
    ENABLE_EXTENDED_FLAGS: bool, // 0x0080
    ENABLE_AUTO_POSITION: bool, // 0x0100
    ENABLE_VIRTUAL_TERMINAL_INPUT: bool, // 0x0200
    padding_bits: PaddingInt = undefined,
};

pub fn main() !void {
    const original_input_mode: u32 = 0;

    var requested_in_mode: ConsoleInputMode = @bitCast(original_input_mode);
    requested_in_mode.ENABLE_PROCESSED_INPUT = false;
    requested_in_mode.ENABLE_LINE_INPUT = false;
    requested_in_mode.ENABLE_ECHO_INPUT = false;
    requested_in_mode.ENABLE_MOUSE_INPUT = false;
    requested_in_mode.ENABLE_QUICK_EDIT_MODE = false;
    requested_in_mode.ENABLE_VIRTUAL_TERMINAL_INPUT = true;

    std.debug.print("requested_in_mode: {}\n", .{requested_in_mode});
}

However, that won’t work because the calculation has a dependency loop: it depends on the type ConsoleInputMode, which in turn depends on the result of the calculation.

paddingbits.zig:15:5: error: dependency loop detected
    const PaddingInt = std.meta.Int(.unsigned, needed_padding);
    ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
referenced by:
    ConsoleInputMode: paddingbits.zig:11:33
    main: paddingbits.zig:33:28

It would be interesting if the compiler had a more fine-grained implementation that didn’t trigger such a dependency loop, because strictly speaking the calculation only depends on the number of fields, which isn’t influenced by the calculation itself. But because it does trigger the dependency loop, there isn’t a simple implementation.


One way (which I don’t find very satisfactory) would be to define the fields in a normal struct and use that to create the actual packed struct. The reason I don’t find it very good is that I prefer structs that weren’t created through metaprogramming, because they can contain methods and declarations (and are easier to read).

const std = @import("std");

const windows_c = struct {
    const DWORD = u32;
};

const ConsoleInputModeFields = struct {
    ENABLE_PROCESSED_INPUT: bool, // 0x0001
    ENABLE_LINE_INPUT: bool, // 0x0002
    ENABLE_ECHO_INPUT: bool, // 0x0004
    ENABLE_WINDOW_INPUT: bool, // 0x0008
    ENABLE_MOUSE_INPUT: bool, // 0x0010
    ENABLE_INSERT_MODE: bool, // 0x0020
    ENABLE_QUICK_EDIT_MODE: bool, // 0x0040
    ENABLE_EXTENDED_FLAGS: bool, // 0x0080
    ENABLE_AUTO_POSITION: bool, // 0x0100
    ENABLE_VIRTUAL_TERMINAL_INPUT: bool, // 0x0200
};

const BackingInt = windows_c.DWORD;

fn PaddingInt() type {
    const info = @typeInfo(ConsoleInputModeFields).@"struct";
    const used_fields = info.fields.len;
    const needed_padding = @bitSizeOf(BackingInt) - used_fields;
    return std.meta.Int(.unsigned, needed_padding);
}

const ConsoleInputMode = blk: {
    const info = @typeInfo(ConsoleInputModeFields).@"struct";
    const len = info.fields.len + 1;
    var fields: [len]std.builtin.Type.StructField = undefined;
    for (info.fields, 0..) |f, i| {
        fields[i] = f;
        fields[i].alignment = 0;
    }
    fields[len - 1] = .{
        .name = "padding_bits",
        .type = PaddingInt(),
        .default_value_ptr = &@as(PaddingInt(), undefined),
        .is_comptime = false,
        .alignment = 0,
    };
    break :blk @Type(.{
        .@"struct" = .{
            .layout = .@"packed",
            .fields = &fields,
            .decls = &.{},
            .backing_integer = BackingInt,
            .is_tuple = false,
        },
    });
};

pub fn main() !void {
    const original_input_mode: u32 = 0;

    var requested_in_mode: ConsoleInputMode = @bitCast(original_input_mode);
    requested_in_mode.ENABLE_PROCESSED_INPUT = false;
    requested_in_mode.ENABLE_LINE_INPUT = false;
    requested_in_mode.ENABLE_ECHO_INPUT = false;
    requested_in_mode.ENABLE_MOUSE_INPUT = false;
    requested_in_mode.ENABLE_QUICK_EDIT_MODE = false;
    requested_in_mode.ENABLE_VIRTUAL_TERMINAL_INPUT = true;

    std.debug.print("requested_in_mode: {}\n", .{requested_in_mode});
}

So basically I don’t think this solution is very good / worth it. I think the best way would be to just go with your original solution and type the correct number there.

If the number is wrong you already get a compile error, so you can’t really pick the wrong number (at least once you have tried to compile for the target, which matters in cases where the backing type changes between targets).

Hmm, you make a good argument. I agree that I don’t typically like structs generated with metaprogramming (or at comptime), for the same readability concern. I lean pretty heavily on ZLS, and it gets tripped up fairly quickly by these types of structs.

With that being said, I’m a hypocrite and spent some time chipping away at a general solution using metaprogramming :sweat_smile:. Here’s where I landed:

fn PaddedStruct(_struct: type) type {
    const old_fields = @typeInfo(_struct).@"struct".fields;
    var new_fields: [old_fields.len + 1]std.builtin.Type.StructField = undefined;
    for (0..old_fields.len) |i| {
        new_fields[i] = .{
            .name = old_fields[i].name,
            .type = old_fields[i].type,
            .default_value_ptr = old_fields[i].default_value_ptr,
            .is_comptime = old_fields[i].is_comptime,
            .alignment = 0,
        };
    }

    new_fields[old_fields.len] = .{
        .name = "padding_bits",
        .type = std.meta.Int(.unsigned, @bitSizeOf(windows_c.DWORD) - old_fields.len),
        .default_value_ptr = null,
        .is_comptime = false,
        .alignment = 0,
    };

    return @Type(.{
        .@"struct" = .{
            .layout = .@"packed",
            .backing_integer = windows_c.DWORD,
            .fields = &new_fields,
            .decls = &.{},
            .is_tuple = false,
        },
    });
}
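
For illustration, here is a hypothetical usage sketch (it assumes the ConsoleInputModeFields struct from your earlier reply and an original_input_mode DWORD already read via GetConsoleMode; neither is shown again here):

const ConsoleInputMode = PaddedStruct(ConsoleInputModeFields);

var requested_in_mode: ConsoleInputMode = @bitCast(original_input_mode);
requested_in_mode.ENABLE_LINE_INPUT = false;
requested_in_mode.ENABLE_VIRTUAL_TERMINAL_INPUT = true;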

It seems like we landed on near-identical solutions :person_shrugging: At least it was a fun thought experiment. Thanks for your input!


At least you know DWORD is 32 bits, right? Is it ever not? So you subtracted your 10 bits to get the padding. Could be worse.
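
If you want that assumption checked rather than trusted, one minimal sketch (assuming the windows_c import from earlier) is a comptime guard that fails the build if DWORD ever stops being 32 bits:

comptime {
    // Fail compilation if the assumed DWORD width is wrong for the target.
    if (@bitSizeOf(windows_c.DWORD) != 32) @compileError("expected DWORD to be 32 bits");
}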


If you don’t want to deal with padding, you could just coerce the smaller backing int into a windows_c.DWORD when you need it:

const ConsoleInputMode = packed struct {
    ENABLE_PROCESSED_INPUT: bool, // 0x0001
    ENABLE_LINE_INPUT: bool, // 0x0002
    ENABLE_ECHO_INPUT: bool, // 0x0004
    ENABLE_WINDOW_INPUT: bool, // 0x0008
    ENABLE_MOUSE_INPUT: bool, // 0x0010
    ENABLE_INSERT_MODE: bool, // 0x0020
    ENABLE_QUICK_EDIT_MODE: bool, // 0x0040
    ENABLE_EXTENDED_FLAGS: bool, // 0x0080
    ENABLE_AUTO_POSITION: bool, // 0x0100
    ENABLE_VIRTUAL_TERMINAL_INPUT: bool, // 0x0200

    const BackingInt = @typeInfo(ConsoleInputMode).@"struct".backing_integer.?;

    pub fn dword(self: ConsoleInputMode) windows_c.DWORD {
        const self_int: BackingInt = @bitCast(self);
        return self_int;
    }
};

I’m having trouble getting your code snippet to compile.

First off, your snippet as-is doesn’t compile. I had to replace the references to ConsoleInputMode with @This().
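
For reference, here’s roughly what that change looks like inside the struct (just a sketch; @This() names the enclosing struct, so it doesn’t depend on the struct’s outer name being in scope):

    const BackingInt = @typeInfo(@This()).@"struct".backing_integer.?;

    pub fn dword(self: @This()) windows_c.DWORD {
        const self_int: BackingInt = @bitCast(self);
        return self_int; // the narrower backing int coerces to DWORD
    }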

Secondly, when I did start using it, I got my original compile error:

src\Windows.zig:38:47: error: @bitCast size mismatch: destination type 'Windows.init.ConsoleInputMode' has 10 bits but source type 'c_ulong' has 32 bits
    var requested_in_mode: ConsoleInputMode = @bitCast(original_input_mode);

Here’s a minimal example:

const std = @import("std");
const windows_c = struct {
    const DWORD = u32;

    pub fn GetConsoleMode(_: ?*anyopaque, _: [*c]DWORD) c_int {
        return 0;
    }
    pub fn SetConsoleMode(_: ?*anyopaque, _: DWORD) c_int {
        return 0;
    }
};

const ConsoleInputMode = packed struct {
    ENABLE_PROCESSED_INPUT: bool, // 0x0001
    ENABLE_LINE_INPUT: bool, // 0x0002
    ENABLE_ECHO_INPUT: bool, // 0x0004
    ENABLE_WINDOW_INPUT: bool, // 0x0008
    ENABLE_MOUSE_INPUT: bool, // 0x0010
    ENABLE_INSERT_MODE: bool, // 0x0020
    ENABLE_QUICK_EDIT_MODE: bool, // 0x0040
    ENABLE_EXTENDED_FLAGS: bool, // 0x0080
    ENABLE_AUTO_POSITION: bool, // 0x0100
    ENABLE_VIRTUAL_TERMINAL_INPUT: bool, // 0x0200

    const BackingInt = @typeInfo(ConsoleInputMode).@"struct".backing_integer.?;

    pub fn dword(self: ConsoleInputMode) windows_c.DWORD {
        const self_int: BackingInt = @bitCast(self);
        return self_int;
    }
};

pub fn main() !void {
    const h_in = std.io.getStdIn().handle;
    var original_input_mode: windows_c.DWORD = 0;
    _ = windows_c.GetConsoleMode(h_in, &original_input_mode);

    var requested_in_mode: ConsoleInputMode = @bitCast(original_input_mode);
    requested_in_mode.ENABLE_PROCESSED_INPUT = false;
    requested_in_mode.ENABLE_LINE_INPUT = false;
    requested_in_mode.ENABLE_ECHO_INPUT = false;
    requested_in_mode.ENABLE_MOUSE_INPUT = false;
    requested_in_mode.ENABLE_QUICK_EDIT_MODE = false;
    requested_in_mode.ENABLE_VIRTUAL_TERMINAL_INPUT = true;

    if (0 == windows_c.SetConsoleMode(h_in, requested_in_mode.dword())) {
        return error.SetConsoleModeFailure;
    }
}

I’m a beginner, but I reckon one problem is that you define the struct within the main function for no specific reason.

const std = @import("std");
const windows_c = struct {
    const DWORD = u32;

    pub fn GetConsoleMode(_: anytype, _: [*c]DWORD) c_int {
        return 0;
    }
    pub fn SetConsoleMode(_: anytype, _: DWORD) c_int {
        return 0;
    }
};

const ConsoleInputMode = packed struct {
    ENABLE_PROCESSED_INPUT: bool, // 0x0001
    ENABLE_LINE_INPUT: bool, // 0x0002
    ENABLE_ECHO_INPUT: bool, // 0x0004
    ENABLE_WINDOW_INPUT: bool, // 0x0008
    ENABLE_MOUSE_INPUT: bool, // 0x0010
    ENABLE_INSERT_MODE: bool, // 0x0020
    ENABLE_QUICK_EDIT_MODE: bool, // 0x0040
    ENABLE_EXTENDED_FLAGS: bool, // 0x0080
    ENABLE_AUTO_POSITION: bool, // 0x0100
    ENABLE_VIRTUAL_TERMINAL_INPUT: bool, // 0x0200

    const BackingInt = @typeInfo(ConsoleInputMode).@"struct".backing_integer.?;

    pub fn fromDword(input_mode: windows_c.DWORD) ConsoleInputMode {
        const int: BackingInt = @truncate(input_mode);
        return @bitCast(int);
    }

    pub fn dword(self: ConsoleInputMode) windows_c.DWORD {
        const self_int: BackingInt = @bitCast(self);
        return self_int;
    }
};

pub fn main() !void {
    const h_in = std.io.getStdIn().handle;
    var original_input_mode: windows_c.DWORD = 0;
    _ = windows_c.GetConsoleMode(h_in, &original_input_mode);

    var requested_in_mode: ConsoleInputMode = .fromDword(original_input_mode);
    requested_in_mode.ENABLE_PROCESSED_INPUT = false;
    requested_in_mode.ENABLE_LINE_INPUT = false;
    requested_in_mode.ENABLE_ECHO_INPUT = false;
    requested_in_mode.ENABLE_MOUSE_INPUT = false;
    requested_in_mode.ENABLE_QUICK_EDIT_MODE = false;
    requested_in_mode.ENABLE_VIRTUAL_TERMINAL_INPUT = true;

    // if (0 == windows_c.SetConsoleMode(h_in, requested_in_mode.dword())) {
    //     return error.SetConsoleModeFailure;
    // }
}

Good catch. I amended my previous reply.

@bitCast size mismatch: destination type 'Windows.init.ConsoleInputMode' has 10 bits but source type 'c_ulong' has 32 bits

I don’t understand: what’s wrong with this error? It’s perfect, IMO; it tells you that you messed up the padding, so you do a bit of manual arithmetic (one time) to fix the padding size, and then you’re good forever. It’s not like the Windows DWORD size changes that often :stuck_out_tongue:

When I first looked at your question I cobbled together a similar error myself using @compileError, but then I realized the default behavior is already exactly what I want.
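
Something along these lines (just a sketch of the kind of check I mean, assuming std, windows_c, and the ConsoleInputMode type from above are in scope, and placed next to the struct definition):

comptime {
    const actual = @bitSizeOf(ConsoleInputMode);
    const expected = @bitSizeOf(windows_c.DWORD);
    if (actual != expected) @compileError(std.fmt.comptimePrint(
        "ConsoleInputMode has {d} bits but DWORD has {d}",
        .{ actual, expected },
    ));
}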

Btw, you can set the size of a packed struct, which makes the size assertion explicit:

const ConsoleInputMode = packed struct(windows_c.DWORD) {
    ENABLE_PROCESSED_INPUT: bool, // 0x0001
    ...
};

With padding_bits: u21 = undefined, the error is:

padding.zig:8:49: error: backing integer type 'u32' has bit size 32 but the struct fields have a total bit size of 31
const ConsoleInputMode = packed struct(windows_c.DWORD) {
                                       ~~~~~~~~~^~~~~~

I wasn’t arguing that the error was incorrect. I was merely showing @milogreg that I was receiving this error from their code snippet.

Fair point. As others like @theo have pointed out, the odds of DWORD ever changing away from 32 bits are basically zero, since Windows supports relatively few architectures.

I was more curious about the general case, where the bit size isn’t consistent across architectures. For example, I was always taught that Linux supports many exotic architectures dating back decades, so assuming c_uint is always 32 bits can be incorrect.
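
For what it’s worth, one way to hedge against that, sketched here with the flag count (10) as the only hard-coded number, is to derive the padding width from the backing type instead of writing u22 directly:

const ConsoleInputMode = packed struct(windows_c.DWORD) {
    ENABLE_PROCESSED_INPUT: bool, // 0x0001
    ENABLE_LINE_INPUT: bool, // 0x0002
    ENABLE_ECHO_INPUT: bool, // 0x0004
    ENABLE_WINDOW_INPUT: bool, // 0x0008
    ENABLE_MOUSE_INPUT: bool, // 0x0010
    ENABLE_INSERT_MODE: bool, // 0x0020
    ENABLE_QUICK_EDIT_MODE: bool, // 0x0040
    ENABLE_EXTENDED_FLAGS: bool, // 0x0080
    ENABLE_AUTO_POSITION: bool, // 0x0100
    ENABLE_VIRTUAL_TERMINAL_INPUT: bool, // 0x0200

    // The width tracks the target's DWORD size; only the field count (10) is hard-coded.
    padding_bits: std.meta.Int(.unsigned, @bitSizeOf(windows_c.DWORD) - 10) = undefined,
};

There’s no dependency loop in this case because the field type only refers to windows_c.DWORD and the literal 10, never to ConsoleInputMode itself.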


EDIT: Sorry, this was already explained above, I just missed it.

Ahh I see, that’s a fair point (and sounds like quite a headache!)

This doesn’t solve the issue of needing to know how wide DWORD is relative to the fields, but I just want to add that you don’t have to name the padding_bits field. This is perfectly valid:

const ConsoleInputMode = packed struct(windows_c.DWORD) {
    ENABLE_PROCESSED_INPUT: bool, // 0x0001
    ENABLE_LINE_INPUT: bool, // 0x0002
    ENABLE_ECHO_INPUT: bool, // 0x0004
    ENABLE_WINDOW_INPUT: bool, // 0x0008
    ENABLE_MOUSE_INPUT: bool, // 0x0010
    ENABLE_INSERT_MODE: bool, // 0x0020
    ENABLE_QUICK_EDIT_MODE: bool, // 0x0040
    ENABLE_EXTENDED_FLAGS: bool, // 0x0080
    ENABLE_AUTO_POSITION: bool, // 0x0100
    ENABLE_VIRTUAL_TERMINAL_INPUT: bool, // 0x0200

    _: u22,
};