Bitshifting fails

Hi,
I have problems with bitshifting:

const std = @import("std");

pub const BitBoardState = u64;

pub const Square = enum(u8) {
    a1, b1, c1, d1, e1, f1, g1, h1,
    a2, b2, c2, d2, e2, f2, g2, h2,
    a3, b3, c3, d3, e3, f3, g3, h3,
    a4, b4, c4, d4, e4, f4, g4, h4,
    a5, b5, c5, d5, e5, f5, g5, h5,
    a6, b6, c6, d6, e6, f6, g6, h6,
    a7, b7, c7, d7, e7, f7, g7, h7,
    a8, b8, c8, d8, e8, f8, g8, h8,
    // `undefined` is a Zig keyword, so the sentinel needs @"..." quoting.
    @"undefined" = 255,
};


pub fn is_occupied(board: BitBoardState, square: Square) bool {
    const state: BitBoardState = @as(u64, 1) << @as(u8, @intFromEnum(square));
    return (board & state) == state;
}

pub fn main() !void {
    const bitboard: BitBoardState = 0;
    
    const occupied = is_occupied(bitboard, Square.a1);
    
    std.debug.print("Is occupied: {any}\n", .{occupied});
}

This fails with the following error:

prog.zig:9:49: error: expected type 'u6', found 'u8'
const state: BitBoardState = @as(u64, 1) << @as(u8, @intFromEnum(square));
                                            ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~
prog.zig:9:49: note: unsigned 6-bit int cannot represent all possible unsigned 8-bit values
referenced by:
    main: prog.zig:16:33
    posixCallMainAndExit: zig/lib/std/start.zig:660:37
    4 reference(s) hidden; use '-freference-trace=6' to see all references

When I change this to:

pub fn is_occupied(board: BitBoardState, square: Square) bool {
    const state: BitBoardState = @as(u64, 1) << @intCast(@intFromEnum(square));
    return (board & state) == state;
}

then it works.
Why does @intCast magically fix the problem?

Thanks in advance!

This is because the right operand of a bit shift must be an unsigned integer whose bit width is log₂ of the left operand's bit width (see the bit shift operators in the docs). Your left operand is defined to be a u64, and log₂(64) = 6, so Zig expects a u6 as the right operand.

With the @as(u8, @intFromEnum(square)) expression, you are trying to use a u8 where that u6 is expected, which is a lossy conversion, so Zig does not allow it implicitly. @intCast “fixes” the problem because that is exactly what @intCast communicates: it permits the lossy conversion, with you, the programmer, taking responsibility that the conversion is in fact valid. If square had a value of 64 or larger, you would get illegal behaviour (safety-checked in Debug and ReleaseSafe modes).
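As an aside, you can derive the shift-amount type from the board type instead of hard-coding it: std.math.Log2Int(u64) is u6. A sketch along those lines (the Shift alias is my own naming, not from your code):

```zig
const std = @import("std");

pub const BitBoardState = u64;
/// std.math.Log2Int(u64) == u6: exactly the type Zig wants as a shift amount.
const Shift = std.math.Log2Int(BitBoardState);

pub fn is_occupied(board: BitBoardState, square: u8) bool {
    // @intCast still takes responsibility that `square` fits in a u6,
    // but the target width now follows BitBoardState automatically.
    const state = @as(BitBoardState, 1) << @as(Shift, @intCast(square));
    return (board & state) == state;
}
```

If you later widen or narrow the board type, the shift-amount type adjusts with it.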


The reason for this rule is that it caps the maximum value of the right operand at one less than the bit width of the left operand.

Shifting by the full bit width or more is technically undefined behaviour at the hardware level: even though many CPUs in practice fill with zeros as you would expect, not every CPU guarantees that.
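If you ever do need a shift amount that may equal or exceed the bit width, the standard library's std.math.shl handles that case for you and yields 0 instead of tripping the safety check. A minimal sketch:

```zig
const std = @import("std");

pub fn main() void {
    // Plain `<<` with an amount >= 64 on a u64 would be illegal behaviour;
    // std.math.shl accepts any integer amount and returns 0 in that case.
    const a = std.math.shl(u64, 1, @as(u8, 3));
    const b = std.math.shl(u64, 1, @as(u8, 70));
    std.debug.print("{} {}\n", .{ a, b }); // 8 and 0
}
```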


Tangential to your question, but I think the standard library's EnumSet might help you accomplish what you seem to be doing, without needing to worry about the bit-twiddling yourself.
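For example, a minimal sketch with a shortened Square enum (a dense 64-square enum works the same way; note that EnumSet wants the enum without the 255 sentinel so that it stays one bit per square):

```zig
const std = @import("std");

// Shortened for the sketch; the real enum lists all 64 squares.
const Square = enum { a1, b1, c1, d1 };

pub fn main() void {
    // EnumSet stores one bit per member and does the shifting internally.
    var board = std.EnumSet(Square).initEmpty();
    board.insert(.b1);
    std.debug.print("b1 occupied: {}\n", .{board.contains(.b1)});
    std.debug.print("c1 occupied: {}\n", .{board.contains(.c1)});
}
```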


Yo, another chess program!
That is why I defined a 6-bit Square in mine.
I also stuffed it into a packed union to avoid all kinds of casts and to get easy increment / decrement / comparison.

In general, the math in a chess program, with all its tricks, is not easy given Zig's type strictness.

I also contemplated making the bitboard a packed union for convenience functions, but that makes initialization cumbersome, because Zig has no operator overloading (you cannot write bitboard = 42), so I kept that one a raw u64.

pub fn test_bit_64(u: u64, bit: u6) bool {
    const one: u64 = @as(u64, 1) << bit;
    return u & one != 0;
}

pub fn contains_square(bitboard: u64, sq: Square) bool {
    return test_bit_64(bitboard, sq.u);
}

pub const Square = packed union {
    pub const Enum = enum(u6) {
        a1, b1, c1, d1, e1, f1, g1, h1,
        a2, b2, c2, d2, e2, f2, g2, h2,
        a3, b3, c3, d3, e3, f3, g3, h3,
        a4, b4, c4, d4, e4, f4, g4, h4,
        a5, b5, c5, d5, e5, f5, g5, h5,
        a6, b6, c6, d6, e6, f6, g6, h6,
        a7, b7, c7, d7, e7, f7, g7, h7,
        a8, b8, c8, d8, e8, f8, g8, h8,
    };
    /// The enum value.
    e: Enum,
    /// The numeric value
    u: u6,
};
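For completeness, a hypothetical usage of that union: both fields alias the same 6 bits, so reading one view after writing the other needs no casts.

```zig
// Hypothetical usage; `Square` and `contains_square` are defined above.
const sq = Square{ .e = .e4 };       // write the enum view...
const bitboard: u64 = 1 << 28;       // ...bit 28 happens to be e4
const occupied = contains_square(bitboard, sq); // reads sq.u, the numeric view
```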

Although Zig is quite smart, I tried a few of these std things in my program, and they were always slower than handcrafted bit twiddling.
I also noticed that with @intFromEnum etc.: somehow, somewhere, CPU cycles got lost.

Thank you for your explanations and insights! I will try out some of the advice you gave me! :slight_smile: