Hi,
I’m trying to get used to the new IO.Reader system in Zig 0.15.
I’m reading the pixel data of a FITS astronomy image (BITPIX = -32, i.e. 32-bit big-endian floats).
I have two versions of my file-reading code.
Version 1
This is the simple version that allocates a buffer and reads directly into it:
pub fn readImageData(
allocator: std.mem.Allocator,
file: std.fs.File,
hdu: PrimaryHDU,
) ![]u8 {
const buf = try allocator.alloc(u8, hdu.data_size);
errdefer allocator.free(buf);
try file.seekTo(hdu.data_offset);
var reader = file.reader(&.{});
try reader.interface.readSliceAll(buf);
return buf;
}
This reads the correct number of bytes, but the float32 values come out as garbage after decoding:
=== Image Data Analysis ===
Pixel type: f32
Total pixels: 2334784
Min value: 0.000000
Max value: 78012644000000000000000000000000000.000000
Mean value: 34545638973025680000000000000.000000
These numbers are impossible for this type of image, so something is off in the read.
Version 2
This second version works, but it allocates twice, which (I assume) should not be necessary:
pub fn readImageData(
allocator: std.mem.Allocator,
file: std.fs.File,
hdu: PrimaryHDU,
) ![]u8 {
var buf = try allocator.alloc(u8, hdu.data_size);
errdefer allocator.free(buf);
try file.seekTo(hdu.data_offset);
var file_reader = file.reader(buf);
const reader = &file_reader.interface;
try file_reader.seekTo(hdu.data_offset);
allocator.free(buf);
buf = try reader.allocRemaining(allocator, .unlimited);
return buf;
}
This version is not ideal (it allocates twice, and is probably not how the IO interface is meant to be used),
but the pixel values come out correct:
=== Image Data Analysis ===
Pixel type: f32
Total pixels: 2334784
Min value: 0.000000
Max value: 64340.543000
Mean value: 893.507832
So my float decoding is fine; the problem is in how the bytes are read.
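My best guess at a single-allocation version is below. The idea (which I haven't confirmed) is that File.Reader keeps its own position, so the seek has to go through the reader's seekTo, not through file.seekTo on the raw handle. The PrimaryHDU struct here is just a stand-in with the two fields my code uses:

```zig
const std = @import("std");

// Stand-in for my actual HDU record, with only the fields used here.
const PrimaryHDU = struct {
    data_offset: u64,
    data_size: usize,
};

pub fn readImageData(
    allocator: std.mem.Allocator,
    file: std.fs.File,
    hdu: PrimaryHDU,
) ![]u8 {
    const buf = try allocator.alloc(u8, hdu.data_size);
    errdefer allocator.free(buf);

    var file_reader = file.reader(&.{});
    // Seek through the reader, not the raw file handle, so the
    // reader's own notion of the current position moves too.
    try file_reader.seekTo(hdu.data_offset);
    try file_reader.interface.readSliceAll(buf);
    return buf;
}
```

Is this roughly what the intended single-allocation usage looks like?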
I am still trying to wrap my head around the new IO interface, but I'm struggling.
Does anyone see what I am missing here, or what the ideal approach would be in this case?
Thank you very much.