Hey again - so beyond allocating something in place (which would be a direct memory leak), you can’t.
Literals and pointers to literal things are always const. You can try to cheat fate with @constCast but that will have undefined behaviour because the underlying literal itself is constant.
Essentially, that memory has to live somewhere away from that call-site for it to be a mutable slice. If you dislike the aesthetics of it and it's a small request, you can just return the array directly and copy out.
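To make that concrete, here's a rough sketch of the "return the array directly and copy out" route (the names here are placeholders, not anything from your code):
fn makeData() [1]u8 {
    var data = [1]u8{0};
    data[0] = 1; // fine: this is local, named, mutable storage
    return data; // returned by value, so the caller gets its own copy
}

test "copy out and mutate" {
    var data = makeData(); // the bytes now live in the test's frame
    const slice: []u8 = &data; // so a mutable slice can point at them
    slice[0] += 1;
}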
The thing is… to my eye… this looks correct to me. The memory layout is clear, you can see the qualifiers clearly, etc… I’ve been writing Zig for a while and I’ve come to think of this as clear instead of verbose.
I mean my untrained eye would have expected &.{1} to be allocated on the stack just like byte is, but in the stack scope of send_and_recv, so &.{1} would disappear when send_and_recv returns.
(not sure if I am using the correct terms here but I hope that makes sense)
The annoyance comes from me having to pick a name for something that I fundamentally don’t care about, and I would rather not double the number of functions in my API just so I can have a const parameter…
Yeah, I gotcha. I’m going to write out a few things that you are probably aware of but through the context of Zig.
Conceptually, based on the function you’ve provided, this isn’t even really doable by design.
pub fn something(data: []u8) void {
    data[0] = 1;
}
Say for example I could use some kind of “in situ” syntax:
something(mutable &[1]u8{ 0 });
Well… now what? How do I get my array back? That function returns void so I just kinda threw it down a hole. This may be because the example you're posting is a simplified one, but as written it's hard to tell what the goal is. Not trying to nitpick ya, I just want to make sure I'm following what you're after.
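For contrast, here's roughly what getting the data back looks like when the memory has a name (reusing the something from the snippet above; the test is just illustrative):
const std = @import("std");

test "named storage gives the data back" {
    var buf = [1]u8{0};
    something(&buf); // writes through the slice into buf
    try std.testing.expectEqual(@as(u8, 1), buf[0]);
}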
Fundamentally, remember what a slice is - it’s a length and a pointer. Slices don’t contain stuff, they point to stuff and tell you how much stuff is there. So if I create a slice to something, it needs to be in a defined location.
Creating literals like in your example actually does put it somewhere! It's just in the data segment of your binary. If you want to see this in action, please read this thread: Diving deep into anonymous struct literals - #3 by bnl1
So as it stands, as far as slices are concerned, I don’t think this really makes sense. Hopefully you get what I’m driving at here.
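If you want to see the constness directly, here's a tiny check (I'm using a typed array literal rather than .{1}, but the idea is the same):
const std = @import("std");

test "literal pointers are const" {
    const p = &[_]u8{1};
    // p is *const [1]u8: a pointer to read-only data baked into the binary,
    // which is why it won't coerce to a mutable []u8
    try std.testing.expect(@TypeOf(p) == *const [1]u8);
}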
Okay, tell me if I am wrong here - is your intention not to modify an array that the caller holds onto, but instead give your function the memory that it’s going to work on? Hence the void returning function?
The void was a simplification; the true use case is to enable a zero-allocation API. The data I am providing the function is both sent over a network interface and used as the memory area to de-serialize data into on the response.
fn send_and_recv(port: *Port, data: []u8, timeout_us: u32) !u16 {
    // send data (ethernet frame) through a ring of subdevices, returning back to me on the same network interface
    try port.send(data);
    // recv the data back, modified by the devices.
    // we recv that data back into the "data" parameter so that
    // we can have a zero-allocation API (recv'd data is always the same size as sent data)
    // the devices that did something increment a counter in the frame that is used
    // as a basic check that everything is ok
    // this is called the working counter "wkc"
    const wkc: u16 = try port.recv_with_timeout(data, timeout_us);
    return wkc;
}
test {
    // in this scenario, we don't care what the subdevices did to the frame
    // other than the working counter
    var byte: [1]u8 = .{1};
    // (port and timeout_us are assumed to be set up elsewhere)
    const wkc = try send_and_recv(&port, &byte, timeout_us);
    if (wkc == 0) {
        // uh oh! maybe the ethernet cable broke or a subdevice lost power!
        // do something about it!
        handle_subdevice_error();
    }
}
That example definitely helps - now I get your issue. Please clarify one more thing for me: how big is the data that object is pointing to, in bytes, usually? Is there a ceiling? I have a few suggestions but I want to make sure I'm on track.
The details come into play when you want to have multiple frames “in flight”.
The protocol has an idx byte in the frame that I can use to identify the frames, so I can re-order un-ordered frames should they occur (some NICs have the potential to re-order frames when there are multiple in their recv buffer). This means you can have a maximum of 128 identified simultaneous frames in flight.
So in more detail:
Send frames (each frame always less than 1514 bytes). Use unique idx for each.
Recv frames, identify each using idx and deserialize them into appropriate memory areas.
My first pass at the API was that I allocated memory in the port struct to enable 128 frames, but then I figured out I could just give the port the memory it needed from the stack for each call to send_and_recv (there are mutexes inside the port to make sure each frame receives a unique idx).
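Heavily simplified (and not the actual implementation, which also has to avoid handing out an idx that is still in flight), the idx side of the port does something like this:
const std = @import("std");

const Port = struct {
    idx_mutex: std.Thread.Mutex = .{},
    next_idx: u8 = 0,

    // hand out a unique idx, wrapping at 128, so up to 128 frames
    // can be identified in flight at once
    fn claimIdx(self: *Port) u8 {
        self.idx_mutex.lock();
        defer self.idx_mutex.unlock();
        const idx = self.next_idx;
        self.next_idx = (self.next_idx + 1) % 128;
        return idx;
    }
};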
Okay, so if it were me designing this, I’d split that data argument into two arguments - one for send and one for recv (or something like that). You wouldn’t have to change much, but here’s why I’m saying this: it would achieve the behavior you’re looking for, afford another option, and even keep the same implementation you have now. I’m just going to focus on the slice arguments:
In this case, you can still get the behaviour you have now with the following:
// send and read to the same buffer
const wkc = try send_and_recv(data[0..], data[0..]);
// only send and don't read because we don't care
const wkc = try send_and_recv(&.{1}, null);
// or send/recv on different buffers
const wkc = try send_and_recv(send[0..], recv[0..]);
The issue you're hitting is a design problem, and I think that's why it's bugging you on a fundamental level. Since that one argument has a dual purpose, you lose the ability to just send literal data. If you instead give each argument a single purpose, you can still pass the same buffer to both and split the difference.
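Just to make it concrete, here's a rough sketch of what that split signature could look like. The ?[]u8 for recv and the scratch buffer for the "don't care" case are my guesses, not something from your code:
fn send_and_recv(port: *Port, send: []const u8, recv: ?[]u8, timeout_us: u32) !u16 {
    // assumes port.send accepts a []const u8
    try port.send(send);
    if (recv) |buf| {
        // normal case: the response is deserialized into the caller's buffer
        return try port.recv_with_timeout(buf, timeout_us);
    }
    // "don't care" case: drain the response into scratch just to pull out the wkc
    // (1514 is the frame ceiling you mentioned)
    var scratch: [1514]u8 = undefined;
    return try port.recv_with_timeout(scratch[0..send.len], timeout_us);
}
And since send is []const u8, the &.{1} literal from the second example above coerces just fine.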
I will definitely change to this dual argument approach.
And for another reason too, I was getting annoyed when I wanted to send the same data twice, because the data would get modified on receipt, so I had to re-declare the data to send etc.
Absolutely - you can even dispatch to optimized algorithms for one or the other using if (send.ptr == recv.ptr)… could be some interesting optimizations there
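The aliasing check itself is just a pointer-and-length comparison; a tiny sketch (the helper name is made up):
const std = @import("std");

fn sameBuffer(send: []const u8, recv: []const u8) bool {
    // true when the caller reused one buffer for both send and recv
    return send.ptr == recv.ptr and send.len == recv.len;
}

test "detect a reused buffer" {
    const buf: [4]u8 = .{ 1, 2, 3, 4 };
    const other: [4]u8 = .{ 0, 0, 0, 0 };
    try std.testing.expect(sameBuffer(&buf, &buf));
    try std.testing.expect(!sameBuffer(&buf, &other));
}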