It’s not possible to convert that way; you have to create a new slice of the appropriate type (allocating on the heap) and copy the elements over. Something like this:
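A minimal sketch of that allocate-and-copy approach (assuming a recent Zig version with the multi-object `for` loop syntax):

```zig
const std = @import("std");

pub fn main() !void {
    var gpa = std.heap.GeneralPurposeAllocator(.{}){};
    defer _ = gpa.deinit();
    const allocator = gpa.allocator();

    const bytes: []const u8 = "hello";

    // Allocate a fresh []u16 and widen each u8 element into it.
    const wide = try allocator.alloc(u16, bytes.len);
    defer allocator.free(wide);
    for (bytes, wide) |b, *w| w.* = b;

    std.debug.print("{any}\n", .{wide});
}
```

Each `u16` element here simply holds the numeric value of the corresponding byte; no re-encoding happens.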
If you’re looking to interpret each individual u8 value as its equivalent u16 value, then I believe the above answers are correct; but if instead you’re looking to simply bend the memory to your will, then this works:
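For the reinterpret-the-memory case, something like `std.mem.bytesAsSlice` works (a sketch; note the byte length must be a multiple of `@sizeOf(u16)`, and the values you read back depend on host endianness):

```zig
const std = @import("std");

pub fn main() void {
    // Reinterpret the raw bytes as u16s, without copying.
    var bytes = [_]u8{ 0x01, 0x00, 0x02, 0x00 };
    const halves = std.mem.bytesAsSlice(u16, &bytes);

    // On a little-endian host this prints { 1, 2 }.
    std.debug.print("{any}\n", .{halves});
}
```

Because `[]u8` data may not be suitably aligned for `u16`, the result is an under-aligned slice (`[]align(1) u16`), which is fine to read from but worth keeping in mind.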
If you’re asking about UTF-8 → UTF-16, there is a variety of functionality in std.unicode; note that it doesn’t provide complete Unicode support, just what is necessary to work with Windows.
If this is what you are asking, then the solutions up to now are not helpful: UTF-16 is not UTF-8 with larger integers, it is a different encoding.
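A sketch of actual transcoding via std.unicode (the function name here, `utf8ToUtf16LeAlloc`, is from recent std versions; older releases spelled it differently, e.g. `utf8ToUtf16LeWithNull`):

```zig
const std = @import("std");

pub fn main() !void {
    var gpa = std.heap.GeneralPurposeAllocator(.{}){};
    defer _ = gpa.deinit();
    const allocator = gpa.allocator();

    // This re-encodes rather than widening: "é" is two bytes in UTF-8
    // (0xC3 0xA9) but a single u16 code unit (0x00E9) in UTF-16.
    const utf16 = try std.unicode.utf8ToUtf16LeAlloc(allocator, "héllo");
    defer allocator.free(utf16);

    std.debug.print("{any}\n", .{utf16});
}
```

Note the result has fewer code units than the input has bytes whenever multi-byte UTF-8 sequences are involved, which is exactly why a plain element-wise widening gives wrong results.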
Just a minor nitpick: Windows uses WTF-16, which is UTF-16 but allowing unpaired surrogates. So @f-tuason should use the wtf functions in std.unicode, not the utf ones, when dealing with Windows paths.
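For example, something like this (a sketch assuming a recent std; `wtf8ToWtf16LeAllocZ` also appends the NUL terminator that Windows wide-string APIs expect, and the path is purely illustrative):

```zig
const std = @import("std");

pub fn main() !void {
    var gpa = std.heap.GeneralPurposeAllocator(.{}){};
    defer _ = gpa.deinit();
    const allocator = gpa.allocator();

    // WTF-8 -> WTF-16LE: like the UTF variant, but unpaired surrogates
    // round-trip instead of being rejected, matching Windows path semantics.
    const path_w = try std.unicode.wtf8ToWtf16LeAllocZ(allocator, "C:\\temp\\file.txt");
    defer allocator.free(path_w);

    std.debug.print("wide length: {}\n", .{path_w.len});
}
```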
Theoretically, Linux filenames are opaque byte strings, so they can be invalid UTF-8 sequences. But in many cases you have to display them to users or read filenames from somewhere else, and for those cases you need to know the encoding.
Besides, I have also seen filenames that are valid UTF-8 but still wrong, e.g. a Latin letter A followed by a standalone diaeresis rather than a combining diaeresis.
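A related gotcha even with fully correct combining characters: the precomposed and decomposed forms of the same visible character are different byte sequences, so naive comparison fails. A small illustration:

```zig
const std = @import("std");

pub fn main() void {
    // Both are valid UTF-8 and both render as "Ä", but the bytes differ:
    const precomposed = "\u{00C4}"; // LATIN CAPITAL LETTER A WITH DIAERESIS (0xC3 0x84)
    const combining = "A\u{0308}";  // 'A' + COMBINING DIAERESIS (0x41 0xCC 0x88)

    // prints: 2 vs 3 bytes, equal: false
    std.debug.print("{} vs {} bytes, equal: {}\n", .{
        precomposed.len,
        combining.len,
        std.mem.eql(u8, precomposed, combining),
    });
}
```

Treating these as equal requires Unicode normalization, which (as noted above) is beyond what std.unicode provides.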
File name encoding is a constant source of trouble in the real world.