Suggestion: change Duration conversion funcs to f64 (except nanoseconds)

I just rewrote my timing code to use the new std.Io.Clock API, and while the new convenience types Timestamp and Duration are quite nice, I was surprised that the unit conversion functions on Duration take and return integers.

E.g. when I convert a duration to seconds I’m usually also interested in the fractional part (and if not, I can still cast/round to an integer, but there’s no way to get the fractional bits back from an integer).

Of course I can always take a higher-resolution integer, cast to float, and divide to get the wanted time unit with a fractional part (for this, the missing microseconds unit would be much more useful than nanoseconds (too high resolution) or milliseconds (too low resolution, since frame durations usually have a fractional millisecond part)) - but this again adds extra code noise.
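Roughly, the workaround looks like this (toNanoseconds() is just an assumed accessor name here, the point is the cast-and-divide noise):

const std = @import("std");

// Sketch of the manual workaround (not actual stdlib API): take the
// highest-resolution integer value and divide by hand to get fractional
// milliseconds. `toNanoseconds()` is an assumed accessor name.
fn durationToMsF64(frame_duration: anytype) f64 {
    const ns: f64 = @floatFromInt(frame_duration.toNanoseconds());
    return ns / std.time.ns_per_ms;
}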

So my suggestion while the new interfaces are still fresh:

  • toMilliseconds, toSeconds => return f64 instead of i64
  • add the missing toMicroseconds
  • fromMilliseconds, fromSeconds => take f64 instead of i64
  • add the missing fromMicroseconds

Keep the nanoseconds functions as they are, since they’re a special case already anyway (i96 instead of i64), and fractional nanoseconds really don’t make much sense.
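To make the suggestion concrete, this is roughly the shape I have in mind (illustration only - the internal i96 nanosecond field is an assumption, and overflow/rounding is glossed over):

const std = @import("std");

pub const Duration = struct {
    ns: i96, // assumed internal representation; stays integer

    pub fn toSeconds(self: Duration) f64 {
        return @as(f64, @floatFromInt(self.ns)) / std.time.ns_per_s;
    }
    pub fn toMilliseconds(self: Duration) f64 {
        return @as(f64, @floatFromInt(self.ns)) / std.time.ns_per_ms;
    }
    pub fn toMicroseconds(self: Duration) f64 {
        return @as(f64, @floatFromInt(self.ns)) / std.time.ns_per_us;
    }

    pub fn fromSeconds(s: f64) Duration {
        return .{ .ns = @intFromFloat(s * std.time.ns_per_s) };
    }
    pub fn fromMilliseconds(ms: f64) Duration {
        return .{ .ns = @intFromFloat(ms * std.time.ns_per_ms) };
    }
    pub fn fromMicroseconds(us: f64) Duration {
        return .{ .ns = @intFromFloat(us * std.time.ns_per_us) };
    }
};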

…alternatively, make those methods take a comptime type parameter (but make sure that when a float type is provided, the fractional part is actually populated).
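Something along these lines, say (again just a sketch, declared inside Duration, with the ns field assumed as above):

// Alternative sketch: one conversion function per unit, with a comptime result type.
pub fn toSeconds(self: Duration, comptime T: type) T {
    return switch (@typeInfo(T)) {
        // float result: keep the fractional part
        .float => @as(T, @floatFromInt(self.ns)) / std.time.ns_per_s,
        // integer result: truncate as before
        .int => @intCast(@divTrunc(self.ns, std.time.ns_per_s)),
        else => @compileError("toSeconds expects an integer or float type"),
    };
}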

Also, fwiw, in sokol_time.h this minimalistic API has served me well (mainly for measuring durations from the highest-resolution system clock in game-type apps). The uint64_t tick is of an unspecified unit (usually nanoseconds, but the application shouldn’t rely on that and should instead use the functions which convert to a time unit at the end of the tick computations):

uint64_t stm_now(void);
uint64_t stm_since(uint64_t start_ticks);
uint64_t stm_diff(uint64_t new_ticks, uint64_t old_ticks);
double stm_sec(uint64_t ticks);
double stm_ms(uint64_t ticks);
double stm_us(uint64_t ticks);
double stm_ns(uint64_t ticks);

PS: for the potential argument that those float results have variable precision, which can lead to accumulated errors when used in computations (e.g. large values lose fractional precision): you wouldn’t use the output time-unit values for further computations, only for displaying/logging. All time/duration computations should happen with the highest-resolution Timestamp and Duration types (e.g. i96); a conversion to time units would only happen for ‘human consumption’ (e.g. displaying the value in a UI or log output), but never be used as input for further computations.
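In other words, something like this (hypothetical names, just to show that the single float conversion sits at the very end):

const std = @import("std");

// Keep all arithmetic on the high-resolution integer representation and only
// convert to a float time unit at the end, for logging/display.
// The []const i96 slice of nanosecond durations is hypothetical.
fn logTotalFrameTime(frame_durations_ns: []const i96) void {
    var total: i96 = 0;
    for (frame_durations_ns) |d| total += d; // integer math, no accumulated float error
    const total_ms = @as(f64, @floatFromInt(total)) / std.time.ns_per_ms;
    std.log.info("total frame time: {d:.3} ms", .{total_ms});
}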


Feels like a better way to solve this would be to allow customizing the Duration-to-string formatting function, rather than going through intermediate floats.


I totally agree that not having the decimals is annoying; I just commented on it in another post. I don’t know if returning floats is too much for what Timestamp is supposed to do, but I would appreciate a function which casts the integers into floats for sure.

Feels like a better way to solve this would be to allow customizing the Duration-to-string formatting function

Hmm, but wouldn’t this mean entangling Duration with the string formatting code? IMHO it would be better to keep them separate and just use the regular floating point formatting options.

E.g. to print frame duration in milliseconds with microsecond accuracy (e.g. 16.667ms):

std.debug.print("Frame duration: {d:.3}ms\n", .{ frameDuration.toMilliseconds() });

I’m used to Go and really like their formatter for durations. Zig already supports the format() method for printing arbitrary types. I think this would be best served by having a Duration.format() so you can use it like this:

std.debug.print("Frame duration: {f}\n", .{ frameDuration });

If you really need it in milliseconds, you will need to format it yourself, but I think format() would cover most uses.
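For what it’s worth, a Go-style format() could look roughly like this with the new {f} specifier (sketch only: the ns field and the exact writer signature are assumptions on my part):

const std = @import("std");

// Hypothetical Duration.format() in the spirit of Go's time.Duration.String():
// pick a unit based on magnitude and print with a fractional part.
pub fn format(self: Duration, writer: *std.Io.Writer) std.Io.Writer.Error!void {
    const ns: f64 = @floatFromInt(self.ns);
    if (ns >= std.time.ns_per_s) {
        try writer.print("{d:.3}s", .{ns / std.time.ns_per_s});
    } else if (ns >= std.time.ns_per_ms) {
        try writer.print("{d:.3}ms", .{ns / std.time.ns_per_ms});
    } else if (ns >= std.time.ns_per_us) {
        try writer.print("{d:.3}us", .{ns / std.time.ns_per_us});
    } else {
        try writer.print("{d}ns", .{self.ns});
    }
}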

Zig already supports the format() method for printing arbitrary types.

But Duration (and Timestamp) doesn’t (and IMHO shouldn’t) have an implicit time unit (and even if it did, that would be nanoseconds). So there would need to be different formatters for seconds, milliseconds, microseconds and nanoseconds. Going through a generic float for printing definitely makes more sense to me.

Tbh, I’m also not a fan of adding a .format() method to all sorts of stdlib types (way too OOP, and where do you draw the line - e.g. “formatting to a string” is only one of many data conversion options - that way different areas of the stdlib become too entangled with each other).