I just rewrote my timing code to use the new std.Io.Clock API, and while the new convenience types Timestamp and Duration are quite nice, I was surprised that the unit conversion functions on Duration take and return integers.
E.g. when I convert a duration to seconds I’m usually also interested in the fractional part (and if not, I can still cast/round to an integer, but there’s no way to get the fractional bits back from an integer).
Of course I can always take a higher-resolution integer, cast it to float and divide to get to the wanted time unit with its fractional part. For this, the missing microseconds unit would be much more useful than nanoseconds (too high resolution) or milliseconds (too low resolution, since frame durations usually have a fractional-millisecond part). But this again adds extra code noise.
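For illustration, that workaround looks something like this (a rough sketch; the Duration stand-in and its toNanoseconds() method are assumptions for the example, not the actual std.Io API):

const std = @import("std");

// hypothetical stand-in for std.Io.Clock.Duration, just enough for the example
const Duration = struct {
    ns: i96,
    fn toNanoseconds(self: Duration) i96 {
        return self.ns;
    }
};

pub fn main() void {
    const frame: Duration = .{ .ns = 16_666_667 }; // ~1/60 second
    // the workaround: take the highest-resolution integer, cast to float,
    // divide down to the wanted unit so the fractional part survives
    const ms = @as(f64, @floatFromInt(frame.toNanoseconds())) / std.time.ns_per_ms;
    std.debug.print("frame duration: {d:.3} ms\n", .{ms});
}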
So here’s my suggestion while the new interfaces are still fresh (a code sketch follows below the list):
- toMilliseconds, toSeconds => return f64 instead of i64
- add the missing toMicroseconds
- fromMilliseconds, fromSeconds => take f64 instead of i64
- add the missing fromMicroseconds
Keep the nanoseconds functions as they are, since they’re a special case already anyway (i96 instead of i64), and fractional nanoseconds really don’t make much sense.
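To make this concrete, a rough sketch of the proposed signatures (the i96-nanosecond internals are just an assumption for illustration, not the actual std.Io representation):

const Duration = struct {
    ns: i96,
    // proposed: return f64 so the fractional part survives the conversion
    pub fn toSeconds(self: Duration) f64 {
        return @as(f64, @floatFromInt(self.ns)) / 1_000_000_000.0;
    }
    pub fn toMilliseconds(self: Duration) f64 {
        return @as(f64, @floatFromInt(self.ns)) / 1_000_000.0;
    }
    // the missing unit
    pub fn toMicroseconds(self: Duration) f64 {
        return @as(f64, @floatFromInt(self.ns)) / 1_000.0;
    }
    // proposed: take f64 so fractional inputs like 16.667 ms just work
    pub fn fromSeconds(s: f64) Duration {
        return .{ .ns = @intFromFloat(s * 1_000_000_000.0) };
    }
    pub fn fromMilliseconds(ms: f64) Duration {
        return .{ .ns = @intFromFloat(ms * 1_000_000.0) };
    }
    pub fn fromMicroseconds(us: f64) Duration {
        return .{ .ns = @intFromFloat(us * 1_000.0) };
    }
};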
…alternatively, make those methods take a comptime type parameter (but make sure that when a float type is provided, the fractional part is populated).
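Sketched out, that alternative could look something like this (again hypothetical, with the same assumed representation):

const Duration = struct {
    ns: i96,
    // one generic conversion per unit: a float type gets the fractional
    // part, an int type gets a truncated value
    pub fn toMilliseconds(self: Duration, comptime T: type) T {
        return switch (@typeInfo(T)) {
            .float => @as(T, @floatFromInt(self.ns)) / 1_000_000.0,
            .int => @intCast(@divTrunc(self.ns, 1_000_000)),
            else => @compileError("expected an integer or float type"),
        };
    }
};

// usage: dur.toMilliseconds(f64) => 16.666667, dur.toMilliseconds(i64) => 16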
Also, FWIW, in sokol_time.h this minimalistic API has served me well (mainly for measuring durations from the highest-resolution system clock in game-type apps). A uint64_t tick is of an unspecified unit (usually nanoseconds, but the application shouldn’t rely on that and should instead use the functions which convert to a time unit at the end of tick computations):
uint64_t stm_now(void);                                    // current tick count of the highest-resolution clock
uint64_t stm_since(uint64_t start_ticks);                  // ticks elapsed since start_ticks
uint64_t stm_diff(uint64_t new_ticks, uint64_t old_ticks); // tick difference between two timestamps
double stm_sec(uint64_t ticks);                            // convert ticks to seconds...
double stm_ms(uint64_t ticks);                             // ...to milliseconds
double stm_us(uint64_t ticks);                             // ...to microseconds
double stm_ns(uint64_t ticks);                             // ...to nanoseconds
PS: to preempt the argument that those float results have variable precision, which can lead to accumulating errors when they’re used in computations (e.g. large values lose fractional precision): you wouldn’t use the output time-unit values for further computations, only for displaying/logging. All time/duration computations should happen with the highest-resolution Timestamp and Duration types (e.g. i96); a conversion to time units would only happen for ‘human consumption’ (e.g. displaying the value in a UI or log output), but never be used as input for further computations.
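A toy illustration of that failure mode (nothing std.Io-specific, just plain Zig):

const std = @import("std");

pub fn main() void {
    // accumulate a 0.1 ms step ten million times: once in converted
    // f64 milliseconds, once in integer nanoseconds
    var float_ms: f64 = 0.0;
    var int_ns: i96 = 0;
    var i: usize = 0;
    while (i < 10_000_000) : (i += 1) {
        float_ms += 0.1; // 0.1 has no exact binary representation, error accumulates
        int_ns += 100_000; // exact
    }
    const exact_ms = @as(f64, @floatFromInt(int_ns)) / 1_000_000.0;
    // prints something like 1000000.000133288 ms vs 1000000.000000000 ms
    std.debug.print("accumulated: {d:.9} ms, exact: {d:.9} ms\n", .{ float_ms, exact_ms });
}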