This is a benchmark of some of the popular web frameworks in the Zig world.
If I’ve missed any, let me know and I’ll add them. zzz isn’t included for now since it hasn’t been updated to Zig 0.15.1 yet.
Are you sure you aren’t comparing apples and oranges? The way you implemented std.http.Server only allows one connection at a time. I don’t know the load testing tool you’re using, but I assume it issues parallel requests. I’ll send you a change tomorrow that runs the std.http server on my zio runtime, and I’m sure it’s going to be a lot faster.
Hi, I have also done a bit of benchmarking of Zig web IO runtimes in the past; put shortly, zzz won (I don’t have any records, so you’ll have to take my word for it).
I’m glad you took your time with this, but I have some recommendations for doing this kind of benchmark:
You probably chose the single worst platform to benchmark on: most servers run on Linux, and possibly Windows (Windows Server is somehow still a thing). So all the benchmarks should ideally target io_uring or IOCP. Not sure if anything has changed, but back then IOCP was rather poorly supported in the Zig ecosystem; io_uring was a bit better. Even if you don’t target those, it would be nice to determine how each implementation does IO on your platform; otherwise you are comparing apples and oranges.
It’s really hard to judge performance when no realistic traffic is happening. Some of the implementations expect you to send a certain amount of data and are optimized around that (pre-allocated provision arenas, etc.). Also, when designing workloads, I recommend trying several kinds of payloads.
For example, try to send:
- a file (to test FD-to-FD capabilities)
- a static site, to test the ability to construct a page from rodata rather than serving a file directly
- a heavily dynamic page that does a lot of allocations and depends on frequently changing state (but try to avoid contention)
- the previous one, but using the framework-provided serialization facilities (if it provides any) to handle contention; you can try several levels of contention and see how it scales
This really stress-tests the implementation of the web framework (for example, zzz did really well on the third point, if I remember correctly).
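To make the third workload concrete, here is a rough sketch of what such a handler could look like. It is hypothetical: the counter, the page body, and the handler name are made up for illustration, and it is written against the std.http.Server request API that appears later in this thread.

```zig
const std = @import("std");

// Hypothetical shared state; an atomic counter keeps the "frequently
// changing state" requirement while avoiding lock contention.
var hit_count = std.atomic.Value(u64).init(0);

// Builds the response body dynamically, forcing a per-request allocation.
fn handleDynamic(allocator: std.mem.Allocator, req: *std.http.Server.Request) !void {
    const n = hit_count.fetchAdd(1, .monotonic);
    const body = try std.fmt.allocPrint(allocator, "<h1>Request #{d}</h1>", .{n});
    defer allocator.free(body);
    try req.respond(body, .{});
}
```

A variant of this with a mutex-guarded hash map instead of the atomic would give you the contended version described in the fourth point.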
It’s also nice to compare which transfer protocols these web frameworks support. And whether you use TLS encryption can hugely impact performance, so again, testing each framework’s implementation/integration of it would be nice.
This will all be a lot more interesting once the std async work lands; that will most likely change the web frameworks a lot, and possibly produce more of them.
As a final note, I don’t expect you or anyone else to do all of this; it would take a lot of time and effort, and the ecosystem is not mature enough yet.
That being said, I am no expert, so if I forgot anything or made mistakes, please correct me.
This is for the std benchmark from your repository, as a baseline on my laptop:
Summary:
Success rate: 90.45%
Total: 53036.1832 ms
Slowest: 19.5281 ms
Fastest: 0.4035 ms
Average: 2.9136 ms
Requests/sec: 18855.0521
Total data: 1.73 MiB
Size/request: 2 B
Size/sec: 33.31 KiB
Response time histogram:
0.404 ms [1] |
2.316 ms [196597] |■■■■■■■■■
4.228 ms [660426] |■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■
6.141 ms [46429] |■■
8.053 ms [522] |
9.966 ms [277] |
11.878 ms [139] |
13.791 ms [58] |
15.703 ms [14] |
17.616 ms [1] |
19.528 ms [2] |
Response time distribution:
10.00% in 1.0130 ms
25.00% in 2.5400 ms
50.00% in 3.2093 ms
75.00% in 3.5787 ms
90.00% in 3.9523 ms
95.00% in 4.2483 ms
99.00% in 4.8460 ms
99.90% in 6.4420 ms
99.99% in 11.6338 ms
Details (average, fastest, slowest):
DNS+dialup: 0.8094 ms, 0.0250 ms, 8.9380 ms
DNS-lookup: 0.0061 ms, 0.0016 ms, 4.1034 ms
Status code distribution:
[200] 904466 responses
Error distribution:
[75808] connection closed before message completed
[19273] operation was canceled
[453] connection error
This is for my http-server example from Zio, which also uses std.http.Server:
Summary:
Success rate: 100.00%
Total: 8785.0153 ms
Slowest: 3.7615 ms
Fastest: 0.0177 ms
Average: 0.4375 ms
Requests/sec: 113830.1944
Total data: 194.55 MiB
Size/request: 204 B
Size/sec: 22.15 MiB
Response time histogram:
0.018 ms [1] |
0.392 ms [623868] |■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■
0.766 ms [211219] |■■■■■■■■■■
1.141 ms [138711] |■■■■■■■
1.515 ms [20623] |■
1.890 ms [2710] |
2.264 ms [1931] |
2.638 ms [764] |
3.013 ms [160] |
3.387 ms [8] |
3.761 ms [5] |
Response time distribution:
10.00% in 0.1750 ms
25.00% in 0.2010 ms
50.00% in 0.3054 ms
75.00% in 0.6271 ms
90.00% in 0.9204 ms
95.00% in 1.0408 ms
99.00% in 1.3519 ms
99.90% in 2.2492 ms
99.99% in 2.7309 ms
Details (average, fastest, slowest):
DNS+dialup: 0.7460 ms, 0.1471 ms, 1.0282 ms
DNS-lookup: 0.0100 ms, 0.0017 ms, 0.1430 ms
Status code distribution:
[200] 1000000 responses
So this is the same code from the standard library for handling the HTTP request/response cycle, just with a different networking library underneath, running 6x faster. If I extrapolated from your results, that would make it faster than any other library.
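For what it’s worth, the 6x figure checks out against the two Requests/sec numbers reported in the summaries above:

```python
# Requests/sec copied from the two benchmark summaries above.
std_rps = 18855.0521   # std.http.Server baseline
zio_rps = 113830.1944  # same server code on the zio runtime

speedup = zio_rps / std_rps
print(f"{speedup:.2f}x")  # prints "6.04x"
```

(Keep in mind the baseline run also had a 90.45% success rate, so the true gap under equal success rates may differ slightly.)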
Hey, yeah, I added the std implementation just as a baseline. Thank you so much for your observations and for providing test results with zio. The current std implementation doesn’t have multiple workers or multithreading like the other frameworks do. I’m trying to match the configuration of each framework as closely as possible. Should I add std + zio as another framework in the benchmark?
By the way, I created a site for better visibility and comparison. I will keep improving the benchmark to make it useful.
I think you might find zio+std to be the fastest in this trivial benchmark, so I’d consider adding it. Performance of the future std.Io interface, which will be how you are supposed to use std.http.Server, should hopefully match it. So you might just call the std+zio version “std”. Alternatively, at least use this as std:
const std = @import("std");

pub fn main() !void {
    var gpa = std.heap.GeneralPurposeAllocator(.{}){};
    defer _ = gpa.deinit();
    const allocator = gpa.allocator();

    // Hand each accepted connection to a worker thread.
    var thread_pool: std.Thread.Pool = undefined;
    try thread_pool.init(.{ .allocator = allocator });
    defer thread_pool.deinit();

    const address = try std.net.Address.parseIp4("0.0.0.0", 5000);
    var server = try address.listen(.{});
    defer server.deinit();
    std.debug.print("Started on port {d}\n", .{5000});

    while (true) {
        const conn = try server.accept();
        errdefer conn.stream.close();
        try thread_pool.spawn(handleConnectionMain, .{conn});
    }
}

fn handleConnectionMain(connection: std.net.Server.Connection) void {
    handleConnection(connection) catch |err| {
        std.debug.print("Error: {}\n", .{err});
    };
}

fn handleConnection(connection: std.net.Server.Connection) !void {
    defer connection.stream.close();
    var recv_buffer: [4000]u8 = undefined;
    var send_buffer: [4000]u8 = undefined;
    var conn_reader = connection.stream.reader(&recv_buffer);
    var conn_writer = connection.stream.writer(&send_buffer);
    var server = std.http.Server.init(conn_reader.interface(), &conn_writer.interface);

    // Serve requests on this connection until the client drops keep-alive.
    while (true) {
        var req = try server.receiveHead();
        try req.respond("OK", .{});
        if (!req.head.keep_alive) break;
    }
}
Loved your detailed observations, and yes, these are the goals for sure.
Yeah, macOS is definitely not a realistic environment. I re-benchmarked on my Pi 5. Ideally, I will include benchmarks on each of the three major OSes.
Yep, this is right. It’s currently not the whole picture, but it does give an impression of how each framework might perform. I have work in progress to benchmark each of these different use cases, and I have some code where I use the framework-specific parameter/query/body parsers and serializers.
For now, TLS is turned off everywhere, since most of the time applications will be behind a proxy, but yeah, I will probably add another category for that alone.
I think a lot of the async stuff is already on master, no? I saw a video from Andrew on that.
I really appreciate your thoughts, and yes, I would love to keep this benchmark updated, as it also gives me a chance to explore different Zig libraries and learn Zig along the way.
I created a site for better visibility and will keep improving it. Thanks again for your feedback! https://zigweb.nuhu.dev