Ctrl+C handling differs between server API implementations

Zig TCP Server SIGINT Handling Issue

Overview

This document describes an issue with two Zig TCP server implementations (server.zig and server1.zig) designed for learning purposes. Both servers listen on port 8080 and handle SIGINT (Ctrl+C) to exit gracefully, but they exhibit different behaviors when interrupted. The goal is to understand and resolve the issue where server.zig hangs on Ctrl+C while server1.zig exits cleanly.

Problem Description

Both programs set up a TCP server that:

  • Listens on 127.0.0.1:8080.
  • Uses a controller: bool flag to manage the event loop.
  • Registers a SIGINT handler to set controller = false and print a shutdown message.
  • Accepts client connections in a while(controller) loop.

However, their behavior on Ctrl+C differs:

  • server.zig: Prints the shutdown message multiple times but does not exit, hanging in the accept call.
  • server1.zig: Prints the shutdown message once and exits cleanly.

Code Listings

server.zig

This implementation uses Zig’s std.posix API for socket operations.

///epoll server implementation for learning purposes
const std = @import("std");
const linux = std.os.linux;
const posix = std.posix;
const net = std.net;

// server configurations
const port: u16 = 8080;

const tpe: u32 = posix.SOCK.STREAM;
const protocol = posix.IPPROTO.TCP;

const backlog: u31 = 128;

// controller controls the event-loop
var controller: bool = true;

pub fn handleCtrlC(signum: i32) callconv(.C) void {
    controller = false;
    std.debug.print("\n[!] Caught Ctrl+C (signal {}), Shutting down...\n", .{signum});
}

pub fn main() !void {
    // signal handling
    var sa = linux.Sigaction{
        .handler = .{ .handler = handleCtrlC },
        .mask = linux.sigemptyset(),
        .flags = 0,
    };
    _ = linux.sigaction(linux.SIG.INT, &sa, null);

    const address = try std.net.Address.resolveIp("127.0.0.1", port);

    // socket configuration
    const listener = try posix.socket(address.any.family, tpe, protocol);
    defer posix.close(listener);

    // binding socket with the port
    try posix.bind(listener, &address.any, address.getOsSockLen());

    // listen in the background
    try posix.listen(listener, backlog);

    std.debug.print("[+] Server listening on port {d}\n", .{port});

    // event-loop
    while(controller) {
        std.debug.print("[*] Waiting for client...\n", .{});
        var clientAddr: net.Address = undefined;
        var clientAddrLen: posix.socklen_t = @sizeOf(net.Address);

        // accept a new connection
        const client = try posix.accept(listener, &clientAddr.any, &clientAddrLen, 0);
        defer posix.close(client);
        std.debug.print("[✓] Client Connected \n", .{});
    }

    std.debug.print("[✓] Server stopped cleanly.\n", .{});
}

Output:

$ zig build-exe server.zig
$ ./server
[+] Server listening on port 8080
[*] Waiting for client...
^C
[!] Caught Ctrl+C (signal 2), Shutting down...
^C
[!] Caught Ctrl+C (signal 2), Shutting down...
^C
[!] Caught Ctrl+C (signal 2), Shutting down...
^C
[!] Caught Ctrl+C (signal 2), Shutting down...
...

Observation: Each Ctrl+C runs the handler and prints the shutdown message, but the server never exits; it stays blocked in the accept call no matter how many times Ctrl+C is pressed.

server1.zig

This implementation uses Zig’s lower-level std.os.linux API for socket operations.

///epoll server implementation for learning purposes
const std = @import("std");
const os = std.os.linux;

var controller: bool = true;

const port: u16 = 8080;

pub fn handleCtrlC(signum: i32) callconv(.C) void {
    controller = false;
    std.debug.print("\n[!] Caught Ctrl+C (signal {}), Shutting down...\n", .{signum});
}

pub fn main() !void {
    const domain = os.AF.INET;
    const socket = os.SOCK.STREAM;
    const protocol: u32 = 0;

    // signal handling
    var sa = os.Sigaction{
        .handler = .{ .handler = handleCtrlC },
        .mask = os.sigemptyset(),
        .flags = 0,
    };
    _ = os.sigaction(os.SIG.INT, &sa, null);

    // socket configuration
    const serverFd = os.socket(domain, socket, protocol);
    defer _ = os.close(@as(i32, @intCast(serverFd)));

    const addr = os.sockaddr.in{
        .family = os.AF.INET,
        .port = port,
        .addr = 0,
        .zero = [_]u8{0} ** 8,
    };
    const len: os.socket_t = @sizeOf(os.sockaddr.in);

    // binding socket with the port
    _ = os.bind(
        @as(i32, @intCast(serverFd)),
        @as(*const os.sockaddr, @ptrCast(&addr)),
        len
    );

    _ = os.listen(@as(i32, @intCast(serverFd)), os.SOMAXCONN);

    while(controller) {
        var client_addr: os.sockaddr.in = undefined;
        var client_addr_len: os.socklen_t = @sizeOf(os.sockaddr.in);

        // Accept a new connection
        const clientFd = os.accept(
            @as(i32, @intCast(serverFd)),
            @as(*os.sockaddr, @ptrCast(&client_addr)),
            &client_addr_len
        );
        defer {
           if (clientFd < std.math.maxInt(i32)) {
                _ = os.close(@as(i32, @intCast(clientFd)));
            }
        }
    }
}

Output:

$ zig build-exe server1.zig
$ ./server1
^C
[!] Caught Ctrl+C (signal 2), Shutting down...

Observation: The server exits cleanly after a single Ctrl+C press.

Environment

  • OS: Linux
  • Zig Version: Not specified (assumed recent, e.g., 0.15.0 or later)

How can I make the std.posix version exit on Ctrl+C?

sigaction is a POSIX API and is available in Zig as std.posix.sigaction.

Maybe it’s this:

That’s weird. I just spent some time educating myself about how POSIX signals really work and filled in my understanding of EINTR, so when I read this my reaction was: “uhh, maybe the blocking accept has to bail out with EINTR so the kernel can resume the thread in usermode after running the handler, and if the posix.accept function retries on EINTR then you’ll never escape the loop.” I would absolutely not have spotted that before this educational episode, so I really recommend that anyone who feels less than completely lucid about how POSIX signals and EINTR work spend some time reading about it. By the way, speaking of Lucid: this is literally the exact UNIX quirk that is the focal example of Richard P. Gabriel’s legendary essay The Rise of Worse is Better, which the author himself recursively refuted and reasserted over the years under different pseudonyms:

However, despite the apparent enthusiasm by the rest of the world, I was uneasy about the concept of worse is better, and especially with my association with it. In the early 1990s, I was writing a lot of essays and columns for magazines and journals, so much so that I was using a pseudonym for some of that work: Nickieben Bourbaki. The original idea for the name was that my staff at Lucid would help with the writing, and the single pseudonym would represent the collective, much as the French mathematicians in the 1930s used “Nicolas Bourbaki” as their collective name while rewriting the foundations of mathematics in their image. However, no one but I wrote anything under that name.

In the Winter of 1991-1992 I wrote an essay called “Worse Is Better Is Worse” under the name “Nickieben Bourbaki.” This piece attacked worse is better. In it, the fiction was created that Nickieben was a childhood friend and colleague of Richard P. Gabriel, and as a friend and for Richard’s own good, Nickieben was correcting Richard’s beliefs.

In the Autumn of 1992, the Journal of Object-Oriented Programming (JOOP) published a “rebuttal” editorial I wrote to “Worse Is Better Is Worse” called “Is Worse Really Better?” The folks at Lucid were starting to get a little worried because I would bring them review drafts of papers arguing (as me) for worse is better, and later I would bring them rebuttals (as Nickieben) against myself. One fellow was seriously nervous that I might have a mental disease.

In the middle of the 1990s I was working as a management consultant (more or less), and I became interested in why worse is better really could work, so I was reading books on economics and biology to understand how evolution happened in economic systems. Most of what I learned was captured in a presentation I would give back then, typically as a keynote, called “Models of Software Acceptance: How Winners Win,” and in a chapter called “Money Through Innovation Reconsidered,” in my book of essays, “Patterns of Software: Tales from the Software Community.”

You might think that by the year 2000 I would have settled what I think of worse is better - after over a decade of thinking and speaking about it, through periods of clarity and periods of muck, and through periods of multi-mindedness on the issues. But, at OOPSLA 2000, I was scheduled to be on a panel entitled “Back to the Future: Is Worse (Still) Better?” And in preparation for this panel, the organizer, Martine Devos, asked me to write a position paper, which I did, called “Back to the Future: Is Worse (Still) Better?” In this short paper, I came out against worse is better. But a month or so later, I wrote a second one, called “Back to the Future: Worse (Still) is Better!” which was in favor of it. I still can’t decide. […]