# zix -- Zero sIX (06)

A network library written in Zig. Where the wire meets the will.

Every byte owned, every thread deliberate, every route explicit. No hidden cost -- just clean metal and honest code, predictable by principle.

You are the thinker, the tinkerer, the assembler, the builder -- not just a user.
1. Explicit Over Implicit.
2. Modular & Maintainable.
3. Performance-First Architecture.
4. Practical Features, Ready to Use.
5. Modern Efficient Concurrency Model.
6. Predictable, Transparent Memory Management.
- Zig 0.16.x
- Always track the latest Zig and its std.
- Single file, single responsibility.
- Use a Data-Oriented Design approach first.
- "Nice to have" and "maybe we need this" features are tertiary.
- Narrow down the system design first, then be explicit.
- Fix issues on our side first rather than relying on Zig-side features.
| Document | Description |
|---|---|
| `docs/hld-http.md` | HTTP -- goals, runtime model, API, router, WebSocket, memory model |
| `docs/hld-udp.md` | UDP -- goals, runtime model, API, packet model, endianness, disconnect |
| `docs/lld-http.md` | HTTP -- internal data structures and algorithms |
| `docs/lld-udp.md` | UDP -- internal data structures and algorithms |
| `docs/concurrency.md` | Concurrency models -- Model 1 and Model 2 for all protocols; Channel note |
| `docs/hld-channel.md` | Channel -- goals, model, proposed API (not yet implemented) |
| `docs/lld-channel.md` | Channel -- proposed internal structure (not yet implemented) |
| `docs/adr.md` | Architecture Decision Records |
| `docs/headers.md` | Response header cap -- tiers, security, error handling |
| `docs/tests.md` | Test coverage and how to run |
For more examples see the examples directory.
Auto I/O (work-queue thread pool, default):

```zig
const std = @import("std");
const zix = @import("zix");

pub fn homeHandler(req: *zix.Http.Request, res: *zix.Http.Response, ctx: *zix.Http.Context) !void {
    _ = req;
    _ = ctx;
    try res.send("hello from zix");
}

pub fn main(process: std.process.Init) !void {
    var arena = std.heap.ArenaAllocator.init(std.heap.smp_allocator);
    defer arena.deinit();

    var server = try zix.Http.Server.init(4096, .{
        .io = process.io,
        .allocator = arena.allocator(),
        .ip = "127.0.0.1",
        .port = 9000,
        .max_kernel_backlog = 1024 * 4,
        .max_client_request = 1024 * 4,
        .max_allocator_size = 1024 * 4,
        .max_client_response = 1024 * 4,
    });
    defer server.deinit();

    server.registerHandler("/", homeHandler);
    try server.run();
}
```

Manual I/O (single-threaded, explicit concurrency limit via `io.concurrent`):
```zig
pub fn main() !void {
    var arena = std.heap.ArenaAllocator.init(std.heap.smp_allocator);
    defer arena.deinit();

    var threaded = std.Io.Threaded.init(std.heap.smp_allocator, .{
        .concurrent_limit = std.Io.Limit.limited(4), // pin to 4 concurrent tasks
        // .concurrent_limit = .unlimited // let runtime decide
    });
    defer threaded.deinit();

    var server = try zix.Http.Server.init(4096, .{
        .io = threaded.io(),
        .allocator = arena.allocator(),
        .ip = "127.0.0.1",
        .port = 9000,
        .max_kernel_backlog = 1024 * 4,
        .max_client_request = 1024 * 4,
        .max_allocator_size = 1024 * 4,
        .max_client_response = 1024 * 4,
        .workers = 1, // stay on model 1 -- use the caller's io directly
    });
    defer server.deinit();

    server.registerHandler("/", homeHandler);
    try server.run();
}
```

Three explicit functions, each with a distinct purpose:
```zig
server.registerHandler("/about", aboutHandler);
// exact -- matches only /about

server.registerPrefixHandler("/api", apiHandler);
// prefix -- matches /api, /api/foo, /api/foo/bar, NOT /apiv2

server.registerParamHandler("/users/:id", userHandler);
// param -- matches /users/alice, captures id="alice"
// read inside handler: req.pathParam("id")

server.registerParamHandler("/:tenant/:branch", branchHandler);
// multi-param -- req.pathParam("tenant"), req.pathParam("branch")
```

Priority:
exact > param > prefix (longer prefix beats shorter)
Exact and prefix priority is independent of registration order. Param routes are the exception — when two patterns have the same segment count and both match, the first-registered wins. Register more-literal patterns before all-param patterns of the same depth:
```zig
// Correct order -- /path/user/:id wins for /path/user/alice
server.registerParamHandler("/path/user/:id", userHandler); // more literals first
server.registerParamHandler("/path/:tenant-id/:branch", tenantHandler); // all-param second
```

| Registered | Request | Winner | Reason |
|---|---|---|---|
| `/path/info` (exact) + `/path/:id` (param) | `/path/info` | `/path/info` | exact beats param |
| `/path/:id` (param) + `/path` (prefix) | `/path/alice` | `/path/:id` | param beats prefix |
| `/api/v2` + `/api` (both prefix) | `/api/v2/foo` | `/api/v2` | longer prefix wins |
| `/path/user/:id` (1st) + `/path/:a/:b` (2nd) | `/path/user/alice` | `/path/user/:id` | more literals registered first |
Regex-like matching -- zix has no regex engine. `registerPrefixHandler` is the equivalent of `/prefix/(.*)`: it covers the prefix and any sub-path below it. Additional filtering is done inside the handler with plain string operations on `req.path()`:

```zig
// Intent: /secret/(.*) -> only serve files with ?sec=abc123
server.registerPrefixHandler("/secret", secretHandler);

// Inside secretHandler -- extract sub-path and apply custom logic
const sub = req.path()["/secret/".len..]; // e.g. "file.txt"
// check extension, depth, query params, headers, etc.
```

Any match logic expressible as string operations -- extension checks, version parsing, depth limits -- belongs in the handler body, not the route pattern.
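As a concrete illustration of that handler-side filtering, here is a pure-std sketch. `isAllowedFile` is a hypothetical helper, not part of zix -- it is the kind of function a `secretHandler` could call on the extracted sub-path:

```zig
const std = @import("std");

// Hypothetical helper (not part of zix): filter a prefix-routed sub-path
// with plain string operations instead of a route pattern.
fn isAllowedFile(sub: []const u8) bool {
    if (std.mem.indexOf(u8, sub, "..") != null) return false; // no traversal
    if (std.mem.indexOfScalar(u8, sub, '/') != null) return false; // no nesting
    return std.mem.endsWith(u8, sub, ".txt") or std.mem.endsWith(u8, sub, ".html");
}

test "string-based route filtering" {
    try std.testing.expect(isAllowedFile("file.txt"));
    try std.testing.expect(!isAllowedFile("../etc/passwd"));
    try std.testing.expect(!isAllowedFile("app.wasm"));
}
```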
Two modes, selected via config.workers:
Model 2 — work-queue thread pool (default, workers = 0):
Dedicated accept threads push connections to a shared ConnQueue. Pool threads pop and handle each connection synchronously with blocking I/O — no scheduler overhead. SO_REUSEPORT lets all accept threads listen on the same port.
```zig
pub fn main(process: std.process.Init) !void {
    var server = try zix.Http.Server.init(4096, .{
        .io = process.io,
        // workers = 0 -> 2 accept threads (auto)
        // pool_size = 0 -> max(10, cpu_count * 2) pool threads (auto)
        ...
    });
```

Model 1 -- single accept, `io.concurrent` dispatch (`workers = 1`):
One accept thread dispatches each connection via io.concurrent(). Use this when you need explicit control over the concurrency backend or limit.
```zig
pub fn main() !void {
    var threaded = std.Io.Threaded.init(std.heap.smp_allocator, .{
        .concurrent_limit = std.Io.Limit.limited(4),
    });
    defer threaded.deinit();

    var server = try zix.Http.Server.init(4096, .{
        .io = threaded.io(),
        .workers = 1, // use the caller's io directly
        ...
    });
```

See docs/concurrency.md for architecture details and thread counts.
Two independent timeout layers, both disabled by default (0):
- `conn_timeout_ms` -- network-level connection guard (Layer D). The timer thread shuts down connections that have been open longer than this without completing. Protects pool threads from clients that stall before or during header send. Effective in model 2 only.
- `handler_timeout_ms` -- per-handler execution budget (Layer B). Sets `ctx.deadline` before each dispatch. Handlers opt in by calling `ctx.timedOut()` between expensive steps.
```zig
var server = try zix.Http.Server.init(4096, .{
    .io = process.io,
    .allocator = arena.allocator(),
    .ip = "127.0.0.1",
    .port = 9000,
    .conn_timeout_ms = 30_000, // close stalled connections after 30s
    .handler_timeout_ms = 5_000, // handler budget: 5s
});
```

Handler using the budget:
```zig
pub fn slowHandler(req: *zix.Http.Request, res: *zix.Http.Response, ctx: *zix.Http.Context) !void {
    _ = req;

    doStep1(ctx.io);
    if (ctx.timedOut()) {
        res.setStatus(.REQUEST_TIMEOUT);
        return res.sendJson("{\"error\":\"timeout\"}");
    }

    doStep2(ctx.io);
    if (ctx.timedOut()) {
        res.setStatus(.REQUEST_TIMEOUT);
        return res.sendJson("{\"error\":\"timeout\"}");
    }

    try res.sendJson("{\"result\":\"ok\"}");
}
```

`ctx.timedOut()` is a no-op (always returns false) when `handler_timeout_ms == 0`. `conn_timeout_ms` should be >= `handler_timeout_ms` to avoid the connection being cut before the handler can send a 408. See examples/http_timeout_resp.zig and docs/adr.md (ADR-018) for design rationale.
Middleware is composed at comptime using wrapper functions. Each wrapper takes a comptime next: zix.Http.HandlerFn and returns a new HandlerFn — no heap allocation, no runtime chain runner.
```zig
fn withOriginCheck(comptime next: zix.Http.HandlerFn) zix.Http.HandlerFn {
    return struct {
        fn handle(req: *zix.Http.Request, res: *zix.Http.Response, ctx: *zix.Http.Context) anyerror!void {
            const origin = req.header("origin") orelse "";
            if (!isAllowedOrigin(origin)) {
                res.setStatus(.FORBIDDEN);
                try res.sendJson("{\"error\":\"forbidden origin\"}");
                return;
            }
            return next(req, res, ctx);
        }
    }.handle;
}

fn withBasicAuth(comptime next: zix.Http.HandlerFn) zix.Http.HandlerFn {
    return struct {
        fn handle(req: *zix.Http.Request, res: *zix.Http.Response, ctx: *zix.Http.Context) anyerror!void {
            // validate Authorization: Basic <base64(user:pass)>
            // ...
            return next(req, res, ctx);
        }
    }.handle;
}
```

Compose left-to-right -- the outermost wrapper runs first:
```zig
// origin check only
server.registerHandler("/public", withOriginCheck(publicHandler));

// origin check -> basic auth -> handler
server.registerHandler("/private", withOriginCheck(withBasicAuth(privateHandler)));
```

```shell
# curl examples
curl -H "Origin: http://localhost" "http://localhost:9008/public"                    # 200
curl "http://localhost:9008/public"                                                  # 403
curl -H "Origin: http://localhost" -u "admin:secret" "http://localhost:9008/private" # 200
curl -H "Origin: http://localhost" "http://localhost:9008/private"                   # 401
curl "http://localhost:9008/private"                                                 # 403
```

For a full working example see examples/http_middleware.zig.
Room-based broadcast over RFC 6455. A param handler upgrades the connection and enters a per-task frame loop — no separate thread needed.
```zig
var ws_rooms: zix.Http.WebSocket.RoomMap = undefined;

pub fn wsHandler(req: *zix.Http.Request, res: *zix.Http.Response, ctx: *zix.Http.Context) !void {
    const room_id = req.pathParam("room-id") orelse return;

    // Read query params BEFORE upgrade() -- unavailable after the 101 handshake.
    const display_name = req.queryParam("name") orelse "anonymous";

    // extract Sec-WebSocket-Key from headers, then handshake
    const ws_key = req.header("sec-websocket-key") orelse return;
    var accept_buf: [64]u8 = undefined;
    const accept = try zix.Http.WebSocket.acceptKey(ws_key, &accept_buf);
    try zix.Http.WebSocket.upgrade(ctx.stream, ctx.io, accept); // writes 101 directly

    // heap-allocate conn, join room; both are cleaned up via defer (LIFO)
    const conn = try std.heap.smp_allocator.create(zix.Http.WebSocket.Conn);
    conn.* = .{ .stream = ctx.stream, .io = ctx.io };
    defer std.heap.smp_allocator.destroy(conn);

    ws_rooms.join(room_id, conn, ctx.io);
    defer ws_rooms.leave(room_id, conn, ctx.io); // runs before destroy

    // frame loop:
    //   text/binary -> broadcast "[display_name] payload" to room
    //   ping        -> pong
    //   close       -> echo close frame + break
    //   EOF / error -> best-effort close frame + break
    _ = display_name;
}

pub fn main(process: std.process.Init) !void {
    ws_rooms = zix.Http.WebSocket.RoomMap.init(std.heap.smp_allocator);
    defer ws_rooms.deinit();
    // ...
    server.registerParamHandler("/ws/:room-id", wsHandler);
}
```

```shell
# Connect with wscat or websocat; ?name sets the broadcast display name
wscat -c "ws://localhost:9008/ws/lobby?name=alice"
websocat "ws://localhost:9008/ws/lobby?name=alice"

# ?name is optional -- omit for "anonymous"
wscat -c "ws://localhost:9008/ws/lobby"
```
Priority: exact > param > prefix — /ws/:room-id is a param route, so /ws/lobby captures room-id = "lobby".
ctx.stream is the raw TCP stream exposed via Context. The server sets it for every connection before calling any handler — HTTP handlers ignore it, WebSocket handlers use it after the 101 upgrade.
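The accept value the handshake sends back follows RFC 6455: base64(SHA-1(client key ++ fixed GUID)). A pure-std sketch of that derivation (`wsAcceptKey` is illustrative, not zix's implementation), verified against the sample handshake from the RFC:

```zig
const std = @import("std");

// RFC 6455 accept-key derivation: base64(SHA-1(key ++ GUID)).
// Illustrative sketch of what an acceptKey helper is expected to compute.
fn wsAcceptKey(key: []const u8, out: *[28]u8) []const u8 {
    var sha = std.crypto.hash.Sha1.init(.{});
    sha.update(key);
    sha.update("258EAFA5-E914-47DA-95CA-C5AB0DC85B11"); // fixed GUID from the RFC
    var digest: [20]u8 = undefined;
    sha.final(&digest);
    return std.base64.standard.Encoder.encode(out, &digest);
}

test "RFC 6455 sample handshake" {
    var buf: [28]u8 = undefined;
    const accept = wsAcceptKey("dGhlIHNhbXBsZSBub25jZQ==", &buf);
    try std.testing.expectEqualStrings("s3pPLMBiTxaQ9kYGzzhZRbK+xOo=", accept);
}
```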
Combining HTTP, static, and WebSocket in one server -- register all handler types together, routing handles dispatch. Unmatched routes fall through to static serving:

```zig
server.registerHandler("/", homeHandler);
server.registerPrefixHandler("/api", apiHandler);
server.registerParamHandler("/ws/:room-id", wsHandler);
// + set public_dir in HttpServerConfig for static files
```

One-way server push over HTTP/1.1 -- no WebSocket handshake, browser-native EventSource reconnect.
```zig
// GET /events -- streams "tick N" once per second
pub fn eventsHandler(req: *zix.Http.Request, res: *zix.Http.Response, ctx: *zix.Http.Context) !void {
    _ = req;
    const sse = try res.stream(); // sends SSE headers, returns SseWriter

    var i: u32 = 0;
    while (i < 10) : (i += 1) {
        var buf: [32]u8 = undefined;
        const msg = std.fmt.bufPrint(&buf, "tick {d}", .{i}) catch break;
        sse.writeEvent(msg) catch break; // data: tick N\n\n
        std.Io.sleep(ctx.io, std.Io.Duration.fromMilliseconds(1000), .awake) catch break;
    }
    // handler returns -> connection closes -> EventSource auto-reconnects
}
```

| SseWriter method | Wire format |
|---|---|
| `writeEvent(data)` | `data: <data>\n\n` |
| `writeNamedEvent(event, data)` | `event: <event>\ndata: <data>\n\n` |
| `comment(text)` | `: <text>\n` (keepalive) |
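The wire format in the table is plain text framing. A pure-std sketch (not zix's actual writer) showing how a `writeEvent`-style line is produced -- the blank line after the payload is what terminates the event:

```zig
const std = @import("std");

// Illustrative only: an SSE data event is "data: <payload>\n\n";
// the trailing blank line terminates the event for EventSource clients.
fn formatEvent(buf: []u8, data: []const u8) ![]const u8 {
    return std.fmt.bufPrint(buf, "data: {s}\n\n", .{data});
}

test "SSE event framing" {
    var buf: [64]u8 = undefined;
    try std.testing.expectEqualStrings("data: tick 1\n\n", try formatEvent(&buf, "tick 1"));
}
```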
Model requirement: use `workers = 1` (Model 1). SSE connections are long-lived -- they would exhaust a blocking pool (Model 2), which pins one thread per open stream.
```zig
var server = try zix.Http.Server.init(4096, .{
    .io = process.io,
    .workers = 1, // Model 1 -- io.concurrent() per connection
    ...
});

server.registerHandler("/events", eventsHandler);
```

```shell
curl -N http://localhost:9010/events
```

See examples/http_sse.zig for a full example with a browser-compatible HTML page.
Set public_dir in HttpServerConfig to enable static file serving. server.run() returns error.PublicDirNotFound if the directory does not exist. Use a createInitDirs helper to create all required directories before Server.init:
```zig
fn createInitDirs(io: std.Io) void {
    std.Io.Dir.cwd().createDirPath(io, "./public") catch {};
    std.Io.Dir.cwd().createDirPath(io, "./public/u") catch {};
}

pub fn main(process: std.process.Init) !void {
    createInitDirs(process.io); // idempotent -- safe on every start

    var server = try zix.Http.Server.init(4096, .{
        // ...
        .public_dir = "./public", // validated at run(); "" = disabled
        .public_dir_upload = "u",
    });
```

- Unmatched routes fall through to static serving from `public_dir`.
- Range requests (`Range: bytes=…`) -> `206 Partial Content` (RFC 7233).
- Directory traversal (`..`) is rejected.
Upload -- parse the multipart body in a handler, optionally rename before saving:

```zig
var parser = zix.Http.Multipart.init(ctx.allocator, boundary);
defer parser.deinit();
try parser.parse(try req.body());

if (parser.getField("file")) |f| {
    // you can rename the file before saving by replacing the filename string, e.g.:
    //   const filename = "custom_name.txt";
    // or build it dynamically:
    //   const filename = try std.fmt.allocPrint(ctx.allocator, "{s}_{s}", .{ sessionid, f.filename orelse "upload" });
    const filename = f.filename orelse "upload";
    const path = try zix.utils.file.saveFile(ctx.io, ctx.allocator, "./public/u", filename, f.data);
    _ = path; // arena-allocated; valid for this request
}
```

saveFile creates the destination directory if needed and returns a caller-owned path copy.
```shell
# curl example: upload a file with JSON metadata
curl -X POST "http://localhost:9005/upload" \
  -F "file=@/path/to/file.txt" \
  -F 'data={"userid":0,"sessionid":"01944f5a-0000-7000-8000-000000000000"}'
```
Access-controlled serving — use a prefix handler to gate files behind a required query param. Check file existence before the param so the auth requirement is not revealed for non-existent paths:
```zig
// GET /secret/<file>?sec=abc123
// 404 if file not found, 403 if sec param missing or wrong, 200 if both pass
server.registerPrefixHandler("/secret", secretHandler);
```

```shell
curl "http://localhost:9005/secret/file.txt?sec=abc123"    # 200
curl "http://localhost:9005/secret/file.txt"               # 403 (file exists, no param)
curl "http://localhost:9005/secret/missing.txt?sec=abc123" # 404
```
HttpServerConfig.max_response_headers controls how many custom headers res.addHeader() will accept per response. Pick the tier that matches your deployment:
| Variant | Cap | Typical use |
|---|---|---|
| `.MINIMAL` | 16 | Simple internal APIs, controlled environments |
| `.COMMON` | 32 | Default. Most web apps, single proxy |
| `.LARGE` | 64 | CDN + proxy, load balancers, CORS-heavy APIs |
| `.EXTRA_LARGE` | 128 | k8s, service mesh, heavy forwarding stacks |
| `.{ .CUSTOM = N }` | N | Explicit cap, arena-allocated to exactly N slots per request |
```zig
var server = try zix.Http.Server.init(4096, .{
    // ...
    .max_response_headers = .LARGE, // 64 headers
    // .max_response_headers = .{ .CUSTOM = 48 }, // explicit
});
```

`addHeader()` returns `error.TooManyHeaders` when the cap is reached, and `error.InvalidHeaderName` / `error.InvalidHeaderValue` if the name or value contains CR or LF (header injection guard).
.{ .CUSTOM = N } allocates exactly N slots from the per-request arena — no ceiling, no clamping.
For security guidance and tier selection see docs/headers.md. For a working demonstration see examples/http_xtra_headers.zig.
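The CR/LF guard can be pictured with a pure-std sketch. `isCleanHeaderValue` is illustrative only -- zix performs the equivalent check inside `addHeader()`:

```zig
const std = @import("std");

// Illustrative header-injection guard: a value containing CR or LF could
// smuggle extra header lines into the response, so it must be rejected.
fn isCleanHeaderValue(value: []const u8) bool {
    return std.mem.indexOfAny(u8, value, "\r\n") == null;
}

test "reject CR/LF in header values" {
    try std.testing.expect(isCleanHeaderValue("text/html; charset=utf-8"));
    try std.testing.expect(!isCleanHeaderValue("ok\r\nSet-Cookie: pwned=1"));
}
```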
Type-safe UDP server and client. The user defines their own extern struct packet; zix handles endianness, size validation, and concurrency.
```zig
const std = @import("std");
const zix = @import("zix");

const Packet = extern struct {
    id: [16]u8,
    kind: i32,
    register: u32,
    position: [3]f64,
};

const MyServer = zix.Udp.Server(Packet);

pub fn main(process: std.process.Init) !void {
    var server = try MyServer.init(.{
        .allocator = std.heap.smp_allocator,
        .ip = "127.0.0.1",
        .port = 9100,
        .port_mode = .REQUIRED,
        .endianness = .LITTLE,
        .broadcast = true, // relay each packet to all connected clients
        .auto_ack = false,
        .disconnect_timeout_ms = 5000,
        .poll_timeout_ms = 2000,
    });
    defer server.deinit();
    try server.run(process.io);
}
```

Client (concurrent send + receive):
```zig
const MyClient = zix.Udp.Client(Packet);

pub fn main(process: std.process.Init) !void {
    const io = process.io;

    var client = try MyClient.init(.{
        .server_ip = "127.0.0.1",
        .server_port = 9100,
        .bind_port = 9101,
        .port_mode = .REQUIRED,
        .endianness = .LITTLE,
        .send_every = 1000,
    }, io);
    defer client.deinit();

    // spawn receive task alongside send loop
    _ = io.concurrent(receiveLoop, .{&client}) catch {};

    const p = Packet{ .id = [_]u8{0} ** 16, .kind = 1, .register = 0, .position = .{ 0.0, 0.0, 0.0 } };
    while (true) {
        client.send(p) catch {};
        try std.Io.sleep(io, std.Io.Duration.fromMilliseconds(1000), .awake);
    }
}
```

See examples/udp_server.zig and examples/udp_client.zig for full working examples with broadcast and configurable ports. For design details see docs/hld-udp.md.
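Size validation works because an `extern struct` has a fixed C-ABI layout, so the wire size of the user's packet is deterministic at comptime. A pure-std check for the `Packet` above (every offset here already aligns, so no padding is inserted):

```zig
const std = @import("std");

// The user-defined packet from the example above. As an extern struct its
// size and field offsets are fixed by the C ABI: 16 + 4 + 4 + 24 = 48 bytes.
const Packet = extern struct {
    id: [16]u8,
    kind: i32,
    register: u32,
    position: [3]f64,
};

test "wire size is deterministic" {
    try std.testing.expectEqual(@as(usize, 48), @sizeOf(Packet));
    try std.testing.expectEqual(@as(usize, 24), @offsetOf(Packet, "position"));
}
```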
```shell
zig build unit-test        # unit tests
zig build integration-test # integration tests
zig build test-all         # both
```

`zig build` alone does not run tests. See docs/tests.md for coverage details.
| Scope | Allocator | Lifetime |
|---|---|---|
| Router route list | `config.allocator` (caller-owned) | Process |
| Read / write I/O buffers | `smp_allocator` | Connection |
| Per-request allocations (`ctx.allocator`) | Per-connection `ArenaAllocator`, reset each request | Request |
Handlers receive ctx.allocator -- an arena reset between requests. Any allocation made inside a handler is automatically reclaimed at the end of the request without any free call.
config.allocator (router storage) is append-only — ArenaAllocator is the recommended choice. All route allocations are freed together when server.deinit() is called.
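The per-request arena pattern can be sketched with std alone -- allocate freely during a "request", then a single reset reclaims everything (this mirrors what the server does between requests; it is not zix code):

```zig
const std = @import("std");

// Illustrative per-request arena: no individual free calls during the
// request, one reset reclaims everything afterwards.
test "arena reset reclaims per-request allocations without free calls" {
    var arena = std.heap.ArenaAllocator.init(std.testing.allocator);
    defer arena.deinit();
    const a = arena.allocator();

    // during the request: allocate freely
    const greeting = try std.fmt.allocPrint(a, "hello {s}", .{"zix"});
    try std.testing.expectEqualStrings("hello zix", greeting);

    // between requests the server resets the arena; all of the above is reclaimed
    _ = arena.reset(.retain_capacity);
}
```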
| Scope | Allocator | Lifetime |
|---|---|---|
| Client record list | `config.allocator` (caller-owned) | Server process lifetime |
| Peer snapshot (broadcast) | `config.allocator` | Single packet dispatch |
| Receive buffer | Stack | Single receive loop iteration |
config.allocator must be a general-purpose allocator (e.g. std.heap.smp_allocator). ArenaAllocator is not suitable: the broadcast peer snapshot is allocated and freed per packet — ArenaAllocator.free() is a no-op, so snapshots accumulate unboundedly until the server stops. See docs/hld-udp.md for the full explanation and PoC.
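The pitfall can be reproduced with std alone. In this sketch (illustrative, not zix's dispatch loop), each "packet" allocates a snapshot, something else is allocated after it, and the snapshot is then freed -- but an arena only reclaims its most recent allocation, so capacity keeps growing:

```zig
const std = @import("std");

// Illustrative PoC: ArenaAllocator.free() reclaims nothing once another
// allocation has landed after the freed one, so per-packet snapshots pile up.
test "arena free does not reclaim interleaved allocations" {
    var arena = std.heap.ArenaAllocator.init(std.testing.allocator);
    defer arena.deinit();
    const a = arena.allocator();

    var i: usize = 0;
    while (i < 100) : (i += 1) {
        const snapshot = try a.alloc(u8, 64); // per-packet peer snapshot
        _ = try a.alloc(u8, 8); // another allocation lands after it
        a.free(snapshot); // no-op: snapshot is no longer the latest allocation
    }
    // capacity kept growing even though every snapshot was "freed"
    try std.testing.expect(arena.queryCapacity() >= 100 * 64);
}
```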
For full memory details see docs/hld-http.md and docs/hld-udp.md. For threading models see docs/concurrency.md.