# zunit: A Zig Test Runner with Lifecycle Hooks, JUnit XML, and CI-Ready Reporting
If you have written anything non-trivial in Zig, you have probably run into the same wall I did: the built-in `zig test` is great for a quick unit test, but the moment your suite grows beyond arithmetic you start missing things every other language’s test framework gives you for free — `beforeAll`, `afterEach`, per-test timing, a JUnit XML report your CI can actually read, the ability to run setup once per file instead of repeating it in every test body.
This is exactly why I built zunit: a custom test runner and lifecycle library for Zig that replaces the built-in runner, gives you a full hook lifecycle (global and per-file), writes JUnit-compatible XML for GitHub Actions / Jenkins / GitLab, and — as of v2.1 — handles multi-binary test suites cleanly so you can fan `zig build test` out across many executables without losing reports.
This post walks through what zunit is, why it exists, how it hooks into Zig’s build system, and how you’d use it in a real project. If you write Zig and you care about testing, this is for you.
## Why the Built-In Zig Test Runner Is Not Enough
Let me be direct: the Zig standard library’s testing story is excellent for what it is — it gives you `test "name" { ... }` blocks, `std.testing.expect*`, and a runner that compiles them all into a single binary and executes them. That’s wonderful for a library with a dozen tests.
But once your project grows, you start feeling the gaps:
- No setup/teardown hooks. Every test that needs a database, a temp directory, or a shared fixture has to build it itself. Forgetting to tear down means your tests leak state between runs.
- No per-file “setup once” mechanism. If you want to seed a dataset that ten tests depend on, you do it ten times. Or you fight Zig’s lack of `@setCold`-style suite context.
- No machine-readable report. GitHub Actions, Jenkins, GitLab, TeamCity — they all read the de-facto-standard JUnit XML schema. Zig’s default runner produces human-readable text and an exit code. Your CI just sees “green” or “red” and nothing per-test.
- No per-test timing. You can’t tell whether your suite has a 2ms outlier dragging the p99 without writing timing scaffolding yourself.
- No multi-binary consolidation. As soon as you split tests into multiple `b.addTest(...)` binaries (which is a totally reasonable thing to do in a real project), any shared output file gets clobbered by whichever binary finishes last.
These aren’t fatal flaws — the Zig core team is deliberate about keeping the standard library small, and test frameworks belong in the ecosystem, not in std. That’s the gap zunit fills.
## What zunit Gives You
Here’s the feature set in one glance:
- A full test runner that replaces Zig’s built-in one. You get all the test functions via `builtin.test_functions`, drive them yourself, and own the exit code.
- Per-file hooks — `beforeAll`, `afterAll`, `beforeEach`, `afterEach` declared as named test blocks, automatically scoped to the file they live in.
- Global hooks — `zunit:beforeAll`, `zunit:afterAll`, etc., that run once for the entire suite. You can also pass them as function pointers in config.
- Configurable failure handling — when a hook errors, choose whether to abort the process, skip the affected scope, or continue.
- Three output styles — minimal summary, verbose per-test, or verbose with nanosecond-precision timing.
- JUnit-compatible XML report — drop it into GitHub Actions’ `dorny/test-reporter`, Jenkins’ JUnit plugin, or GitLab’s `junit` artifacts and get per-test pass/fail in your CI dashboard.
- `--output-file` CLI flag — set the report path at runtime without recompiling.
- Memory leak detection — resets `std.testing.allocator_instance` around every test, reports leaks as `LEAK` failures, and matches the behaviour of the built-in runner.
- Multi-binary consolidation (v2.1) — fan out `zig build test` across many binaries, each writes a fragment, the last one to finish merges them atomically into a single JUnit file. No races, no clobbering, no `-- --output-file` passthrough needed.
It’s MIT-licensed, has zero dependencies outside std, and lives at github.com/dariogriffo/zunit.
## How It Actually Works: Replacing Zig’s Default Test Runner
This is the part that surprises people the first time they see it — Zig lets you swap the default test runner by pointing your build at a file that provides `pub fn main()`. That file is the runner. It receives every test function as a slice via `@import("builtin").test_functions`, and it is responsible for calling them, tracking results, printing output, and setting the process exit code.
zunit’s job is to be that file for you. When you wire up your project, you write a tiny `test_runner.zig` like this:

```zig
const std = @import("std");
const zunit = @import("zunit");

pub fn main(init: std.process.Init) !void {
    try zunit.run(init.io, .{
        .on_global_hook_failure = .abort,
        .on_file_hook_failure = .skip_remaining,
        .output = .verbose_timing,
        .output_file = try zunit.outputFileArg(
            init.arena.allocator(),
            init.minimal.args,
        ),
    });
}
```
That’s it. `zunit.run(...)` walks `builtin.test_functions`, classifies each one (normal test vs. hook vs. global hook), runs them in the correct order, manages the `std.testing.allocator_instance` lifecycle, captures errors, formats output, and writes the XML report if you asked for one.
**Why `std.process.Init`?** In Zig 0.16, clocks and file I/O go through the `std.Io` interface, and command-line arguments arrive via `std.process.Init`. zunit needs both, so its `main` takes the full `Init` and forwards `init.io` and `init.minimal.args`. If you’re on Zig 0.15.2, use `v1.0.0` of zunit — the API there takes no parameters.
## Installation
zunit pins against specific Zig versions because the standard library has evolved meaningfully between releases. Pick the tag that matches your compiler.
### Zig 0.16.0 and later

```sh
zig fetch --save git+https://github.com/dariogriffo/zunit#v2.0.0
```
### Zig 0.15.2

```sh
zig fetch --save git+https://github.com/dariogriffo/zunit#v1.0.0
```
`zig fetch` writes both the URL and the integrity hash to your `build.zig.zon`, so you don’t have to compute it yourself. If you need Zig itself on Debian, I have a separate guide: How to Install Zig on Debian.
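After the fetch, the dependency entry in `build.zig.zon` looks roughly like this (the `url` matches the command above; the `hash` shown here is a placeholder — `zig fetch --save` writes the real value for you):

```zig
.dependencies = .{
    .zunit = .{
        .url = "git+https://github.com/dariogriffo/zunit#v2.0.0",
        // The integrity hash is computed and written by `zig fetch --save`;
        // never type it by hand.
        .hash = "...",
    },
},
```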
Then in `build.zig`, pull in the module and point your test step at the runner:

```zig
const zunit_dep = b.dependency("zunit", .{ .target = target, .optimize = optimize });
const zunit_mod = zunit_dep.module("zunit");

const tests = b.addTest(.{
    .root_module = b.createModule(.{
        .root_source_file = b.path("src/root.zig"),
        .target = target,
        .optimize = optimize,
    }),
    .test_runner = .{
        .path = b.path("test_runner.zig"),
        .mode = .simple,
    },
});
tests.root_module.addImport("zunit", zunit_mod);

const run_tests = b.addRunArtifact(tests);
if (b.args) |args| run_tests.addArgs(args); // forward `-- ...` to the runner

const test_step = b.step("test", "Run tests");
test_step.dependOn(&run_tests.step);
```
The `if (b.args) |args| run_tests.addArgs(args);` line is the bit people miss. Without it, `zig build test -- --output-file results.xml` silently drops the flag before it ever reaches the binary.
## Writing Hooks: The Two-Tier Lifecycle
zunit gives you two orthogonal axes for hooks: global vs. per-file, and naming-convention vs. programmatic. Here’s how each one plays out.
### Per-file hooks (naming convention)
Drop a `test "beforeAll"` block anywhere in a `.zig` source file. zunit walks `builtin.test_functions`, looks at the module path prefix embedded in each test’s fully-qualified name, and scopes the hook to tests in that same file. No macros, no registration, no attributes.
```zig
const std = @import("std");

test "beforeAll" {
    std.debug.print("[db] setting up\n", .{});
}

test "afterAll" {
    std.debug.print("[db] tearing down\n", .{});
}

test "beforeEach" {
    // runs before every test in this file
}

test "afterEach" {
    // runs after every test in this file
}

test "insert: single row" {
    // actual test — preceded by beforeEach, followed by afterEach
}
```
### Global hooks (naming convention)

Prefix with `zunit:` to run across every file in the suite. Put them wherever makes sense — `src/root.zig` is a good default:
```zig
test "zunit:beforeAll" {
    // runs once before the entire suite starts
}

test "zunit:afterAll" { /* once at the end */ }
test "zunit:beforeEach" { /* before every test in every file */ }
test "zunit:afterEach" { /* after every test in every file */ }
```
### Global hooks (programmatic)

If your setup logic is better expressed as a function (because it needs to be shared, or because you want static analysis on the reference), pass a function pointer in the config. These run before the corresponding `zunit:...` naming-convention hooks, so you can layer them.
```zig
fn setupDatabase() !void { /* spin up a test DB */ }
fn teardownDatabase() !void { /* tear it down */ }
fn resetState() !void { /* reset per-test */ }
fn flushLogs() !void { /* after each */ }

pub fn main(init: std.process.Init) !void {
    try zunit.run(init.io, .{
        .before_all = setupDatabase,
        .after_all = teardownDatabase,
        .before_each = resetState,
        .after_each = flushLogs,
    });
}
```
### Execution order
Here’s the full order zunit uses per run. This is worth keeping in mind when a hook fires at a time you didn’t expect:
```text
[suite start]
  config.before_all        ← programmatic, once
  zunit:beforeAll          ← named global, once
  [for each file, in discovery order]
    beforeAll              ← named per-file, once per file
    [for each test in this file]
      config.before_each   ← programmatic global
      zunit:beforeEach     ← named global
      beforeEach           ← named per-file
      >>> TEST <<<
      afterEach            ← named per-file
      zunit:afterEach      ← named global
      config.after_each    ← programmatic global
    afterAll               ← named per-file, once per file
  zunit:afterAll           ← named global, once
  config.after_all         ← programmatic, once
[suite end]
```
Hook blocks (`beforeAll`, `afterAll`, etc., with or without the `zunit:` prefix) are never counted in the pass/fail/skip totals — they exist to set up the environment, not to be reported as tests.
## What Should Happen When a Hook Fails?
Different teams have different opinions here, so zunit makes it configurable. The `OnHookFailure` enum has three values:
| Value | Behaviour |
|---|---|
| `.abort` | Print the error and exit the process immediately |
| `.skip_remaining` | Skip all remaining tests in the affected scope (file for per-file hooks, entire suite for global hooks) |
| `.@"continue"` | Log the error and keep running |
The defaults are what I’ve found work best in practice: `on_global_hook_failure = .abort` (if your DB failed to start, there’s no point running anything), `on_file_hook_failure = .skip_remaining` (if one file’s fixture is broken, skip that file’s tests but keep running the rest of the suite).
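If you would rather log and keep running (say, in a nightly job where you want the full failure picture), a runner configured for the lenient mode looks like this, sketched from the config fields above:

```zig
const std = @import("std");
const zunit = @import("zunit");

pub fn main(init: std.process.Init) !void {
    try zunit.run(init.io, .{
        // Hook errors are logged; no tests are aborted or skipped.
        .on_global_hook_failure = .@"continue",
        .on_file_hook_failure = .@"continue",
    });
}
```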
## Output Styles and Per-Test Timing

zunit supports three console styles via the `output` config field:
| Value | What you get |
|---|---|
| `.minimal` | A single final line: `N passed N failed N skipped` |
| `.verbose` | One `PASS` / `FAIL` / `SKIP` line per test |
| `.verbose_timing` | Same as verbose, plus elapsed time per test (`487ns` / `1.2µs` / `1.234ms`) |
Here’s what `.verbose_timing` looks like in practice:

```text
[db] setting up
PASS insert: single row    487ns
PASS insert: batch         1.2µs
FAIL delete: cascade       312ns
SKIP update: soft-delete
[db] tearing down

2 passed 1 failed 1 skipped
```
The timing comes from `std.Io`’s monotonic clock, so you get nanosecond precision without giving up the `std.Io` abstraction that Zig 0.16 standardised on.
## CI-Ready JUnit XML Reports
This is probably the feature that has the highest practical return for any team. Zig’s built-in runner tells your CI “pass” or “fail” and that’s it. zunit writes a JUnit-compatible XML report you can feed into any CI dashboard.
Set `output_file` to any path ending in `.xml` and you get JUnit XML; anything else gets a plain-text mirror of the console output.

```xml
<?xml version="1.0" encoding="UTF-8"?>
<testsuites name="zunit" tests="4" failures="1" errors="0" skipped="1" time="0.000002001">
  <testsuite name="db" tests="4" failures="1" errors="0" skipped="1" time="0.000002001">
    <testcase name="insert: single row" classname="db" time="0.000000487"/>
    <testcase name="insert: batch" classname="db" time="0.000001200"/>
    <testcase name="delete: cascade" classname="db" time="0.000000312">
      <failure message="TestExpectedEqual" type="failure"/>
    </testcase>
    <testcase name="update: soft-delete" classname="db" time="0.000000000">
      <skipped/>
    </testcase>
  </testsuite>
</testsuites>
```
Times are in seconds with nanosecond precision. Test names and classnames are XML-escaped automatically, so you don’t have to worry about a test called "< > & ' \"" blowing up the report.
### GitHub Actions example
zunit’s repo ships a ready-to-use workflow. The short version for your own pipeline:
```yaml
- name: Run tests
  run: zig build test -- --output-file test-results.xml

- name: Upload test results
  uses: actions/upload-artifact@v4
  if: always()
  with:
    name: test-results
    path: test-results.xml

- name: Publish test report
  uses: dorny/test-reporter@v1
  if: always()
  with:
    name: Test Results
    path: test-results.xml
    reporter: java-junit
    fail-on-error: false
```
You’ll get per-test pass/fail surfaced in the PR Checks tab, an artifact you can download, and a markdown summary on the job page. The same XML works with Jenkins (JUnit plugin), GitLab CI (junit artifact reports), and any other tool that reads the standard JUnit schema.
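For GitLab CI, the equivalent wiring is an `artifacts:reports:junit` entry. A minimal sketch, assuming a job image that already has Zig on the `PATH`:

```yaml
test:
  script:
    - zig build test -- --output-file test-results.xml
  artifacts:
    when: always            # upload the report even when tests fail
    reports:
      junit: test-results.xml
```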
## The Multi-Binary Problem (and How v2.1 Solves It)
Here’s a scenario that looks innocent and ends up being painful:
You have a large codebase. You don’t want one gigantic test binary that recompiles everything on every change, so you add a separate `b.addTest(...)` per test file — `math_test.zig` becomes one binary, `strings_test.zig` another, and so on. `zig build test` now fans out across N processes running in parallel, and compilation is fast again.

Except… every process writes to the same `--output-file test-results.xml` path. They race each other. Only the last writer survives. Your “unified” report contains the results of one out of N binaries, and the others are silently lost.
zunit v2.1 fixes this with automatic fragment consolidation. You describe the suite at the build level:

```zig
const std = @import("std");
const zunit_build = @import("zunit"); // build-time import

pub fn build(b: *std.Build) void {
    const target = b.standardTargetOptions(.{});
    const optimize = b.standardOptimizeOption(.{});
    const zunit_dep = b.dependency("zunit", .{ .target = target, .optimize = optimize });

    const suite = zunit_build.testSuite(b, zunit_dep, .{
        .target = target,
        .optimize = optimize,
        .output_file = "test-results.xml", // final merged output
        .output_dir = "zig-out/test-frags", // per-binary fragments
    });
    suite.addFile("tests/foo_test.zig");
    suite.addFile("tests/bar_test.zig");
    // add as many as you want

    const test_step = b.step("test", "Run all tests");
    test_step.dependOn(suite.step());
}
```
Run it with plain `zig build test` — no `-- --output-file` flag, no shell glue. Under the hood:

- `testSuite` generates a shared `run_id` (a hex timestamp) at build time.
- For each `addFile`, it creates a test binary with a generated runner that reads `--output-dir`, `--run-id`, and `--consolidate-artifacts` from its argv.
- When a binary finishes its tests, it writes a JUnit fragment to `<output_dir>/<run_id>/<pid>.xml`.
- It then acquires an exclusive file lock on `<output_dir>/<run_id>/.zunit-merge.lock`, reads all `*.xml` fragments in that directory, merges their `<testsuite>` elements into a single `<testsuites>` root with summed totals, and atomically renames the result to `<output_file>`.
- The lock is released. The merged file always reflects the union of all fragments written so far — the last writer is always correct, regardless of which binary finishes first.
The exit code of each binary still reflects that binary’s own failures only, so `zig build test` fails fast if any binary has a failing test — you don’t lose the fail-fast behaviour just because the reporting is merged.
If you need finer control (custom runners, non-standard build layouts), the underlying CLI flags are public API:
| Flag | Config field | Purpose |
|---|---|---|
| `--output-file=<path>` | `output_file` | Final report path |
| `--output-dir=<path>` | `output_dir` | Fragment directory |
| `--run-id=<id>` | `run_id` | Shared run identifier |
| `--consolidate-artifacts[=true]` | `consolidate_artifacts` | Enable merge-on-exit |
## Memory Leak Detection
Zig’s built-in testing allocator tracks every allocation and surfaces leaks at the end of a test. zunit preserves this behaviour — and in fact makes it more useful, because it resets `std.testing.allocator_instance` before every test and checks for leaks after. A test that leaks is reported as `LEAK` and counted as a failure:

```text
LEAK my allocating test
```
That means a leak in test #3 cannot cascade into a false positive leak report for test #4, which is a real failure mode when the allocator isn’t properly scoped.
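To see the detector fire, a deliberately leaking test is enough. The allocation below is never freed, so the post-test leak check fails and zunit reports the test as `LEAK`:

```zig
const std = @import("std");

test "leaky on purpose" {
    // Allocated with the tracking test allocator and never freed:
    // the post-test leak check flags this as a LEAK failure.
    _ = try std.testing.allocator.alloc(u8, 16);
}
```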
## Configuration Reference

Everything goes through a single `Config` struct. All fields have defaults; specify only what you need to change.
```zig
try zunit.run(init.io, .{
    .on_global_hook_failure = .abort, // default
    .on_file_hook_failure = .skip_remaining, // default
    .output = .verbose, // default
    .output_file = null, // default (no file output)

    // multi-binary fragment paths (v2.1+)
    .output_dir = null,
    .run_id = null,
    .consolidate_artifacts = false,

    // programmatic hooks
    .before_all = null,
    .after_all = null,
    .before_each = null,
    .after_each = null,
});
```
Reading `--output-file` from argv at runtime uses the `outputFileArg` helper, paired with the process arena so the parsed path lives until exit without manual cleanup:

```zig
.output_file = try zunit.outputFileArg(
    init.arena.allocator(),
    init.minimal.args,
),
```

Both `--output-file <path>` and `--output-file=<path>` are accepted.
## Why I Built This
I’ve been writing backend .NET for fifteen years and distributed systems before that, and when I started picking up Zig seriously I noticed something odd: the language is excellent, the standard library is elegant, the community is sharp — and yet every Zig project I read was either doing heroic manual test scaffolding or shrugging and accepting that their CI would never be able to tell them which test failed.
zunit is the library I wish had existed when I wrote my first Zig project. It leans into Zig’s test-runner swap mechanism instead of fighting it, keeps the hook model close to what JVM/.NET/JS developers already know (xUnit, JUnit, Jest, Vitest), and focuses on the two things the built-in runner doesn’t do well: lifecycle and reporting.
If you write Zig, and especially if you’re on a team where CI dashboards matter and “my test is slow, which one?” is a question anyone ever asks, give it a try.
## Getting Started in 60 Seconds
```sh
# 1. Fetch zunit (Zig 0.16+)
zig fetch --save git+https://github.com/dariogriffo/zunit#v2.0.0

# 2. Add the wiring to build.zig (see above)
# 3. Create test_runner.zig with the three-line pub fn main shown above

# 4. Run
zig build test
zig build test -- --output-file test-results.xml
```
That’s it. You now have `beforeAll` / `afterEach` / per-test timing / JUnit XML / CI integration in a Zig project.
## Resources
- zunit on GitHub — source, issues, releases
- Zig official website and Zig documentation
- How to Install Zig on Debian — if you’re still setting up your toolchain
- `dorny/test-reporter` — the GitHub Actions step that turns JUnit XML into a checks tab report
- JUnit XML schema — the de-facto standard zunit’s XML output targets
Issues, feature requests, and PRs are welcome on the zunit repository. If you ship it in a real project, I’d love to hear about it — open a discussion on GitHub and tell me what worked, what didn’t, and what you want next.
Happy testing in Zig!