@mbrock
@mbrock
Created May 10, 2026 17:24

Bun Zig build timing investigation

Date: 2026-05-10

Repo: oven-sh/bun

Commit tested: fe735f8f0522726c84c50b7473b867a52a2370d4

Zig compiler: Bun's pinned oven-sh/zig fork at 04e7f6ac1e009525bc00934f20199c68f04e0a24, reporting 0.15.2.

Host: Linux x86_64, local development machine.

What was being measured

The focus was the bun-zig build target, not a full linked Bun binary. That target builds the Zig object file or object shards consumed by the larger Bun link.

The investigation compared:

  • normal local release LLVM codegen
  • experimental Zig self-hosted backend via Bun's existing -Dno_llvm path
  • the same native backend with Bun's debug-info stripping path enabled
  • cache-hit behavior versus forced Zig recompilation
  • local sharded release builds versus CI-like single-object LLVM builds

The build-script patch under test added these local flags:

--zig-time-report=true
--zig-no-llvm=true
--zig-strip-debug-info=true

--zig-strip-debug-info=true sets BUN_BUILD_FAST=1 for the Zig build, which reuses Bun's existing build.zig path that strips debug info from the root module. This matters because the no-LLVM path without stripping currently trips a native-backend DWARF assert:

css.small_list.SmallList(css.values.easing.EasingFunction,1).Data ... : 24 != 20

That assert is from Zig's DWARF emission path, not from Bun's source-level assertions.

Commands

Representative local-release LLVM build:

rm -rf /tmp/bun-prof-llvm-a /tmp/bun-prof-cache-llvm-a
/usr/bin/time -f 'OUTER elapsed=%E status=%x maxrss=%MKB user=%U sys=%S' \
  bun scripts/build.ts \
    --profile=release \
    --build-dir=/tmp/bun-prof-llvm-a \
    --cache-dir=/tmp/bun-prof-cache-llvm-a \
    --target=bun-zig

Forced LLVM recompilation with generated code already present:

rm -f /tmp/bun-prof-llvm-a/bun-zig*.o
rm -rf /tmp/bun-prof-llvm-a/cache/zig/local
/usr/bin/time -f 'OUTER elapsed=%E status=%x maxrss=%MKB user=%U sys=%S' \
  bun scripts/build.ts \
    --profile=release \
    --build-dir=/tmp/bun-prof-llvm-a \
    --cache-dir=/tmp/bun-prof-cache-llvm-a \
    --target=bun-zig

No-LLVM plus stripped debug info:

rm -rf /tmp/bun-prof-nollvm-a /tmp/bun-prof-cache-nollvm-a
/usr/bin/time -f 'OUTER elapsed=%E status=%x maxrss=%MKB user=%U sys=%S' \
  bun scripts/build.ts \
    --profile=release \
    --build-dir=/tmp/bun-prof-nollvm-a \
    --cache-dir=/tmp/bun-prof-cache-nollvm-a \
    --zig-no-llvm=true \
    --zig-strip-debug-info=true \
    --target=bun-zig

CI-like single-object LLVM build, with LTO explicitly disabled so it remains comparable to the local release no-LTO measurement:

rm -rf /tmp/bun-prof-llvm-1cg /tmp/bun-prof-cache-llvm-1cg
/usr/bin/time -f 'OUTER elapsed=%E status=%x maxrss=%MKB user=%U sys=%S' \
  bun scripts/build.ts \
    --profile=release \
    --ci=true \
    --lto=false \
    --build-dir=/tmp/bun-prof-llvm-1cg \
    --cache-dir=/tmp/bun-prof-cache-llvm-1cg \
    --target=bun-zig

Direct native-backend sanity check without Bun's build wrapper environment:

env BUN_BUILD_FAST=1 \
  ZIG_LOCAL_CACHE_DIR=/tmp/bun-direct-nollvm-nosema/cache/zig/local \
  ZIG_GLOBAL_CACHE_DIR=/tmp/bun-direct-cache-nollvm-nosema/zig/global \
  vendor/zig/zig build obj \
    --cache-dir /tmp/bun-direct-nollvm-nosema/cache/zig/local \
    --global-cache-dir /tmp/bun-direct-cache-nollvm-nosema/zig/global \
    --zig-lib-dir vendor/zig/lib \
    --prefix /tmp/bun-direct-nollvm-nosema \
    -Dobj_format=obj \
    -Dtarget=x86_64-linux-gnu \
    -Doptimize=ReleaseFast \
    -Dcpu=haswell \
    -Denable_logs=false \
    -Denable_asan=false \
    -Denable_fuzzilli=false \
    -Denable_valgrind=false \
    -Denable_tinycc=true \
    -Dlto=false \
    -Dno_llvm=true \
    -Duse_mimalloc=true \
    -Dllvm_codegen_threads=0 \
    -Dversion=1.3.14 \
    -Dreported_nodejs_version=24.3.0 \
    -Dcanary=1 \
    -Dcodegen_path=/tmp/bun-prof-nollvm-a/codegen \
    -Dcodegen_embed=true \
    -Dsha=fe735f8f0522726c84c50b7473b867a52a2370d4 \
    --prominent-compile-errors \
    --summary all

The same direct command was repeated with ZIG_PARALLEL_SEMA=1.
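Concretely, the repeated run looked like the following sketch; the directory names here are illustrative, and every flag passed to `zig build obj` is identical to the full command above:

```shell
# Repeat of the direct no-LLVM build, adding parallel semantic analysis.
# Only ZIG_PARALLEL_SEMA=1 is new relative to the previous command; fresh
# cache/prefix directories are used so no cached work is reused.
env ZIG_PARALLEL_SEMA=1 \
  BUN_BUILD_FAST=1 \
  ZIG_LOCAL_CACHE_DIR=/tmp/bun-direct-nollvm-sema/cache/zig/local \
  ZIG_GLOBAL_CACHE_DIR=/tmp/bun-direct-cache-nollvm-sema/zig/global \
  vendor/zig/zig build obj \
    --cache-dir /tmp/bun-direct-nollvm-sema/cache/zig/local \
    --global-cache-dir /tmp/bun-direct-cache-nollvm-sema/zig/global \
    --zig-lib-dir vendor/zig/lib \
    --prefix /tmp/bun-direct-nollvm-sema \
    -Dno_llvm=true # ...plus the same -D options as the previous command
```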

Timing matrix

| Configuration | Cache state | Object shape | Outer wall | Zig summary | Max RSS |
| --- | --- | --- | --- | --- | --- |
| Local release LLVM | fresh temp build/cache | 32 shards, bun-zig.{0..31}.o | 41.60s | 35s | 5G |
| Local release LLVM | object + local Zig cache removed, codegen present | 32 shards | 37.28s | 36s | 5G |
| Local release LLVM | object removed, Zig cache intact | cached reinstall | 0.36s | cached 15ms | 41M (Zig summary) |
| no-LLVM + stripped debug info | fresh temp build/cache | single bun-zig.o | 12.00s | 5s | 1G |
| no-LLVM + stripped debug info | object + local Zig cache removed, codegen present | single bun-zig.o | 6.98s | 5s | 1G |
| no-LLVM + stripped debug info | object removed, Zig cache intact | cached reinstall | 0.35s | cached 16ms | 41M (Zig summary) |
| CI-like LLVM (--ci=true --lto=false) | fresh temp build/cache | single bun-zig.o | 6m37.40s | 6m | 9G |
| Direct no-LLVM + strip, no ZIG_PARALLEL_SEMA | fresh temp build/cache, codegen present | single bun-zig.o | 30.25s | 27s | 1G |
| Direct no-LLVM + strip, ZIG_PARALLEL_SEMA=1 | fresh temp build/cache, codegen present | single bun-zig.o | 7.79s | 5s | 2G |

Output sizes and cache footprint

Local release LLVM, 32-shard output:

267M total du size across bun-zig.{0..31}.o
279,125,696 bytes total across bun-zig.{0..31}.o
366M local Zig cache
40M global Zig cache

CI-like single-object LLVM output:

261M bun-zig.o
360M local Zig cache
40M global Zig cache

No-LLVM stripped output:

176M bun-zig.o
231M local Zig cache
40M global Zig cache

The native-backend object is smaller than the LLVM object in this experiment, but size alone says nothing about runtime quality or production suitability. It only shows that the object target was emitted successfully.
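As a quick sanity check on those figures, the relative sizes work out to roughly two-thirds:

```shell
# Compare the no-LLVM object (176M) against the single-object LLVM
# output (261M) and the 32-shard total (267M), using the du sizes above.
awk 'BEGIN {
  printf "native vs single-object LLVM: %.0f%%\n", 176/261*100
  printf "native vs sharded LLVM total: %.0f%%\n", 176/267*100
}'
```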

Analysis

The most important correction from the measurements is that the gap between the initial 30s native-backend number and the later 5s Zig summary was not explained by filesystem warmth or cache effects.

The major difference was ZIG_PARALLEL_SEMA=1.

Bun's build wrapper sets that environment variable for its Zig build steps. My earlier direct zig build experiment did not set it. Repeating the direct no-LLVM command without it reproduced the slow result:

wall 30.25s, Zig compile 27s

Adding only ZIG_PARALLEL_SEMA=1 changed the same direct build to:

wall 7.79s, Zig compile 5s

So the fast no-LLVM result appears real for the patched build-wrapper path, but it depends on Bun's Zig fork and its parallel semantic-analysis behavior.

The local LLVM release build is also not a single monolithic LLVM compile. For non-CI local release builds, Bun sets llvm_codegen_threads to the host's available parallelism. On this machine that produced 32 object shards:

bun-zig.{0..31}.o

That sharding spreads the LLVM work across cores, converting a large amount of CPU time into much less wall-clock time. The local LLVM release result was roughly:

37-42s wall, about 5 minutes CPU, 5G RSS
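The ratio of CPU time to wall time gives a rough effective parallelism for the sharded path:

```shell
# Effective parallelism of the sharded LLVM build: ~5 minutes (~300s)
# of CPU work completed in ~37s of wall time, per the run above.
awk 'BEGIN { printf "effective parallelism: ~%.0fx\n", 300/37 }'
```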

The single-object LLVM comparison is much slower:

6m37s wall, 9G RSS

That makes the local sharded LLVM path look like a very effective build-time optimization. It also explains why "release build" needs qualification when talking about Bun's Zig object: local release and CI-like single-object release can have radically different compile-time behavior.

Zig --time-report

--time-report is useful mainly for the LLVM backend. In the local release LLVM run, the report showed time concentrated in standard LLVM optimization passes:

InstCombinePass        about 22%
SimplifyCFGPass        about 8%
SROAPass               about 6%
GVNPass                about 5%
IPSCCPPass             about 5%
DSEPass                about 4%
EarlyCSEPass           about 4%
InlinerPass            about 4%

For the native backend, --time-report did not expose a comparable detailed breakdown. It mostly produced the normal Zig build summary plus the web UI message.

The patched stream wrapper can run --zig-time-report=true without leaving the Zig web UI blocking the build forever: after a successful build summary and a short quiet period, it terminates the Zig process group and treats that as success.

One measurement caveat: in that detached web-UI mode, /usr/bin/time around the wrapper no longer captures child RSS and CPU reliably. For resource numbers in --zig-time-report mode, Zig's own summary line is more useful, or the whole command should be run under cgroup accounting, for example via systemd-run.
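A minimal sketch of that cgroup approach, assuming a systemd user session is available (the build command is the local release run from earlier):

```shell
# Run the whole build inside a transient systemd scope so memory and
# CPU accounting happens at the cgroup level, independent of how the
# wrapper detaches or kills child processes.
systemd-run --user --scope \
  -p MemoryAccounting=yes -p CPUAccounting=yes \
  bun scripts/build.ts --profile=release --target=bun-zig

# On cgroup v2, the scope's peak memory can be read from its
# memory.peak file under /sys/fs/cgroup while the scope is alive.
```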

Current interpretation

The experimental native backend plus stripped debug info is dramatically faster for building the bun-zig object target:

no-LLVM + strip: about 7s forced recompile
local sharded LLVM: about 37s forced recompile
single-object LLVM: about 6m37s fresh compile
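Expressed as speedup factors, using the exact wall times from the timing matrix:

```shell
# Speedup of the no-LLVM + strip forced recompile (6.98s) over the two
# LLVM configurations (37.28s forced recompile; 6m37.40s = 397.40s fresh).
awk 'BEGIN {
  printf "vs local sharded LLVM:  %.1fx\n", 37.28/6.98
  printf "vs single-object LLVM: %.1fx\n", 397.40/6.98
}'
```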

But this should not be read as "Bun can switch production builds to no-LLVM now." The measurement only proves that the Zig object target can be emitted quickly under this configuration. It does not validate the final link, runtime behavior, generated code quality, platform coverage, or debug-info support.

The practical build-time takeaways are:

  • Bun's local release Zig build is already optimized heavily by LLVM sharding.
  • ZIG_PARALLEL_SEMA=1 is crucial for fast native-backend compilation in this fork.
  • Native backend plus stripped debug info avoids the observed DWARF crash.
  • Zig cache-hit behavior is extremely fast for both LLVM and native backend.
  • --time-report helps explain LLVM time, but it is not a full build profiler.
  • For whole-build profiling, systemd-run plus cgroup accounting is still a good lightweight complement to Zig's own summaries.