# PRISM compile pass
A build-time SWC-style pass that folds static / prop-driven JSX
subtrees into direct string concatenations before the bundler
emits jsx() runtime calls. Equivalent in spirit to Marko's compiled
templates, but operating on existing TSX without any authoring
change.
Default-on for any site with [framework] type = "prism". Opt
out per-build with BEXT_PRISM_COMPILE=0. Runtime-verified
byte-equivalent at every layer (source rewrite → JS bundle → V8
render → HTTP TTFB).
## What it folds (six tiers)
The pass is strictly additive: anything it can't statically prove safe
falls through to the existing runtime h() path. Each tier builds on top
of the previous; you get the union of all tiers by default.
### Tier 1 — fully-static subtrees

Element tag is a host tag, every attribute is a string literal, and every child is text or another foldable JSX element.

```tsx
// source
<head>
  <meta charSet="utf-8" />
  <title>my-page</title>
</head>

// emitted (literal string at this position)
"<head><meta charset=\"utf-8\"><title>my-page</title></head>"
```
### Tier 2 — static-shape with dynamic JSX expression children

Static element + static attrs + at least one {value} child position. The dynamic value is interpolated verbatim with no escaping, matching the runtime's child semantics.

```tsx
// source
<h1>Hello, {name}!</h1>

// emitted
"<h1>Hello, " + (name) + "!</h1>"
```
### Tier 3a — dynamic attribute values

Including the canonical conditional className:

```tsx
// source
<button className={active ? "on" : "off"}>{label}</button>

// emitted
'<button class="' + __bextEsc(active ? "on" : "off") + '">' + (label) + "</button>"
```
__bextEsc is imported once at file top from
@bext-stack/framework/jsx-runtime — it's the same escapeHtml
function the runtime's formatAttrs uses internally, so attribute
values are escaped byte-identically with or without the pass.
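For illustration only, an escaper of the shape __bextEsc needs might look like this (escapeAttr is a stand-in name; the real function lives in @bext-stack/framework/jsx-runtime and may differ in detail):

```javascript
// Stand-in for __bextEsc: HTML-escape an attribute value.
// Ampersands are replaced first so existing entities aren't double-escaped.
function escapeAttr(value) {
  return String(value)
    .replace(/&/g, "&amp;")
    .replace(/"/g, "&quot;")
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;");
}

// A quote in the dynamic value can no longer break out of the attribute:
const out = '<button class="' + escapeAttr('on" onload="x') + '">ok</button>';
// out === '<button class="on&quot; onload=&quot;x">ok</button>'
```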
### Tier 4 — array.map() unrolling

```tsx
// source
<ul>{items.map(i => <li>{i}</li>)}</ul>

// emitted
"<ul>" + (items).map((i) => "<li>" + (i) + "</li>").join("") + "</ul>"
```
The .join("") matters — it makes the surrounding JSX consume one
string instead of an array of strings. Tier 4+ also handles
multi-arg (item, idx) => and block-bodied i => { return <li/>; }
arrows.
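The role of the .join("") is visible in plain JS:

```javascript
const items = ["a", "b"];

// With .join(""): the child position consumes one string, as the fold emits.
const joined = "<ul>" + items.map((i) => "<li>" + i + "</li>").join("") + "</ul>";

// Without it, the array is coerced through Array.prototype.toString,
// which inserts commas between elements, corrupting the markup.
const unjoined = "<ul>" + items.map((i) => "<li>" + i + "</li>") + "</ul>";
// joined   === "<ul><li>a</li><li>b</li></ul>"
// unjoined === "<ul><li>a</li>,<li>b</li></ul>"
```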
### Tier 5 — zero-prop component inlining

```tsx
// source
function Brand() { return <span class="b">bext</span>; }
function Header() { return <header><Brand /></header>; }
function Page() { return <main><Header /></main>; }

// Page's emitted body
return ("<main><header><span class=\"b\">bext</span></header></main>");
```
Recursive: the whole Page → Header → Brand chain collapses into one
concat, which catches the canonical static layout-shell pattern.
### Tier 5.5 — prop-bearing component inlining

Components with a single destructured object param fold when the call site supplies every required prop:

```tsx
// source
function Card({ title, body }: { title: string; body: string }) {
  return <div class="card"><h2>{title}</h2><p>{body}</p></div>;
}
function Page({ user }: { user: { name: string; bio: string } }) {
  return <Card title={user.name} body={user.bio} />;
}

// Page's emitted body
return ("<div class=\"card\"><h2>" + (user.name) + "</h2><p>" + (user.bio) + "</p></div>");
```
The substitution is identifier-only — {title} in the body gets
replaced with the call-site value (string literal as-is, expression
as a verbatim source span). String-literal call-site attrs like
title="Hello" substitute the unescaped literal; dynamic attrs
become a Dynamic span pointing back to the call-site source.
## What it does NOT fold (intentional bails)

| Pattern | Why |
|---|---|
| `<div {...rest}>` | Spread shape unknowable at build time |
| `function Card(props)` (Ident param) | `props.X` member access requires scope analysis; only destructured `{ a, b }` is supported |
| `{slot}` (component with children) | Slot threading needs scope-aware substitution — deferred |
| `<main>{props.children}</main>` | `props.children` can be `AsyncIterable<string>`; string-coercing it would render `[object AsyncGenerator]` |
| `style={{...}}`, `dangerouslySetInnerHTML={{...}}` | Object-shape attribute semantics — the runtime applies them, the fold can't |
| `i => { console.log(i); return <li/>; }` | Block bodies with side effects bail; only single-return blocks fold |
| `async function Page()`, `function* Page()` | Async/generator components are handled by the streaming runtime, not the fold |
The props.children rule was once stricter (any props reference
caused the whole subtree to bail). It's now children-specific:
props.title, props.amount, props.user.name etc. all fold.
## Empirical results
### Source-level (bext-core::transform::prism_compile)
28 unit tests verify the source rewrite is correct on synthetic
fixtures + the actual sites/prism-demo/src/app/page.tsx. On a
representative page (11 jsx() calls in the unfolded output):
| Tier 1 only | Tier 1+2 | Tier 1+2+3a+4+5+5.5 |
|---|---|---|
| 8 of 11 eliminated (73%) | 11 of 11 (100%) | 11 of 11 |
### Bundle-level (bext-turbopack::prism::tests)
The pass is integrated into bext-turbopack's compile pipeline at
two places: compile_closure (entry source) and
transform_with_analysis_opts (transitive imports). E2E tests compile
the same fixture with and without the pass, count the remaining jsx()
calls, and verify that the helper import is emitted correctly.
### Runtime-verified byte-equivalence
html_byte_equivalence_with_and_without_pass compiles a fixture
twice (env-on / env-off), evaluates both bundles in V8 via
bext_v8::eval::render_prism, and asserts the rendered HTML is
byte-identical. The current fixture renders 237 bytes and exercises all six tiers.
### SSR microbench (release mode)
50-row table fixture, 6885 bytes HTML, 500 iters × 2 runs after 100-iter warmup, V8 release mode:
| Mode | avg | p50 | p95 | p99 | Throughput |
|---|---|---|---|---|---|
| PASS = OFF | 140.2 µs | 108 µs | 200 µs | 1,017 µs | 7,131 renders/s |
| PASS = ON | 59.9 µs | 52 µs | 73 µs | 782 µs | 16,684 renders/s |
| Speedup | 2.34× | 2.1× | 2.7× | 1.3× | 2.3× |
The p95/p50 ratio collapses from ~1.9× (high jitter) to ~1.4× — runtime GC variance basically disappears because the rendered code is just string concat.
The relative speedup narrowed from 2.81× → 2.34× over the prior
measurement because the JS runtime got 3-5× faster across the board
(commit 2782ae6 —
escapeHtml indexOf bail-out, isAsync typeof short-circuit,
formatAttrs for-in, primitive-children fast path). Both PASS=OFF
and PASS=ON improved absolutely, but PASS=OFF improved more
proportionally — the compile pass's edge is now thinner because the
runtime baseline is much closer to it.
### HTTP TTFB on sites/status (live server)
Live ApacheBench-equivalent against the bext-server PRISM dispatcher serving the bext.dev status page:
| Mode | avg | p95 |
|---|---|---|
| `BEXT_PRISM_COMPILE=0` | 17.59 ms | 23.69 ms |
| default | 6.50 ms | 8.39 ms |
| Speedup | 2.71× | 2.82× |
Matches the V8 microbench within noise. HTTP overhead is ~4-5 ms constant; the SSR delta dominates real-world TTFB.
## Activation rule
```text
options.jsx_import_source == Some("@bext-stack/framework")
  && filename ends with .tsx | .jsx
  && BEXT_PRISM_COMPILE != "0" | "false" | "off"
```
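Expressed as a JS predicate (a sketch only; the real check lives in the Rust pass, and prismCompileEnabled is a made-up name):

```javascript
// Sketch of the activation rule above; all names are illustrative.
function prismCompileEnabled(jsxImportSource, filename, env) {
  const disabled = ["0", "false", "off"];
  return (
    jsxImportSource === "@bext-stack/framework" &&
    /\.(tsx|jsx)$/.test(filename) &&
    !disabled.includes(env.BEXT_PRISM_COMPILE ?? "")
  );
}
```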
So no config required — every site with [framework] type = "prism"
picks it up automatically on next compile. Two PRISM sites benefit
today: sites/prism-demo and sites/status.
Adding the pass to a new site: nothing required beyond
[framework] type = "prism" in bext.config.toml and
jsxImportSource = "@bext-stack/framework" in tsconfig.json.
## Verifying on your machine
```shell
# Source-level unit tests (fast, ~1s after warm build)
cd ~/bext && rustup run nightly-2026-04-02 cargo test \
  -p bext-core --lib transform::prism_compile

# End-to-end compile-diff + V8 byte-equivalence (~2 min cold rebuild)
cd ~/bext && rustup run nightly-2026-04-02 cargo test \
  -p bext-turbopack --lib prism::tests

# Release-mode SSR perf microbench
cd ~/bext && rustup run nightly-2026-04-02 cargo test --release \
  -p bext-turbopack --lib prism::tests::bench_render_speedup \
  -- --ignored --nocapture

# Eyeball what the rewrite does to sites/prism-demo/src/app/page.tsx
cd ~/bext && rustup run nightly-2026-04-02 cargo test \
  -p bext-core --lib transform::prism_compile::tests::dump_rewrite \
  -- --nocapture --ignored
```
## How it compares to other engines
Same 38 KB product-listing fixture (Node 25, release mode):
| Engine | avg | Throughput | vs React |
|---|---|---|---|
| Rust string builder (ceiling) | 6 µs | 196K/s | ~600× |
| bext-PRISM-compiled ¹ | 9 µs | 108K/s | ~300× |
| Marko 5 | 44 µs | 22K/s | ~80× |
| Solid 1.9 (`generate: "ssr"`) | 65 µs | 15K/s | ~45× |
| PRISM-interpreted (no compile pass) ² | 62 µs | 16K/s | ~45× |
| React 19 | 2,683 µs | 373/s | 1× |
bext-PRISM-compiled is ~5× faster than Marko 5 and ~7× faster
than Solid 1.9 on the same fixture. Both Marko and Solid are
themselves compile-time-template engines (Marko emits direct
out.push(…); Solid's babel-preset-solid with generate: "ssr"
emits _$ssr(template, ...args)) — the PRISM compile pass beats
both because it emits a single growing-accumulator IIFE
((() => { let __bextOut = ""; for (...) __bextOut += "..."; return __bextOut; })())
with for-loops over .map/Array.from patterns instead of the
intermediate-array .map().join("") shape Solid emits.
¹ The compile pass moved from the .map().join("") expression form to
the imperative IIFE + for-loop in commit c2d26ec.
That dropped pass output from 59 µs → ~9 µs on the 100-row fixture
— closing the gap to the hand-rolled Rust-equivalent ceiling
(7 µs) to a 1.3× margin.
² The interpreted runtime got 3-5× faster in commit
2782ae6 (was
~326 µs on this fixture). PRISM-interpreted is now in Solid's
ballpark — escapeHtml bails out early via indexOf, isAsync
short-circuits via typeof, formatAttrs uses for…in, and h() has a
primitive-children fast path that skips flat() and Array.join.
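The two output shapes compute identical strings; a side-by-side sketch of the Tier 4 example (variable names assumed):

```javascript
const items = ["a", "b", "c"];

// Expression form (what the pass emitted before c2d26ec, and roughly
// the shape Solid's SSR output takes): builds an intermediate array.
const viaMap = "<ul>" + items.map((i) => "<li>" + i + "</li>").join("") + "</ul>";

// Accumulator-IIFE form the pass emits now: one growing string, no array.
const viaAccum = (() => {
  let __bextOut = "<ul>";
  for (const i of items) __bextOut += "<li>" + i + "</li>";
  return __bextOut + "</ul>";
})();
// viaMap === viaAccum
```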
The full bench harness lives at harnesses/jsx-shootout/ in the
bext repo — it runs all five engines against three identical-DOM
fixtures with one command. See its COMPILE-PASS.md for the design
discussion that motivated each tier.
## Signals tier — auto-wrap reactive JSX expressions
A separate companion pass at
crates/bext-core/src/transform/prism_signals.rs operates on
"use signals" files (signals jsxImportSource) rather than the base
PRISM source. Where the main pass folds static subtrees into string
literals, the signals pass does the inverse: it wraps dynamic
expressions in arrow-function thunks so the signals
runtime sees a re-evaluatable function rather
than a value materialized once at JSX-call time.
Without the pass:

```tsx
"use signals";
<p>Count: {count.value}</p>
// ^^^^^^^^^^^^ already materialized — not reactive
```

With the pass (transparent to the developer):

```tsx
<p>Count: {(() => count.value)}</p>
// ^^^^^^^^^^^^^^^^^^^^^^^^^^ thunk; runtime re-evaluates on signal change
```
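The difference the wrap makes can be seen with a plain object standing in for a signal (a sketch, not the real signals runtime):

```javascript
// A bare object stands in for signal(1); only the thunk stays live.
const count = { value: 1 };

const materialized = count.value; // snapshot taken at "JSX-call time"
const thunk = () => count.value;  // what the pass emits instead

count.value = 2;                  // simulate a signal write

// materialized is stale (still 1); the thunk re-reads the current value (2).
```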
### What it wraps
JSX expression containers in "use signals" files where the
expression contains, anywhere inside it, a read of `<knownSig>.value`.
"Known" means an identifier the pass earlier resolved in scope as
bound by signal(…) or computed(…). Examples that wrap:

```tsx
{count.value} // bare read
{count.value > 0 ? "yes" : "no"} // conditional
{Math.max(a.value, b.value)} // call expression
{items.value.length} // chained member
{`hello, ${name.value}!`} // template literal
{count.value as number} // type assertion (preserved verbatim)
```
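A toy string-level version of the check (the real expr_contains_signal_read walks the tsc_rs_ast expression tree, not source text):

```javascript
// Toy approximation: does the expression source read `<name>.value` for
// any identifier known to be signal()/computed()-bound?
function containsSignalRead(exprSrc, knownSignals) {
  return knownSignals.some((name) =>
    new RegExp("\\b" + name + "\\.value\\b").test(exprSrc)
  );
}
```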
### What it does NOT wrap

| Pattern | Why |
|---|---|
| `count.value = 5` (assignment) | LHS of an assignment is a write, not a tracked read |
| `<p>{1 + 2}</p>` | No signal in the expression — the pass is precise |
| `<p>{user.name}</p>` (no `.value` on `user`) | `user` isn't a known signal; the pass leaves it alone |
| Function bodies (`onClick={() => count.value++}`) | Callbacks are deferred — wrapping the outer expression wouldn't capture them anyway |
| Object methods, getter bodies | Same — nested function scopes |
### Activation rule
The pass fires only when all of these hold:
1. The file's first non-whitespace directive is `"use signals"` or `'use signals'`.
2. The file has at least one `<` character (skips pure `.ts` files with no JSX).
3. The bundler's `jsxImportSource` is `@bext-stack/framework/signals`.
4. The env var `BEXT_PRISM_COMPILE` is not `0`/`false`/`off`.
### Implementation
`crates/bext-core/src/transform/prism_signals.rs` (~370 LOC)
Same tsc_rs_ast walker pattern as the main PRISM pass: collects
signal-bound identifiers from var/let/const = signal(…) /
= computed(…) declarations, walks all JSX expression containers,
asks expr_contains_signal_read whether to wrap, queues span
replacements, applies them end-to-start so byte offsets stay valid.
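Applying the queued replacements end-to-start is what keeps earlier offsets valid; the idea in miniature (applyReplacements is illustrative, not the pass's actual API):

```javascript
// Replace [start, end) spans from the back of the file forward, so the
// offsets of not-yet-applied edits never shift.
function applyReplacements(src, edits) {
  let out = src;
  for (const { start, end, text } of [...edits].sort((a, b) => b.start - a.start)) {
    out = out.slice(0, start) + text + out.slice(end);
  }
  return out;
}

const wrapped = applyReplacements("<p>{count.value}</p>", [
  { start: 4, end: 15, text: "(() => count.value)" },
]);
// wrapped === "<p>{(() => count.value)}</p>"
```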
## What's deferred
| Item | Reason |
|---|---|
| Tier 6 — static-string atomization | V8 already interns parsed string literals, so atomization saves bundle bytes (modest) but not runtime cost. Implementation cost didn't pencil out for the bundle-size win alone. Documented as deferred in source. |
| Component children threading | {slot} — slot threading into the body's {children} reference needs scope-aware substitution. |
| Ident-style props param | function Card(props) { return <div>{props.title}</div>; } — member-access substitution is more fragile than identifier matching. Use destructured function Card({ title }) to opt into Tier 5.5 inlining. |
| CI HTML-diff regression check | A check that diffs HTML output of every PRISM page with/without the pass on every PR. Cheap insurance against future regressions. |
## Implementation reference

| File | Role | LOC |
|---|---|---|
| `crates/bext-core/src/transform/prism_compile.rs` | The visitor + fold logic | ~1400 |
| `crates/bext-turbopack/src/direct.rs::compile_closure` | Hook for entry source | ~17 |
| `crates/bext-turbopack/src/direct.rs::transform_with_analysis_opts` | Hook for transitive imports | ~17 |
| `crates/bext-server/src/handler.rs::handle_ssr` | Routes `bext run` standalone through PRISM dispatch | ~25 |
| `crates/bext-turbopack/Cargo.toml` | `bext-core = { path = "../bext-core" }` | 1 |
| `harnesses/jsx-shootout/` | 5-engine bench harness | — |
The pass uses tsc_rs_ast (bext's own TypeScript parser) — no swc
dep. Runs as part of bext-turbopack's existing transform pipeline,
which means no new infrastructure for hot reload, file watching, or
incremental compile — those are already there.