Caching
bext includes a multi-layered caching system designed for server-rendered applications. Out of the box, it provides ISR (Incremental Static Regeneration) with background revalidation, a stampede guard to prevent thundering herds, and vary-aware cache keys. With a Pro license, you can add a Redis L2 tier for distributed caching across multiple instances.
ISR Cache (TTL + Stale-While-Revalidate)
The ISR cache stores rendered HTML with a TTL and a stale-while-revalidate (SWR) window. While the response is within its TTL, it is served directly from cache. Once the TTL expires but the SWR window is still open, bext serves the stale content immediately and triggers a background revalidation to refresh the entry.
```toml
[cache.isr]
max_entries = 10000      # LRU eviction when exceeded
default_ttl_ms = 60000   # 60 seconds fresh
default_swr_ms = 3600000 # 1 hour stale-while-revalidate window
```
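The three-phase lifecycle can be sketched as a pure function of an entry's age. This is an illustrative helper, not bext's actual implementation:

```typescript
// Freshness state of an ISR cache entry, derived from its age.
// Sketch of the TTL/SWR decision described above.
type Freshness = "fresh" | "stale-while-revalidate" | "expired";

function freshness(ageMs: number, ttlMs: number, swrMs: number): Freshness {
  if (ageMs < ttlMs) return "fresh"; // serve directly from cache
  if (ageMs < ttlMs + swrMs) return "stale-while-revalidate"; // serve stale, re-render in background
  return "expired"; // must re-render before responding
}
```

With the defaults above (60 s TTL, 1 h SWR), an entry that is five minutes old is still served instantly while a background revalidation refreshes it.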
You can override TTL and SWR per route using route rules:
```toml
[[route_rules]]
pattern = "/blog/*"
render = "isr"
ttl_ms = 300000   # 5 minutes
swr_ms = 86400000 # 24 hours

[[route_rules]]
pattern = "/dashboard/*"
render = "ssr"
cache = false # Disable caching entirely
```
The timeline for a cached response looks like this:
```
|-- TTL (fresh) --|-- SWR (stale, revalidating) --|-- expired --|
   serve cache       serve stale + background         re-render
                     revalidation
```
Cache Tag Invalidation
Every cached response can carry tags -- arbitrary strings that group related entries. When your data changes, invalidate by tag instead of clearing the entire cache.
Tags are set via the X-Cache-Tags response header from your application:
```
X-Cache-Tags: product:42, category:electronics, homepage
```
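An application handler might attach tags like this. The sketch uses the standard `Response` API; the handler shape and placeholder body are illustrative, not bext-specific:

```typescript
// Attach cache tags to a rendered response so bext can group the entry.
// Tags are comma-separated, matching the X-Cache-Tags format above.
function renderProduct(productId: number, categorySlug: string): Response {
  const html = `<h1>Product ${productId}</h1>`; // placeholder body
  return new Response(html, {
    headers: {
      "Content-Type": "text/html",
      "X-Cache-Tags": `product:${productId}, category:${categorySlug}, homepage`,
    },
  });
}
```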
Invalidate a tag via the admin API:
```bash
curl -X POST http://localhost:3061/__bext/cache/invalidate \
  -H "Content-Type: application/json" \
  -d '{"tag": "product:42"}'
```
This purges every cached entry tagged with product:42 from both L1 and L2 (when Redis is configured). In edge mode, the invalidation is broadcast via Redis Pub/Sub to all instances.
On-Demand Bundle Cache
When using PRISM on-demand rendering (V8), compiled route bundles are cached at three levels:
| Level | Location | Survives restart | Invalidation |
|---|---|---|---|
| Memory | DashMap (per-process) | No | Source mtime change |
| Disk | .bext/ssr-cache/{route}.js | Yes | Source mtime change |
| Compile | Bun subprocess | — | Always fresh |
On first request, bext compiles the route via Bun and caches both in memory and on disk. After a server restart, routes load from disk cache without Bun compilation. The disk cache is invalidated when source file mtimes change.
The React base bundle (747 KB) and site shell (31 KB) are also disk-cached at .bext/v8-react-base.js and .bext/ssr-cache/__shell.js.
Configure on-demand cache TTL:
```toml
[nextjs]
cache_ttl = 60  # Response cache TTL in seconds
cache_swr = 300 # Stale-while-revalidate window in seconds
```
Stampede Guard (Coalesced Revalidation)
When a popular page expires, hundreds of concurrent requests could all trigger a re-render simultaneously -- the thundering herd problem. bext's stampede guard ensures only one request performs the revalidation. All other concurrent requests for the same cache key receive the stale response and wait for the single background worker to finish.
The revalidation pool is configurable:
```toml
[cache.revalidation]
pool_size = 4         # Background worker threads
max_queue_size = 1000 # Max pending revalidation requests
dedup = true          # Deduplicate by cache key (default)
```
With dedup = true (the default), if 200 requests arrive for /products/42 while it is being revalidated, only one revalidation executes. The other 199 requests serve the stale cached version.
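The coalescing behavior can be sketched with an in-flight promise map: the first caller for a key starts the render, and every concurrent caller joins the same promise. This is a minimal illustration of the dedup semantics, not bext's internal worker pool:

```typescript
// Coalesce concurrent revalidations: only the first request for a key
// triggers a render; the rest share the same in-flight promise.
const inFlight = new Map<string, Promise<string>>();
let renders = 0; // count actual renders to observe deduplication

async function render(key: string): Promise<string> {
  renders++;
  await new Promise((resolve) => setTimeout(resolve, 10)); // simulate render work
  return `<html>${key}</html>`;
}

function revalidate(key: string): Promise<string> {
  const existing = inFlight.get(key);
  if (existing) return existing; // join the in-flight revalidation
  const p = render(key).finally(() => inFlight.delete(key));
  inFlight.set(key, p);
  return p;
}
```

Two hundred simultaneous calls to `revalidate("/products/42")` result in a single `render` invocation; all 200 callers resolve with the same result.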
Tiered L1 + L2 Cache (Pro)
With a Pro license and Redis configured, bext operates a two-tier cache:
| Tier | Backend | Latency | Scope |
|---|---|---|---|
| L1 | In-memory DashMap | Sub-microsecond | Per-instance |
| L2 | Redis | ~1ms network RTT | Shared across fleet |
```toml
[redis]
url = "redis://localhost:6379"
prefix = "bext:"

[cache.tiered]
write_through = true      # Sync write to L2 on every cache set
promote_on_l2_hit = true  # Copy L2 hits into L1 for next request
```
The lookup order is: L1 -> L2 -> render. On an L2 hit, the entry is promoted to L1 so subsequent requests are served at memory speed. Writes go to both L1 and L2 when write_through is enabled.
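The lookup-and-promote flow can be sketched with two in-process maps, where a plain `Map` stands in for Redis. Illustrative only; names and return shape are hypothetical:

```typescript
// Two-tier lookup: L1 (per-instance) -> L2 (shared, here a Map standing
// in for Redis) -> render. L2 hits are promoted into L1; renders are
// written through to both tiers, mirroring the [cache.tiered] options.
const l1 = new Map<string, string>();
const l2 = new Map<string, string>(); // stands in for Redis

function lookup(
  key: string,
  renderFn: (k: string) => string,
): { value: string; source: "l1" | "l2" | "render" } {
  const hit1 = l1.get(key);
  if (hit1 !== undefined) return { value: hit1, source: "l1" };
  const hit2 = l2.get(key);
  if (hit2 !== undefined) {
    l1.set(key, hit2); // promote_on_l2_hit
    return { value: hit2, source: "l2" };
  }
  const rendered = renderFn(key);
  l1.set(key, rendered); // write_through: populate both tiers
  l2.set(key, rendered);
  return { value: rendered, source: "render" };
}
```

A second instance with a cold L1 gets an L2 hit on the same key and serves subsequent requests from its own L1.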
If Redis becomes unavailable, bext degrades gracefully -- L1 continues to serve and new renders are cached locally. Reconnection is automatic.
Cache statistics are exposed via the Prometheus metrics endpoint:
bext_cache_l1_hits_total
bext_cache_l2_hits_total
bext_cache_l2_promotes_total
bext_cache_misses_total
bext_cache_l2_write_errors_total
Negative Caching
Error responses (404, 429, 502, 503) are cached for short durations to avoid repeatedly hitting the SSR pipeline for pages that are known to be broken or rate-limited.
```toml
[cache.negative]
enabled = true
max_entries = 5000

[cache.negative.ttl_by_status]
404 = 30000 # 30 seconds
429 = 5000  # 5 seconds
502 = 10000 # 10 seconds
503 = 5000  # 5 seconds
```
Each status code has its own configurable TTL. Only status codes listed in ttl_by_status are eligible for negative caching -- a 500 error is not cached by default because it usually indicates a transient bug.
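The eligibility rule reduces to a per-status TTL lookup where absence means "do not cache". A sketch mirroring the config above:

```typescript
// Per-status negative-cache TTLs, mirroring [cache.negative.ttl_by_status].
// Statuses not listed (e.g. 500) are never negatively cached.
const negativeTtlMs: Record<number, number> = {
  404: 30_000,
  429: 5_000,
  502: 10_000,
  503: 5_000,
};

function negativeCacheTtl(status: number): number | null {
  return negativeTtlMs[status] ?? null; // null => do not cache this error
}
```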
Vary-Aware Cache Keys
HTTP responses can vary based on request headers like Accept-Encoding or Accept-Language. bext respects the Vary header and incorporates the relevant request header values into the cache key, so different representations are stored and served correctly.
```toml
[cache.key]
include_query = true                  # Include query string (default)
include_headers = ["accept-language"] # Extra headers for cache key
ignore_query_params = ["utm_source", "utm_medium", "fbclid"]
include_cookies = ["currency", "region"] # Cookie values in cache key
```
Tracking parameters (utm_*, fbclid, gclid, etc.) are stripped from cache keys by default so that URLs differing only in analytics parameters share a single cache entry. You can add additional parameters to strip via ignore_query_params.
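Key construction can be sketched as: path, plus query params minus the ignored ones (sorted so parameter order doesn't split the cache), plus the configured header and cookie values. The key encoding here is illustrative, not bext's actual format:

```typescript
// Build a cache key from path, filtered query params, and selected
// headers/cookies, mirroring the [cache.key] options above.
const IGNORED_PARAMS = new Set(["utm_source", "utm_medium", "fbclid"]);

function cacheKey(
  url: string,
  headers: Record<string, string>,
  cookies: Record<string, string>,
): string {
  const u = new URL(url);
  const params = [...u.searchParams.entries()]
    .filter(([name]) => !IGNORED_PARAMS.has(name))
    .sort(); // canonical order so ?a=1&b=2 and ?b=2&a=1 share an entry
  const query = params.map(([k, v]) => `${k}=${v}`).join("&");
  const lang = headers["accept-language"] ?? ""; // include_headers
  const cookiePart = ["currency", "region"] // include_cookies
    .map((c) => cookies[c] ?? "")
    .join("|");
  return `${u.pathname}?${query}#${lang}#${cookiePart}`;
}
```

Two URLs differing only in a tracking parameter produce the same key, so they share one cache entry.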
When a response includes Vary: *, bext bypasses the cache entirely for that response, as the wildcard indicates the content varies on unpredictable factors.
Tenant Cache Isolation
In multi-tenant applications, each tenant's cache is namespaced to prevent cross-tenant data leakage:
```toml
[cache.tenant]
ttl_ms = 300000 # 5 minutes
```
The tenant identifier is extracted from the request (via JWT claim, subdomain, or header) and included in the cache key automatically.
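The isolation itself reduces to prefixing the key with the tenant identifier. How that identifier is extracted (JWT claim, subdomain, or header) is deployment-specific; this sketch simply takes it as an argument:

```typescript
// Namespace a cache key by tenant so entries never collide across tenants.
// The prefix format is illustrative, not bext's actual encoding.
function tenantCacheKey(tenantId: string, baseKey: string): string {
  return `tenant:${tenantId}:${baseKey}`;
}
```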