Tracer
The Tracer capability is the shape bext uses to emit tracing spans. The runtime records a span whenever a request enters a handler, when a plugin does interesting work, or when a host function wraps an operation worth timing; the active TracerPlugin decides where those spans go. Swap the plugin and the spans land in stdout, or OTLP, or Honeycomb, or Datadog, without any change to the code that produced them.
This page walks through the trait, explains why it is handle-based rather than returning Box<dyn Span>, shows what the SpanHandle::INVALID sentinel is for, notes how the shape lines up with the OpenTelemetry trace data model without depending on the opentelemetry crate, and finishes with a short end-to-end example of a plugin emitting a custom span.
The trait
The full shape lives in bext_plugin_api::tracer. It is sync, object-safe, and carries no generics on the trait itself:
pub trait TracerPlugin: Send + Sync {
    fn name(&self) -> &str;
    fn start_span(&self, span: SpanStart<'_>) -> SpanHandle;
    fn set_attribute(&self, span: SpanHandle, key: &str, value: AttrValue);
    fn set_status(&self, span: SpanHandle, status: SpanStatus);
    fn add_event(&self, span: SpanHandle, event: SpanEvent<'_>);
    fn end_span(&self, span: SpanHandle, end_time_unix_nanos: Option<u64>);
    fn flush(&self) -> Result<(), String>;
}
Every recording method — start_span, set_attribute, set_status, add_event, end_span — is infallible. A tracer that cannot record a span should drop it silently, not propagate an error up through code that never asked to handle one. The only fallible method is flush, because flush is a real I/O boundary where the caller does want to know whether the backend accepted the batch.
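To make that contract concrete, here is a minimal sketch of a tracer that records nothing. NoopTracer is a hypothetical name, not a plugin bext ships, and it leans on the INVALID sentinel covered below; every recording method silently drops its input and only flush can report anything.
use bext_plugin_api::tracer::{
    AttrValue, SpanEvent, SpanHandle, SpanStart, SpanStatus, TracerPlugin,
};

pub struct NoopTracer;

impl TracerPlugin for NoopTracer {
    fn name(&self) -> &str {
        "noop"
    }
    // Decline every span by handing back the INVALID sentinel.
    fn start_span(&self, _span: SpanStart<'_>) -> SpanHandle {
        SpanHandle::INVALID
    }
    fn set_attribute(&self, _span: SpanHandle, _key: &str, _value: AttrValue) {}
    fn set_status(&self, _span: SpanHandle, _status: SpanStatus) {}
    fn add_event(&self, _span: SpanHandle, _event: SpanEvent<'_>) {}
    fn end_span(&self, _span: SpanHandle, _end_time_unix_nanos: Option<u64>) {}
    // flush is the only fallible method; with nothing buffered it always succeeds.
    fn flush(&self) -> Result<(), String> {
        Ok(())
    }
}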
start_span takes a SpanStart<'_> — the name, the span kind, an optional parent handle, the initial attributes, and an optional explicit start timestamp in Unix nanoseconds. Passing a parent explicitly, rather than pulling it from a thread-local, keeps the trait safe to call from any execution model bext runs plugins in: sync host code, async tasks, WASM, QuickJS, whichever.
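The exact definition of SpanStart lives in bext_plugin_api::tracer; the sketch below is reconstructed from the description above, so treat the field names and types as illustrative rather than authoritative.
use bext_plugin_api::tracer::{AttrValue, SpanHandle, SpanKind};

// Approximate shape of SpanStart, inferred from the prose above.
pub struct SpanStart<'a> {
    pub name: &'a str,                        // span name, e.g. "checkout.handle"
    pub kind: SpanKind,                       // Internal, Server, Client, ...
    pub parent: Option<SpanHandle>,           // explicit parent handle, no thread-local lookup
    pub attributes: Vec<(String, AttrValue)>, // initial attributes
    pub start_time_unix_nanos: Option<u64>,   // explicit start time; None means "now"
}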
Why handles, not Box<dyn Span>
The obvious alternative design is the one OpenTelemetry itself uses: start_span returns a boxed trait object, and you call .set_attribute(...) on that object. We tried that shape and rejected it for three reasons.
It is not Send + Sync cleanly across FFI. Plugins running inside QuickJS or WASM need to store a reference to the span somewhere and come back to it on a different call into the host. Crossing the FFI boundary with a trait object that owns state means wrapping it in Arc<Mutex<dyn Span + Send + Sync>>, which is noisy in every language binding and bleeds Rust's ownership rules into the guest runtime.
It forces the backend to manage lifetimes. A plugin that forgets to end a span — or panics halfway through — leaks a boxed span on the tracer's heap. With handles, the tracer keeps a HashMap<SpanHandle, InternalSpan> and can reap stale entries on its own schedule.
It conflates identity with storage. A span has a stable identity — its (trace_id, span_id) pair — that is meaningful across process boundaries, and a local representation that is not. Handles separate the two: the handle is a plain Copy value you can log, pass across FFI, or use to look up the same span from a completely different host call.
So SpanHandle is exactly the identity:
#[derive(Clone, Copy, PartialEq, Eq, Hash)] // a plain Copy identity, usable as a HashMap key
pub struct SpanHandle {
    pub trace_id: [u8; 16],
    pub span_id: [u8; 8],
}
16 bytes of trace id, 8 bytes of span id. Those are the exact widths OpenTelemetry and W3C Trace Context use, which means a handle is a valid wire identifier without any translation.
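Because the widths match, rendering a handle as a W3C traceparent header is nothing more than hex-encoding the two fields. The helper below is an illustration, not an API bext ships:
use bext_plugin_api::tracer::SpanHandle;

// Hypothetical helper: format a SpanHandle as a W3C `traceparent` value.
fn traceparent(handle: &SpanHandle) -> String {
    fn hex(bytes: &[u8]) -> String {
        bytes.iter().map(|b| format!("{b:02x}")).collect()
    }
    // version 00, trace-id (32 hex chars), parent-id (16 hex chars), flags 01 = sampled
    format!("00-{}-{}-01", hex(&handle.trace_id), hex(&handle.span_id))
}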
The INVALID sentinel
Every SpanHandle is Copy, and there is one distinguished value:
impl SpanHandle {
    pub const INVALID: SpanHandle = SpanHandle {
        trace_id: [0u8; 16],
        span_id: [0u8; 8],
    };

    pub fn is_invalid(&self) -> bool {
        self.trace_id == [0u8; 16] && self.span_id == [0u8; 8]
    }
}
This is the value a tracer returns when it decides not to record the span: sampling dropped it, the tracer is shutting down, or the plugin passed a parent that does not exist. All the recording methods check is_invalid at the top and return immediately, so calling set_attribute on an invalid handle is a guaranteed no-op.
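On the backend side that contract is typically two early exits: hand back INVALID when a span is declined, and bail out when a recording call arrives with one. A sketch of what that can look like inside a tracer's set_attribute, assuming the HashMap storage mentioned above; the field and type names here are illustrative:
fn set_attribute(&self, span: SpanHandle, key: &str, value: AttrValue) {
    // Sentinel check: a span the tracer declined to record is a silent no-op.
    if span.is_invalid() {
        return;
    }
    // `self.spans` stands in for the tracer's Mutex<HashMap<SpanHandle, InternalSpan>>.
    // A valid handle that is no longer in the map (already ended or reaped)
    // is dropped just as silently.
    if let Some(entry) = self.spans.lock().unwrap().get_mut(&span) {
        entry.attributes.push((key.to_string(), value));
    }
}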
The upshot is that plugin code never has to branch on "did the tracer decide to record this?" The happy-path code is the same whether the span is real or not:
let span = tracer.start_span(SpanStart {
    name: "checkout.verify_stock",
    kind: SpanKind::Internal,
    ..Default::default()
});
tracer.set_attribute(span, "cart.items", AttrValue::from(items as i64));
// ... do the work ...
tracer.set_status(span, SpanStatus::Ok);
tracer.end_span(span, None);
If sampling dropped the span, every line after start_span is a no-op. No if let Some(span) ladder, no Option threading through the call tree.
Lines up with OpenTelemetry without depending on it
The trait uses plain primitive types — byte arrays, string slices, owned Strings, and a small AttrValue enum — and deliberately does not import anything from the opentelemetry crate. That is a hard rule: bext-plugin-api must stay vendor-neutral, because a stable capability surface is effectively permanent. Coupling the trait to one version of one crate would pin the whole ecosystem to that choice forever.
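AttrValue itself is a small owned enum over the primitive attribute types. The variant list below illustrates the idea rather than copying the real definition, which lives in bext_plugin_api::tracer and may differ in names or coverage:
// Illustrative only; the real enum may have more or differently named variants.
pub enum AttrValue {
    Bool(bool),
    Int(i64),
    Float(f64),
    String(String),
}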
At the same time, the shape lines up exactly with the OpenTelemetry trace data model, so the OTLP backend can map types one-to-one without inventing anything:
| Bext type | OpenTelemetry equivalent |
|---|---|
| SpanHandle.trace_id | 16-byte TraceId |
| SpanHandle.span_id | 8-byte SpanId |
| SpanKind | opentelemetry::trace::SpanKind |
| SpanStatus | opentelemetry::trace::Status |
| AttrValue | opentelemetry::Value |
| start_time_unix_nanos | span start_time as SystemTime |
The @bext/tracer-otlp reference plugin is the only place in the workspace that imports opentelemetry::* names at all. It converts them to and from bext's plain-primitive shapes at the trait boundary and forwards everything to the otel SDK underneath.
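The conversions are mechanical. As one hedged example, the kind mapping can be a single match; the bext-side variant names are assumed here to mirror the five OpenTelemetry kinds, which the real plugin may spell differently:
use bext_plugin_api::tracer::SpanKind;

// Sketch of the kind mapping at the trait boundary (bext variant names assumed).
fn to_otel_kind(kind: SpanKind) -> opentelemetry::trace::SpanKind {
    match kind {
        SpanKind::Internal => opentelemetry::trace::SpanKind::Internal,
        SpanKind::Server => opentelemetry::trace::SpanKind::Server,
        SpanKind::Client => opentelemetry::trace::SpanKind::Client,
        SpanKind::Producer => opentelemetry::trace::SpanKind::Producer,
        SpanKind::Consumer => opentelemetry::trace::SpanKind::Consumer,
    }
}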
The two reference plugins
@bext/tracer-stdout
crates/bext-impls/bext-tracer-stdout is the dev tracer. It writes one JSON object per finished span to stdout (or any Write + Send sink you plug in), with zero dependencies beyond serde_json. The sink is a SharedSink = Arc<Mutex<dyn Write + Send>>, which keeps it cheap to build in tests with a shared Vec<u8> buffer — the crate's own test suite does exactly that to assert what the tracer emitted.
start_span builds the trace id by packing the current Unix-nano time with an atomic counter, takes the span id from the same counter, and stores the in-progress span in a Mutex<HashMap<SpanHandle, SpanInternal>>. end_span removes the entry, fills in end_time and duration, and serializes one line to the sink. flush is Ok(()); stdout is already flushed line by line.
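A sketch of that handle allocation, assuming an AtomicU64 counter on the plugin struct; the byte layout is illustrative, and the real crate may pack things differently:
use std::sync::atomic::{AtomicU64, Ordering};
use std::time::{SystemTime, UNIX_EPOCH};

use bext_plugin_api::tracer::SpanHandle;

fn next_handle(counter: &AtomicU64) -> SpanHandle {
    // Start at 1 so the span id is never the all-zero INVALID shape.
    let n = counter.fetch_add(1, Ordering::Relaxed) + 1;
    let nanos = SystemTime::now()
        .duration_since(UNIX_EPOCH)
        .unwrap_or_default()
        .as_nanos() as u64;

    // Trace id: 8 bytes of wall-clock nanos followed by 8 bytes of counter.
    let mut trace_id = [0u8; 16];
    trace_id[..8].copy_from_slice(&nanos.to_be_bytes());
    trace_id[8..].copy_from_slice(&n.to_be_bytes());

    SpanHandle { trace_id, span_id: n.to_be_bytes() }
}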
@bext/tracer-otlp
crates/bext-impls/bext-tracer-otlp is the production tracer. It wraps opentelemetry_sdk::trace::SdkTracerProvider and exposes two constructors: one that takes a fully-assembled provider (so callers can plug in batching, samplers, resource attributes, and multiple exporters themselves) and one that takes any SpanExporter and builds a simple provider around it. The second form is what the crate's own tests use, with an in-memory MemExporter that appends every exported span to a shared Vec instead of sending it over the wire.
Inside, start_span turns the optional parent handle into an opentelemetry::trace::SpanContext, wraps it in a Context, and builds the span through the SDK. The returned opentelemetry_sdk::trace::Span lives in the same Mutex<HashMap<SpanHandle, _>> pattern as the stdout version, and end_span / flush forward to the SDK. Flush maps any error from force_flush to a String so it crosses the trait boundary cleanly.
Config
Picking which tracer is active is a single exporter line in bext.config.toml; the rest of the [tracer] block configures that backend:
[tracer]
exporter = "otlp"
endpoint = "http://localhost:4317"
service_name = "bext-edge-01"
sample_ratio = 1.0
[tracer.resource_attributes]
"deployment.environment" = "production"
"service.version" = "2.3.1"
Swap exporter = "otlp" for exporter = "stdout" during development and you get line-oriented JSON on the process stdout — same trait, same spans, different backend. For multi-tenant setups or fan-out to several collectors, point at a preassembled SdkTracerProvider in Rust and pass it to OtlpTracerPlugin::with_provider directly; the config-driven path is a convenience for the common case.
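A sketch of that programmatic path is below. OtlpTracerPlugin::with_provider is the constructor described above; the crate name bext_tracer_otlp is assumed from the directory layout, and the opentelemetry_sdk builder and trait paths have shifted between releases, so adjust to the version your workspace pins.
use bext_tracer_otlp::OtlpTracerPlugin;                       // assumed crate name
use opentelemetry_sdk::trace::{SdkTracerProvider, SpanExporter};

// Assemble the provider yourself (samplers, batching, resource attributes,
// several exporters), then hand it to the plugin unchanged.
fn otlp_tracer_with(exporter: impl SpanExporter + 'static) -> OtlpTracerPlugin {
    let provider = SdkTracerProvider::builder()
        .with_simple_exporter(exporter)
        .build();
    OtlpTracerPlugin::with_provider(provider)
}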
Emitting a span from a plugin
Plugins get access to the active tracer through the host-function table (host.tracer()), which returns a &dyn TracerPlugin. Here is a minimal example of a plugin wrapping a unit of work in its own span:
use bext_plugin_api::tracer::{AttrValue, SpanKind, SpanStart, SpanStatus};

pub fn handle_checkout(host: &Host, cart_id: &str, items: usize) -> Result<(), String> {
    let tracer = host.tracer();
    let span = tracer.start_span(SpanStart {
        name: "checkout.handle",
        kind: SpanKind::Internal,
        attributes: vec![
            ("cart.id".into(), AttrValue::from(cart_id)),
            ("cart.items".into(), AttrValue::from(items as i64)),
        ],
        ..Default::default()
    });

    let result = verify_stock(host, cart_id);
    match &result {
        Ok(_) => tracer.set_status(span, SpanStatus::Ok),
        Err(e) => tracer.set_status(span, SpanStatus::Error {
            description: e.clone(),
        }),
    }
    tracer.end_span(span, None);
    result
}
Three things worth noting. First, the plugin does not know which tracer it is talking to; the code is identical whether the active plugin is stdout, otlp, or something the operator wrote themselves. Second, the ..Default::default() tail picks up sensible defaults for the remaining fields, here the parent and the start time, so simple spans stay short. Third, end_span runs after the match on both the Ok and Err paths, because ending the span is what releases the entry in the tracer's handle map; forgetting to end a span is the one real lifetime bug the handle design can produce.
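One way a plugin can defend against that is an ordinary RAII guard around the handle. This is a pattern a plugin might write for itself, not an API bext-plugin-api provides:
use bext_plugin_api::tracer::{SpanHandle, TracerPlugin};

// Hypothetical plugin-side helper: ends the span when it goes out of scope,
// including on early return or panic unwind.
struct SpanGuard<'a> {
    tracer: &'a dyn TracerPlugin,
    handle: SpanHandle,
}

impl Drop for SpanGuard<'_> {
    fn drop(&mut self) {
        self.tracer.end_span(self.handle, None);
    }
}
Constructed right after start_span, the guard makes end_span run on every exit path without an explicit call at the bottom of the handler.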
Where to go next
- The capabilities overview covers the promotion ladder the tracer capability will travel once it lands under @bext/.
- Observability shows how the server's built-in spans flow through the active tracer without any plugin code.
- Building a plugin walks through scaffolding a plugin that can declare requires_capabilities = ["tracer"] in its manifest and call host.tracer() at runtime.