The vanta sidecar: how a Rust ZK indexer talks to a C++ Bitcoin node
A 1-minute-block Bitcoin Core fork with ZK proofs at consensus has a problem the README doesn’t volunteer: you need the validator to check the proofs, but you don’t want to write the proof system in C++. Vanta’s answer is a hybrid. The C++ consensus engine calls a Rust verifier statically linked as libvanta_verifier.a. The L2 indexing, the SMT, the encrypted-note delivery, and the proof-generation hot path all live in a separate Rust process — vanta-node — that talks to vantad over JSON-RPC and to wallets over REST.
Two things share the name “sidecar” in this codebase and I want to disambiguate them up front:
- The FFI verifier (`vanta-verifier-ffi` → `libvanta_verifier.a`) is linked into `vantad`. It runs in-process. It's what answers "does this SP1 proof verify" inside `src/script/interpreter.cpp`.
- The L2 sidecar (`vanta-node`) is a separate daemon. It indexes commitments, holds the SMT, distributes encrypted notes, and serves a REST API to wallets. It does not participate in consensus.
This post is about both, because the architecture only makes sense when you see why one is in-process and the other isn’t.
The audit-surface trade
Bitcoin Core has 280k+ lines of C++ that have been read by more eyes than any other consensus codebase on Earth. Adding a Rust dependency to that build is a non-trivial ask of a future Bitcoin-Core-style review. We made the call up-front: a minimal Rust footprint inside vantad, exposed through a hand-written C ABI, with everything else in a separate process.
The minimal footprint is libvanta_verifier.a. From the zkVM engineering paper, the contract:
> The bridge between the ZK proof world and the consensus world is a 440-byte C-compatible structure called `VantaJournal`, declared in `src/vanta/verifier.h`:

```c
typedef struct {
    uint8_t smt_root[32];
    uint32_t input_commitment_count;
    uint8_t input_commitments[VANTA_MAX_SLOTS][32];
    uint32_t nullifier_count;
    uint8_t nullifiers[VANTA_MAX_SLOTS][32];
    uint32_t commitment_count;
    uint8_t commitments[VANTA_MAX_SLOTS][32];
    int64_t value_balance;
} VantaJournal;
```
The C++ never deserializes an SP1 proof. It calls vanta_verify_and_decode() with a byte slice; the function returns a boolean and populates the VantaJournal. From there the consensus engine looks at 32-byte hashes and a signed i64 and makes its decisions on bytes alone.
This is a deliberate cryptographic-engineering posture. The proof system can change underneath the FFI without changing the FFI. SP1 today, Halo 2 someday, whatever-comes-after-that the day after that — the C++ doesn’t have to know.
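To make the boundary concrete, here is a sketch of what the Rust side of that FFI plausibly looks like. The field names mirror the C declaration quoted above; `VANTA_MAX_SLOTS = 4` is inferred from the 440-byte figure (436 bytes of fields plus 4 bytes of padding before the 8-byte-aligned `value_balance`), and the function signature and body are illustrative, not the real `vanta-verifier-ffi` code:

```rust
// Illustrative mirror of the C `VantaJournal`. VANTA_MAX_SLOTS = 4 is
// an inference from the quoted 440-byte size, not read from the header.
pub const VANTA_MAX_SLOTS: usize = 4;

#[repr(C)]
pub struct VantaJournal {
    pub smt_root: [u8; 32],
    pub input_commitment_count: u32,
    pub input_commitments: [[u8; 32]; VANTA_MAX_SLOTS],
    pub nullifier_count: u32,
    pub nullifiers: [[u8; 32]; VANTA_MAX_SLOTS],
    pub commitment_count: u32,
    pub commitments: [[u8; 32]; VANTA_MAX_SLOTS],
    pub value_balance: i64,
}

/// Hypothetical shape of the one consensus-facing entry point: take a
/// proof blob, return false on any failure, populate `out` on success.
/// The SP1 verification itself is elided here.
#[no_mangle]
pub extern "C" fn vanta_verify_and_decode(
    proof: *const u8,
    proof_len: usize,
    out: *mut VantaJournal,
) -> bool {
    if proof.is_null() || out.is_null() {
        return false;
    }
    let _proof_bytes = unsafe { std::slice::from_raw_parts(proof, proof_len) };
    // ...SP1 verification would go here; write the decoded public
    // values into *out and return true only if the proof verifies.
    false
}
```

From the C++ side this is just a header declaration and a static library on the link line; the consensus code never sees a Rust type.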
Why isn’t the L2 logic in vantad too?
This was the design conversation that took the longest to resolve.
Option A was to put everything in-process. One binary, one supervised PID, fewer moving parts. The problem: the L2 index isn’t consensus. It’s an indexed view of commitments, an SMT, and a REST API. Bundling that into vantad would mean every Bitcoin-Core-style operator who wanted to run the chain would inherit an HTTP server, an SQLite-backed index, and an iroh-based gossip layer. That’s a footprint expansion that buys nothing for the consensus path.
Option B was a separate process with a clean network boundary. vanta-node talks down to vantad over standard JSON-RPC (the same getblock/getrawtransaction an explorer would use) and up to wallets over REST and iroh gossip. The footprint cost lives in the operator’s discretion: if you don’t want the L2 services, don’t run vanta-node.
We went with B and I don’t regret it. The trade-off is that the L2 sidecar is a piece of operational machinery to keep alive. The desktop app handles that automatically (see vanta-desktop); a server operator handles it the way they handle any daemon.
What vanta-node actually does
`main.rs` is a four-task tokio program:

```rust
let watcher_handle = tokio::spawn(async move {
    if let Err(e) = l1_watcher::run(watcher_config, watcher_state).await {
        tracing::error!("L1 watcher error: {e}");
    }
});

let gossip_handle_opt = match gossip::start(state.clone(), config.bootstrap_peers.clone()).await {
    Ok((handle, router)) => { ... }
    Err(e) => {
        tracing::warn!("Failed to start gossip (continuing without P2P): {e}");
        None
    }
};

let api_handle = tokio::spawn(async move {
    if let Err(e) = api::serve(api_state, &api_listen).await {
        tracing::error!("API server error: {e}");
    }
});

let save_handle = tokio::spawn(async move {
    let mut interval = tokio::time::interval(tokio::time::Duration::from_secs(10));
    loop {
        interval.tick().await;
        if let Err(e) = save_state.save() {
            tracing::warn!("Failed to save state: {e}");
        }
    }
});
```
Four jobs: watch L1 for new blocks, gossip with peers over iroh, serve the REST API, and snapshot state to disk every 10 seconds. Each tokio task runs against a shared L2State that’s Arc-cloned across them.
The L1 watcher polls vantad's RPC every 2 seconds (configurable via `VANTA_POLL_MS`). For each new block it scans every transaction's outputs for the OP_RETURN anchor format we use to publish commitments and nullifiers: the byte sequence `OP_RETURN 0xbb 0x00 <32-byte commitment>`. I first defined that format in Python, in the pool's stratum server, and reused it verbatim on the Rust side. Hits get fed into the SMT.
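As a sketch of what the watcher's scan looks for, here is a minimal anchor parser. It assumes the 34-byte payload (magic plus commitment) is encoded as a single direct data push after OP_RETURN, which is the standard script encoding; the real parser may accept more shapes:

```rust
/// Parse a Vanta anchor out of an OP_RETURN scriptPubKey.
/// Assumed wire shape: OP_RETURN (0x6a), a single 34-byte data push
/// (0x22), the 0xbb 0x00 magic, then the 32-byte commitment.
fn parse_anchor(script: &[u8]) -> Option<[u8; 32]> {
    const OP_RETURN: u8 = 0x6a;
    const PUSH_34: u8 = 0x22; // direct push of 34 bytes in Bitcoin script

    if script.len() != 36 {
        return None;
    }
    if script[0] != OP_RETURN || script[1] != PUSH_34 {
        return None;
    }
    if script[2] != 0xbb || script[3] != 0x00 {
        return None;
    }
    let mut commitment = [0u8; 32];
    commitment.copy_from_slice(&script[4..36]);
    Some(commitment)
}
```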
The gossip layer uses iroh — pure-Rust, QUIC-based, NAT-traversing — to share encrypted notes between L2 peers. The bootstrap_peers come from an env var; the desktop app starts with that empty by default. Iroh’s gossip is a per-topic channel and we use one topic per chain (mainnet, regtest). The architecture doc explains the pick:
> P2P: iroh.computer — pure Rust, QUIC-based, NAT traversal, gossip protocol, content-addressed blobs. Chosen over libp2p for simplicity, built-in QUIC + NAT hole-punching, and document sync (useful for offline branch-and-merge).
The REST API is the thing wallets actually consume. The endpoints I care most about are `/status` (commitment count, nullifier count, SMT root, last block), `/submit` (push new commitments + encrypted notes from the pool or a wallet), `/notes/scan` (fetch candidate encrypted notes for a wallet to trial-decrypt against its secret key), and `/proofs/recent` (the 500-slot ring buffer of recently verified proofs the explorer renders).
The save loop is 10-second snapshots. The state file is a bincode'd dump of the SMT plus the nullifier set plus the encrypted-note inbox. A `Drop` impl on `L2State` also saves on clean shutdown. If the process is killed -9 you lose at most 10 seconds of work, and the L1 watcher rebuilds the state by re-scanning from the last good height.
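The usual way to keep that snapshot crash-safe is a write-temp-then-rename dance. A minimal sketch of the pattern — `save_atomic` is a hypothetical helper, not necessarily what `L2State::save` actually does:

```rust
use std::fs;
use std::io::Write;
use std::path::Path;

/// Crash-safe snapshot sketch: write the new state to a sibling temp
/// file, fsync it, then atomically rename over the old file. A kill -9
/// at any point leaves either the old snapshot or the new one on disk,
/// never a torn mix.
fn save_atomic(path: &Path, bytes: &[u8]) -> std::io::Result<()> {
    let tmp = path.with_extension("tmp");
    let mut file = fs::File::create(&tmp)?;
    file.write_all(bytes)?;
    file.sync_all()?; // make sure the data hits disk before the rename
    fs::rename(&tmp, path) // atomic on POSIX filesystems
}
```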
How the wallet uses the sidecar
The desktop wallet uses both vantad and vanta-node. From the wallet’s perspective:
- `vantad` is the source of truth for L1 — block heights, transparent UTXOs, transaction broadcast.
- `vanta-node` is the source of truth for L2 — commitments, nullifiers, encrypted notes addressed to me.
When I press “send” on a private transaction in the desktop wallet:
1. The wallet asks `vanta-node` for the current SMT root and the membership proof for the input commitment I'm spending.
2. The wallet generates an SP1 proof locally (or, for low-end machines, against the SP1 proving network) using the membership proof and my secret key as private witness.
3. The wallet builds an L1 transaction that includes the SP1 proof in `witness.stack[0]` and an OP_RETURN anchor with the new commitment.
4. The wallet broadcasts the transaction via `vantad`'s `sendrawtransaction` RPC. `vantad` validates: standard script checks, then `vanta_verify_and_decode()` against the SP1 proof in the witness.
5. After the block is mined, `vanta-node`'s L1 watcher picks up the OP_RETURN anchor and the new commitment lands in the SMT.
The recipient wallet queries its `vanta-node` for candidate encrypted notes and trial-decrypts each one locally with its secret key. If a trial decryption succeeds, the note is mine.
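The client-side half of that scan can be sketched generically, with the note cipher abstracted behind a closure — the sidecar only ever handles the opaque ciphertexts:

```rust
/// Client-side note scanning, sketched generically: pull ciphertexts
/// from vanta-node, trial-decrypt each one locally, keep the hits.
/// `try_decrypt` stands in for the real note cipher; a successful
/// decryption means the note was addressed to this wallet.
fn scan_notes<T>(
    ciphertexts: &[Vec<u8>],
    try_decrypt: impl Fn(&[u8]) -> Option<T>,
) -> Vec<T> {
    ciphertexts
        .iter()
        .filter_map(|ct| try_decrypt(ct.as_slice()))
        .collect()
}
```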
This is the architecture the coinbase auto-shield feature also rides on: every miner reward is a witness v2 commitment paying into the miner’s shielded address, with the encrypted note pushed to the L2 via the same /submit endpoint. From the pool’s stratum server:
```python
def save_shielded_note(height, commitment_hex, randomness_hex, value):
    """Persist mining note and submit encrypted note to L2 for wallet discovery.

    Called ONLY after a winning block is accepted by submitblock — never from
    the per-share job-template path, otherwise the L2 SMT fills up with phantom
    commitments for templates that never won the PoW race.
    """
```
That comment is load-bearing. The first version of the pool submitted the encrypted note on every share — that produced thousands of phantom commitments per block. Submitting only on submitblock accept fixes it.
Failure modes
The sidecar architecture has failure modes the in-process design wouldn’t have. They’re worth naming.
vanta-node is dead, vantad is alive. The wallet's L1 RPC works fine. The wallet's L2 calls all return 503s. The desktop app surfaces this as "L2 disconnected" and lets you keep using transparent functionality. Private send is gated behind L2 reachability.
vantad is dead, vanta-node is alive. L1 RPC fails. vanta-node’s watcher logs polling failures. The L2 state is frozen at whatever block was last seen. The desktop app surfaces this as “L1 disconnected”; sending is impossible (no broadcast endpoint), but the wallet can still display historic state.
Both alive but vanta-node lost its data dir. The L1 watcher detects “I’ve seen no blocks” on startup and re-scans from genesis. On a small chain this is fine. On a large chain this is a known cost of recovery — measured in hours, not days, but not free.
vanta-node is alive but the SMT is corrupted. This one I worry about. Bincode + Drop-save + 10-second snapshots is a defensible steady state, but a partial write during a crash could in principle produce a state file that won't load. Recovery is "rm the state file, restart, let the watcher rebuild." We have monitoring on the fall-through path.
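One cheap mitigation for the torn-snapshot case is to frame the state file with a trailing checksum that's verified at load time, so a corrupt file fails fast instead of loading garbage. A minimal sketch — `DefaultHasher` is a stand-in that catches torn writes, not tampering; a real deployment would want a cryptographic hash, and these helpers are hypothetical, not the actual vanta-node save path:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

/// Frame a snapshot with a trailing 8-byte checksum.
fn append_checksum(payload: &[u8]) -> Vec<u8> {
    let mut h = DefaultHasher::new();
    payload.hash(&mut h);
    let mut framed = payload.to_vec();
    framed.extend_from_slice(&h.finish().to_le_bytes());
    framed
}

/// On load, recompute and compare. A mismatch means "discard the file
/// and let the L1 watcher rebuild from the chain".
fn load_checked(bytes: &[u8]) -> Option<&[u8]> {
    if bytes.len() < 8 {
        return None;
    }
    let (payload, tail) = bytes.split_at(bytes.len() - 8);
    let stored = u64::from_le_bytes(tail.try_into().ok()?);
    let mut h = DefaultHasher::new();
    payload.hash(&mut h);
    (h.finish() == stored).then_some(payload)
}
```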
What this isn’t
I want to head off two possible misreadings.
This is not a proof-on-server architecture. The proof generation happens in the wallet (or, optionally, on a remote SP1 prover the user trusts). vanta-node doesn’t generate proofs. It distributes encrypted notes and indexes commitments. The only ZK code in vanta-node is the verifier path it uses to sanity-check proofs before accepting them into the proof event ring buffer.
This is not a custodial sidecar. vanta-node never sees secret keys. The encrypted notes are encrypted to the recipient's pubkey — vanta-node distributes ciphertext. Trial decryption happens client-side in the wallet using the recipient's secret. Lose the secret, lose the funds; lose the L2 sidecar, replay the chain. The cryptographic posture is the same as Zcash Sapling notes.
What I changed my mind about
The original nullifier-set post hinted at this: “The actual ZK proof verification happens out of process in the Rust sidecar. The C++ node fires off the proof to a local Unix socket and waits for ok or not ok.” That’s how it was originally architected. We changed it.
The Unix-socket sidecar didn’t survive contact with the SP1 backend. Spawning a sub-process every block to verify proofs is fine in regtest where blocks are minutes apart; on mainnet at 1-minute blocks with peak-hour transaction volume, the IPC overhead added up to milliseconds per verify, multiplied by every spend in every block. Statically linking libvanta_verifier.a into vantad brought the verifier into the same address space and the same allocator and dropped the per-verify cost to roughly what an in-Rust call would cost.
The audit-surface concern is real but mitigated by the minimal FFI: 440 bytes of struct, two C functions, deterministic output. A fuzzer can hammer that boundary and you’ll know if it’s broken.
What’s still out of process is the L2 state — the SMT, the nullifier index, the encrypted-note inbox. That’s the thing whose footprint we never want inside vantad, and there it’s stayed.
Further reading
- `vanta/vanta-node` — the L2 sidecar
- `vanta/vanta-verifier-ffi` — the in-process FFI verifier
- `papers/17-zkvm-engineering.md` — the design rationale for the SP1/Plonky3 backend
- Vanta: a Bitcoin fork with ZK at consensus — the chain itself
- Vanta Desktop: a Tauri wallet that ships its own full node — the binary that supervises both processes
- iroh.computer — the QUIC-based P2P stack the gossip layer uses