Generating mempool with a Rust txbot
Empty blocks lie. A new chain whose miners are mining empty templates is not exercising any of the code that fails in production. The txbot is a 200-line Rust loop that round-robins coins through 114 addresses to keep the mempool honest.
FROM: Dax the Dev <[email protected]>
SOURCE: https://blog.skill-issue.dev/blog/vanta_txbot_synthetic_mempool/
FILED: 2026-04-13 17:39 UTC
REVISED: 2026-04-15 00:00 UTC
TIME: 8 min read
SERIES: Vanta In Practice
There is a class of bug in a Bitcoin-style chain that you only ever see when the mempool is non-trivially full. Fee-rate accounting, RBF replacement, package relay, mempool eviction policies — all of it is only ever stressed by real spend pressure. A new chain whose miners are mining empty templates against an empty mempool is, by definition, not exercising any of that code. So you ship a transaction bot.
The Vanta txbot is txbot/src/main.rs, a 200-line Rust loop that round-robins random spends across 114 pre-funded Z-addresses on the testnet wallet. This post is a tour of what it does, what it found, and why the synthetic-load approach is non-negotiable when you’re bringing up an L1.
The problem statement
In April 2026 I had a chain that worked. Bitaxes were finding blocks, the explorer was rendering them, the wallet was sending and receiving. There were also days where mempool depth was zero for hours at a time, because the only person transacting was me, and I sleep.
A pre-mainnet chain that produces empty blocks is less debugged than one with mempool pressure. Things you don’t notice when blocks are empty:
- Fee estimation has nothing to estimate against and falls through to fallbackfee.
- Coin selection is trivial when there are 12 UTXOs in the wallet. With 1,000 UTXOs across 114 addresses, you start hitting bnb-vs-knapsack edge cases.
- Block-template construction never sees competition between transactions. Every fee policy is moot.
- The mempool’s eviction policy, ancestor/descendant limits, and policy-vs-consensus split — all untested.
The fix isn’t “wait for users.” Users come after the chain is debugged. The fix is to ship synthetic load and let the chain talk to itself.
The bot in 200 lines
The configuration up top sets the spend envelope:
const MAX_SPEND_RATIO: f64 = 0.40;
const MIN_SPEND_RATIO: f64 = 0.05;
const MAX_OUTPUTS: usize = 12;
const MIN_DELAY_MS: u64 = 200;
const MAX_DELAY_MS: u64 = 2000;
Every round, the bot picks a random fraction of its current balance between 5% and 40%, splits it into 1–12 random output amounts, picks 1–12 destination addresses uniformly from the address pool, and sends. Then sleeps a random 200ms–2000ms and goes again.
The address pool is hardcoded inline as a &[&str] of 114 Z-prefix addresses. They’re real addresses owned by the testnet wallet (the bot is running against the wallet RPC), so coins keep round-robining through the same wallet — never net leaving, just churning.
The spend loop is the simplest thing that works:
loop {
    round += 1;
    let balance = match get_balance(&rpc) {
        Ok(b) => b,
        Err(e) => {
            eprintln!("get_balance failed: {e}");
            std::thread::sleep(Duration::from_secs(10));
            continue;
        }
    };
    if balance < 1.0 {
        std::thread::sleep(Duration::from_secs(10));
        continue;
    }
    let spend_ratio = rng.gen_range(MIN_SPEND_RATIO..=MAX_SPEND_RATIO);
    let total_spend = balance * spend_ratio;
    let num_outputs = rng.gen_range(1..=MAX_OUTPUTS);
    let amounts = random_split(&mut rng, total_spend, num_outputs);
    // ... send each output via sendtoaddress, log txid
    let delay = rng.gen_range(MIN_DELAY_MS..=MAX_DELAY_MS);
    std::thread::sleep(Duration::from_millis(delay));
}
random_split divides the total spend into n pieces by sampling n-1 uniformly random cut points. This produces uneven splits — most outputs are small, a couple are medium, occasionally one is large. That distribution is closer to organic spending than equal splits would be, and it stresses coin selection harder.
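That cut-point construction is small enough to sketch in full. The snippet below is illustrative rather than the bot’s actual code: a tiny xorshift64 PRNG (`Xorshift`) stands in for the `rand` crate so it stays dependency-free, and the function name `random_split` is reused for clarity.

```rust
// Tiny xorshift64 PRNG standing in for the `rand` crate (illustrative only).
struct Xorshift(u64);

impl Xorshift {
    fn next_f64(&mut self) -> f64 {
        // One xorshift64 step, then scale the top 53 bits into [0, 1).
        self.0 ^= self.0 << 13;
        self.0 ^= self.0 >> 7;
        self.0 ^= self.0 << 17;
        (self.0 >> 11) as f64 / (1u64 << 53) as f64
    }
}

/// Split `total` into `n` uneven pieces by sampling n-1 uniform cut
/// points in [0, total), sorting them, and taking the gaps between them.
fn random_split(rng: &mut Xorshift, total: f64, n: usize) -> Vec<f64> {
    let mut cuts: Vec<f64> = (0..n - 1).map(|_| rng.next_f64() * total).collect();
    cuts.sort_by(|a, b| a.partial_cmp(b).unwrap());
    let mut pieces = Vec::with_capacity(n);
    let mut prev = 0.0;
    for c in &cuts {
        pieces.push(c - prev);
        prev = *c;
    }
    pieces.push(total - prev);
    pieces
}

fn main() {
    let mut rng = Xorshift(0x2026_0413);
    let amounts = random_split(&mut rng, 100.0, 5);
    let sum: f64 = amounts.iter().sum();
    println!("{amounts:?}");
    assert!((sum - 100.0).abs() < 1e-9);
}
```

The sorted-gaps trick is why the distribution skews small: gaps between uniform order statistics are exponentially distributed, so a few pieces are large and most are small.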
sendtoaddress is called once per output rather than once per n-output transaction. This was a deliberate choice: it produces more transactions per round (which is the point), and it lets the chain pick how it batches them in mempool selection.
What it actually exercised
The bot ran for weeks against the Latitude testnet. Things it surfaced:
Fee estimation falls through. The first time the bot sent a transaction the call returned "Fee estimation failed." The fix was the now-canonical settxfee at startup with a 0.0001 fallback. Same line is in the Axum web wallet’s main.rs for the same reason.
Wallet RPC contention. When the bot rate is high, multiple sendtoaddress calls in flight contend on the wallet’s lock. The bot is single-threaded so it’s only contending against itself plus whatever else uses the wallet (the web UI, occasional manual sends). The lesson: if you’re going to run the bot at high rate, give it a dedicated wallet via loadwallet.
Mempool eviction. With the bot churning 5–10 transactions per round and a 1-minute block time, mempool depth would creep up during slow blocks and drain on fast blocks. This was the first time I watched the eviction policy actually run. It’s fine — Bitcoin Core’s mempool is one of the most-tested pieces of state in the codebase — but watching it from the outside helped me build a model of how it behaves at our parameters (1-min blocks, 100k subsidy, low fee floor).
Pool L2 retry queue. The 2026-04-13 commit ops: vanta-node systemd unit + docker compose + pool L2 retry queue landed a feature where the Stratum server’s L2 submission is enqueued for retry when the L2 sidecar is unreachable. We discovered the need for that retry queue because the txbot was generating enough block-finding pressure that the pool was sometimes hitting submitblock while the L2 sidecar was being restarted. Without the retry queue, those blocks’ encrypted notes would be lost. With it, they get replayed when the L2 comes back.
That’s the synthetic-load test paying for itself in a feature that ended up in the production pool code.
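The shape of that retry queue is worth sketching. This is a minimal model, not the pool’s actual code: the `Sidecar` trait, `RetryQueue`, and `ToySidecar` names are hypothetical stand-ins, and the real implementation would persist or rate-limit differently.

```rust
use std::collections::VecDeque;

/// Hypothetical interface to the L2 sidecar (illustrative, not the pool's real API).
trait Sidecar {
    fn submit(&mut self, payload: &str) -> Result<(), String>;
}

/// Bounded retry queue: failed submissions are enqueued and replayed,
/// oldest first, the next time the sidecar is reachable.
struct RetryQueue {
    pending: VecDeque<String>,
    cap: usize,
}

impl RetryQueue {
    fn new(cap: usize) -> Self {
        RetryQueue { pending: VecDeque::new(), cap }
    }

    /// Try to send `payload`; on failure, queue it for later (dropping
    /// the oldest entry if the queue is full).
    fn submit_or_queue(&mut self, sidecar: &mut impl Sidecar, payload: String) {
        // First drain anything that failed earlier, in order.
        while let Some(front) = self.pending.pop_front() {
            if sidecar.submit(&front).is_err() {
                self.pending.push_front(front);
                break;
            }
        }
        if sidecar.submit(&payload).is_err() {
            if self.pending.len() == self.cap {
                self.pending.pop_front();
            }
            self.pending.push_back(payload);
        }
    }

    fn backlog(&self) -> usize {
        self.pending.len()
    }
}

/// Toy sidecar that is either up or down, for demonstration.
struct ToySidecar { up: bool, delivered: Vec<String> }

impl Sidecar for ToySidecar {
    fn submit(&mut self, payload: &str) -> Result<(), String> {
        if self.up {
            self.delivered.push(payload.to_string());
            Ok(())
        } else {
            Err("sidecar unreachable".into())
        }
    }
}

fn main() {
    let mut q = RetryQueue::new(16);
    let mut sc = ToySidecar { up: false, delivered: Vec::new() };
    q.submit_or_queue(&mut sc, "block-1".into()); // sidecar down: queued
    q.submit_or_queue(&mut sc, "block-2".into()); // queued behind block-1
    assert_eq!(q.backlog(), 2);
    sc.up = true; // sidecar restarted
    q.submit_or_queue(&mut sc, "block-3".into()); // drains backlog, then sends
    assert_eq!(q.backlog(), 0);
    assert_eq!(sc.delivered, vec!["block-1", "block-2", "block-3"]);
}
```

The key property is the one the txbot exposed: a block found during a sidecar restart is replayed rather than lost.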
Things the bot is not designed to be
The bot is a stress generator, not a fuzzer. It does not:
- Construct invalid transactions to test rejection paths. (That’s the functional tests in test/.)
- Try to double-spend. (The wallet won’t let it; the chain wouldn’t accept it.)
- Generate shielded transactions. (No SP1 prover in the bot loop. Yet.)
- Negotiate fees adversarially. (Single fixed fallback fee.)
I have thought about adding all of these. The shielded-transaction one is the interesting next step. A txbot that includes some fraction of shielded sends would exercise the SMT growth path, the nullifier-set growth path, and the encrypted-note inbox at vanta-node — all of which currently only get exercised by manual sends from the wallet.
Adding ZK proof generation to the bot loop is the trade-off, though. SP1 proofs take 30–60 seconds on CPU, so a bot that does 10 sends per minute can’t be all-shielded. TODO: Dax confirm whether we want a --shielded-ratio 0.2 flag to mix.
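If that flag ever lands, the mixing itself is just one Bernoulli draw per round. A sketch under that assumption; `shielded_ratio` is the hypothetical config value, and a toy xorshift PRNG (`Rng`) again stands in for the `rand` crate:

```rust
/// Toy xorshift64 PRNG in place of the `rand` crate (illustrative only).
struct Rng(u64);

impl Rng {
    fn next_f64(&mut self) -> f64 {
        self.0 ^= self.0 << 13;
        self.0 ^= self.0 >> 7;
        self.0 ^= self.0 << 17;
        (self.0 >> 11) as f64 / (1u64 << 53) as f64
    }
}

/// Decide per round whether this send is shielded. With ratio 0.2,
/// roughly 2 of every 10 sends pay the SP1 proving latency while the
/// other 8 keep the transparent tx rate up.
fn is_shielded(rng: &mut Rng, shielded_ratio: f64) -> bool {
    rng.next_f64() < shielded_ratio
}

fn main() {
    let mut rng = Rng(42);
    let shielded = (0..10_000).filter(|_| is_shielded(&mut rng, 0.2)).count();
    println!("shielded rounds: {shielded} / 10000");
    // Expect roughly 2,000 shielded rounds out of 10,000.
    assert!((1_700..2_300).contains(&shielded));
}
```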
Why not synthetic at the protocol level
A reasonable counter-design: instead of running a separate bot, make vantad itself emit synthetic transactions in a regtest-only mode.
Two reasons we didn’t:
- The bot is real. Every transaction the bot sends is signed by a real key, broadcast through real RPC, and validated by real consensus. It exercises the same code paths a user transaction would. A built-in synthetic mode is cheaper but it is at risk of taking shortcuts that a real RPC client wouldn’t take.
- Operational separation. The bot is a thing I can stop, restart, retarget, or add features to without touching vantad. That separation matters; the consensus binary should not contain test-traffic-generation code.
The bot lives in txbot/, separate Cargo workspace, separate binary. The cost of that separation is a few extra lines of bitcoincore-rpc setup. The benefit is that I can iterate on the bot during a deploy without rebuilding the chain.
What I would change
A list, in order of priority:
- Multiple workers. A single-threaded bot maxes out around 10 tx/sec because of RPC round-trip latency. A 4-worker version with a shared rng seed would 4x the rate without changing the workload shape. Easy.
- Shielded mix. As above. Adds the SP1 dependency and an L2-sidecar URL to the bot’s config; cost is per-tx latency.
- Adversarial replacement. Send a tx, then send a higher-fee replacement before the first confirms. Tests RBF policy. Easy.
- Mempool snapshot logging. After each send, query getmempoolinfo and getmempoolancestors for the txid. Log the mempool depth and ancestor count. This produces a time series I can graph against block-find events to see how mempool pressure correlates with confirmation latency. Low priority.
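The multi-worker item at the top of that list is mostly plumbing. A sketch of the shape with std threads and a channel, where `spend_round` is an illustrative stand-in for one real RPC send loop iteration (the actual version would hold a `bitcoincore-rpc` client per worker):

```rust
use std::sync::mpsc;
use std::thread;

/// Stand-in for one spend round; the real version would call the wallet
/// RPC. Returns a fake "txid" tagged with the worker that sent it.
fn spend_round(worker: usize, round: usize) -> String {
    format!("txid-w{worker}-r{round}")
}

fn main() {
    const WORKERS: usize = 4;
    const ROUNDS_PER_WORKER: usize = 25;

    let (tx, rx) = mpsc::channel();
    let mut handles = Vec::new();
    for w in 0..WORKERS {
        let tx = tx.clone();
        handles.push(thread::spawn(move || {
            for r in 0..ROUNDS_PER_WORKER {
                // Each worker runs its own send loop, so RPC round-trips
                // overlap instead of serializing behind one thread.
                tx.send(spend_round(w, r)).unwrap();
            }
        }));
    }
    drop(tx); // drop the original sender so rx.iter() ends when workers finish
    let txids: Vec<String> = rx.iter().collect();
    for h in handles {
        h.join().unwrap();
    }
    assert_eq!(txids.len(), WORKERS * ROUNDS_PER_WORKER);
    println!("collected {} txids from {WORKERS} workers", txids.len());
}
```

The workload shape is unchanged; only the number of in-flight RPC round-trips grows, which is exactly the claim the list item makes.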
The bot is also load-bearing for the explorer. The 2026-04-13 commit explorer: privacy throughput + anonymity charts (recharts) shows transaction-count and mempool-depth charts on the explorer dashboard; those charts are flat without the bot running.
Further reading
- txbot/src/main.rs — the entire bot in one file
- txbot/Cargo.toml — dependency-light by design
- The vanta wallet HTTP API — sister piece on the Axum wallet that talks to the same RPC
- Vanta: a Bitcoin fork with ZK at consensus — the chain the bot is exercising
- Mining VANTA with a Bitaxe BM1368 — the hardware that consumes the bot’s mempool pressure
- Bitcoin Core mempool docs — the policy surface the bot indirectly tests