DOC# RECURS SLUG recursive_proofs_halo_to_nova PRINTED 2026-05-06 03:47 UTC

Recursive proof composition without the abyss: Halo to Nova

The path from Halo's accumulation scheme to Nova's folding scheme, derived from the recurrence relation. Where Halo2, Nova, SuperNova, and HyperNova actually differ, and which one to reach for in 2026.

FROM
Dax the Dev <[email protected]>
SOURCE
https://blog.skill-issue.dev/blog/recursive_proofs_halo_to_nova/
FILED
2026-05-02 16:00 UTC
REVISED
2026-05-02 16:00 UTC
TIME
14 min read
SERIES
ZK SNARKs in production
TAGS
#cryptography #recursive-snark #halo2 #nova #folding #zk #phd #math

A recursive SNARK is a proof that proves another proof was checked correctly. A program that runs for T steps and produces a proof of correct execution at each step can — recursively — collapse all T proofs into one. The verifier work goes from O(T) to O(1). This is the structural reason ZK rollups exist. It is also the reason “incrementally verifiable computation” stopped being a research curiosity in 2020 and became a deployment target.

The two papers that sit underneath every recursive SNARK shipped today are Halo (Bowe, Grigg, Hopwood 2019) and Nova (Kothapalli, Setty, Tzialla 2022). They take very different routes to the same destination. This post is the math of both, the trade-off table for picking one in 2026, and a Rust skeleton for the Nova folding step.

The problem recursive SNARKs are solving

You have a program that runs for T steps. At each step you produce a proof that the step was executed correctly. Naively, the verifier checks all T proofs — verifier cost O(T), no better than re-running the program. Useless.

The recursive trick: at step i, instead of producing a fresh proof, you produce a proof that says “step i was executed correctly, and the proof from step i-1 verifies.” The proof for step i recursively absorbs the proof for step i-1. After T steps you have one proof; the verifier checks one proof; the cost is O(1) in the program length.

The hardest part is making the inner verification cheap. If the verifier work for one proof is V and you embed that work in the circuit for the next proof, you’ve blown up the prover cost by V. Recursion is only useful if V is constant or near-constant in the original circuit size — which is exactly what Groth16, Halo, and Nova all aim for in different ways.

The math problem reduces to one question: how cheaply can you verify a SNARK inside a SNARK?

The Halo trick: accumulation without recursion-in-circuit

Pre-Halo recursion required a cycle of pairing-friendly elliptic curves: two curves E_1, E_2 with the property that the scalar field of E_1 is the base field of E_2 and vice versa, so that arithmetic over one curve can be expressed natively in the other curve’s circuit. Pasta (Pallas / Vesta) and MNT4/6 are the canonical cycles. The reason this matters: if you want to verify a Groth16 proof inside a Groth16 circuit, you need pairing-friendly arithmetic inside the circuit, which means the circuit field has to support the pairing curve. A cycle gives you two curves where each can verify proofs over the other.

The cycle constraint is annoying. Pasta’s curves don’t have efficient pairings (they’re cycle-friendly, not pairing-friendly), so they trade pairing efficiency for cycle availability. MNT cycles have very large fields and slow arithmetic. There’s no free lunch.

Halo (Bowe, Grigg, Hopwood 2019) was the first practical example of recursive proof composition that broke this constraint. The insight: instead of verifying the inner proof inside the circuit, you accumulate the most expensive part of the verification (the multiscalar multiplication, MSM) into a running sum, and defer the actual MSM check to the end of the recursion.

Formally: the verifier of an inner-product-argument-based proof has to check an equation of the form

\sum_i s_i \cdot G_i = P

for some derived scalars s_i and group elements G_i. This is the bottleneck — it’s a multiscalar multiplication of size linear in the circuit. The accumulation-scheme trick is: at step k, instead of checking this equation, you produce a fresh “accumulator” \text{acc}_k = (G_k^{\text{folded}}, P_k^{\text{folded}}) that combines the current step’s MSM with the previous accumulator. After T steps you have one accumulator and one MSM check. Verifier cost: O(\log T) for the recursion plus one final MSM.
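The deferral is easy to see in code. Below is a toy sketch of the accumulate-then-check-once structure, with the “group” modeled as the additive group of a small prime field so scalar multiplication is just field multiplication — all names are illustrative, and real Halo does this over Pallas/Vesta points:

```rust
// Toy Halo-style MSM accumulation: fold many deferred MSM claims into one
// via a random linear combination, then run the expensive check once.
// The "group" here is the additive group of Z_p (illustrative only).

const P: u128 = 2305843009213693951; // 2^61 - 1, a Mersenne prime

fn fadd(a: u128, b: u128) -> u128 { (a + b) % P }
fn fmul(a: u128, b: u128) -> u128 { (a * b) % P }

/// A deferred claim: "sum_i scalars[i] * bases[i] == result".
#[derive(Clone)]
struct MsmClaim { scalars: Vec<u128>, result: u128 }

/// Fold a new claim into the accumulator with challenge r:
/// scalars' = scalars_acc + r * scalars_new, result' = result_acc + r * result_new.
fn accumulate(acc: &MsmClaim, new: &MsmClaim, r: u128) -> MsmClaim {
    MsmClaim {
        scalars: acc.scalars.iter().zip(&new.scalars)
            .map(|(a, b)| fadd(*a, fmul(r, *b))).collect(),
        result: fadd(acc.result, fmul(r, new.result)),
    }
}

/// The one expensive MSM check, run once at the end of the recursion.
fn check_msm(bases: &[u128], claim: &MsmClaim) -> bool {
    let lhs = claim.scalars.iter().zip(bases)
        .fold(0u128, |acc, (s, g)| fadd(acc, fmul(*s, *g)));
    lhs == claim.result
}

fn main() {
    let bases = vec![5u128, 11, 23];
    let msm = |s: &[u128]| s.iter().zip(&bases)
        .fold(0u128, |a, (x, g)| fadd(a, fmul(*x, *g)));
    // Two honest claims over the same bases; neither is checked directly.
    let c1 = MsmClaim { scalars: vec![2, 3, 4], result: msm(&[2, 3, 4]) };
    let c2 = MsmClaim { scalars: vec![7, 1, 9], result: msm(&[7, 1, 9]) };
    let folded = accumulate(&c1, &c2, 1234567);
    // One check stands in for both deferred checks.
    assert!(check_msm(&bases, &folded));
    println!("folded MSM claim verifies");
}
```

If either original claim were false, the folded claim fails at a random challenge with overwhelming probability — that is the soundness argument behind deferring the check.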

The Halo paper formalises this as a polynomial commitment with deferred opening. It works because the recursive composition can defer expensive arithmetic, not because it embeds full verification in-circuit. From the abstract:

We present Halo, the first practical example of recursive proof composition without a trusted setup, using only the discrete logarithm assumption over normal cycles of elliptic curves. Recursion is achieved by amortizing away the expensive verification procedures from within the proof verification cycle, deferring them until the end of the recursion.

— Bowe, Grigg, Hopwood (source)

Halo2, the Zcash production deployment, uses the same construction over Pasta and ships it in Orchard (Zcash’s NU5 upgrade in 2022). Halo2 is also the basis of the Scroll zkEVM and of many circuits built on the Privacy & Scaling Explorations fork of the library.

The Nova trick: folding instead of accumulating

Two years after Halo, Nova (Kothapalli, Setty, Tzialla 2022) reframed the problem entirely. Instead of accumulating an MSM, Nova introduces a folding scheme: a primitive that takes two instances of a relation and folds them into a single instance, with prover cost O(|F|) for some step circuit F and no SNARK at all in the recursion.

The Nova relation is a relaxed R1CS instance:

\mathbf{A} \mathbf{z} \circ \mathbf{B} \mathbf{z} = u \mathbf{C} \mathbf{z} + \mathbf{e}

where \mathbf{A}, \mathbf{B}, \mathbf{C} are the constraint matrices, \mathbf{z} is the witness extended with public inputs, u is a slack scalar (1 in the standard R1CS case), and \mathbf{e} is an error vector (zero in the standard case). The “relaxed” part is that u may differ from 1 and \mathbf{e} may be nonzero — that’s what makes folding possible.

Given two relaxed R1CS instances (u_1, \mathbf{z}_1, \mathbf{e}_1) and (u_2, \mathbf{z}_2, \mathbf{e}_2), the folding scheme produces a single instance (u, \mathbf{z}, \mathbf{e}) via a random challenge r:

u = u_1 + r \cdot u_2, \quad \mathbf{z} = \mathbf{z}_1 + r \cdot \mathbf{z}_2, \quad \mathbf{e} = \mathbf{e}_1 + r \cdot \mathbf{T} + r^2 \cdot \mathbf{e}_2

with \mathbf{T} a “cross-term” the prover sends to the verifier. The folded instance is satisfying iff both originals were (with overwhelming probability). Crucially, folding does not require the verifier to do any expensive cryptographic work: \mathbf{T} is sent as a vector commitment, r is a Fiat-Shamir challenge, and the new (u, \mathbf{z}, \mathbf{e}) is a linear combination. No pairings. No SNARK.
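Where the cross-term comes from is worth seeing once. Substituting the folded \mathbf{z} and u into the relaxed relation and matching powers of r:

```latex
% Expand the left-hand side with z = z_1 + r z_2:
\mathbf{A}\mathbf{z} \circ \mathbf{B}\mathbf{z}
 = \mathbf{A}\mathbf{z}_1 \circ \mathbf{B}\mathbf{z}_1
 + r\,(\mathbf{A}\mathbf{z}_1 \circ \mathbf{B}\mathbf{z}_2
      + \mathbf{A}\mathbf{z}_2 \circ \mathbf{B}\mathbf{z}_1)
 + r^2\,\mathbf{A}\mathbf{z}_2 \circ \mathbf{B}\mathbf{z}_2

% Expand the right-hand target with u = u_1 + r u_2:
u\,\mathbf{C}\mathbf{z}
 = u_1 \mathbf{C}\mathbf{z}_1
 + r\,(u_1 \mathbf{C}\mathbf{z}_2 + u_2 \mathbf{C}\mathbf{z}_1)
 + r^2\,u_2 \mathbf{C}\mathbf{z}_2

% Using A z_i ∘ B z_i = u_i C z_i + e_i for each original instance,
% the constant and r^2 terms are absorbed by e_1 and r^2 e_2, and the
% leftover linear term forces exactly
\mathbf{T}
 = \mathbf{A}\mathbf{z}_1 \circ \mathbf{B}\mathbf{z}_2
 + \mathbf{A}\mathbf{z}_2 \circ \mathbf{B}\mathbf{z}_1
 - u_1 \mathbf{C}\mathbf{z}_2
 - u_2 \mathbf{C}\mathbf{z}_1,
\qquad
\mathbf{e} = \mathbf{e}_1 + r\,\mathbf{T} + r^2\,\mathbf{e}_2
```

The error vector exists precisely to soak up this cross-term — a strict R1CS instance (e = 0, u = 1) cannot absorb it, which is why the relation has to be relaxed before it can fold.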

The Nova incrementally verifiable computation (IVC) recurrence is then:

(u_{i+1}, \mathbf{z}_{i+1}, \mathbf{e}_{i+1}) = \text{Fold}\big( (u_i, \mathbf{z}_i, \mathbf{e}_i), \; (u_F, \mathbf{z}_F^{(i)}, \mathbf{0}) \big)

where the second instance is a fresh R1CS encoding of “step i of the program F executed correctly.” After T steps, you have one relaxed R1CS instance, and you produce a single SNARK that proves it’s satisfying. The SNARK runs once, at the end. Every step in between is folding.

The cost asymmetry is the entire pitch. Halo’s per-step cost is O(|F|) for the step plus O(\log T) for the recursion. Nova’s per-step cost is just O(|F|) — no recursion overhead. For long computations (T \gg 1) Nova is significantly cheaper. The trade-off: Nova gives you one final SNARK to verify, while Halo gives you a SNARK at every step.

The folding scheme is the entire idea. Everything else in Nova is bookkeeping around it.

Halo2, Nova, SuperNova, HyperNova — what’s the difference

Four production-grade recursive systems, four design points.

Halo2 (Zcash, Pasta curves)
  Cost: per-step prover ~constant in T; verifier O(log T) MSM
  Latency: step proof ~5-15 s for non-trivial circuits
  Maturity: production since 2022 (Zcash NU5); audited
  Notes: best for protocols that already use the IPA / Pasta stack. Wide deployment.

Nova (Pallas/Vesta cycle)
  Cost: per-step prover O(|F|) with no SNARK; one final SNARK
  Latency: per-step ~100 ms-1 s; final SNARK ~5-15 s
  Maturity: production via microsoft/Nova, Lurk; younger than Halo2
  Notes: best when T is very large and you can defer the SNARK. The right choice for a zkVM step circuit.

SuperNova
  Cost: per-step proportional to the size of the invoked instruction circuit
  Latency: per-step ~50-300 ms (varies by instruction)
  Maturity: newer; production zkVMs (RISC0, Jolt-shape) are exploring it
  Notes: the right answer for instruction-set machines (EVM, RISC-V) where steps don't all use the same circuit.

HyperNova (CCS)
  Cost: folds CCS instead of R1CS; supports Plonkish/AIR/R1CS uniformly
  Latency: comparable to Nova, with a richer constraint system
  Maturity: newest of the four; CRYPTO 2024
  Notes: the unification target. Drop-in upgrade for protocols that want CCS flexibility without rewriting.

Plain Groth16 + curve cycle
  Cost: per-step verifier embedding ~10k constraints
  Latency: per-step ~10 s+
  Maturity: what everyone did before Halo and Nova
  Notes: mostly historical now. Don't reach for this in 2026.

The two questions that decide which one to reach for in 2026:

  1. Is your computation a uniform step that repeats, or a heterogeneous instruction set? Uniform: Nova. Heterogeneous: SuperNova. (HyperNova handles both via CCS.)
  2. Do you need a SNARK at every step, or can you defer to one final SNARK? Every step: Halo2. Defer: Nova / SuperNova / HyperNova.

For a privacy-pool transfer, neither of these is the right shape — you want a single SNARK per spend, no recursion. For zera-sdk we ship Groth16 and don’t recurse. For a rollup settling many transfers in a batch, Nova-flavoured folding is the right structural answer because the per-step cost is dominated by the transfer logic and the final SNARK only runs once per epoch.

A Nova folding step, in Rust

The cleanest way to see what’s actually happening in a folding step is to write one out. The skeleton below is a Nova-shaped folding step over a toy R1CS — no pairings, no real curve, no soundness, but the linear-combination structure is the real thing.

Nova folding step (skeleton) [ rust ]
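A minimal sketch of that step, assuming a toy prime field and explicit vectors in place of the Pedersen commitments real Nova uses — all names are illustrative, and none of this is the microsoft/Nova API:

```rust
// Relaxed R1CS over Z_p: (A z) ∘ (B z) = u · (C z) + e.
// Vectors stand in for commitments; real Nova folds commitments to z and e
// with the same linear combinations, which Pedersen supports homomorphically.

const P: u128 = 2305843009213693951; // 2^61 - 1, a Mersenne prime

fn fadd(a: u128, b: u128) -> u128 { (a + b) % P }
fn fsub(a: u128, b: u128) -> u128 { (a + P - b) % P }
fn fmul(a: u128, b: u128) -> u128 { (a * b) % P }

#[derive(Clone)]
struct RelaxedR1cs { u: u128, z: Vec<u128>, e: Vec<u128> }

/// Sparse-free matrix-vector product over Z_p.
fn mat_vec(m: &[Vec<u128>], z: &[u128]) -> Vec<u128> {
    m.iter().map(|row| row.iter().zip(z)
        .fold(0, |acc, (a, b)| fadd(acc, fmul(*a, *b)))).collect()
}

/// Check (A z) ∘ (B z) == u · (C z) + e, row by row.
fn is_satisfying(a: &[Vec<u128>], b: &[Vec<u128>], c: &[Vec<u128>], i: &RelaxedR1cs) -> bool {
    let (az, bz, cz) = (mat_vec(a, &i.z), mat_vec(b, &i.z), mat_vec(c, &i.z));
    (0..az.len()).all(|k| fmul(az[k], bz[k]) == fadd(fmul(i.u, cz[k]), i.e[k]))
}

/// Cross-term T = A z1 ∘ B z2 + A z2 ∘ B z1 − u1 · C z2 − u2 · C z1.
fn cross_term(a: &[Vec<u128>], b: &[Vec<u128>], c: &[Vec<u128>],
              i1: &RelaxedR1cs, i2: &RelaxedR1cs) -> Vec<u128> {
    let (az1, bz1, cz1) = (mat_vec(a, &i1.z), mat_vec(b, &i1.z), mat_vec(c, &i1.z));
    let (az2, bz2, cz2) = (mat_vec(a, &i2.z), mat_vec(b, &i2.z), mat_vec(c, &i2.z));
    (0..az1.len()).map(|k| {
        let pos = fadd(fmul(az1[k], bz2[k]), fmul(az2[k], bz1[k]));
        let neg = fadd(fmul(i1.u, cz2[k]), fmul(i2.u, cz1[k]));
        fsub(pos, neg)
    }).collect()
}

/// Fold: u' = u1 + r·u2, z' = z1 + r·z2, e' = e1 + r·T + r²·e2.
fn fold(a: &[Vec<u128>], b: &[Vec<u128>], c: &[Vec<u128>],
        i1: &RelaxedR1cs, i2: &RelaxedR1cs, r: u128) -> RelaxedR1cs {
    let t = cross_term(a, b, c, i1, i2);
    RelaxedR1cs {
        u: fadd(i1.u, fmul(r, i2.u)),
        z: i1.z.iter().zip(&i2.z).map(|(x, y)| fadd(*x, fmul(r, *y))).collect(),
        e: (0..t.len()).map(|k|
            fadd(i1.e[k], fadd(fmul(r, t[k]), fmul(fmul(r, r), i2.e[k])))).collect(),
    }
}

fn main() {
    // One constraint: z0 * z1 = z2.
    let a = vec![vec![1, 0, 0]];
    let b = vec![vec![0, 1, 0]];
    let c = vec![vec![0, 0, 1]];
    let i1 = RelaxedR1cs { u: 1, z: vec![3, 4, 12], e: vec![0] };
    let i2 = RelaxedR1cs { u: 1, z: vec![2, 5, 10], e: vec![0] };
    assert!(is_satisfying(&a, &b, &c, &i1));
    assert!(is_satisfying(&a, &b, &c, &i2));
    // In real Nova r is a Fiat-Shamir challenge; here it's fixed.
    let folded = fold(&a, &b, &c, &i1, &i2, 7);
    assert!(is_satisfying(&a, &b, &c, &folded)); // fold preserves satisfiability
    println!("folded instance satisfies relaxed R1CS");
}
```

Note what is absent: no pairings, no polynomial openings, no proof object per step — the per-step work is matrix-vector products and linear combinations, which is the entire O(|F|) claim.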

The shape is the thing. The fold is just three linear combinations: u' = u_1 + r u_2, \mathbf{z}' = \mathbf{z}_1 + r \mathbf{z}_2, \mathbf{e}' = \mathbf{e}_1 + r \mathbf{T} + r^2 \mathbf{e}_2. The cross-term \mathbf{T} is what the prover sends; the challenge r is Fiat-Shamir over the transcript. In real Nova the witness \mathbf{z} is replaced by a Pedersen commitment to it (so the verifier never sees the witness), and the error vector \mathbf{e} is replaced by a commitment as well. The linear structure of the fold is preserved by the additive-homomorphic property of the Pedersen commitment, which is the entire reason Pedersen is the right primitive here.
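That homomorphism is worth one tiny demonstration. A toy Pedersen commitment over the multiplicative group of Z_p — the parameters below are illustrative and nowhere near a secure instantiation — satisfies Com(v1, b1) · Com(v2, b2)^r = Com(v1 + r·v2, b1 + r·b2), which is exactly the shape the fold needs:

```rust
// Toy Pedersen commitment Com(v, b) = g^v * h^b mod p, shown to be
// homomorphic under the fold's linear combination. Parameters are
// illustrative only; real Nova uses a prime-order curve group.

const P: u128 = 1_000_000_007; // toy prime modulus
const G: u128 = 5;             // toy "generator" g
const H: u128 = 7;             // toy independent "generator" h

/// Square-and-multiply modular exponentiation.
fn pow_mod(mut base: u128, mut exp: u128, m: u128) -> u128 {
    let mut acc = 1u128;
    base %= m;
    while exp > 0 {
        if exp & 1 == 1 { acc = acc * base % m; }
        base = base * base % m;
        exp >>= 1;
    }
    acc
}

/// Com(v, b) = g^v * h^b mod p.
fn commit(v: u128, blind: u128) -> u128 {
    pow_mod(G, v, P) * pow_mod(H, blind, P) % P
}

fn main() {
    let (v1, b1) = (42, 1001);
    let (v2, b2) = (99, 2002);
    let r = 7;
    // Folding the commitments...
    let lhs = commit(v1, b1) * pow_mod(commit(v2, b2), r, P) % P;
    // ...equals committing to the folded opening.
    let rhs = commit(v1 + r * v2, b1 + r * b2);
    assert_eq!(lhs, rhs);
    println!("Com(v1,b1) * Com(v2,b2)^r == Com(v1 + r*v2, b1 + r*b2)");
}
```

So the verifier can fold commitments to \mathbf{z} and \mathbf{e} without ever seeing the openings, and the prover folds the openings in parallel — the two stay consistent by linearity.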

Where this lands for ZERA

The honest answer about recursion in zera-sdk v1: we don’t use it. A privacy transfer is one Groth16 proof per spend, and there’s nothing to recurse over. The advantage of recursion shows up when:

  1. the computation is a long chain of repeated steps (T ≫ 1);
  2. the verifier needs one cheap check rather than a proof per step;
  3. a single deferred SNARK at the end of the chain is acceptable.

For ZERA’s transfer flow, none of these apply. For the eventual settlement layer that sits underneath a chain of ZERA transfers (think: a state-root proof every epoch), Nova-style folding is the right shape, and the design seam is in crates/zera-sdk-core/src/recursion.rs. Empty file today. We’ve left the door open.

The reason I wrote this post anyway is that recursion is the part of the ZK stack that’s most actively moving in 2026. HyperNova landed at CRYPTO 2024 with a CCS-based unification of R1CS / AIR / Plonkish that was supposed to take five more years. The next two years are going to compress IVC primitives down to “one folding scheme, three commitment choices, pick your poison.” Anyone deploying a ZK system today should know what shape that compressed primitive will be, because the migration cost will be the difference between a clean refactor and a rewrite.
