
The MCP server inside zera-sdk

Most SDKs ship as a library. zera-sdk also ships as a Model Context Protocol server. Here is why an AI agent should be able to call shielded-pool primitives directly, and how we keep that interface from becoming a footgun.

FROM
Dax the Dev <[email protected]>
SOURCE
https://blog.skill-issue.dev/blog/mcp_server_inside_zera_sdk/
FILED
2026-04-08 16:42 UTC
REVISED
2026-04-08 16:42 UTC
TIME
7 min read
SERIES
Building the Zera SDK
TAGS
#zera #mcp #sdk #ai-agents #rust #typescript

When we scaffolded the SDK monorepo in early March, the first non-obvious decision was including an MCP server in the box. Not as an example. Not as a future-work bullet. As a first-class crate alongside the Rust core and the TypeScript surface.

Six weeks later it still feels like the right call. Here is the reasoning.

What MCP actually is, and what it is not

MCP — Model Context Protocol — is Anthropic’s open JSON-RPC standard for letting LLM-driven applications call tools, read resources, and surface reusable prompts from any compliant server. By the start of 2026 there were over 10,000 public MCP servers and roughly 97 million SDK downloads per month across the Python and TypeScript implementations. The standard is in the boring-but-load-bearing phase: every major model vendor speaks it, the spec ships on a regular cadence, and the working groups have process.

What MCP is not is “an AI feature.” It is a protocol layer. The AI part is incidental. What MCP gives you is a typed, schema-described, discovery-friendly RPC surface that any client — model, CLI, IDE, agent — can connect to and immediately understand without bespoke glue. The most useful frame is “USB-C for tool calls.” That comparison gets thrown around to the point of cliché but it is also accurate: before USB-C you wrote per-cable glue; after, the cable is part of the device. MCP does the same thing for tool surfaces.
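To make "protocol layer, not AI feature" concrete, here is what a tool invocation looks like on the wire: plain JSON-RPC 2.0 with a `tools/call` method, per the MCP spec. The helper and the example arguments below are illustrative, not part of any real client library.

```typescript
// Minimal sketch of an MCP tool invocation as a JSON-RPC 2.0 message.
// Field names (method "tools/call", params.name, params.arguments)
// follow the MCP spec; makeToolCall is an illustrative helper.
interface ToolCallRequest {
  jsonrpc: "2.0";
  id: number;
  method: "tools/call";
  params: { name: string; arguments: Record<string, unknown> };
}

function makeToolCall(
  id: number,
  name: string,
  args: Record<string, unknown>,
): ToolCallRequest {
  return { jsonrpc: "2.0", id, method: "tools/call", params: { name, arguments: args } };
}

// Any client that speaks JSON-RPC can build this; no bespoke glue needed.
const req = makeToolCall(1, "compute_commitment", {
  asset: "USDC",
  amount: "50",
  randomness: "0x1234",
});
console.log(JSON.stringify(req, null, 2));
```

Everything interesting lives in the server's schema for each tool; the envelope itself is deliberately boring.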

The interesting question for an SDK author in 2026 is not “should I expose an MCP server?” — that question is settled by the AI-agent-economy curve I wrote about in the broadband-moment post. The interesting question is which surface to expose, and how to keep it from becoming a footgun.

The tools the SDK actually exposes

The first version of zera-mcp shipped a deliberately small surface. I want to talk about each piece, because the choice of what to expose is more meaningful than the protocol mechanics.

search_posts(query, k=5)

Wait, no — that is the blog’s MCP server, not the SDK’s. (Yes, the blog has one too, and I am building a longer post about that. Let me get back to the SDK.)

The SDK’s surface as of commit e350707, three tools and one read-only resource:

  1. compute_commitment(asset, amount, randomness) — returns a Poseidon commitment to an (asset, amount) pair under a caller-supplied blinding factor. This is the primitive an agent uses to describe a payment that has not happened yet — it can hand the commitment to a human for review without ever revealing the amount.

  2. derive_nullifier(note_secret, commitment) — returns the deterministic, single-use nullifier for a previously-committed note. As discussed in Nullifiers without the witchcraft, this is the hash that proves a note has been spent without revealing which one. Agents call this during proof generation.

  3. build_spend_proof(note, recipient, amount) — runs the full Groth16 prover for the canonical spend circuit and returns the proof bytes. This is the only tool that touches the prover. Doing this in-process via MCP is much better than asking an agent to shell out to a Rust binary; the agent gets a typed, schema-described return value with proof bytes and a public-input vector.

  4. get_pool_state() — read-only resource. Returns the current root hash of the commitment Merkle tree and the count of unspent notes. Agents poll this to check whether a proof is still valid against the latest pool state. It is a resource, not a tool, in MCP terms — the difference matters for caching and for signalling to the agent that the call is side-effect-free.

That is the entire surface: three tools and one read-only resource. No transfer, no withdraw, no set_owner. An agent can compose payments, prove them, and inspect pool state. It cannot move funds without a human signing the resulting transaction. That asymmetry is deliberate, and I will defend it for as long as MCP exists.
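For a sense of what "typed, schema-described" means in practice, the surface above can be sketched as the descriptors a `tools/list` response would carry. This is a plain-object sketch, not the real zera-mcp crate: the JSON Schema shapes and the resource URI are my own illustration.

```typescript
// Plain-object sketch of the three tool descriptors in the shape MCP's
// tools/list uses (name, description, inputSchema as JSON Schema).
// Descriptions paraphrase the post; none of this is the real crate.
interface ToolDescriptor {
  name: string;
  description: string;
  inputSchema: {
    type: "object";
    properties: Record<string, { type: string }>;
    required: string[];
  };
}

const str = { type: "string" };

const tools: ToolDescriptor[] = [
  {
    name: "compute_commitment",
    description: "Poseidon commitment to an (asset, amount) pair under a caller-supplied blinding factor",
    inputSchema: {
      type: "object",
      properties: { asset: str, amount: str, randomness: str },
      required: ["asset", "amount", "randomness"],
    },
  },
  {
    name: "derive_nullifier",
    description: "Deterministic single-use nullifier for a previously committed note",
    inputSchema: {
      type: "object",
      properties: { note_secret: str, commitment: str },
      required: ["note_secret", "commitment"],
    },
  },
  {
    name: "build_spend_proof",
    description: "Groth16 proof for the canonical spend circuit; returns proof bytes and public inputs",
    inputSchema: {
      type: "object",
      properties: { note: str, recipient: str, amount: str },
      required: ["note", "recipient", "amount"],
    },
  },
];

// get_pool_state lives on the resource side: side-effect-free and
// cacheable, addressed by URI rather than called. URI is illustrative.
const poolStateResource = { name: "get_pool_state", uri: "zera://pool/state" };

console.log(tools.map((t) => t.name).join(", "));
```

Nothing in either list can change pool state; that is the whole point of the next section.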

The asymmetry rule

Every time I add a tool to zera-mcp, I run it through a single test:

If the agent is compromised — adversarial prompts, model jailbreak, supply-chain payload in the tool-calling library — what is the worst it can do?

If the answer is “compute a commitment that the human can audit,” fine. If the answer is “move funds,” not fine. The line is whether the tool has unilateral authority to change pool state. The current SDK MCP draws that line at proof construction. The proof itself is just a bunch of bytes; submitting it to the chain still requires a transaction signed by a wallet that the agent does not have direct authority over.

This is the same threat model I argued for in the x402 honeypot disclosure post and in Rusty Pipes before that. You assume the agent is compromised. You design the surface so a compromised agent cannot drain the pool. Everything else is detail.
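The review test above is mechanical enough to write down. The gate below is a hypothetical sketch, not anything in the SDK: the Authority labels and the function are mine, encoding "admit a tool only if a fully compromised caller still cannot change pool state through it."

```typescript
// Hypothetical gate for the asymmetry rule. Classify a candidate tool
// by the worst thing a compromised agent could do through it, and admit
// only reads and pure compute. Labels and logic are illustrative.
type Authority = "read" | "compute" | "state-change";

function admissible(worstCase: Authority): boolean {
  // The line is unilateral authority to change pool state.
  return worstCase !== "state-change";
}

console.log(admissible("compute"));      // compute_commitment-style tools pass
console.log(admissible("state-change")); // a hypothetical transfer tool is rejected
```

Proof construction sits on the "compute" side of this line: the output is just bytes until a wallet co-signs.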

What it looks like when an agent uses it

Concretely, here is the flow when a user asks Claude (or ChatGPT, or any MCP-enabled client) to “send $50 of USDC to alice.sol from my shielded balance, but show me the commitment first”:

  1. The agent calls get_pool_state() to fetch the current Merkle root.
  2. The agent picks an unspent note from the user’s local wallet that is ≥ $50.
  3. The agent calls compute_commitment(USDC, $50, fresh_randomness) to construct the visible commitment.
  4. The agent surfaces the commitment to the human in a message that says, in effect, “here is the commitment for the $50 send to alice.sol; proceed?”
  5. The human approves.
  6. The agent calls derive_nullifier(...) and build_spend_proof(...) and gets back the proof bytes and public inputs.
  7. The agent hands the proof to the wallet — not to the chain — and the wallet signs and submits the transaction. The wallet has policy: it will not co-sign a proof whose public inputs have not been displayed to the human in step 4.

Step 7 is where the privilege boundary lives. The MCP tools never touch a private key. They never broadcast a transaction. They are pure compute against pool state.
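The seven steps can be sketched as an agent-side driver. Every name here is a stand-in: callTool models an MCP round-trip with canned responses, reviewCommitment models the human approval in steps 4 and 5, and none of this is the real SDK client.

```typescript
// Agent-side sketch of the flow. callTool fakes an MCP tools/call
// round-trip with canned responses; all values are placeholders.
type Json = Record<string, unknown>;

function callTool(name: string, args: Json): Json {
  switch (name) {
    case "get_pool_state":
      return { merkleRoot: "0xroot", unspentNotes: 42 };
    case "compute_commitment":
      return { commitment: "0xc0ffee" };
    case "derive_nullifier":
      return { nullifier: "0xdead" };
    case "build_spend_proof":
      return { proof: "0xbytes", publicInputs: ["0xc0ffee"] };
    default:
      throw new Error(`unknown tool: ${name}`);
  }
}

function reviewCommitment(_commitment: unknown): boolean {
  // Stand-in for steps 4-5: surface the commitment, await approval.
  return true;
}

function shieldedSend(amount: number, recipient: string): Json {
  callTool("get_pool_state", {});                          // 1. latest Merkle root
  const note = { id: "note-1", value: 80 };                // 2. pick a covering unspent note
  const { commitment } = callTool("compute_commitment", {  // 3. visible commitment
    asset: "USDC", amount, randomness: "fresh-randomness",
  });
  if (!reviewCommitment(commitment)) {                     // 4-5. human gate
    throw new Error("human declined");
  }
  callTool("derive_nullifier", { note_secret: "secret", commitment }); // 6a
  const proof = callTool("build_spend_proof", { note, recipient, amount }); // 6b
  return proof;                                            // 7. to the wallet, never the chain
}

console.log(shieldedSend(50, "alice.sol"));
```

Note what is absent: no key material, no broadcast. The sketch ends by returning bytes, exactly where the wallet's own policy takes over.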

Why this generalises

I have argued for this pattern in three places now: the SDK’s MCP server, the blog’s MCP server, and the proposed lib.skill-issue.dev personal MCP that exposes my writing as queryable resources. The pattern is the same in all three:

Expose typed read + compute primitives. Do not expose state-changing authority. Push every authority decision back through the human or through a wallet that has its own policy.

If we are entering a decade where AI agents are going to be calling cryptographic primitives, this is the boundary that needs to hold. The cryptography is finally ready, the protocols are finally ready, and the interface design is the part that is still up for grabs. I would rather we set the precedent now than discover the right shape after the first six-figure agent-driven drain.

What I changed my mind about

When I first started writing zera-mcp I assumed I would expose the prover as a resource (cacheable, repeatable) rather than a tool (potentially side-effecting). The ZK community talks about provers as deterministic functions — given the same witness, you get the same proof — so it felt natural to treat them like a read.

I changed my mind after watching an agent hammer the prover during testing. The prover is computationally side-effecting even if it is mathematically pure. Eight seconds of CPU per call adds up fast when an agent is in a loop. Resources in MCP are aggressively cached by clients; tools are not. By moving the prover behind a tool I forced the client to think about whether to call it again. Worth it.
