Cruiser: A Tauri Hookup App on iroh, Geohash-Bucketed Presence, and Why P2P Dating Is Actually Fine
Sections
- The geohash topic split
- The three tasks per area
- What an announce looks like
- Direct messages: a separate topic per pair
- Why iroh-gossip and not libp2p
- CoreLocation, IP fallback, and the geolocation rabbit-hole
- What “Phase 1–26” means
- What I’d do differently
- Trade-offs
- What this taught me
- Further reading
The dating app market in 2026 is two things: dystopian centralized platforms (Match Group’s stable: Tinder, Hinge, OkCupid, etc.) and crypto-bro-coded alternatives that promise decentralization but ship a Mongo cluster behind the API. Neither is what the queer community I built Cruiser for actually wanted. They wanted a dating app where the only servers were the participants’ own devices, where presence was bucketed by location without any single party seeing all locations, and where the wallet was the identity but the wallet wasn’t a custody surface.
Cruiser shipped on 2026-02-26 in 4cecbd4 — Cruiser: P2P hookup app — Phases 1–26. 89 files. ~20,000 lines. Tauri 2.x for the desktop wrapper, React 18 + Zustand for the UI, iroh-gossip for the P2P transport, Solana for the wallet/payment rails. The full mono-commit is the result of 26 phases of design + implementation that I’d been working on locally, then squashed into one commit before pushing.
This post is about the gossip-presence architecture in particular, because that’s where the “no servers” promise actually has to be defended.
The geohash topic split
iroh-gossip is a publish/subscribe protocol over a peer-mesh, with content-addressed TopicIds. Every peer that subscribes to a TopicId joins the same gossip mesh and exchanges messages. The naive thing is to use one topic for the whole app — cruiser/v1 — and broadcast every presence announce to every peer.
This is a privacy disaster. It means every peer sees every other peer’s broadcast, including their location. The architecture that ships in Cruiser is per-geohash topics:
// src-tauri/src/gossip_presence.rs
const AREA_TOPIC_PREFIX: &str = "cruiser/area/v1/";
let topic_bytes = location::topic_from_geohash(AREA_TOPIC_PREFIX, geohash6);
let topic_id = TopicId::from_bytes(topic_bytes);
The topic is cruiser/area/v1/<geohash6>. Every 6-character geohash bucket is its own topic. A geohash6 covers approximately a 1.2 km × 0.6 km area — small enough to be a single neighborhood, large enough to have actual users in it. Two peers join the same topic only if they’re in the same geohash6 bucket.
This is the privacy architecture in one decision: you can only see the presence of peers who chose to be visible in the same geographic bucket as you. A peer in San Francisco can’t see a peer in Berlin’s gossip. Even within a city, a peer in the Mission can’t see a peer in the Castro, because those are different geohash6 buckets.
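For concreteness, here is what the bucketing math looks like. This is a minimal sketch of the standard geohash algorithm — Cruiser's actual encoder lives in its location module and may differ in detail, but any conforming encoder produces the same buckets:

```rust
// Standard geohash encoding: interleave longitude/latitude bisection bits,
// emit 5 bits per base32 character. 6 characters = 30 bits ≈ 1.2 km × 0.6 km.
const BASE32: &[u8] = b"0123456789bcdefghjkmnpqrstuvwxyz";

fn geohash6(lat: f64, lon: f64) -> String {
    let (mut lat_lo, mut lat_hi) = (-90.0_f64, 90.0_f64);
    let (mut lon_lo, mut lon_hi) = (-180.0_f64, 180.0_f64);
    let mut hash = String::with_capacity(6);
    let mut bits = 0u8;   // 5-bit accumulator for the next base32 char
    let mut count = 0u8;  // bits accumulated so far
    let mut even = true;  // even bit positions refine longitude, odd refine latitude

    while hash.len() < 6 {
        bits <<= 1;
        if even {
            let mid = (lon_lo + lon_hi) / 2.0;
            if lon >= mid { bits |= 1; lon_lo = mid; } else { lon_hi = mid; }
        } else {
            let mid = (lat_lo + lat_hi) / 2.0;
            if lat >= mid { bits |= 1; lat_lo = mid; } else { lat_hi = mid; }
        }
        even = !even;
        count += 1;
        if count == 5 {
            hash.push(BASE32[bits as usize] as char);
            bits = 0;
            count = 0;
        }
    }
    hash
}

fn main() {
    // Downtown San Francisco lands in bucket 9q8yyk.
    assert_eq!(geohash6(37.7749, -122.4194), "9q8yyk");
}
```

Two peers standing a block apart usually share all 30 bits and join the same topic; a peer one bucket over differs in the final character and never sees the mesh.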
The cost of this design: peers walk across geohash6 boundaries (cross the wrong street and you're in a new bucket). The app handles this by leaving the old topic and joining the new one whenever the user's geohash6 changes. That's the lifecycle the ActiveArea struct manages:
pub struct ActiveArea {
    pub topic_id: TopicId,
    pub geohash6: String,
    pub sender: Arc<GossipSender>,
    broadcast_handle: JoinHandle<()>,
    receive_handle: JoinHandle<()>,
    reaper_handle: JoinHandle<()>,
}

impl ActiveArea {
    pub fn leave(self) {
        self.broadcast_handle.abort();
        self.receive_handle.abort();
        self.reaper_handle.abort();
    }
}
leave aborts all three Tokio tasks — broadcast loop, receive loop, peer-cache reaper — and drops the topic subscription. The next location update triggers a join_gossip_area for the new geohash, and the cycle repeats.
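The rebinding decision itself is simple. A sketch of what a location-update handler might do, using the names from the post (`ActiveArea`, `join_gossip_area`) with simplified stub bodies:

```rust
// Sketch: only tear down and re-join when the bucket actually changes.
// `ActiveArea` and `join_gossip_area` are names from the post; the bodies
// here are illustrative stubs, not the shipped implementation.
struct ActiveArea { geohash6: String }

impl ActiveArea {
    fn leave(self) { /* abort the three tasks, drop the subscription */ }
}

fn join_gossip_area(geohash6: &str) -> ActiveArea {
    // Real code derives the TopicId and spawns the three tasks here.
    ActiveArea { geohash6: geohash6.to_string() }
}

fn on_location_update(current: Option<ActiveArea>, new_geohash6: &str) -> ActiveArea {
    match current {
        // Same bucket: keep the existing mesh, tasks, and peer cache.
        Some(area) if area.geohash6 == new_geohash6 => area,
        // Bucket changed (or first fix): tear down, then join the new topic.
        other => {
            if let Some(area) = other { area.leave(); }
            join_gossip_area(new_geohash6)
        }
    }
}

fn main() {
    let a = on_location_update(None, "9q8yyk");
    let b = on_location_update(Some(a), "9q8yyk"); // no churn while stationary
    assert_eq!(b.geohash6, "9q8yyk");
    let c = on_location_update(Some(b), "9q8yym"); // crossed a boundary
    assert_eq!(c.geohash6, "9q8yym");
}
```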
The three tasks per area
Every joined area spawns:
- A broadcast task that sends a PresenceAnnounce (your profile snippet, your endpoint ID, your tags) every 30 seconds.
- A receive task that handles incoming announces and updates a local PeerCache.
- A reaper task that runs every 60 seconds and evicts peers that haven't announced in 90 seconds.
const BROADCAST_INTERVAL_SECS: u64 = 30;
const REAPER_INTERVAL_SECS: u64 = 60;
The 30s broadcast / 90s eviction (= reap if no announce in 3 broadcasts) gives you a bounded "go offline" guarantee. If a peer disconnects from WiFi, walks out of range, or quits the app, every other peer's view of them ages out within about two and a half minutes at worst: the 90-second staleness threshold plus up to 60 seconds until the next reaper pass. No central server needed to mark them offline.
This is the whole mechanism for “who’s online right now in your area.” There is no presence-server.cruiser.app/online. The presence is the gossip itself.
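A toy model of the eviction rule, assuming the cache maps endpoint IDs to the receiver-side timestamp at which the last verified announce arrived (note: the receiver's clock, not the sender's last_seen_ms):

```rust
use std::collections::HashMap;

// Toy peer-cache reaper: evict anyone not heard from in STALE_AFTER_SECS.
// Illustrative only — the real PeerCache carries full announce data,
// not just a timestamp.
const STALE_AFTER_SECS: u64 = 90;

fn reap(cache: &mut HashMap<String, u64>, now_secs: u64) {
    cache.retain(|_endpoint_id, seen| now_secs.saturating_sub(*seen) <= STALE_AFTER_SECS);
}

fn main() {
    let mut cache = HashMap::new();
    cache.insert("peer-a".to_string(), 0);  // announced at t=0, then went silent
    cache.insert("peer-b".to_string(), 60); // announced at t=60

    reap(&mut cache, 120); // reaper pass at t=120
    assert!(!cache.contains_key("peer-a")); // 120s silent > 90s threshold
    assert!(cache.contains_key("peer-b"));  // 60s silent, still fresh
}
```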
What an announce looks like
// src-tauri/src/presence.rs (simplified)
#[derive(Serialize, Deserialize, Clone)]
pub struct PresenceAnnounce {
    pub endpoint_id: String,  // iroh node ID — the P2P address
    pub geohash6: String,     // your bucket (intentionally redundant for receivers)
    pub display_name: String,
    pub avatar_hash: String,  // CID of avatar image; sender mirrors via iroh-blobs
    pub bio_short: String,    // ~80 chars max
    pub energy: String,       // a profile field — "🔥 high energy", "🌙 chill", etc.
    pub tags: Vec<String>,    // user-set tags for search/filter
    pub last_seen_ms: u64,    // sender's local clock at announce time
    pub signature: String,    // ed25519 sig over the rest, by the user's identity key
}
A few interesting design calls:
Avatar by CID, not inlined. The avatar is a hash; the actual image bytes are fetched via iroh-blobs on a separate transport from the gossip topic. Inlining the avatar would balloon every announce to ~50KB and make the gossip topic unreasonably noisy. CID + lazy fetch is ~256 bytes per announce.
signature is the integrity surface. Every announce is signed by the user’s identity key. A peer receiving an announce verifies the signature before adding to the peer cache. Without this, anyone could broadcast an announce claiming to be anyone else; with it, an impostor announce is detected and dropped.
last_seen_ms is the announcer’s clock. Not a synchronized clock. The receiver uses this for “rough freshness” but not for anti-replay — anti-replay is handled by the iroh-gossip layer’s own message dedup based on content hash + topic.
Direct messages: a separate topic per pair
DMs work the same way, with a different topic shape. From src-tauri/src/gossip_dm.rs:
cruiser/dm/v1/<sorted-endpoint-id-pair>
The endpoint IDs of the two peers are sorted lexicographically and concatenated. Both peers compute the same topic ID. Joining the topic establishes a 2-peer gossip mesh. Messages are encrypted with nacl.box (XSalsa20-Poly1305) using the peers’ x25519 keys, derived from their ed25519 identity keys.
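The symmetry property is the whole trick: neither peer needs to ask the other which topic to use. A minimal sketch of the derivation (the real code then hashes this string into a 32-byte TopicId; the endpoint IDs below are hypothetical placeholders, real ones are iroh node IDs):

```rust
// Both peers derive the same DM topic string regardless of who initiates:
// sort the two endpoint IDs lexicographically before concatenating.
const DM_TOPIC_PREFIX: &str = "cruiser/dm/v1/";

fn dm_topic(me: &str, them: &str) -> String {
    let (first, second) = if me <= them { (me, them) } else { (them, me) };
    format!("{DM_TOPIC_PREFIX}{first}{second}")
}

fn main() {
    let a = "aa11"; // hypothetical endpoint IDs
    let b = "bb22";
    assert_eq!(dm_topic(a, b), dm_topic(b, a)); // symmetric: same mesh
    assert_eq!(dm_topic(a, b), "cruiser/dm/v1/aa11bb22");
}
```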
The threat model:
- An eavesdropper on the gossip mesh sees the topic ID (which is opaque without the endpoint IDs that produced it) and ciphertext. They learn nothing about the participants or content.
- A passive observer of the iroh DHT sees the two endpoint IDs subscribing to a common topic, which leaks “these two people are in a DM” but not the content. Acceptable; DMs in any system leak metadata at this level.
- A man-in-the-middle can’t insert or tamper with messages: nacl.box authenticates as well as encrypts, keyed to the pair’s keys, so a ciphertext that wasn’t produced by the sender fails to open. They can drop messages, but not without the recipient noticing (there are no acks, but the resulting gap in ordering would be visible).
The DM topic also handles tips, consent signals, location sharing, typing indicators, read receipts, and emoji reactions. All of those are just message variants in the same encrypted topic — there’s no separate channel for them. The reason: a separate channel for “I’m typing” would itself leak the metadata “person A is typing to person B” without authentication. Folding everything into the encrypted DM topic eliminates that side channel.
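The "everything is a variant" shape can be sketched as a tagged enum. The variant names below are illustrative, not the shipped wire format (the real type presumably derives Serialize/Deserialize and is versioned):

```rust
// One encrypted channel, many message kinds. Every variant rides the same
// nacl.box-encrypted DM topic, so none of them leaks "A is typing to B"
// to anyone outside the pair. Variant names are hypothetical.
enum DmPayload {
    Text { body: String },
    Typing,                                     // ephemeral, not persisted
    ReadReceipt { up_to_ms: u64 },              // ephemeral
    Reaction { msg_id: String, emoji: String },
    Tip { lamports: u64, tx_sig: String },      // Solana payment reference
    ConsentSignal { kind: String },
    LocationShare { geohash6: String },
}

// Receivers can branch on the variant, e.g. to decide what to persist.
fn is_ephemeral(p: &DmPayload) -> bool {
    matches!(p, DmPayload::Typing | DmPayload::ReadReceipt { .. })
}

fn main() {
    assert!(is_ephemeral(&DmPayload::Typing));
    assert!(!is_ephemeral(&DmPayload::Text { body: "hey".into() }));
}
```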
Why iroh-gossip and not libp2p
I evaluated three P2P stacks before landing on iroh:
- libp2p (Rust): the de-facto standard. Powerful, but operationally heavy — DHT, NAT traversal, transports, and a non-trivial topology config. It’s overkill for a single-purpose app.
- GossipSub (libp2p): the gossip protocol within libp2p. Closer to what I needed, but still requires the full libp2p stack as host.
- iroh + iroh-gossip: purpose-built for “P2P Rust app needs gossip.” Smaller surface area, batteries-included relay/DHT/NAT-traversal via iroh’s hosted public infrastructure. Subjectively faster to ship.
iroh hosts a public relay infrastructure (relay.iroh.network) that handles NAT traversal and STUN-style address discovery. Most home users are behind NAT, so without relay infrastructure most P2P apps don’t work in practice. iroh’s relay is opt-in and free for development; that’s what I used.
The trade-off: iroh is younger than libp2p, the API surface is still moving, and the network effects are smaller (fewer peer apps to interop with). For Cruiser this is fine — there are no peer apps it needs to interop with — but for a project that wanted to join the existing libp2p universe, iroh would be the wrong call.
CoreLocation, IP fallback, and the geolocation rabbit-hole
The whole gossip architecture above is meaningless without the user’s actual location. Browser navigator.geolocation doesn’t work in Tauri’s macOS WKWebView (wry auto-denies the permission). The follow-up commit d2b9cc8 — Phase 27: Native CoreLocation for macOS is where I solved that, and it’s its own post.
Worth noting here: the system has three fallback layers for location:
- Native CoreLocation (macOS) / GeoClue2 (Linux) / Windows.Devices.Geolocation (Windows). Best accuracy.
- IP-based geolocation via ipinfo.io. Used when native services are unavailable or denied.
- Manual override (you type your geohash6 into a settings field). Used for testing and for users who don’t want their actual location used.
Each layer feeds the same geohash6 value to join_gossip_area. The rest of the system doesn’t care how the geohash was computed, only that it’s honest and stable.
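The fallback chain is a first-success-wins pipeline. A sketch under the obvious assumption that each source either yields a geohash6 or declines (function names here are illustrative stand-ins for the platform services listed above):

```rust
// Illustrative three-layer fallback: try each source in order, take the
// first geohash6 that resolves. The stub bodies simulate one scenario:
// native location denied, IP geolocation succeeding.
fn native_location() -> Option<String> { None } // e.g. CoreLocation permission denied
fn ip_geolocation() -> Option<String> { Some("9q8yyk".into()) } // ipinfo.io answered
fn manual_override() -> Option<String> { None } // no value typed in settings

fn resolve_geohash6() -> Option<String> {
    native_location()
        .or_else(ip_geolocation)
        .or_else(manual_override)
}

fn main() {
    // Layer 1 declined, so layer 2's answer feeds join_gossip_area.
    assert_eq!(resolve_geohash6().as_deref(), Some("9q8yyk"));
}
```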
What “Phase 1–26” means
The mono-commit covers 26 design phases. A non-exhaustive sample of what each phase added:
- Phase 1–3: Identity (ed25519 key + Solana pubkey).
- Phase 4–6: Profile (avatar, bio, energy, tags).
- Phase 7–9: Gossip presence (the architecture above).
- Phase 10–12: DM chat (encrypted, with media + tips + consent signals).
- Phase 13: Block list.
- Phase 14: Favorites.
- Phase 15: Notifications.
- Phase 16: Themes.
- Phase 17: Search.
- Phase 18: Onboarding (the new-user flow).
- Phase 19–21: Chat management (delete threads, relative timestamps, profile peek).
- Phase 22–25: Dev tools (seed peers for local testing, SOL airdrop UI).
- Phase 26: The final polish + the squash into one commit.
The reason to squash 26 phases into a single commit is that the local development repo had 200+ commits with messages like wip and fix wallet sig and actually now it works, and that’s not a public history. The squash gives readers a single coherent diff that says “this is what shipped.” The cost: you lose the ability to bisect within Phase 1–26. The benefit: you don’t subject the public to a noisy 200-commit history.
What I’d do differently
The PeerCache should be persistent. Right now, when you restart the app, you lose the in-memory peer cache and have to wait up to 30s for the next broadcast cycle to repopulate it. Persisting it (and re-validating entries on the next announce) would make the first seconds after launch feel responsive instead of empty.
The geohash6 boundary needs hysteresis. Crossing a geohash boundary triggers a topic-leave / topic-join cycle. If you walk along the boundary you can flap between buckets every few seconds. The fix is to wait for a few consecutive readings on the new bucket before switching, or to subscribe to both buckets while in transition. Neither is implemented in the initial commit; both are easy add-ons.
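The consecutive-readings variant is a few lines of state. A sketch of what it could look like (not in the shipped commit — this is the proposed fix, with hypothetical names):

```rust
// Debounce bucket flapping: only switch after `threshold` consecutive
// readings agree on a new bucket. A reading from the current bucket
// resets the candidate.
struct BucketDebouncer {
    current: String,
    candidate: Option<(String, u32)>, // (new bucket, consecutive readings)
    threshold: u32,
}

impl BucketDebouncer {
    fn new(initial: &str, threshold: u32) -> Self {
        Self { current: initial.to_string(), candidate: None, threshold }
    }

    /// Feed a reading; returns Some(new_bucket) when it's time to re-join.
    fn observe(&mut self, reading: &str) -> Option<String> {
        if reading == self.current {
            self.candidate = None; // back on the home bucket: reset
            return None;
        }
        let count = match &self.candidate {
            Some((b, n)) if b == reading => n + 1,
            _ => 1, // a different new bucket restarts the count
        };
        if count >= self.threshold {
            self.current = reading.to_string();
            self.candidate = None;
            Some(self.current.clone()) // caller leaves old topic, joins this one
        } else {
            self.candidate = Some((reading.to_string(), count));
            None
        }
    }
}

fn main() {
    let mut d = BucketDebouncer::new("9q8yyk", 3);
    assert_eq!(d.observe("9q8yym"), None); // 1st reading in new bucket: wait
    assert_eq!(d.observe("9q8yyk"), None); // flapped back: candidate reset
    assert_eq!(d.observe("9q8yym"), None); // 1st again
    assert_eq!(d.observe("9q8yym"), None); // 2nd
    assert_eq!(d.observe("9q8yym"), Some("9q8yym".to_string())); // 3rd: switch
}
```

Walking along a boundary now costs at most one extra broadcast interval of staleness instead of a leave/join storm.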
The signature scheme should bind to the topic. Right now an announce signed for topic A could be replayed on topic B by an adversary who controls a relay. Including the topic ID in the signed payload would prevent that. Easy fix; on the to-do list.
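The fix is purely about what bytes get signed. Here is the idea with a toy keyed hash standing in for ed25519 (the primitive is deliberately fake; the point is the topic binding, and std's DefaultHasher keeps the example dependency-free):

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Toy stand-in for an ed25519 signature: a keyed hash over the bytes that
// get signed. Binding the topic ID into those bytes makes a signature
// produced for topic A invalid when replayed on topic B.
fn toy_sign(secret: u64, topic_id: &str, payload: &str) -> u64 {
    let mut h = DefaultHasher::new();
    secret.hash(&mut h);
    topic_id.hash(&mut h); // <-- the binding the shipped scheme is missing
    payload.hash(&mut h);
    h.finish()
}

fn toy_verify(secret: u64, topic_id: &str, payload: &str, sig: u64) -> bool {
    toy_sign(secret, topic_id, payload) == sig
}

fn main() {
    let secret = 42;
    let sig = toy_sign(secret, "cruiser/area/v1/9q8yyk", "announce-bytes");
    // Valid on the topic it was produced for…
    assert!(toy_verify(secret, "cruiser/area/v1/9q8yyk", "announce-bytes", sig));
    // …and rejected when a relay replays it onto a different area topic.
    assert!(!toy_verify(secret, "cruiser/area/v1/u33dc0", "announce-bytes", sig));
}
```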
Trade-offs
Why a desktop app first instead of mobile? Because Tauri’s desktop story was mature in 2025 and the iOS / Android mobile bindings were still beta. The Phase 29 iOS commit shipped iOS support a few weeks later; Android is still pending.
Why Tauri instead of Electron? Same reason as the Zera Wallet v3: smaller bundle, sane Rust↔JS IPC, and the Rust side can hold long-running background tasks (gossip loops, location service) without spinning up a separate process.
Why a per-pair DM topic instead of a single shared “DMs” topic? Because per-pair topics are the right scope for routing — only the two participants subscribe — whereas a shared topic would require every peer to receive every DM and filter by recipient. That’s both wasteful and a metadata leak.
Why no central reputation/abuse system? Because the moment you ship a central reputation system, the system is no longer P2P. The Cruiser approach is: every peer maintains their own block list, locally. Abuse is mitigated by the absence of a global directory — you can only be discovered by people in your geohash6, so the attack surface is bounded by your physical area.
What this taught me
P2P-as-architecture is mostly constraints management: deciding what state is allowed to be global (almost nothing), what state is allowed to be partial (peer caches, ephemeral), and what state is fully local (your block list, your profile, your settings). Once you’ve drawn those lines, the rest of the design falls out.
The other thing I learned is that iroh deserves more attention. It’s the smallest dependency I’ve ever shipped that supports a real P2P product. Most P2P stacks are 50,000-line behemoths. iroh-gossip + iroh-net + iroh-blobs is enough infrastructure for a real app and the code surface is comprehensible.
Further reading
- Cruiser on GitHub
- The Phase 1–26 mono-commit
- iroh on n0.computer — the P2P stack.
- iroh-gossip docs — the pub/sub layer.
- Cruiser CoreLocation post — how the geolocation layer works.
- Cruiser iOS + Xcode Cloud — the App Store push.
- Cruiser+ landing page — the marketing surface.