
process-thing: An LSB Watermarker for upload-thing, Written in Rust via Neon

A Rust npm package that embeds invisible watermarks in the least significant bit of every red channel pixel. Built for upload-thing image preprocessing. Cross-compiled for 7 platforms. The README is one paragraph.

FROM
Dax the Dev <[email protected]>
SOURCE
https://blog.skill-issue.dev/blog/process_thing_lsb_watermark/
FILED
2024-09-08 15:42 UTC
REVISED
2024-09-08 15:44 UTC
TIME
9 min read
SERIES
Rust Side-Quests
TAGS
#rust #neon #npm #steganography #watermarking #side-quest #image-processing

There’s a class of side-quest where the project is half “I want to learn this thing” and half “I have a vague product idea.” process-thing was both: I wanted to learn neon-rs (Rust → Node.js native modules) properly, and I wanted to know whether upload-thing — the dominant React file-upload SaaS in 2024 — had a clean preprocessing hook I could plug a Rust binary into.

The answer to both questions is “yes.” process-thing shipped on 2024-09-08 in three commits: 4ba9cfa — :tada: Init, 3e26703 — :rocket: Try htis, and e2187b1 — :clown_face: Lock file. Three commits, ~1500 lines of Rust + scaffolding, seven cross-compiled platforms, one steganographic watermarker.

This post is what I learned about LSB watermarking, neon-rs, and shipping a Rust crate as an npm package in the same afternoon.

What is LSB watermarking

The least-significant-bit (LSB) of an 8-bit color channel is the difference between 0xAB and 0xAA. Mathematically, that’s a difference of one out of 255 — about 0.4%. Visually, that’s invisible. The human eye cannot distinguish a pixel with a red value of 170 from a pixel with a red value of 171, especially when the surrounding pixels are noisy.
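Concretely, embedding a bit is a mask and an OR: clear the LSB with `& 0xFE`, then OR the watermark bit in. A minimal sketch of the arithmetic from the paragraph above:

```rust
fn main() {
    let red: u8 = 0xAA; // 170
    // Clear the LSB, then OR in the watermark bit.
    let with_one = (red & 0xFE) | 1;  // 0xAB = 171
    let with_zero = (red & 0xFE) | 0; // 0xAA = 170
    assert_eq!(with_one, 171);
    assert_eq!(with_zero, 170);
    println!("{with_one} vs {with_zero}"); // 171 vs 170: 1/255, about 0.4%
}
```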

So you have a free bit of bandwidth in every pixel of every uploaded image. You can carry data in it. The data is not encrypted and not robust to recompression — anyone who knows the scheme can decode it, and a single round-trip through a JPEG encoder will obliterate it. But for “did this image originate from my service” or “what userId uploaded this,” the LSB channel is both invisible to the end user and trivial to read back.

Hence the entire watermark embedder, in 50 lines of Rust:

use std::io::Cursor;

use base64::{engine::general_purpose, Engine as _};
use image::{GenericImage, GenericImageView, ImageFormat};
use neon::prelude::*;

fn embed_watermark(mut cx: FunctionContext) -> JsResult<JsString> {
    let base64_image = cx.argument::<JsString>(0)?.value(&mut cx);
    let watermark = cx.argument::<JsString>(1)?.value(&mut cx);

    let image_data = general_purpose::STANDARD
        .decode(base64_image)
        .or_else(|_| cx.throw_error("Invalid base64 image data"))?;

    let mut img = image::load_from_memory(&image_data)
        .or_else(|_| cx.throw_error("Failed to load image"))?;

    // Unpack the watermark string into bits, MSB-first.
    let binary_watermark: Vec<u8> = watermark.bytes()
        .flat_map(|byte| (0..8).rev().map(move |i| (byte >> i) & 1))
        .collect();

    let (width, height) = img.dimensions();
    let mut watermark_index = 0;

    for y in 0..height {
        for x in 0..width {
            let mut pixel = img.get_pixel(x, y);

            // Overwrite the red channel's LSB. Wrapping with a modulo
            // repeats the watermark across the whole image without
            // skipping a pixel between repetitions.
            if !binary_watermark.is_empty() {
                let bit = binary_watermark[watermark_index % binary_watermark.len()];
                pixel[0] = (pixel[0] & 0xFE) | bit;
                watermark_index += 1;
            }

            img.put_pixel(x, y, pixel);
        }
    }

    // Re-encode as PNG (lossless) rather than dumping raw RGBA bytes,
    // so the output round-trips back through load_from_memory.
    let mut buffer = Vec::new();
    img.write_to(&mut Cursor::new(&mut buffer), ImageFormat::Png)
        .or_else(|_| cx.throw_error("Failed to encode image"))?;

    let base64_output = general_purpose::STANDARD.encode(buffer);
    Ok(cx.string(base64_output))
}
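One piece worth isolating is the `flat_map` that unpacks the watermark string into bits, most significant bit first. The same transform as a standalone sketch:

```rust
// Unpack a string's bytes into individual bits, MSB-first,
// exactly as the embedder's flat_map does.
fn to_bits(s: &str) -> Vec<u8> {
    s.bytes()
        .flat_map(|byte| (0..8).rev().map(move |i| (byte >> i) & 1))
        .collect()
}

fn main() {
    // 'A' is 0x41 = 0b01000001.
    assert_eq!(to_bits("A"), vec![0, 1, 0, 0, 0, 0, 0, 1]);
    println!("{:?}", to_bits("A"));
}
```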

The interesting bits:

Why only the red channel

You could embed in red, green, and blue and triple your carrier capacity. The reason I only embedded in red is that the human eye is most sensitive to green and least sensitive to blue, but red sits in the middle, and using the red channel alone halves the chance of an artist noticing the watermark in a high-saturation gradient.

This is a bullshit reason. The real reason is that the diff was 4 lines shorter and I was trying to ship in an afternoon.
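Reading the watermark back is the mirror image: walk the pixels in the same order, collect the LSB of each red value, and pack every eight bits into a byte, MSB first. A minimal decoder sketch over a raw RGBA8 buffer (the shipped crate works through the image crate; the buffer-level logic here is my reconstruction of the scheme, not the project's code):

```rust
// Decode `n_bytes` of watermark from the red-channel LSBs of an
// RGBA8 buffer (4 bytes per pixel, red first), MSB-first packing.
fn extract_watermark(rgba: &[u8], n_bytes: usize) -> Vec<u8> {
    rgba.chunks_exact(4)          // one RGBA pixel per chunk
        .map(|px| px[0] & 1)      // LSB of the red channel
        .take(n_bytes * 8)
        .collect::<Vec<u8>>()
        .chunks(8)
        .map(|bits| bits.iter().fold(0u8, |acc, &b| (acc << 1) | b))
        .collect()
}

fn main() {
    // Build a tiny buffer that spreads the byte 'A' (0b01000001)
    // across the red LSBs of 8 pixels.
    let bits = [0u8, 1, 0, 0, 0, 0, 0, 1];
    let rgba: Vec<u8> = bits
        .iter()
        .flat_map(|&b| [(0xAA & 0xFE) | b, 0x20, 0x30, 0xFF]) // R carries the bit
        .collect();
    assert_eq!(extract_watermark(&rgba, 1), b"A");
}
```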

Neon as the bridge

The Neon (Rust → Node.js) glue is a single attribute and one exported function:

#[neon::main]
fn main(mut cx: ModuleContext) -> NeonResult<()> {
    cx.export_function("embedWatermark", embed_watermark)?;
    Ok(())
}

#[neon::main] generates the napi_register_module_v1 symbol that Node looks up when it loads the .node file. cx.export_function registers embedWatermark as a JS-callable name. From JS:

const { embedWatermark } = require("process-thing");
const watermarked = embedWatermark(base64Image, "userid:abc123");

That’s the entire surface area. No async (LSB embedding is fast — about 6ms for a 1280×720 PNG on M1), no streams, no buffers. Just String → String → String.

Why base64 strings on both sides

Neon supports passing Buffers directly, which would be more efficient — no base64 encode/decode tax. The reason I went with base64 is that process-thing was specifically meant to be plugged into upload-thing’s preprocessing hook, and the upload-thing API layer at the time worked in base64-encoded data URIs natively. Converting to a Buffer would have meant doing the conversion inside the JS wrapper anyway.

This is a small lesson but a real one: if you’re shipping a native module, optimize the API for the framework that will actually consume it, not for theoretical throughput. A 4ms base64 encode tax is invisible in the context of a file upload that’s measured in hundreds of ms anyway.

Cross-compilation: the 30% of the project that took 70% of the time

Look at the directory layout the init commit introduced:

platforms/
  android-arm-eabi/
  darwin-arm64/
  darwin-x64/
  linux-arm-gnueabihf/
  linux-x64-gnu/
  win32-arm64-msvc/
  win32-x64-msvc/

Seven platforms. Each one is a separate npm package — @process-thing/darwin-arm64, @process-thing/win32-x64-msvc, etc. — that contains exactly one prebuilt .node binary. The root process-thing package depends on all of them as optionalDependencies, and at install time npm picks the right one for the host platform.

This is the napi-rs idiomatic layout, and Neon supports the same pattern. The reason it exists: users do not want to compile Rust at install time. If your npm install process-thing command spawned a cargo build --release step, you’d have a bug report from every Windows user who didn’t have the MSVC toolchain installed. The right answer is to prebuild on every supported triple in CI and ship the binary.
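Sketched as a manifest (package names taken from the platform list above; versions and field values are illustrative, not the published package.json files), each platform package declares os and cpu fields so npm skips it on the wrong host, and the root package pulls in all seven:

```json
{
  "name": "process-thing",
  "version": "0.1.0",
  "main": "index.js",
  "optionalDependencies": {
    "@process-thing/android-arm-eabi": "0.1.0",
    "@process-thing/darwin-arm64": "0.1.0",
    "@process-thing/darwin-x64": "0.1.0",
    "@process-thing/linux-arm-gnueabihf": "0.1.0",
    "@process-thing/linux-x64-gnu": "0.1.0",
    "@process-thing/win32-arm64-msvc": "0.1.0",
    "@process-thing/win32-x64-msvc": "0.1.0"
  }
}
```

At require time, the JS entry point probes for whichever scoped package actually installed on this host and loads its .node file.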

The build lives in .github/workflows/build.yml: 137 lines of GitHub Actions YAML that run cargo build once per target, package the results, and publish each prebuilt binary to npm under the right scoped name. The actual Rust code is 50 lines; the infrastructure to ship the Rust code is 800.

This ratio is why I think most “let’s rewrite this in Rust for Node” tasks die in CI. The code is easy. The supply chain is the project.

What this taught me

There’s a particular shape of Rust side-quest that I find myself starting and finishing in afternoons: a thin Rust crate that does one CPU-intensive thing, exposed to JS via Neon, with a CI matrix that ships prebuilds. Image preprocessing is the canonical example. Hashing is another. Compression. Format conversion. Anything where the JS-native equivalent is a pure-JS decoder that clocks in at 50× slower.

process-thing was the first time I built that template properly. I’ve reused the template several times since — including, eventually, in the zera-sdk where the same crates/<name>/ + platforms/<triple>/ layout shows up to expose Rust crypto to TypeScript. The architectural muscle memory came from this 1500-line shitpost about embedding invisible userIds in cat pictures.

Trade-offs and limitations

LSB watermarks don’t survive JPEG recompression. If your upload pipeline transcodes to JPEG before storage, this scheme is dead on arrival. For PNG-pinned pipelines (or service-side WebP at lossless settings), it survives.
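The failure mode is visible at the byte level: a lossy codec only promises perceptually similar values back, and an error of ±1 is enough to flip the LSB. A toy illustration (a stand-in ±1 perturbation, not real JPEG math):

```rust
fn main() {
    let embedded: u8 = (170 & 0xFE) | 1; // red value carrying bit 1 -> 171
    assert_eq!(embedded & 1, 1);         // bit survives lossless storage

    // A lossy round-trip that lands even one value off...
    let recompressed = embedded + 1;     // 172, visually identical
    assert_eq!(recompressed & 1, 0);     // ...flips the watermark bit
    println!("bit before: {}, after: {}", embedded & 1, recompressed & 1);
}
```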

LSB watermarks aren’t crypto. Anyone who knows the scheme can read the bits back. If you need integrity against a determined adversary, you need an HMAC, not a watermark. If you just need attribution against a casual screenshot-and-repost, LSB is fine.

Neon vs. napi-rs. I picked Neon because the #[neon::main] macro was the simplest entry point for a single-function crate. For more complex modules with many functions, async, or class-style exports, napi-rs’s derive macros are nicer. Neon is fine for “one function, no async.”

Why image-rs instead of a hand-rolled PNG parser? Because image supports JPEG, PNG, WebP, GIF, BMP, and more from one API and is well-maintained. The LSB embedding is format-agnostic; the format-specific decode is the part that’s actually hard.
