A corpus-to-character pipeline.
Point it at a body of writing in a distinctive voice and voicepipe walks it all the way from corpus to character. Think languid, paradoxical, epigrammatic prose: every sentence a polished inversion of a platitude.
voicepipe serve runs a FastAPI control server with a full web UI and SSE job stream.
Run locally, connect to a remote CUDA box, or package it as a Tauri desktop app.
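The SSE job stream mentioned above can be illustrated with a minimal sketch. The event names and payload fields here are hypothetical, not voicepipe's actual wire format; the point is only how Server-Sent Events frame job progress for a browser's EventSource client.

```python
import json

def sse_frame(event: str, data: dict) -> str:
    """Format one Server-Sent Events frame: an event-name line,
    a JSON data line, and a blank line terminating the frame."""
    return f"event: {event}\ndata: {json.dumps(data)}\n\n"

# Hypothetical progress events as a fine-tuning job advances.
frames = [
    sse_frame("progress", {"job": "finetune", "step": 100, "loss": 1.92}),
    sse_frame("progress", {"job": "finetune", "step": 200, "loss": 1.41}),
    sse_frame("done", {"job": "finetune", "status": "ok"}),
]
stream = "".join(frames)
```

A real server would yield such frames from a streaming response with the `text/event-stream` content type; the blank line after each `data:` line is what delimits events on the wire.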
Public Ollama models.
Stoic self-address in the register of George Long's 1862 Meditations translation. Aphoristic, cosmic-perspective, second-person directive.
Paradoxical wit, epigrams, polished inversions, art-for-art's-sake elegance. Validates the end-to-end pipeline on a fresh character.
The original proof-of-concept that drove the engine's design. Disbarred-lawyer outsider literature, cleaned of slurs.
Honest about what it is.
voicepipe is primarily a creative-writing pipeline. The same engine generalizes to a broader spectrum of research applications: detection ("is this text human-authored or voicepipe-emulated?"), watermarking and provenance, organizational and authorial style modeling, synthetic-data generation for red/blue exercises, and impersonation-defense tooling.
We're intellectually honest that the technology is dual-use: the literary framing is where the project lives, but the surface area is real. A natural follow-up question sits one step away: can ML adversaries reliably distinguish voicepipe-finetuned output from genuinely authored or generic-LLM text, and on what features? If you're working on that question, open an issue.
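As a toy illustration of what "on what features" might mean, here is a classic stylometric signal from authorship attribution: function-word frequencies. This is a generic sketch, not part of voicepipe, and the word list is arbitrary.

```python
import re
from collections import Counter

# A small, arbitrary set of English function words; real stylometry
# uses hundreds of features, not ten.
FUNCTION_WORDS = ["the", "of", "and", "to", "a", "in", "is", "that", "it", "not"]

def fingerprint(text: str) -> list[float]:
    """Relative frequency of common function words: a crude,
    topic-independent style signal."""
    tokens = re.findall(r"[a-z']+", text.lower())
    counts = Counter(tokens)
    total = max(len(tokens), 1)
    return [counts[w] / total for w in FUNCTION_WORDS]

def distance(a: list[float], b: list[float]) -> float:
    """Manhattan distance between two fingerprints; smaller means
    more similar style under this (very coarse) measure."""
    return sum(abs(x - y) for x, y in zip(a, b))
```

A serious detector would combine far richer features (syntax, burstiness, token-level model statistics) with a trained classifier; this only shows the shape of the feature-extraction step.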
Install voicepipe.
Use the desktop app or install from source to get the CLI, training pipeline, deploy tooling, and local web GUI.