End-to-end character model pipeline

A corpus-to-character pipeline.

Point it at a body of writing in a distinctive voice, and voicepipe walks it all the way through:

categorize → synthesize → dedup → triage → assemble → train → deploy
Example voice: Oscar Wilde

Languid, paradoxical, epigrammatic — every sentence a polished inversion of a platitude.

1. categorize - Propose weighted prompt categories
2. synthesize - Generate response pairs in batches
3. dedup - Cosine dedup + phrase caps
4. triage - LLM judge scoring + policy flags
5. assemble - Build train / validation sets
6. train - QLoRA fine-tune the base model
7. deploy - GGUF → Modelfile → ollama push
- Config-driven: everything lives in project.toml
- CLI + GUI: one engine, two interfaces
- Local & private: you control the data
- End-to-end: corpus → model deployment
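Since the whole pipeline is driven from project.toml, a project file might look roughly like this. The section and field names below are illustrative guesses, not voicepipe's actual schema:

```toml
# Hypothetical layout; see the voicepipe docs for the real schema.
[project]
name = "oscar-wilde"

[synthesize]
batch_size = 32

[dedup]
cosine_threshold = 0.92
phrase_cap = 3

[train]
method = "qlora"
epochs = 3
```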
CLI
The same engine, in your terminal.
$ voicepipe new my-character
$ voicepipe synthesize --project my-character
$ voicepipe train --project my-character
$ voicepipe deploy --project my-character
WEB GUI
Local control server + desktop app.

voicepipe serve runs a FastAPI control server with a full web UI and an SSE job stream.
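SSE (server-sent events) is a newline-delimited plain-text streaming format over HTTP, which is why it suits long-running job progress. A job-progress event like the ones such a server might emit can be formatted with nothing but the stdlib; the event name and payload fields here are invented for illustration:

```python
import json

def sse_event(event: str, data: dict) -> str:
    # An SSE message is an optional "event:" line, a "data:" line,
    # and a blank-line terminator.
    return f"event: {event}\ndata: {json.dumps(data)}\n\n"

msg = sse_event("job_progress", {"stage": "synthesize", "done": 128, "total": 512})
```

On the client side, a browser `EventSource` (or any SSE client) subscribes to the stream and dispatches on the event name.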

Run locally, connect to a remote CUDA box, or package it as a Tauri desktop app.

MODELS BUILT WITH VOICEPIPE

Public Ollama models.

RESEARCH

Honest about what it is.

voicepipe is, primarily, a creative-writing pipeline. The same engine generalizes to a broader spectrum of research applications: detection ("is this text human-authored or voicepipe-emulated?"), watermarking and provenance, organizational and authorial style modeling, synthetic-data generation for red/blue-team exercises, and impersonation-defense tooling.

We're intellectually honest that the technology is dual-use: the literary framing is where the project lives, but the broader surface area is real. A natural follow-up question sits one step away: can ML classifiers reliably distinguish voicepipe-finetuned output from genuinely-authored or generic-LLM text, and on what features? Working on it? Open an issue.
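As a concrete starting point for that question, a naive baseline would compare surface stylometry between samples before reaching for learned classifiers. This is a toy sketch; the feature choice is ours, not a claim about what actually separates the distributions:

```python
import re
from statistics import mean

def stylometry(text: str) -> dict:
    # Crude surface features commonly used as a stylometric baseline:
    # sentence length, lexical diversity, and word length.
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[a-zA-Z']+", text.lower())
    return {
        "avg_sentence_len": mean(len(s.split()) for s in sentences),
        "type_token_ratio": len(set(words)) / len(words),
        "avg_word_len": mean(len(w) for w in words),
    }
```

Feeding such features into even a logistic regression gives a floor to beat; a serious study would compare against perplexity-based and fine-tuned detectors.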

GET STARTED

Install voicepipe.

Use the desktop app or install from source to get the CLI, training pipeline, deploy tooling, and local web GUI.

$ pip install -e ".[gui]"
$ voicepipe new my-character
$ voicepipe serve