One Blog Post, Ten Platforms, Zero Extra Work
I write one thing. The agent handles the rest.
That's the promise. And unlike most "AI content" tools — which produce sludge that reads like it was written by a committee of interns — this one works because it understands constraints.
Every platform has a shape. X threads have a hook-first structure with one idea per tweet. LinkedIn posts need line breaks, a soft CTA, and story-first framing. Email newsletters need subject lines, preview text, and a greeting. Reddit posts need context and community awareness. Discord announcements need punch.
The Multimodal Publisher knows all of these. Feed it one source post. It produces platform-native versions for ten channels. Each output respects the conventions of its platform. Each output gets human review before publishing.
How it works
The system has two parts: a voice engine and a format adapter.
Voice engine. I defined three voices — Twitter Punchy, Daily Brief, and HermesScribe — with distinct style profiles. Each voice has rules for sentence length, hook placement, rhetorical devices, and emoji policy. The LLM prompt is constrained by these profiles, not free-form. This is what prevents the "AI content" smell. The voice profiles are strict enough that the output is consistent across runs.
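A voice profile like the ones described above can be sketched as a small, frozen data structure that renders itself into hard prompt constraints. This is illustrative only: the field names and the "Twitter Punchy" values here are assumptions, not the author's actual profiles.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class VoiceProfile:
    name: str
    max_sentence_words: int              # hard cap on sentence length
    hook_position: str                   # e.g. "first_line"
    emoji_policy: str                    # "none", "sparse", ...
    rhetorical_devices: tuple[str, ...]  # devices the voice may use

    def to_prompt_rules(self) -> str:
        """Render the profile as hard constraints for the LLM prompt."""
        return "\n".join([
            f"- Sentences: at most {self.max_sentence_words} words.",
            f"- Hook placement: {self.hook_position}.",
            f"- Emoji: {self.emoji_policy}.",
            f"- Allowed devices: {', '.join(self.rhetorical_devices)}.",
        ])

# Illustrative values; the real profiles live in the author's skill files.
TWITTER_PUNCHY = VoiceProfile(
    name="Twitter Punchy",
    max_sentence_words=12,
    hook_position="first_line",
    emoji_policy="none",
    rhetorical_devices=("contrast", "rule of three"),
)
```

Freezing the dataclass matters: a voice profile is a contract, and nothing downstream should mutate it between runs.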
Format adapter. Each platform has a format specification. X threads: max 280 chars per tweet, hook in tweet 1, no thread markers, zero emoji. LinkedIn: line breaks every 2-3 sentences, story-first, soft CTA, no hashtags. Email: HTML template, subject line generator, preview text optimizer. The adapter takes the voice's output and reformats it for the target platform.
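The X-thread rule above ("max 280 chars per tweet, one idea per tweet") is the kind of constraint an adapter can enforce mechanically. A minimal sketch, assuming the adapter packs whole sentences greedily; the function name and truncation policy are my own, not from the source:

```python
import re

def split_into_tweets(text: str, limit: int = 280) -> list[str]:
    """Greedily pack whole sentences into tweets of at most `limit` chars."""
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    tweets: list[str] = []
    current = ""
    for sentence in sentences:
        candidate = f"{current} {sentence}".strip()
        if len(candidate) <= limit:
            current = candidate
        else:
            if current:
                tweets.append(current)
            # An oversized single sentence is truncated and left for human review.
            current = sentence[:limit]
    if current:
        tweets.append(current)
    return tweets
```

The point is that the length rule lives in code, not in the prompt: the LLM can drift, the adapter cannot.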
The pipeline: Source post → Voice selection → LLM draft → Format adapter → Human review → Publish.
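The automated part of that pipeline is a short chain of functions. The stubs below are placeholders standing in for the real LLM call and format specs (all names here are hypothetical):

```python
# Placeholder stages: the real LLM call and format specs are swapped in here.
VOICE_PROFILES = {"Twitter Punchy": "short sentences, hook first, no emoji"}

def draft_with_llm(post: str, rules: str) -> str:
    # In production this is the single LLM API call, constrained by `rules`.
    return f"({rules}) {post}"

def adapt_format(draft: str, platform: str) -> str:
    # In production this applies the platform's format spec (limits, breaks, CTA).
    return f"[{platform}] {draft}"

def repurpose(source_post: str, voice: str, platform: str) -> str:
    """Source post -> voice selection -> LLM draft -> format adapter."""
    rules = VOICE_PROFILES[voice]
    draft = draft_with_llm(source_post, rules)
    # The result goes to a human review queue, never straight to publish.
    return adapt_format(draft, platform)
```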
What it cost to build
The code is about 500 lines of Python. The hardest part wasn't the LLM integration; that's a single API call. The hard part was defining the voice profiles precisely enough that the output is consistent, and the format adapters strictly enough that the output is native to each platform.
I spent more time writing the voice profiles than the code. That's the pattern with agent skills: the bottleneck is clarity, not engineering. Once you can describe exactly what "Twitter Punchy" means — sentence length, hook placement, what it never does — the LLM prompt writes itself.
The architecture
- Trigger: Manual (push new post → run repurposer) or on content change
- LLM: DeepSeek (costs ~$0.01-0.02 per post)
- Output channels: X, LinkedIn, Email, Reddit, Discord, Telegram, Slack, Medium, Ghost, Webflow
- Human review: Always. The agent drafts. You approve. This boundary doesn't blur.
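The architecture above could be driven by a small channel registry. The ten channel names are from the post; the voice assignments and character limits here are illustrative assumptions, not the actual configuration:

```python
# Voices and limits are assumptions for illustration only.
CHANNELS = {
    "x":        {"voice": "Twitter Punchy", "limit": 280},
    "linkedin": {"voice": "Daily Brief",    "limit": 3000},
    "email":    {"voice": "HermesScribe",   "limit": None},
    "reddit":   {"voice": "Daily Brief",    "limit": 40000},
    "discord":  {"voice": "Twitter Punchy", "limit": 2000},
    "telegram": {"voice": "Twitter Punchy", "limit": 4096},
    "slack":    {"voice": "Daily Brief",    "limit": 4000},
    "medium":   {"voice": "HermesScribe",   "limit": None},
    "ghost":    {"voice": "HermesScribe",   "limit": None},
    "webflow":  {"voice": "HermesScribe",   "limit": None},
}

def draft_jobs(post_id: str) -> list[dict]:
    """One draft job per channel; every job ends in human review, never auto-publish."""
    return [
        {"post": post_id, "channel": channel, **config}
        for channel, config in CHANNELS.items()
    ]
```

One registry entry per platform keeps the ten channels declarative: adding an eleventh is a config change, not a code change.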
What this replaces
Before the repurposer, I wrote a blog post and then spent 90 minutes reformatting it for different platforms. Copy, paste, edit, copy, paste, edit. The writing was 30 minutes. The distribution was 90.
Now distribution takes 30 seconds: I review what the agent drafted. I still make edits, a phrase here, a different angle there. But the agent gives me a first draft that's 90% there. I add the last 10%.
That's the right ratio. The machine does the mechanical work. The human adds the taste.