Designing a Human-in-the-Loop Publishing Workflow with Agents
Most of the "AI + content" stories you see fall into one of two camps. Either you get demo-ware where an LLM generates everything and nobody checks anything, or you get long blog posts about "strategy" that never touch a real deployment pipeline.
Over the past weekend, I wired something a little more practical:
- Local Markdown drafts
- A static HTML site at JimmyRyan.info
- A small Python build script
- A safe, scoped deploy via SSH and rsync
- A local agent named Jimmy Botler that helps draft, publish, and prepare LinkedIn posts
The key constraint: there’s always a human in the loop. The agent can build and deploy, but I still review every post on the live site before I ship it to LinkedIn.
In this post I’ll walk through how that workflow is structured and why the "human checkpoint" is a feature, not a bug.
The Pieces in the Workflow
There are a few simple components that make this work.
First, Markdown drafts. Content lives in a local workspace as .md files. Drafts go under:
content/drafts/
Once a post looks good, I "promote" it by copying it into published Markdown under:
content/published/
JimmyRyan.info itself is a static site. I added a /blog/ subtree for posts:
- /index.html – main landing page
- /blog/index.html – blog index
- /blog/<slug>.html – individual post pages
A small build script converts Markdown plus some content metadata into HTML pages and a blog index. A simple deploy config tells the agent how to sync the generated HTML to the server over SSH using rsync.
Finally, there’s the local agent: Jimmy Botler. It runs inside OpenClaw and:
- Drafts posts in Markdown
- Runs the build script
- Uses the deploy config to publish to /blog
- Generates LinkedIn-ready post text once everything looks good
None of these parts are complicated individually. The interesting bit is how they’re chained together.
Step 1 – Drafting in Markdown with Jimmy Botler
Everything starts in Markdown.
The agent and I work together on posts in content/drafts/, and each file has a small metadata block at the top:
```markdown
---
title: "Yelling at Your Network: Building a Voice Assistant for Home Lab + Security Tasks"
date: 2026-03-22
slug: "yelling-at-your-network"
description: "How I wired up a simple Python-based voice assistant to control home lab and security automations by voice."
---
# Yelling at Your Network: Building a Voice Assistant for Home Lab + Security Tasks
...
```
That metadata is the contract between content and tooling:
- title – used in the blog index and LinkedIn
- date – used for sorting and metadata
- slug – used for the post URL (/blog/<slug>.html)
- description – used in the blog index and meta description
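The parsing side of that contract is small. Here's a minimal sketch of how a build script might split the metadata block from the body; the function name and error handling are my own assumptions, not the actual script:

```python
def parse_front_matter(text: str) -> tuple[dict, str]:
    """Split a Markdown file into (metadata, body).

    Expects a leading block delimited by '---' lines containing
    simple 'key: value' pairs, as in the example above.
    """
    lines = text.splitlines()
    if not lines or lines[0].strip() != "---":
        return {}, text  # no metadata block; treat everything as body

    meta: dict[str, str] = {}
    for i, line in enumerate(lines[1:], start=1):
        if line.strip() == "---":
            body = "\n".join(lines[i + 1:])
            return meta, body
        key, _, value = line.partition(":")
        meta[key.strip()] = value.strip().strip('"')
    raise ValueError("unterminated metadata block")
```

A real parser would likely lean on a YAML library, but for four string fields, the contract is simple enough that even this much works.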
Jimmy Botler helps draft the body and the metadata, but it’s all just text files in a git-able directory. No magic CMS.
Once a post looks good in Markdown, I "promote" it by moving it into content/published/. That’s the signal that it’s ready to be part of the site.
Step 2 – Human Review on the Live Site
When I tell the agent:
"Publish the ‘Yelling at Your Network’ post."
a few things happen automatically.
First, the agent makes sure the Markdown file is in content/published/ with correct metadata. Then the Python build script runs and generates:
- /site/blog/<slug>.html for the post
- /site/blog/index.html as the blog index
Finally, the deploy step syncs /site/blog/ to the /blog directory on the server.
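The build step in that sequence could be sketched roughly like this. The template, function name, and the assumption that Markdown has already been converted to HTML are all mine, not the actual script:

```python
import html
from pathlib import Path

POST_TEMPLATE = """<!doctype html>
<title>{title}</title>
<meta name="description" content="{description}">
<h1>{title}</h1>
{body}"""

def build_site(posts: list[dict], out_dir: Path) -> None:
    """Write one page per post plus a blog index, newest first."""
    out_dir.mkdir(parents=True, exist_ok=True)
    items = []
    for post in sorted(posts, key=lambda p: p["date"], reverse=True):
        page = POST_TEMPLATE.format(
            title=html.escape(post["title"]),
            description=html.escape(post["description"]),
            body=post["body_html"],  # assumes Markdown was converted upstream
        )
        (out_dir / f"{post['slug']}.html").write_text(page)
        items.append(
            f'<li><a href="/blog/{post["slug"]}.html">'
            f'{html.escape(post["title"])}</a> – {post["date"]}</li>'
        )
    # The index is regenerated from scratch on every build.
    index = "<h1>Blog</h1>\n<ul>\n" + "\n".join(items) + "\n</ul>\n"
    (out_dir / "index.html").write_text(index)
```

Because both the post pages and the index come from the same metadata, there's no way for them to drift out of sync.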
At that point, the post is technically "live," but the workflow still expects a human checkpoint:
- I open https://jimmyryan.info/blog/ and https://jimmyryan.info/blog/<slug>.html.
- I check that the layout looks right, the title/date/description are correct, code blocks render properly, and the navigation and links work.
Only after I’ve looked it over do I treat the post as fully published and ready for promotion.
This is what I mean by "human-in-the-loop": the agent does the work, but I decide when it’s good enough.
Step 3 – Automating the Deploy, But Keeping Scope Narrow
The glue between the build output and the server is a small JSON config file that lives alongside the content. Conceptually, it contains things like which SSH host to deploy to, which user and key to use, which directory on the server is allowed to be updated, and the exact deploy command to run (in my case, an rsync call).
At a high level, the deploy looks like this:
```shell
rsync -avz --delete ./site/blog/ <user>@<host>:/path/to/webroot/blog
```
A few deliberate design choices matter here.
First, it’s a blog-only deploy. The command only touches the /blog subtree on the server. The main landing page and other root files are left alone. That keeps the agent’s blast radius small.
Second, there is one source of truth. Anything under /blog is generated from site/blog/, which in turn is generated from the Markdown under content/published/.
Finally, the agent executes, human configures. The agent reads this config and runs the command, but the config itself (host, user, path, key) is owned by me. If I want to change how deploy works, I edit the config, not the agent’s code.
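Here's a sketch of what "agent executes, human configures" can look like in code. The config keys and the guardrail are illustrative assumptions, not the actual file format:

```python
import json
import subprocess
from pathlib import Path

def deploy(config_path: str, dry_run: bool = True) -> list[str]:
    """Read the human-owned deploy config and run a scoped rsync.

    Refuses to deploy anywhere other than a /blog path, so the
    agent's blast radius stays limited even if the config is edited.
    """
    config = json.loads(Path(config_path).read_text())

    # Guardrail: only a /blog subtree on the server may be touched.
    if not config["remote_path"].rstrip("/").endswith("/blog"):
        raise ValueError("deploy config must target a /blog path")

    remote = f'{config["user"]}@{config["host"]}:{config["remote_path"]}'
    cmd = ["rsync", "-avz", "--delete", config["local_path"], remote]
    if not dry_run:
        subprocess.run(cmd, check=True)
    return cmd
```

The agent only ever assembles and runs the command; every value it interpolates comes from a file I own and can diff.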
This setup makes it easy to trust that when I say "publish," the agent isn’t going to rewrite my entire site or touch files it shouldn’t.
Step 4 – Cross-Posting to LinkedIn with a Simple Pattern
Once the blog post looks correct on JimmyRyan.info, the last step is pushing it to LinkedIn. Here, the pattern is intentionally simple.
After a successful publish, Jimmy Botler always gives me two drafts.
The short version is just the title and link:
Yelling at Your Network: Building a Voice Assistant for Home Lab + Security Tasks https://jimmyryan.info/blog/yelling-at-your-network.html
The expanded version adds a single sentence of context:
I’ve been experimenting with using voice as an interface to my home lab and security automations. In this post I walk through a simple Python-based assistant that listens, parses intent with an LLM, and talks to Home Assistant. https://jimmyryan.info/blog/yelling-at-your-network.html
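Because the title and slug live in the metadata, generating both drafts is only a few lines. A sketch, with the base URL and function name as my own assumptions:

```python
BASE_URL = "https://jimmyryan.info/blog"

def linkedin_drafts(meta: dict, blurb: str = "") -> dict[str, str]:
    """Build the short and expanded LinkedIn drafts from post metadata.

    The short draft is title + link; the expanded draft swaps the
    title for a one-or-two sentence blurb followed by the link.
    """
    url = f"{BASE_URL}/{meta['slug']}.html"
    short = f"{meta['title']}\n{url}"
    expanded = f"{blurb}\n{url}" if blurb else short
    return {"short": short, "expanded": expanded}
```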
My workflow is straightforward:
- Copy the expanded version into LinkedIn as the main body.
- Make small edits if needed for tone or length.
- Hit publish.
The key is that the agent always knows the URL and the title from the metadata, so the pattern is repeatable and low friction. I don’t have to hunt for links or re-type titles; I only decide if I’m happy with how it sounds.
Why Human-in-the-Loop Still Matters
With all of this wired up, it would be easy to let the agent run wild: generate a draft, build and deploy, and auto-post to LinkedIn.
That’s not the goal.
A few reasons I keep humans in the loop:
- Quality control – layout glitches, awkward transitions, or subtle errors are still easier for me to spot than for an automated system.
- Voice consistency – even with a well-tuned style, I want the option to tweak how I phrase things for different audiences.
- Scope and safety – the tighter the scope (only /blog, only after I say "publish"), the more comfortable I am letting the agent handle the mechanics.
- Confidence over time – the more I see the system behave correctly, the more work I can comfortably hand off to it, without giving up final approval.
In other words, the agent is there to move the ball down the court. I’m still the one taking the shot.
Where This Workflow Can Go Next
Once the basics are solid, there are a lot of ways to extend this pattern.
I could add scheduled posts, with metadata like publish_at so the agent publishes at specific times. The build script could generate an RSS feed from the same content metadata. I could introduce tags and categories in that metadata and generate filtered index pages.
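The RSS idea, for example, falls out of the same metadata almost for free. A minimal sketch, with feed fields kept illustrative:

```python
from xml.sax.saxutils import escape

def rss_feed(posts: list[dict], site: str = "https://jimmyryan.info") -> str:
    """Render a minimal RSS 2.0 feed from the same post metadata."""
    items = "".join(
        "<item>"
        f"<title>{escape(p['title'])}</title>"
        f"<link>{site}/blog/{p['slug']}.html</link>"
        f"<description>{escape(p['description'])}</description>"
        "</item>"
        for p in posts
    )
    return (
        '<?xml version="1.0"?><rss version="2.0"><channel>'
        f"<title>Blog</title><link>{site}/blog/</link>"
        f"{items}</channel></rss>"
    )
```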
Most importantly, the same title + URL + short blurb pattern can be reused across more channels: newsletters, other social networks, even internal updates.
The important part is that the structure stays the same:
- Draft in Markdown with metadata.
- Build to HTML with a small script.
- Deploy with a narrowly scoped command.
- Let an agent like Jimmy Botler handle the mechanics, while you stay in the loop for the final call.
That combination of automation and human oversight is what makes the whole workflow feel trustworthy—and actually sustainable—over time.