Designing a Human-in-the-Loop Publishing Workflow with Agents

Most of the "AI + content" stories you see fall into one of two camps. Either you get demo-ware where an LLM generates everything and nobody checks anything, or you get long blog posts about "strategy" that never touch a real deployment pipeline.

Over the past weekend, I wired up something a little more practical: a local agent that helps draft blog posts in Markdown, builds them into static HTML, deploys the result to my site over SSH, and hands me ready-to-paste LinkedIn drafts.

The key constraint: there’s always a human in the loop. The agent can build and deploy, but I still review every post on the live site before I ship it to LinkedIn.

In this post I’ll walk through how that workflow is structured and why the "human checkpoint" is a feature, not a bug.


The Pieces in the Workflow

There are a few simple components that make this work.

First, Markdown drafts. Content lives in a local workspace as .md files. Drafts go under:

content/drafts/

Once a post looks good, I "promote" it by moving it into published Markdown under:

content/published/

JimmyRyan.info itself is a static site. I added a /blog/ subtree for posts, so each post ends up at a URL like /blog/<slug>.html.

A small build script converts Markdown plus some content metadata into HTML pages and a blog index. A simple deploy config tells the agent how to sync the generated HTML to the server over SSH using rsync.

Finally, there’s the local agent: Jimmy Botler. It runs inside OpenClaw and handles the mechanics: drafting posts with me, running the build script, executing the scoped deploy, and generating the LinkedIn drafts.

None of these parts are complicated individually. The interesting bit is how they’re chained together.


Step 1 – Drafting in Markdown with Jimmy Botler

Everything starts in Markdown.

The agent and I work together on posts in content/drafts/, and each file has a small metadata block at the top:

---
title: "Yelling at Your Network: Building a Voice Assistant for Home Lab + Security Tasks"
date: 2026-03-22
slug: "yelling-at-your-network"
description: "How I wired up a simple Python-based voice assistant to control home lab and security automations by voice."
---

# Yelling at Your Network: Building a Voice Assistant for Home Lab + Security Tasks

...

That metadata is the contract between content and tooling: the slug becomes the page URL, the date drives ordering in the index, and the title and description feed the blog index and the LinkedIn drafts.

Jimmy Botler helps draft the body and the metadata, but it’s all just text files in a git-able directory. No magic CMS.
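To make the "contract" concrete, here's a minimal sketch of parsing that front-matter block in Python. The function name and return shape are illustrative, not the actual tooling:

```python
# Minimal front-matter parser: splits the leading `---` block from the body.
# Assumes simple `key: value` pairs, as in the example above.

def split_front_matter(text: str) -> tuple[dict, str]:
    """Return (metadata, body) for a Markdown file with a `---` block."""
    lines = text.splitlines()
    if not lines or lines[0].strip() != "---":
        return {}, text  # no front matter at all
    meta = {}
    for i, line in enumerate(lines[1:], start=1):
        if line.strip() == "---":
            # Closing delimiter found: everything after it is the body.
            return meta, "\n".join(lines[i + 1:]).lstrip("\n")
        key, _, value = line.partition(":")
        meta[key.strip()] = value.strip().strip('"')
    return {}, text  # no closing delimiter: treat the whole file as body
```

Anything more elaborate (nested YAML, lists) would call for a real YAML parser, but for flat key/value metadata this is enough.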

Once a post looks good in Markdown, I "promote" it by moving it into content/published/. That’s the signal that it’s ready to be part of the site.


Step 2 – Human Review on the Live Site

When I tell the agent:

"Publish the ‘Yelling at Your Network’ post."

a few things happen automatically.

First, the agent makes sure the Markdown file is in content/published/ with correct metadata. Then the Python build script runs, regenerating the post’s HTML page and the blog index.

Finally, the deploy step syncs site/blog/ to the /blog directory on the server.
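The build step can be sketched in a few lines of Python. Everything here is illustrative: the page template, the function name, and the assumption that the Markdown body has already been converted to HTML (`html_body`), which in the real script a Markdown library would handle:

```python
# Sketch of the build step: one HTML page per published post, plus an index.
# Paths, names, and the template are assumptions for illustration only.
from pathlib import Path

PAGE = "<html><head><title>{title}</title></head><body>{body}</body></html>"

def build_site(posts: list[dict], out_dir: Path) -> list[str]:
    """Write one page per post plus index.html; return written filenames."""
    out_dir.mkdir(parents=True, exist_ok=True)
    written, links = [], []
    # Newest first in the index, ordered by the front-matter date.
    for post in sorted(posts, key=lambda p: p["date"], reverse=True):
        name = f"{post['slug']}.html"
        (out_dir / name).write_text(
            PAGE.format(title=post["title"], body=post["html_body"]))
        links.append(f'<li><a href="{name}">{post["title"]}</a></li>')
        written.append(name)
    (out_dir / "index.html").write_text(
        PAGE.format(title="Blog", body="<ul>" + "".join(links) + "</ul>"))
    return written + ["index.html"]
```

The point is not the template but the flow: metadata in, pages and index out, with the slug deciding every filename.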

At that point, the post is technically "live," but the workflow still expects a human checkpoint: I open the post on the live site and read it the way a visitor would.

Only after I’ve looked it over do I treat the post as fully published and ready for promotion.

This is what I mean by "human-in-the-loop": the agent does the work, but I decide when it’s good enough.


Step 3 – Automating the Deploy, But Keeping Scope Narrow

The glue between the build output and the server is a small JSON config file that lives alongside the content. Conceptually, it contains things like which SSH host to deploy to, which user and key to use, which directory on the server is allowed to be updated, and the exact deploy command to run (in my case, an rsync call).
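A config like that might look roughly as follows. Every field name and value here is a placeholder, not my actual setup:

```json
{
  "host": "example.com",
  "user": "deploy",
  "ssh_key": "~/.ssh/blog_deploy",
  "allowed_path": "/var/www/jimmyryan.info/blog",
  "command": "rsync -avz --delete ./site/blog/ {user}@{host}:{allowed_path}"
}
```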

At a high level, the deploy looks like this:

rsync -avz --delete ./site/blog/ <user>@<host>:/path/to/webroot/blog

A few deliberate design choices matter here.

First, it’s a blog-only deploy. The command only touches the /blog subtree on the server. The main landing page and other root files are left alone. That keeps the agent’s blast radius small.

Second, there is one source of truth. Anything under /blog is generated from site/blog/, which in turn is generated from the Markdown under content/published/.

Finally, the human configures and the agent executes. The agent reads this config and runs the command, but the config itself (host, user, path, key) is owned by me. If I want to change how deploy works, I edit the config, not the agent’s code.
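One way to sketch that division of labor: the agent turns the config into an argv and refuses anything outside the allowed subtree. The field names mirror the hypothetical config above; this is not the actual implementation:

```python
# Sketch: build the rsync argv from the deploy config, enforcing the
# blog-only constraint before anything touches the network.

def build_deploy_cmd(config: dict) -> list[str]:
    """Return the rsync argv, refusing targets outside the /blog subtree."""
    if not config["allowed_path"].rstrip("/").endswith("/blog"):
        raise ValueError("refusing to deploy outside the /blog subtree")
    target = f"{config['user']}@{config['host']}:{config['allowed_path']}"
    return ["rsync", "-avz", "--delete", config["local_dir"], target]
```

The guard is the interesting line: even if the config is edited carelessly, the agent won't construct a command that can clobber the site root.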

This setup makes it easy to trust that when I say "publish," the agent isn’t going to rewrite my entire site or touch files it shouldn’t.


Step 4 – Cross-Posting to LinkedIn with a Simple Pattern

Once the blog post looks correct on JimmyRyan.info, the last step is pushing it to LinkedIn. Here, the pattern is intentionally simple.

After a successful publish, Jimmy Botler always gives me two drafts.

The short version is just the title and link:

Yelling at Your Network: Building a Voice Assistant for Home Lab + Security Tasks https://jimmyryan.info/blog/yelling-at-your-network.html

The expanded version adds a single sentence of context:

I’ve been experimenting with using voice as an interface to my home lab and security automations. In this post I walk through a simple Python-based assistant that listens, parses intent with an LLM, and talks to Home Assistant. https://jimmyryan.info/blog/yelling-at-your-network.html
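Both drafts fall out of the front matter mechanically. A minimal sketch, assuming a base URL and a function name of my own invention:

```python
# Sketch: both LinkedIn drafts derived from a post's front matter.
# BASE_URL and the function name are illustrative assumptions.
BASE_URL = "https://jimmyryan.info/blog"

def linkedin_drafts(meta: dict) -> tuple[str, str]:
    """Return (short, expanded) LinkedIn drafts for a post's metadata."""
    url = f"{BASE_URL}/{meta['slug']}.html"
    short = f"{meta['title']} {url}"          # title + link only
    expanded = f"{meta['description']} {url}"  # one sentence of context + link
    return short, expanded
```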

My workflow is straightforward:

  1. Copy the expanded version into LinkedIn as the main body.
  2. Make small edits if needed for tone or length.
  3. Hit publish.

The key is that the agent always knows the URL and the title from the metadata, so the pattern is repeatable and low friction. I don’t have to hunt for links or re-type titles; I only decide if I’m happy with how it sounds.


Why Human-in-the-Loop Still Matters

With all of this wired up, it would be easy to let the agent run wild: generate a draft, build and deploy, and auto-post to LinkedIn.

That’s not the goal.

A few reasons I keep humans in the loop: the posts go out under my name, so the final quality call has to be mine; LinkedIn copy almost always benefits from a small human pass on tone and length; and reading the rendered page on the live site catches problems that never show up in a Markdown preview.

In other words, the agent is there to move the ball down the court. I’m still the one taking the shot.


Where This Workflow Can Go Next

Once the basics are solid, there are a lot of ways to extend this pattern.

I could add scheduled posts, with metadata like publish_at so the agent publishes at specific times. The build script could generate an RSS feed from the same content metadata. I could introduce tags and categories in that metadata and generate filtered index pages.
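The scheduling idea is a one-function change. A sketch, where `publish_at` is the proposed (not yet existing) front-matter field and the timestamps are assumed to be ISO-8601:

```python
# Sketch: decide whether a post with a hypothetical `publish_at` field is due.
from datetime import datetime

def is_due(meta: dict, now: datetime) -> bool:
    """A post with no publish_at is due immediately; otherwise compare times."""
    stamp = meta.get("publish_at")
    if stamp is None:
        return True
    return datetime.fromisoformat(stamp) <= now
```

A scheduled run of the agent would then publish only the drafts for which `is_due` returns True, leaving everything else untouched.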

Most importantly, the same title + URL + short blurb pattern can be reused across more channels: newsletters, other social networks, even internal updates.

The important part is that the structure stays the same:

  1. Draft in Markdown with metadata.
  2. Build to HTML with a small script.
  3. Deploy with a narrowly scoped command.
  4. Let an agent like Jimmy Botler handle the mechanics, while you stay in the loop for the final call.

That combination of automation and human oversight is what makes the whole workflow feel trustworthy—and actually sustainable—over time.