
Workflow Example: Autonomous Agent

AI involvement: Multiple specialized agents execute a full pipeline — research, write, edit, review, publish — with human approval at one gate.

An autonomous agent workflow is one where AI handles the entire process — from research to final deliverable — with minimal human intervention. This example is built using Claude Code subagents — specialized AI assistants that each run in their own context with domain expertise, coordinated by Claude Code as the orchestrator. The human sets the goal and reviews the draft at one checkpoint. Everything else runs autonomously.

  • Multi-agent — different agents handle different stages, each with domain expertise
  • Pipeline-structured — output from one agent becomes input to the next
  • Skill-enhanced — the editor agent loads the editing-hbr-articles skill to apply codified editorial standards during its editing pass
  • Self-reviewing — the editor agent applies quality criteria from that skill before the human sees the draft
  • Gate-controlled — a built-in safety mechanism (called a “hook”) automatically pauses the pipeline for human review before publishing
  • End-to-end — produces a finished deliverable (PDF + markdown) from a single goal statement

Use autonomous agent workflows when the task:

  • Requires multiple distinct capabilities (research, writing, editing, formatting)
  • Follows a pipeline where each stage has clear inputs and outputs
  • Benefits from specialist expertise at each stage
  • Produces a deliverable that should meet professional standards
  • Can include a human review gate without breaking the flow

The problem: A business leader wants to publish an HBR-style article about companies successfully using AI agents. The process requires deep research (finding real case studies with quantified outcomes), executive-level writing (translating technical concepts for business audiences), rigorous editing (applying HBR editorial standards), and professional publishing (PDF formatting with SEO metadata). Doing this manually involves multiple skill sets and takes days of focused work.

The solution: A multi-agent pipeline in Claude Code. One prompt triggers a chain of specialized agents — a researcher finds case studies, a writer produces the article, an editor applies HBR standards, the human reviews the draft, and a publisher formats the final deliverable. Each agent brings domain expertise that would otherwise require a different person.

This single prompt triggers the entire pipeline:

“Please write an analysis and Harvard Business Review-style article on successful companies that you can find by doing research that have successfully used and applied AI agents to their business. This article is for a business leadership audience, and I’d like to have the final deliverable as a PDF, and markdown file.”

All building blocks are already included in the business-first-ai plugin — no additional installation required.

| Building Block | Type | Role in Pipeline | Source |
|---|---|---|---|
| ai-productivity-researcher | Agent | Finds documented case studies of companies using AI with quantified outcomes | View on GitHub |
| tech-executive-writer | Agent | Writes the article for a business leadership audience | View on GitHub |
| hbr-editor | Agent | Edits the draft against HBR editorial standards | View on GitHub |
| editing-hbr-articles | Skill | Provides editorial criteria and cut/replace patterns for the editor | View on GitHub |
| hbr-publisher | Agent | Formats the approved article as PDF and markdown with SEO metadata | View on GitHub |
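For reference, Claude Code subagents are defined as Markdown files with YAML frontmatter. Here is a minimal sketch of what the researcher agent's definition could look like — the field values and system prompt are illustrative, not the plugin's actual file:

```markdown
---
name: ai-productivity-researcher
description: Finds documented case studies of companies applying AI agents,
  prioritizing sources with quantified business outcomes.
tools: WebSearch, WebFetch, Read, Write
---

You are a business research specialist. Given a topic, search news outlets,
business publications, and analyst reports for documented case studies.
Only include companies with named sources and quantified outcomes
(revenue impact, productivity gains, cost savings). Output a structured
brief for each case: company, use case, outcome, source.
```

The frontmatter tells the orchestrator when to delegate to this agent (via `description`) and what tools it may use; the body becomes the agent's system prompt.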
```mermaid
graph TD
A["Goal prompt"] --> B["Claude Code<br>(orchestrator)"]
B --> C["ai-productivity-researcher<br>finds case studies"]
C --> D["tech-executive-writer<br>drafts the article"]
D --> E["hbr-editor<br>+ editing-hbr-articles skill"]
E --> F{"SubagentStop Hook"}
F --> G["Human reviews draft"]
G -->|Approved| H["hbr-publisher<br>formats PDF + markdown"]
G -->|Rejected| I(("Stop"))
H --> J["Final deliverables:<br>PDF + markdown article"]
```

Step-by-step:

  1. User provides the goal — a single prompt describing the article topic, audience, and desired deliverables.
  2. ai-productivity-researcher runs — searches news outlets, business publications, and analyst reports for documented case studies of companies using AI agents. Prioritizes HBR-caliber sources with quantified outcomes (revenue impact, productivity gains, cost savings).
  3. tech-executive-writer runs — takes the research output and produces a full-length article. Translates technical AI concepts for a non-technical business audience. Structures the piece with a compelling narrative, specific examples, and executive-level insights.
  4. hbr-editor runs — reads the editing-hbr-articles skill to load editorial criteria, then edits the draft. Checks structure (does the opening hook?), evidence quality (are claims supported by named companies and data?), voice (active, no hedging), and length (2,500-3,500 words for features). Makes direct, prescriptive edits.
  5. Pipeline pauses for review — a hook (an automatic rule in Claude Code that triggers at a specific point) stops the pipeline and presents the edited draft to the human.
  6. Human reviews — reads the edited article and either approves it to continue or stops the pipeline for manual revision.
  7. hbr-publisher runs (on approval) — formats the article for web publication (SEO metadata, social snippets) and generates a professional PDF. Produces two files: a markdown version and a PDF.
| Stage | Agent/Component | Input | Output | What Makes It Autonomous |
|---|---|---|---|---|
| Research | ai-productivity-researcher | Goal prompt | Structured case study briefs | Agent decides which sources to search and which cases meet the quality bar |
| Writing | tech-executive-writer | Research briefs | Full article draft | Agent structures the narrative, chooses which cases to feature, and adapts tone for the audience |
| Editing | hbr-editor + skill | Article draft | Edited draft with tracked changes | Agent applies codified editorial criteria — not subjective taste, but documented standards |
| Review gate | SubagentStop hook | Edited draft | Human approval or rejection | Pipeline pauses automatically — human decides quality, not the AI |
| Publishing | hbr-publisher | Approved draft | PDF + markdown files | Agent handles formatting, metadata, and layout without human input |

Each agent is a specialist. The researcher knows where to find credible business case studies. The writer knows how to structure executive-level content. The editor knows HBR’s specific editorial standards (loaded from a skill file with reference criteria). The publisher knows formatting and SEO.
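Skills follow the same file-based pattern as agents: a SKILL.md with frontmatter plus the reference material the agent loads. A hypothetical sketch of how editing-hbr-articles might be structured (the headings and criteria below illustrate the shape, not the real skill's contents):

```markdown
---
name: editing-hbr-articles
description: Editorial criteria and cut/replace patterns for HBR-style
  articles. Load before editing any draft for a business audience.
---

## Structure
- The opening must hook with a concrete scenario or surprising finding.
- Features run 2,500-3,500 words.

## Evidence
- Every claim is supported by a named company and a specific number.

## Voice
- Active voice throughout; cut hedging ("may", "could potentially").
```

Because the criteria live in a file rather than a prompt, they are versionable and reusable — any agent that edits business content can load the same standards.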

A single generalist prompt could attempt all of this, but the output quality degrades because no single prompt can encode deep expertise across research methodology, executive writing style, editorial standards, and publication formatting. Splitting into specialists lets each agent focus on what it does best.

The human review gate is critical. A “hook” — an automatic rule you configure in Claude Code — fires after the editor finishes and before the publisher starts, giving the human a chance to:

  • Approve — the article meets standards, continue to publishing
  • Reject — the article needs changes the AI can’t make (factual corrections, strategic adjustments, tone shifts)

This is a deliberate design choice. The pipeline is autonomous enough to produce a near-final draft without human involvement, but publishing is a high-stakes action — once an article goes out, it represents the author. The gate ensures a human makes that call.
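Hooks are wired up in Claude Code's settings file. A sketch of how a SubagentStop hook gating the editor's output might look in `.claude/settings.json` — the matcher value and script path are assumptions for illustration, not the plugin's actual configuration:

```json
{
  "hooks": {
    "SubagentStop": [
      {
        "matcher": "hbr-editor",
        "hooks": [
          {
            "type": "command",
            "command": "./scripts/request-human-review.sh"
          }
        ]
      }
    ]
  }
}
```

The `SubagentStop` event fires when a subagent finishes; the matcher scopes the hook to the editor so the pipeline pauses at exactly one point, and the command surfaces the draft for approval.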

All five agents and the editing skill are included in the business-first-ai plugin.

```
# Install the plugin (one time)
/plugin install business-first-ai@handsonai
```

Then provide the goal prompt:

“Please write an analysis and Harvard Business Review-style article on successful companies that you can find by doing research that have successfully used and applied AI agents to their business. This article is for a business leadership audience, and I’d like to have the final deliverable as a PDF, and markdown file.”

Claude Code orchestrates the full pipeline automatically. You’ll be prompted to review the draft at the human-in-the-loop gate before publishing proceeds.

The HBR article pipeline is one application, but the multi-agent orchestration pattern applies to any workflow where different stages require different expertise:

  • Client deliverable pipeline — researcher gathers data → analyst produces insights → writer creates the report → reviewer checks quality → designer formats the final document
  • Sales proposal generation — researcher profiles the prospect → writer drafts the proposal → pricing specialist adds numbers → reviewer ensures accuracy → formatter produces the PDF
  • Course content creation — researcher gathers source material → instructional designer structures the lesson → writer creates slides and exercises → editor reviews for clarity → publisher formats for the LMS
  • Competitive intelligence reports — scanner monitors competitor channels → analyst identifies key changes → writer summarizes findings → editor ensures accuracy → distributor sends to stakeholders

To adapt: identify the distinct stages of your workflow and the specialist expertise each stage requires. If you’d assign different people to different stages in a team setting, those stages are candidates for different agents.