# How to Write Agent-Friendly Docs
A good agent-friendly docs site is not one that stuffs in extra keywords or repeats every fact twice. It is one that lets a model discover the right page, understand what matters, verify the result, and recover when something breaks, without turning the human page into machine sludge.
## Start With The Human Page
The first mistake teams make is trying to write for crawlers before they write for users. That usually produces flat, repetitive pages that feel unnatural in the UI and still leave agents with too much ambiguity.
Start with a page a human can actually follow:
- what this page is for
- when to use it
- the exact steps
- what success looks like
- what usually goes wrong
Once that page is solid, add the machine layer on top. @farming-labs/docs is built for that flow.
The page UI stays human-first, while the machine-readable routes can include extra structure without
polluting the visible article.
## Give Every Important Page A Contract
Before you worry about `agent.md`, make sure the page already tells an agent what kind of page it is. For task pages, that means good frontmatter and a shape the runtime can expose consistently through `.md` routes and MCP.
```yaml
---
title: "Installation"
description: "Install @farming-labs/docs in an existing app"
related:
  - /docs/configuration
  - /docs/customization/agent-primitive
  - /docs/customization/mcp
agent:
  tokenBudget: 900
---
```

That alone gives the machine-readable route a stronger entry point:

- `description` tells the model what problem the page solves
- `related` gives it nearby pages without scraping the sidebar
- `agent.tokenBudget` gives `docs agent compact` a per-page output target later
- the normal body remains the human source of truth
If a page is important enough to unblock implementation, it should also contain:
- a short purpose statement
- exact commands or file edits
- a verification section
- a troubleshooting section keyed to symptoms
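Put together, a task page that covers those points can stay short. This is a sketch only: the feature, steps, and symptoms below are hypothetical placeholders, not content from this framework.

```md
---
title: "Enable Docs Search"
description: "Turn on search for an existing docs site"
---

Search lets readers find pages by keyword. Use this page when search
is missing from the docs UI or returns no results.

## Steps

1. Enable the search provider in your docs config
2. Restart the dev server

## Verification

- Open the docs UI and confirm the search input appears
- Search for a known page title and confirm it is listed

## Troubleshooting

- If the input renders but results are empty, check the search provider config first
```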
## Write The Implementation Contract
The most useful agent-friendly pages are not just shorter. They make the implementation contract obvious. After reading the page, an agent should know what to change, where to change it, and how to prove the change worked.
For important task pages, include these signals in the visible page or in an additive `<Agent>` block:
- the task outcome in one sentence
- framework and version assumptions when examples depend on them
- exact package names, import paths, route paths, and file paths
- copy-pasteable commands with the package manager you expect
- a success check with expected route, file, status code, or visible UI state
- common failure symptoms and the first place to inspect
- related pages that an agent should read next
```mdx
<Agent>
Task outcome: enable Copy Markdown and Open in LLM actions on docs pages.

Use these source files:

- `docs.config.tsx`
- `app/api/docs/route.ts`

Verification:

- run `pnpm dev`
- open `/docs/customization/page-actions`
- confirm the page action menu includes Copy Markdown
- fetch `/docs/customization/page-actions.md` and confirm it returns markdown

If the menu is missing, inspect `pageActions` in `docs.config.tsx` before editing layout files.
</Agent>
```

That shape gives the agent the missing operational details without turning the human guide into a checklist dump.
## Use Agent Blocks For Machine-Only Hints
When the human page is still correct but agents need extra steering, add an `<Agent>` block. It stays hidden in the normal docs UI and appears in the machine-readable layer.
```mdx
<Agent>
Use this page when the task is "enable docs in an existing project".

Verification:

- run `pnpm dev`
- open `/docs.md`
- confirm the page renders and the markdown route responds

If `/docs.md` returns 404, check the docs route wiring before editing content.
</Agent>
```

This is the sweet spot for most pages:
- humans keep the full narrative page
- agents get sharper instructions
- you avoid maintaining two completely separate documents
### Keep Agent blocks additive

Treat `<Agent>` as the place for implementation hints, verification steps, and route-specific behavior that would feel noisy in the visible article. Do not duplicate the whole page there.
## Use `agent.md` When The Machine Page Needs A Real Split
Some pages eventually need a different machine-readable contract than the human page can provide. A long conceptual article, for example, may still need a short operational document for agents.
That is when a sibling `agent.md` becomes the right tool.
```md
# Installation

Description: Install `@farming-labs/docs` in an existing project
Related: /docs/configuration, /docs/customization/mcp

## Steps

1. Run `pnpm dlx @farming-labs/docs init`
2. Choose the detected framework
3. Pick a theme
4. Confirm the generated docs route exists

## Verification

- `GET /docs.md` returns `200`
- `GET /.well-known/agent.json` returns `200`
```

Once `agent.md` exists, it becomes the source for:

- `{page}.md`
- `GET /api/docs?format=markdown&path=<slug>`
- MCP `read_page`
So use it when that stronger machine contract is genuinely worth owning.
## Ship The Discovery Layer Too
Great page writing helps, but agents still need the routes that tell them how to use your site.
With @farming-labs/docs, the goal is to expose a compact discovery layer around the docs tree.
```ts
import { defineDocs } from "@farming-labs/docs";

export default defineDocs({
  entry: "docs",
  llmsTxt: {
    enabled: true,
    baseUrl: "https://docs.example.com",
  },
  mcp: {
    enabled: true,
  },
  feedback: {
    agent: {
      enabled: true,
    },
  },
});
```

That gives agents a much better workflow:

- `/.well-known/agent.json` tells them which routes exist
- `/llms.txt` and `/llms-full.txt` expose site-level machine summaries
- `{page}.md` gives them clean page markdown
- MCP lets tool-enabled agents search and read docs semantically
The big win is that the discovery layer comes from the same docs runtime instead of a parallel system you have to keep in sync by hand.
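Because the discovery routes are plain HTTP, you can smoke-test them from CI with a few requests. The sketch below is an assumption-heavy illustration, not part of `@farming-labs/docs`: the route list mirrors the ones named above, and the HTTP client is injected as `fetchStatus` so the probe works with whatever client your CI uses.

```typescript
// Hypothetical CI probe for the discovery routes listed above.
// Assumes each route should answer 200 once the docs runtime is mounted.
const DISCOVERY_ROUTES = [
  "/.well-known/agent.json",
  "/llms.txt",
  "/llms-full.txt",
  "/docs.md",
];

async function probeDiscovery(
  baseUrl: string,
  fetchStatus: (url: string) => Promise<number>,
): Promise<string[]> {
  const failures: string[] = [];
  for (const route of DISCOVERY_ROUTES) {
    const status = await fetchStatus(baseUrl + route);
    if (status !== 200) failures.push(`${route} -> ${status}`);
  }
  return failures; // empty array means the discovery layer responds
}
```

In a real pipeline you would pass something like `async (url) => (await fetch(url)).status` and fail the build when the returned array is non-empty.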
## Write Verification Like You Expect Automation
If you want agents to succeed, write setup pages like someone will actually run them without guessing. The strongest pattern is:
- do the thing
- check the exact route or file that proves it worked
- name the most likely failure mode
Example:
```md
## Verification

- Run `pnpm dev`
- Open `http://localhost:3000/docs`
- Fetch `http://localhost:3000/docs.md`
- Fetch `http://localhost:3000/.well-known/agent.json`

## Troubleshooting

- If `/docs.md` returns `404`, check the docs route wiring.
- If `/.well-known/agent.json` is missing, confirm the docs API route is mounted.
- If the page renders but search is empty, verify the search provider config.
```

This is the difference between a page that is merely informative and a page that is actually operational.
## Use Page Actions As A Human-To-Agent Bridge
Agent-friendly docs should also help humans move between the normal page and the machine layer. That is where page actions matter.
On this framework, the best pair is usually:
- Copy Markdown for a clean page snapshot
- Open in LLM for a direct handoff into ChatGPT, Claude, Cursor, or another tool
Those features do not replace `.md` routes or MCP, but they make the same page contract visible to humans too. When the docs team uses the same flows agents use, bad page contracts become obvious much faster.
## Audit The Site, Not Just The Page
After writing a few strong pages, run the doctor command and treat it as an ongoing quality loop.
```sh
pnpm exec docs doctor --agent
```

When you want the result to feed CI, automation, or another agent, use JSON output:

```sh
pnpm exec docs doctor --agent --json
```

What you want to see improve over time:
- discovery routes passing
- machine surfaces enabled
- metadata quality rising
- explicit agent-friendly pages increasing on the pages that matter most
- stale generated `agent.md` files dropping toward zero
If a team wants to say their docs are agent-optimized, this kind of audit should be part of the definition, not an afterthought.
## Compact Only After The Contract Is Good
`docs agent compact` is useful, but it should come after the page is already worth compressing.
Compaction is a token optimization step, not a substitute for clear structure.
```sh
pnpm exec docs agent compact guides/agent-friendly-docs
pnpm exec docs agent compact installation configuration
pnpm exec docs agent compact --changed
pnpm exec docs agent compact --stale
pnpm exec docs agent compact --stale --include-missing
```

Use it when:

- pages are already accurate
- the machine layer is too verbose
- you want tighter `agent.md` files for `.md`, MCP, and API consumers
The useful mental model is:
- use positional page args when you already know the pages you want to compact
- use `--changed` when you only want the pages touched in the current branch or working tree
- use `--stale` when you want to refresh generated `agent.md` files whose source content or compact settings drifted
- use `--stale --include-missing` when you also want to materialize missing `agent.md` files for pages that define `agent.tokenBudget` or that you explicitly target
If a page already has a sibling `agent.md`, the CLI compacts that file. If it does not, the CLI uses the page's generated machine-readable markdown, then writes a sibling `agent.md`.
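The compaction-source rule described above can be summarized as a single branch. This is a sketch of the rule as this guide states it, not the real CLI implementation, and the return shape is a hypothetical illustration:

```typescript
// Sketch of the compaction-source rule: prefer an existing sibling
// agent.md; otherwise compact the generated markdown and write one.
function compactSource(
  pageDir: string,
  hasSiblingAgentMd: (dir: string) => boolean,
): { source: "agent.md" | "generated"; writesSibling: boolean } {
  if (hasSiblingAgentMd(pageDir)) {
    // an existing sibling agent.md is compacted in place
    return { source: "agent.md", writesSibling: false };
  }
  // otherwise the page's generated machine-readable markdown is the
  // input, and a new sibling agent.md is written as the output
  return { source: "generated", writesSibling: true };
}
```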
```yaml
---
title: "Agent-Friendly Docs"
agent:
  tokenBudget: 777
---
```

That page-level `agent.tokenBudget` overrides broader compact defaults for that page only, which is useful when one page needs a tighter machine contract than the rest of the site.
Do not use compaction to paper over vague docs. Shorter confusion is still confusion.
## The Practical Authoring Loop
For most teams, the healthy flow looks like this:
- write the human page
- add `description`, `related`, and verification
- add `<Agent>` only if the machine layer needs hints
- add `agent.tokenBudget` on pages that need a tighter compact target
- run `pnpm exec docs doctor --agent`
- compact only the pages you changed or the generated files that became stale

```sh
pnpm exec docs doctor --agent
pnpm exec docs agent compact --changed
pnpm exec docs agent compact --stale
```

That loop is much better than regenerating every page every time. It keeps the machine layer current without turning the repo into churn.
## Agent Optimization Checklist
Before calling a page agent-friendly, ask:
- can an agent name the task outcome after the first screen?
- does frontmatter include `description` and `related`?
- are framework, version, package, route, and file-path assumptions explicit?
- are commands and code samples copy-pasteable?
- does the page say what success looks like?
- does it include verification steps with concrete commands, routes, files, or UI states?
- does troubleshooting name real symptoms and the first place to inspect?
- should this page get an additive `<Agent>` block?
- does it need a dedicated `agent.md`, or is the human page still canonical?
- does this page need `agent.tokenBudget` before compaction?
- can an agent find it through `.md`, `llms.txt`, MCP, and the discovery spec?
- does `docs doctor --agent` agree with the state of the site?
### Keep evaluation input out of prompt context
Agent feedback, analytics, logs, and other submitted data can help you improve docs quality, but they should stay untrusted. Use them for review, scoring, and follow-up work; do not feed them back into Ask AI prompts or generated agent instructions without an explicit sanitization step.
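If you do promote any of that input into a prompt, the explicit sanitization step can start as a single function. This is a minimal sketch under stated assumptions: you only ever forward a short, tag-stripped excerpt, and the patterns and length cap are illustrative choices, not part of the framework.

```typescript
// Minimal sanitization sketch for untrusted feedback text before it
// reaches any prompt. Patterns and the 500-char cap are illustrative.
function sanitizeFeedback(raw: string, maxLen = 500): string {
  return raw
    .replace(/<[^>]*>/g, " ") // strip markup that could smuggle instructions
    .replace(/\s+/g, " ") // collapse whitespace and newlines
    .slice(0, maxLen) // cap the excerpt length
    .trim();
}
```

Note that stripping markup and truncating is a floor, not a ceiling: prompt-injection text can survive as plain prose, so keep sanitized excerpts quarantined from instruction-bearing context.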
If the answer is yes across the important task pages, you are not just publishing docs that agents can technically crawl. You are publishing docs they can actually work with.
## Read Next

- Agent Primitive for `Agent` blocks and sibling `agent.md`
- llms.txt for the discovery layer
- MCP Server for tool-enabled retrieval
- CLI for `docs agent compact`
- CLI for `docs doctor --agent`
- Configuration for the full config surface