Test your documentation site against the Agent-Friendly Documentation Spec.
Agents don't use docs the way humans do. They hit truncation limits, get walls of CSS instead of content, can't follow cross-host redirects, and don't know about quality-of-life improvements like llms.txt or .md docs pages that would make life swell. Maybe this is because the industry has lacked guidance - until now.
afdocs runs 22 checks across 8 categories to evaluate how well your docs serve agent consumers.
**Status: Early development (0.x).** This project is under active development. Check IDs, CLI flags, and output formats may change between minor versions. Feel free to try it out, but don't build automation against specific output until 1.0.
Implements spec v0.2.1 (2026-03-15).
```shell
npx afdocs check https://react.dev
```

Example output (abridged):

```
Agent-Friendly Docs Check: https://react.dev

llms-txt
  ✓ llms-txt-exists: llms.txt found at 1 location(s)
  ✓ llms-txt-valid: llms.txt follows the proposed structure
  ✓ llms-txt-size: llms.txt is 14,347 characters (under 50,000 threshold)
  ✓ llms-txt-links-resolve: All 50 tested links resolve (177 total links)
  ✓ llms-txt-links-markdown: 50/50 links point to markdown content (100%)

Markdown Availability
  ✗ content-negotiation: Server ignores Accept: text/markdown header (0/50 sampled pages return markdown)
  ✗ markdown-url-support: No sampled pages support .md URLs (0/50 tested)

URL Stability
  ✓ http-status-codes: All 50 sampled pages return proper error codes for bad URLs

Authentication
  ✓ auth-gate-detection: All 50 sampled pages are publicly accessible

Summary
  9 passed, 3 failed, 10 skipped (22 total)
```
```shell
npm install afdocs
```

```shell
# Run all checks
afdocs check https://docs.example.com

# Run specific checks
afdocs check https://docs.example.com --checks llms-txt-exists,llms-txt-valid,llms-txt-size

# JSON output
afdocs check https://docs.example.com --format json

# Adjust thresholds
afdocs check https://docs.example.com --pass-threshold 30000 --fail-threshold 80000
```

| Option | Default | Description |
|---|---|---|
| `--format <format>` | `text` | Output format: `text` or `json` |
| `-v, --verbose` | | Show per-page details for checks with issues |
| `--checks <ids>` | all | Comma-separated list of check IDs |
| `--sampling <strategy>` | `random` | URL sampling strategy (see below) |
| `--max-concurrency <n>` | `3` | Maximum concurrent HTTP requests |
| `--request-delay <ms>` | `200` | Delay between requests (milliseconds) |
| `--max-links <n>` | `50` | Maximum links to test in link checks |
| `--pass-threshold <n>` | `50000` | Size pass threshold (characters) |
| `--fail-threshold <n>` | `100000` | Size fail threshold (characters) |
By default, afdocs discovers pages from your site (via llms.txt, sitemap, or both) and randomly samples up to `--max-links` pages to check. The `--sampling` flag gives you control over how that sample is selected.
| Strategy | Behavior |
|---|---|
| `random` | Shuffle discovered URLs and take the first N. Fast and broad, but results vary between runs. |
| `deterministic` | Sort discovered URLs alphabetically, then pick every Nth URL for an even spread. Produces the same sample on repeated runs as long as the URL set is stable. |
| `none` | Skip discovery entirely. Only check the URL you pass on the command line. |
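As a rough sketch, the documented `deterministic` behavior amounts to sorting and then taking an evenly spaced slice. The helper name and exact spacing arithmetic here are assumptions for illustration, not the afdocs source:

```typescript
// Sketch of the "deterministic" strategy: sort the discovered URLs
// alphabetically, then take an evenly spaced sample of up to `max` of them.
function deterministicSample(urls: string[], max: number): string[] {
  const sorted = [...urls].sort();
  if (sorted.length <= max) return sorted;
  const step = sorted.length / max; // spread picks across the full list
  const sample: string[] = [];
  for (let i = 0; i < max; i++) {
    sample.push(sorted[Math.floor(i * step)]);
  }
  return sample;
}
```

Because the result depends only on the sorted URL set, two runs over the same site select the same pages, which is what makes this strategy suitable for CI.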
```shell
# Reproducible runs for CI or iteration (same pages every time)
afdocs check https://docs.example.com --sampling deterministic

# Check a single page without any discovery
afdocs check https://docs.example.com/api/auth --sampling none

# Check a single page with specific checks
afdocs check https://docs.example.com/api/auth --sampling none --checks page-size-html,redirect-behavior
```

Exit codes: `0` if all checks pass or warn, `1` if any check fails.
```typescript
import { runChecks, createContext, getCheck } from 'afdocs';

// Run all checks
const report = await runChecks('https://docs.example.com');

// Run a single check
const ctx = createContext('https://docs.example.com');
const check = getCheck('llms-txt-exists')!;
const result = await check.run(ctx);
```

afdocs includes vitest helpers so you can add agent-friendliness checks to your docs site's CI pipeline. For a ready-to-copy setup with a GitHub Actions workflow, see the `examples/` directory.
Install afdocs and vitest:

```shell
npm install -D afdocs vitest
```

Create `agent-docs.config.yml` in your project root (or a `tests/` subdirectory):

```yaml
url: https://docs.example.com
```

Create a test file:

```typescript
import { describeAgentDocsPerCheck } from 'afdocs/helpers';

describeAgentDocsPerCheck();
```

Run it:

```shell
npx vitest run agent-docs.test.ts
```

Each check appears as its own test in the output, so you can see exactly what passed, warned, failed, or was skipped:
```
✓ Agent-Friendly Documentation > llms-txt-exists
✓ Agent-Friendly Documentation > llms-txt-valid
✓ Agent-Friendly Documentation > llms-txt-size
× Agent-Friendly Documentation > markdown-url-support
↓ Agent-Friendly Documentation > page-size-markdown
```
Checks that fail cause the test to fail. Checks that warn still pass (they're informational). Checks skipped due to unmet dependencies or config filtering show as skipped.
If your platform doesn't support certain checks (for example, you can't serve markdown), you can limit which checks run via the config:
```yaml
url: https://docs.example.com
checks:
  - llms-txt-exists
  - llms-txt-valid
  - llms-txt-size
  - http-status-codes
  - auth-gate-detection
```

Only the listed checks will run. The rest show as skipped in the test output.
The helpers look for `agent-docs.config.yml` (or `.yaml`) starting from `process.cwd()` and walking up the directory tree, so the config works whether your test file is at the project root or in a subdirectory. You can also pass an explicit directory:

```typescript
describeAgentDocsPerCheck(__dirname);
```

The helpers set a 120-second timeout on the check run automatically. No vitest timeout configuration is needed.
If you don't need per-check granularity, `describeAgentDocs` provides a simpler two-test suite (one to run checks, one to assert no failures):

```typescript
import { describeAgentDocs } from 'afdocs/helpers';

describeAgentDocs();
```

For full control, use the programmatic API directly:
```typescript
import { createContext, getCheck } from 'afdocs';
import { describe, it, expect } from 'vitest';

describe('agent-friendliness', () => {
  it('has a valid llms.txt', async () => {
    const ctx = createContext('https://docs.example.com');
    const check = getCheck('llms-txt-exists')!;
    const result = await check.run(ctx);
    expect(result.status).toBe('pass');
  });
});
```

22 checks across 8 categories.
| Check | Description |
|---|---|
| `llms-txt-exists` | Whether llms.txt is discoverable at candidate locations |
| `llms-txt-valid` | Whether llms.txt follows the llmstxt.org structure |
| `llms-txt-size` | Whether llms.txt fits within agent truncation limits |
| `llms-txt-links-resolve` | Whether URLs in llms.txt return 200 |
| `llms-txt-links-markdown` | Whether URLs in llms.txt point to markdown content |

| Check | Description |
|---|---|
| `markdown-url-support` | Whether .md URL variants return markdown |
| `content-negotiation` | Whether the server honors Accept: text/markdown |

| Check | Description |
|---|---|
| `rendering-strategy` | Whether pages contain server-rendered content or are SPA shells |
| `page-size-markdown` | Character count when served as markdown |
| `page-size-html` | Character count of HTML and post-conversion size |
| `content-start-position` | How far into the response actual content begins |

| Check | Description |
|---|---|
| `tabbed-content-serialization` | Whether tabbed content creates oversized output |
| `section-header-quality` | Whether headers in tabbed sections include context |
| `markdown-code-fence-validity` | Whether markdown has unclosed code fences |

| Check | Description |
|---|---|
| `http-status-codes` | Whether error pages return correct status codes |
| `redirect-behavior` | Whether redirects are same-host HTTP redirects |

| Check | Description |
|---|---|
| `llms-txt-directive` | Whether pages include a directive pointing to llms.txt |

| Check | Description |
|---|---|
| `llms-txt-freshness` | Whether llms.txt reflects current site state |
| `markdown-content-parity` | Whether markdown and HTML versions match |
| `cache-header-hygiene` | Whether cache headers allow timely updates |

| Check | Description |
|---|---|
| `auth-gate-detection` | Whether documentation pages require authentication to access content |
| `auth-alternative-access` | Whether auth-gated sites provide alternative access paths for agents |
Some checks depend on others. If a dependency doesn't pass, the dependent check is skipped automatically.
- `llms-txt-valid`, `llms-txt-size`, `llms-txt-links-resolve`, `llms-txt-links-markdown` require `llms-txt-exists`
- `page-size-markdown` requires `markdown-url-support` or `content-negotiation`
- `section-header-quality` requires `tabbed-content-serialization`
- `markdown-code-fence-validity` requires `markdown-url-support` or `content-negotiation`
- `llms-txt-freshness` requires `llms-txt-exists`
- `markdown-content-parity` requires `markdown-url-support` or `content-negotiation`
- `auth-alternative-access` requires `auth-gate-detection` (warn or fail)
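The skip rule can be sketched as a small predicate. The `Status`, `Dependency`, and `shouldSkip` names are hypothetical (not the afdocs API), but the accepted-status sets mirror the dependency list: most dependencies must pass, while `auth-alternative-access` also accepts a warn or fail from `auth-gate-detection`:

```typescript
type Status = 'pass' | 'warn' | 'fail' | 'skip';

// A dependent check runs when at least one listed dependency finished
// with an accepted status; otherwise the dependent check is skipped.
interface Dependency {
  requires: string[]; // any one of these check IDs can satisfy it
  accept: Status[];   // statuses that count as satisfied
}

function shouldSkip(dep: Dependency, results: Map<string, Status>): boolean {
  return !dep.requires.some((id) => {
    const status = results.get(id);
    return status !== undefined && dep.accept.includes(status);
  });
}
```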
afdocs makes HTTP requests to the sites it checks. It enforces delays between requests (200ms default), caps concurrent connections, and honors Retry-After headers. The goal is to help documentation teams improve agent accessibility, not to load-test their infrastructure.
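That pacing amounts to a small decision rule per response. The helper below is an assumed illustration of such pacing, not afdocs' actual scheduler; it handles only the numeric-seconds form of `Retry-After` (the header can also carry an HTTP-date):

```typescript
// Decide how long to wait before the next request: honor a numeric
// Retry-After on a 429 response, otherwise use the fixed inter-request delay.
function nextDelayMs(
  status: number,
  retryAfter: string | null,
  defaultDelayMs = 200,
): number {
  const parsed = retryAfter === null ? NaN : Number(retryAfter);
  if (status === 429 && !Number.isNaN(parsed)) {
    return parsed * 1000; // numeric Retry-After values are in seconds
  }
  return defaultDelayMs;
}
```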
MIT