How to Make Your Documentation AI Readable (A Practical Guide)
Your docs will be read by AI agents more than humans. Here's how to structure llms.txt, serve markdown versions, and actually get found by AI tools.
TL;DR
Here's something that's easy to miss: your documentation is increasingly being read by AI, not humans. When developers ask Claude about your API, when Cursor tries to understand your SDK, when Perplexity answers questions about your product, they're pulling from your docs. If your docs aren't AI readable, you're invisible to a growing chunk of your audience.
This isn't theoretical. We've been talking to teams revamping their documentation, and the question keeps coming up: "How do we make our docs LLM friendly?"
This post is the answer. We'll cover:
- The files AI tools look for (llms.txt, llms-full.txt, markdown versions)
- How to structure them properly
- What actually matters vs what's just noise
- A framework for evaluating your own docs
Let's get into it.
Why This Matters Now
The shift happened faster than most teams realized. A year ago, developers read documentation directly. Now they ask AI first.
When someone types "how do I authenticate with [your product]" into Claude or ChatGPT, the AI needs to find and understand your docs. If it can't:
- It hallucinates an answer (bad for everyone)
- It says "I don't know" (you lose the developer)
- It finds a competitor's docs instead (really bad)
AI readability isn't a nice to have anymore. It's table stakes for developer tools.
The Three Layers of AI Readable Docs
There are three things you can do. Here's how they compare on effort and impact:
| Layer | What It Is | Effort | Impact |
|---|---|---|---|
| 1. Markdown versions | Serve .md version of each page | Medium | High |
| 2. llms.txt | Summary file for AI to understand your site | Low | Medium |
| 3. llms-full.txt | Complete content in one file | Medium | High |
Let's break each one down.
Layer 1: Markdown Versions of Every Page
This is the foundation. AI tools struggle with HTML. They struggle more with JavaScript rendered content. They do great with plain markdown.
The pattern: For every documentation page, serve a markdown version at the same URL with .md appended.
```
https://docs.yoursite.com/api/auth
https://docs.yoursite.com/api/auth.md   ← AI readable version
```
For URLs without file extensions, append index.html.md:
```
https://docs.yoursite.com/guides/
https://docs.yoursite.com/guides/index.html.md
```
Why This Works
When AI agents browse documentation, many request markdown by default via the Accept header. If you serve markdown, they get clean content. If you don't, they get a mess of HTML, navigation, cookie banners, and ads.
One team I talked to took a different approach: they check the user agent and redirect AI crawlers to the markdown version automatically. Same result, cleaner URLs.
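Here's a minimal sketch of that user agent approach, assuming an Express server. The crawler names below are common AI bot user agents, but treat the list as an assumption and verify against your own access logs before relying on it:

```ts
import express from "express";

const app = express();

// Hypothetical list of substrings seen in AI crawler user agents.
// Check your own access logs before trusting any of these.
const AI_CRAWLER_HINTS = ["GPTBot", "ClaudeBot", "PerplexityBot"];

app.use((req, res, next) => {
  const ua = req.get("user-agent") ?? "";
  const looksLikeAICrawler = AI_CRAWLER_HINTS.some((hint) => ua.includes(hint));

  // Redirect AI crawlers to the markdown version of the same page;
  // humans continue through the normal HTML pipeline.
  if (looksLikeAICrawler && !req.path.endsWith(".md")) {
    const target = req.path.endsWith("/")
      ? `${req.path}index.html.md`
      : `${req.path}.md`;
    return res.redirect(302, target);
  }
  next();
});

// ...static/HTML handlers for everyone else
app.listen(3000);
```

User agent sniffing is inherently brittle, so treat this as a convenience on top of serving .md URLs, not a replacement for them.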
Implementation
If you're using a docs platform like Mintlify, GitBook, or Fumadocs (I shouldn't mention docsalot here, but it should be obvious), this is often handled automatically. Check your platform's settings.
If you're building with a static site generator, add a build step that outputs markdown versions alongside your HTML. Here's the rough logic:
```js
// Pseudocode for a build script
for (const page of allDocPages) {
  // Build HTML version (normal)
  buildHTML(page, `${page.path}/index.html`)

  // Build markdown version
  fs.writeFile(`${page.path}/index.html.md`, page.markdownContent)
}
```
Layer 2: The llms.txt File
The llms.txt proposal is simple: put a markdown file at /llms.txt that summarizes your site for AI.
Think of it as a table of contents written for machines. It tells AI:
- What your project is
- What the key documentation pages are
- Where to find detailed information
The Format
Here's the structure:
```md
# Project Name

> One sentence description of what this does

Optional paragraphs with important context. Things an AI needs
to know before diving into the details.

## Section Name

- [Page Title](https://url): Brief description of what's here

## Another Section

- [Another Page](https://url): What this covers

## Optional

- [Less Important Page](https://url): Secondary content
```
The "Optional" section has special meaning. Content listed there can be skipped if the AI has limited context window space.
A Real Example
Here's what a well structured llms.txt looks like:
```md
# Acme API

> Acme API provides payment processing for SaaS applications.
> RESTful API with SDKs for Python, Node, Ruby, and Go.

Important notes:
- All API calls require authentication via Bearer token
- Rate limit is 1000 requests per minute per API key
- Webhooks are required for async payment confirmations
- This is NOT compatible with the legacy v1 API

## Getting Started

- [Quickstart](https://docs.acme.com/quickstart.md): Install SDK, create first payment in 5 minutes
- [Authentication](https://docs.acme.com/auth.md): How to generate and use API keys
- [Error Handling](https://docs.acme.com/errors.md): Common errors and how to fix them

## Core Concepts

- [Payments](https://docs.acme.com/payments.md): Create, capture, and refund payments
- [Customers](https://docs.acme.com/customers.md): Store and manage customer payment methods
- [Webhooks](https://docs.acme.com/webhooks.md): Receive real time payment notifications

## API Reference

- [REST API](https://docs.acme.com/api/reference.md): Full endpoint documentation
- [SDK Reference](https://docs.acme.com/sdk/reference.md): Language specific SDK docs

## Optional

- [Migration from v1](https://docs.acme.com/migration.md): Upgrading from legacy API
- [Changelog](https://docs.acme.com/changelog.md): Version history
```
What Makes This Good
Notice a few things:
- The blockquote tells AI what this is. No ambiguity. Payment processing. SaaS. REST API with multiple SDKs.
- The notes section is explicit. Don't make AI guess. Tell it directly: authentication required, rate limits, webhook requirements, incompatibility warnings.
- Links include the .md extension. They point to the markdown versions, not HTML.
- Descriptions are informative. Not just "Payments" but "Create, capture, and refund payments."
- Optional section is used correctly. Migration and changelog are nice to have, not essential.
Layer 3: llms-full.txt
Some sites also provide llms-full.txt, which contains the complete documentation content in a single file.
When to use this:
- Your total docs are small enough to fit in an LLM context window (under 100k tokens)
- You want AI to have full context without making multiple requests
When to skip this:
- Your docs are huge (megabytes of content)
- Content changes frequently (hard to keep in sync)
If you do create one, it's typically generated by expanding your llms.txt, following each link and concatenating the content.
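If you go this route, here's a rough sketch of that expansion step, assuming your llms.txt already links to .md URLs. The regex, file paths, and separator format are illustrative, not a standard (requires Node 18+ for global fetch):

```ts
import { promises as fs } from "fs";

// Expand llms.txt into llms-full.txt by fetching every linked .md page
// and concatenating the results after the original summary.
async function buildLlmsFull(llmsTxtPath: string, outPath: string) {
  const llmsTxt = await fs.readFile(llmsTxtPath, "utf8");

  // Grab every markdown link target that ends in .md
  const urls = [...llmsTxt.matchAll(/\]\((https?:\/\/[^\s)]+\.md)\)/g)].map(
    (m) => m[1]
  );

  const sections: string[] = [llmsTxt];
  for (const url of urls) {
    const res = await fetch(url);
    if (!res.ok) continue; // skip pages that fail to load
    sections.push(`\n\n---\n\n# Source: ${url}\n\n${await res.text()}`);
  }

  await fs.writeFile(outPath, sections.join(""));
}

buildLlmsFull("public/llms.txt", "public/llms-full.txt").catch(console.error);
```

Run it as part of your docs build so llms-full.txt regenerates whenever the underlying pages change.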
The Structure That Actually Works
After looking at a bunch of implementations and talking to folks who've done this, here's what separates good from mediocre:
1. Be Explicit About Everything
This came up multiple times in conversations: don't trust that AI will understand implicit information.
Bad:
```md
- [Authentication](https://docs.acme.com/auth.md)
```
Good:
```md
- [Authentication](https://docs.acme.com/auth.md): Bearer token auth, API key generation, OAuth2 flow for user actions
```
The second version tells AI exactly what it'll find. No guessing.
2. Use Synonyms
If people might search for the same thing with different terms, include both.
```md
> Acme provides payments (also known as: billing, checkout,
> payment processing, payment gateway)
```
This seems unnecessary. SEO folks will tell you it's not needed in 2026. Based on testing, it still helps with AI retrieval.
3. Add Section Descriptions
Don't just list links. Add a sentence explaining what the section covers:
```md
## API Reference

The complete REST API documentation. Includes request/response
examples for every endpoint.

- [Payments API](https://...): Create and manage payments
- [Customers API](https://...): Customer and payment method management
```
4. Include What Your Product Integrates With
Integrations are high value content. Be explicit:
```md
## Integrations

Works with these platforms out of the box:
- Stripe (payment fallback)
- Segment (analytics)
- Slack (notifications)

- [Stripe Integration](https://...): How to use Stripe as backup processor
- [Segment Integration](https://...): Send payment events to Segment
```
5. Front Load the Important Stuff
The blockquote and opening paragraphs are what AI reads first. Put your most important information there.
What belongs in the opening:
- What the product does (one sentence)
- Key constraints or requirements
- Common misconceptions to avoid
- Version or compatibility information
Beyond llms.txt: Other AI Readability Wins
The file is just part of the story. Here's what else helps:
Serve Markdown on Request
Check if the request accepts text/markdown and serve markdown content:
```js
// Pseudocode
if (request.headers.accept.includes('text/markdown')) {
  return serveMarkdown(page)
} else {
  return serveHTML(page)
}
```
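A slightly more concrete version of the same idea, assuming an Express app with pre-built .md files on disk; the dist/ layout follows the build-script convention from Layer 1 and is just an example:

```ts
import express from "express";
import { promises as fs } from "fs";
import path from "path";

const app = express();

// Content negotiation: clients whose Accept header prefers text/markdown
// get the pre-built .md file; everyone else falls through to HTML.
app.use(async (req, res, next) => {
  const preferred = req.accepts(["html", "text/markdown"]);
  if (preferred !== "text/markdown") return next();

  const mdPath = path.join("dist", req.path, "index.html.md");
  try {
    res.type("text/markdown").send(await fs.readFile(mdPath, "utf8"));
  } catch {
    next(); // no markdown version on disk, serve HTML instead
  }
});

// ...normal static/HTML handlers go here
app.listen(3000);
```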
Add "Open in LLM" Links
Some docs sites include links that open a page directly in an AI chat context. The URL pattern:
```
https://docs.yoursite.com/quickstart?openInLLM=true
```
When clicked, it opens the page content in an AI assistant. This is more of a UX feature than a technical requirement, but it signals that you're thinking about AI consumption.
Keep Content Focused
AI has context limits. Pages that try to cover everything are harder to use than focused pages that cover one thing well.
If a page is over 3000 words, consider splitting it.
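If you want a quick way to find candidates for splitting, here's a small sketch that walks a docs directory and flags long markdown pages; the docs/ path and 3000-word threshold are just examples:

```ts
import { promises as fs } from "fs";
import path from "path";

// Recursively flag markdown pages over a word-count threshold.
async function flagLongPages(dir: string, maxWords = 3000) {
  for (const entry of await fs.readdir(dir, { withFileTypes: true })) {
    const fullPath = path.join(dir, entry.name);
    if (entry.isDirectory()) {
      await flagLongPages(fullPath, maxWords);
    } else if (entry.name.endsWith(".md")) {
      const words = (await fs.readFile(fullPath, "utf8")).split(/\s+/).length;
      if (words > maxWords) {
        console.log(`${fullPath}: ${words} words, consider splitting`);
      }
    }
  }
}

flagLongPages("docs").catch(console.error);
```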
Use Clear Headings
AI uses headings for navigation. Vague headings hurt:
Bad: "Overview", "Details", "More Information"
Good: "How Authentication Works", "API Rate Limits", "Handling Webhook Failures"
How to Evaluate Your Docs (The DocsAgent Score)
We've been thinking about how to measure AI readability. Here's our framework. We're calling it the DocsAgent Score because these are the things that matter when an AI agent tries to use your docs. (We're building a free tool to check your score automatically—more on that soon.)
Availability (0-25 points)
| Check | Points |
|---|---|
| llms.txt exists at /llms.txt | 10 |
| Markdown versions available (.md URLs) | 10 |
| llms-full.txt exists | 5 |
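The Availability checks are easy to script yourself. Here's a minimal sketch (requires Node 18+ for global fetch); the URLs reuse the Acme examples from earlier and are placeholders:

```ts
// Score the Availability layer: llms.txt, a sample .md page, llms-full.txt.
async function checkAvailability(baseUrl: string, samplePageUrl: string) {
  const exists = async (url: string) => {
    try {
      return (await fetch(url)).ok;
    } catch {
      return false;
    }
  };

  let score = 0;
  if (await exists(`${baseUrl}/llms.txt`)) score += 10;     // llms.txt present
  if (await exists(`${samplePageUrl}.md`)) score += 10;     // markdown versions resolve
  if (await exists(`${baseUrl}/llms-full.txt`)) score += 5; // llms-full.txt present

  console.log(`Availability: ${score}/25`);
  return score;
}

checkAvailability(
  "https://docs.acme.com",
  "https://docs.acme.com/quickstart"
).catch(console.error);
```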
Structure (0-25 points)
| Check | Points |
|---|---|
| llms.txt has proper format (H1, blockquote, sections) | 10 |
| Links include descriptions | 5 |
| Sections are logically organized | 5 |
| Optional section used appropriately | 5 |
Content Quality (0-25 points)
| Check | Points |
|---|---|
| Opening describes what the product does | 5 |
| Key constraints/requirements listed | 5 |
| Integrations documented | 5 |
| Error handling documented | 5 |
| Examples included | 5 |
Accessibility (0-25 points)
| Check | Points |
|---|---|
| Markdown versions load correctly | 10 |
| Fast page load (<2 seconds) | 5 |
| No JavaScript required for content | 5 |
| Clean content (no navigation/ads in markdown) | 5 |
Score Interpretation:
- 80-100: Excellent. AI agents can use your docs effectively.
- 60-79: Good. Most AI tools will work, some rough edges.
- 40-59: Fair. Hit or miss. AI will struggle with some queries.
- Below 40: Poor. You're probably invisible to AI tools.
The Bottom Line
Your docs are being read by AI. That's not a prediction. It's happening now.
The good news: making docs AI readable isn't hard. It's the same stuff that makes docs good for humans (clear structure, explicit explanations, focused content) plus a few specific files (llms.txt, markdown versions).
The teams that do this well will show up when developers ask AI for help. The teams that don't will wonder why their docs feel invisible.
Don't be invisible.
If you've implemented llms.txt or AI readable docs and have lessons to share, drop us a note. We're collecting examples for a future post on what's working in the wild.