
How to Make Your Documentation AI Readable (A Practical Guide)

Your docs will be read by AI agents more than by humans. Here's how to structure llms.txt, serve markdown versions, and actually get found by AI tools.

Faizan Khan · 12 min read

TL;DR

Here's something that's easy to miss: your documentation is increasingly being read by AI, not humans. When developers ask Claude about your API, when Cursor tries to understand your SDK, when Perplexity answers questions about your product, they're pulling from your docs. If your docs aren't AI readable, you're invisible to a growing chunk of your audience.

This isn't theoretical. We've been talking to teams revamping their documentation, and the question keeps coming up: "How do we make our docs LLM friendly?"

This post is the answer. We'll cover:

  • The files AI tools look for (llms.txt, llms-full.txt, markdown versions)
  • How to structure them properly
  • What actually matters vs what's just noise
  • A framework for evaluating your own docs

Let's get into it.


Why This Matters Now

The shift happened faster than most teams realized. A year ago, developers read documentation directly. Now they ask AI first.

When someone types "how do I authenticate with [your product]" into Claude or ChatGPT, the AI needs to find and understand your docs. If it can't:

  • It hallucinates an answer (bad for everyone)
  • It says "I don't know" (you lose the developer)
  • It finds a competitor's docs instead (really bad)

AI readability isn't a nice-to-have anymore. It's table stakes for developer tools.


The Three Layers of AI Readable Docs

There are three things you can do, in order of impact:

Layer                 | What It Is                                    | Effort | Impact
1. Markdown versions  | Serve .md version of each page                | Medium | High
2. llms.txt           | Summary file for AI to understand your site   | Low    | Medium
3. llms-full.txt      | Complete content in one file                  | Medium | High

Let's break each one down.


Layer 1: Markdown Versions of Every Page

This is the foundation. AI tools struggle with HTML. They struggle even more with JavaScript-rendered content. They do great with plain markdown.

The pattern: For every documentation page, serve a markdown version at the same URL with .md appended.

Text
https://docs.yoursite.com/api/auth
https://docs.yoursite.com/api/auth.md   ← AI readable version

For URLs without file extensions, append index.html.md:

Text
https://docs.yoursite.com/guides/
https://docs.yoursite.com/guides/index.html.md

Why This Works

When AI agents browse documentation, many request markdown by default via the Accept header. If you serve markdown, they get clean content. If you don't, they get a mess of HTML, navigation, cookie banners, and ads.

One team I talked to took a different approach: they check the user agent and redirect AI crawlers to the markdown version automatically. Same result, cleaner URLs.
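
If you want to try the user-agent approach, here's a minimal sketch as Express-style middleware. The crawler name patterns and the markdownPathFor helper are illustrative assumptions, not a standard list; adjust them to the agents you actually see in your logs.

JavaScript
// Sketch: redirect known AI crawlers to the markdown version of a page.
// AI_AGENTS and markdownPathFor() are assumptions to adapt for your site.
const AI_AGENTS = /gptbot|claudebot|perplexitybot|anthropic|openai/i

app.use((req, res, next) => {
  const ua = req.headers['user-agent'] || ''
  if (AI_AGENTS.test(ua) && !req.path.endsWith('.md')) {
    // Same content, different representation: send crawlers to the .md URL
    return res.redirect(302, markdownPathFor(req.path))
  }
  next()
})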

Implementation

If you're using a docs platform like Mintlify, GitBook, or Fumadocs (I shouldn't plug docsalot here, but it should be obvious), this is often handled automatically. Check your platform's settings.

If you're building with a static site generator, add a build step that outputs markdown versions alongside your HTML. Here's the rough logic:

JavaScript
// Pseudocode for a build script
import fs from 'node:fs'

for (const page of allDocPages) {
  // Build the HTML version (normal)
  buildHTML(page, `${page.path}/index.html`)

  // Also write the markdown version alongside it
  fs.writeFileSync(`${page.path}/index.html.md`, page.markdownContent)
}

Layer 2: The llms.txt File

The llms.txt proposal is simple: put a markdown file at /llms.txt that summarizes your site for AI.

Think of it as a table of contents written for machines. It tells AI:

  • What your project is
  • What the key documentation pages are
  • Where to find detailed information

The Format

Here's the structure:

Markdown
# Project Name

> One sentence description of what this does

Optional paragraphs with important context. Things an AI needs
to know before diving into the details.

## Section Name

- [Page Title](https://url): Brief description of what's here

## Another Section

- [Another Page](https://url): What this covers

## Optional

- [Less Important Page](https://url): Secondary content

The "Optional" section has special meaning. Content listed there can be skipped if the AI has limited context window space.

A Real Example

Here's what a well structured llms.txt looks like:

Markdown
# Acme API

> Acme API provides payment processing for SaaS applications.
> RESTful API with SDKs for Python, Node, Ruby, and Go.

Important notes:
- All API calls require authentication via Bearer token
- Rate limit is 1000 requests per minute per API key
- Webhooks are required for async payment confirmations
- This is NOT compatible with the legacy v1 API

## Getting Started

- [Quickstart](https://docs.acme.com/quickstart.md): Install SDK, create first payment in 5 minutes
- [Authentication](https://docs.acme.com/auth.md): How to generate and use API keys
- [Error Handling](https://docs.acme.com/errors.md): Common errors and how to fix them

## Core Concepts

- [Payments](https://docs.acme.com/payments.md): Create, capture, and refund payments
- [Customers](https://docs.acme.com/customers.md): Store and manage customer payment methods
- [Webhooks](https://docs.acme.com/webhooks.md): Receive real time payment notifications

## API Reference

- [REST API](https://docs.acme.com/api/reference.md): Full endpoint documentation
- [SDK Reference](https://docs.acme.com/sdk/reference.md): Language specific SDK docs

## Optional

- [Migration from v1](https://docs.acme.com/migration.md): Upgrading from legacy API
- [Changelog](https://docs.acme.com/changelog.md): Version history

What Makes This Good

Notice a few things:

  1. The blockquote tells AI what this is. No ambiguity. Payment processing. SaaS. REST API with multiple SDKs.
  2. The notes section is explicit. Don't make AI guess. Tell it directly: authentication required, rate limits, webhook requirements, incompatibility warnings.
  3. Links include .md extension. Points to the markdown versions, not HTML.
  4. Descriptions are informative. Not just "Payments" but "Create, capture, and refund payments."
  5. Optional section is used correctly. Migration and changelog are nice to have, not essential.

Layer 3: llms-full.txt

Some sites also provide llms-full.txt, which contains the complete documentation content in a single file.

When to use this:

  • Your total docs are small enough to fit in an LLM context window (under 100k tokens)
  • You want AI to have full context without making multiple requests

When to skip this:

  • Your docs are huge (megabytes of content)
  • Content changes frequently (hard to keep in sync)

If you do create one, it's typically generated by expanding your llms.txt, following each link and concatenating the content.
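
Here's a rough sketch of that expansion step, assuming Node 18+ (for the built-in fetch) and that every linked page already serves markdown; the link-matching regex is deliberately simple and may need tuning for your llms.txt.

JavaScript
// Sketch: build llms-full.txt by fetching every .md link listed in llms.txt.
// Assumes Node 18+ and that each linked URL returns plain markdown.
import fs from 'node:fs/promises'

const index = await fs.readFile('llms.txt', 'utf8')

// Grab every markdown link target from the index
const urls = [...index.matchAll(/\]\((https?:\/\/\S+\.md)\)/g)].map(m => m[1])

let full = index
for (const url of urls) {
  const res = await fetch(url, { headers: { Accept: 'text/markdown' } })
  full += `\n\n---\n\n${await res.text()}`
}

await fs.writeFile('llms-full.txt', full)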


The Structure That Actually Works

After looking at a bunch of implementations and talking to folks who've done this, here's what separates good from mediocre:

1. Be Explicit About Everything

This came up multiple times in conversations: don't trust that AI will understand implicit information.

Bad:

Markdown
- [Authentication](https://docs.acme.com/auth.md)

Good:

Markdown
- [Authentication](https://docs.acme.com/auth.md): Bearer token auth, API key generation, OAuth2 flow for user actions

The second version tells AI exactly what it'll find. No guessing.

2. Use Synonyms

If people might search for the same thing with different terms, include both.

Markdown
> Acme provides payments (also known as: billing, checkout,
> payment processing, payment gateway)

This might seem unnecessary, and SEO folks will tell you it's not needed in 2026, but based on our testing it still helps with AI retrieval.

3. Add Section Descriptions

Don't just list links. Add a sentence explaining what the section covers:

Markdown
## API Reference

The complete REST API documentation. Includes request/response
examples for every endpoint.

- [Payments API](https://...): Create and manage payments
- [Customers API](https://...): Customer and payment method management

4. Include What Your Product Integrates With

Integrations are high value content. Be explicit:

Markdown
## Integrations

Works with these platforms out of the box:
- Stripe (payment fallback)
- Segment (analytics)
- Slack (notifications)

- [Stripe Integration](https://...): How to use Stripe as backup processor
- [Segment Integration](https://...): Send payment events to Segment

5. Front Load the Important Stuff

The blockquote and opening paragraphs are what AI reads first. Put your most important information there.

What belongs in the opening:

  • What the product does (one sentence)
  • Key constraints or requirements
  • Common misconceptions to avoid
  • Version or compatibility information

Beyond llms.txt: Other AI Readability Wins

The file is just part of the story. Here's what else helps:

Serve Markdown on Request

Check if the request accepts text/markdown and serve markdown content:

JavaScript
// Pseudocode for content negotiation
if ((request.headers.accept || '').includes('text/markdown')) {
  return serveMarkdown(page)
} else {
  return serveHTML(page)
}
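
A quick way to check the negotiation from the client side, assuming Node 18+ and that your server sets a distinct Content-Type for each representation:

JavaScript
// Request the same page with different Accept headers and compare responses
const url = 'https://docs.yoursite.com/api/auth'

const asMarkdown = await fetch(url, { headers: { Accept: 'text/markdown' } })
const asHtml = await fetch(url, { headers: { Accept: 'text/html' } })

console.log('markdown request got:', asMarkdown.headers.get('content-type'))
console.log('html request got:', asHtml.headers.get('content-type'))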

Add "Open in LLM" Links

Some docs sites include links that open a page directly in an AI chat context. The URL pattern:

Text
https://docs.yoursite.com/quickstart?openInLLM=true

When clicked, it opens the page content in an AI assistant. This is more of a UX feature than a technical requirement, but it signals that you're thinking about AI consumption.

Keep Content Focused

AI has context limits. Pages that try to cover everything are harder to use than focused pages that cover one thing well.

If a page is over 3000 words, consider splitting it.

Use Clear Headings

AI uses headings for navigation. Vague headings hurt:

Bad: "Overview", "Details", "More Information"

Good: "How Authentication Works", "API Rate Limits", "Handling Webhook Failures"


How to Evaluate Your Docs (The DocsAgent Score)

We've been thinking about how to measure AI readability. Here's our framework. We're calling it the DocsAgent Score because these are the things that matter when an AI agent tries to use your docs. (We're building a free tool to check your score automatically—more on that soon.)

Availability (0-25 points)

Check                                   | Points
llms.txt exists at /llms.txt            | 10
Markdown versions available (.md URLs)  | 10
llms-full.txt exists                    | 5
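
If you want a rough self-check before any tool exists, something like this covers the availability items, assuming Node 18+; the base URL and sample page are placeholders for your own site.

JavaScript
// Rough availability self-check (Node 18+). Base URL and sample page are placeholders.
const base = 'https://docs.yoursite.com'

const checks = [
  { name: 'llms.txt exists at /llms.txt', url: `${base}/llms.txt`, points: 10 },
  { name: 'markdown version of a page', url: `${base}/quickstart.md`, points: 10 },
  { name: 'llms-full.txt exists', url: `${base}/llms-full.txt`, points: 5 },
]

let score = 0
for (const check of checks) {
  const ok = (await fetch(check.url)).ok
  if (ok) score += check.points
  console.log(`${ok ? 'PASS' : 'FAIL'}  ${check.name}`)
}
console.log(`Availability: ${score}/25`)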

Structure (0-25 points)

Check                                                  | Points
llms.txt has proper format (H1, blockquote, sections)  | 10
Links include descriptions                             | 5
Sections are logically organized                       | 5
Optional section used appropriately                    | 5

Content Quality (0-25 points)

Check                                    | Points
Opening describes what the product does  | 5
Key constraints/requirements listed      | 5
Integrations documented                  | 5
Error handling documented                | 5
Examples included                        | 5

Accessibility (0-25 points)

Check                                          | Points
Markdown versions load correctly               | 10
Fast page load (<2 seconds)                    | 5
No JavaScript required for content             | 5
Clean content (no navigation/ads in markdown)  | 5

Score Interpretation:

  • 80-100: Excellent. AI agents can use your docs effectively.
  • 60-79: Good. Most AI tools will work, some rough edges.
  • 40-59: Fair. Hit or miss. AI will struggle with some queries.
  • Below 40: Poor. You're probably invisible to AI tools.

The Bottom Line

Your docs are being read by AI. That's not a prediction. It's happening now.

The good news: making docs AI readable isn't hard. It's the same stuff that makes docs good for humans (clear structure, explicit explanations, focused content) plus a few specific files (llms.txt, markdown versions).

The teams that do this well will show up when developers ask AI for help. The teams that don't will wonder why their docs feel invisible.

Don't be invisible.


If you've implemented llms.txt or AI readable docs and have lessons to share, drop us a note. We're collecting examples for a future post on what's working in the wild.