Sentry Already Optimizes for AI Agents — Does Your Site?
David Cramer, co-founder of Sentry, just published a post on how they serve completely different content to AI agents. It's not llms.txt. It's not a new standard. It's HTTP content negotiation — and it works right now.
When an AI agent sends an Accept: text/markdown HTTP header, Sentry serves it optimized markdown instead of HTML. Their docs, app, and side projects all do this. The result: agents get exactly what they need — no navigation bloat, no JavaScript, no auth walls. This is AEO in production, not theory.
What Sentry Is Actually Doing
When you visit docs.sentry.io in a browser, you get the normal docs site — sidebar, navigation, JavaScript interactivity, the works. When an AI agent hits the same URL with Accept: text/markdown in its request headers, it gets something completely different: pure markdown with a frontmatter title, clean content hierarchy, and zero browser cruft.
Try it yourself:
curl -H "Accept: text/markdown" https://docs.sentry.io/
You'll get back structured markdown starting with the page title, a description of Sentry, key features as bullet points, and a sitemap of links. No <nav>, no <script>, no CSS. Just the content an agent actually needs.
This isn't a gimmick. It solves three real problems:
- Token savings. HTML pages with navigation, scripts, and layout markup burn through an agent's context window. Markdown is typically 5-10x smaller for the same information.
- Accuracy. Agents parsing HTML have to guess what's content vs. what's chrome. With markdown, there's no guessing — everything returned is content.
- Actionability. Sentry's markdown responses are structured for agents. Index pages become sitemaps. Docs pages become step-by-step instructions. Auth walls become pointers to MCP servers and APIs.
The Auth Wall Problem (Solved Elegantly)
Here's the clever part. When an agent hits sentry.io — the main app, which requires login — serving an auth page is useless. An agent can't log in through a browser form. So instead, Sentry responds with:
# Sentry
You've hit the web UI. It's HTML meant for humans, not machines.
Here's what you actually want:
## MCP Server (recommended)
The fastest way to give your agent structured access to Sentry.
{
  "mcpServers": {
    "sentry": {
      "url": "https://mcp.sentry.dev/mcp"
    }
  }
}
## CLI
Query issues and analyze errors from the terminal.
https://cli.sentry.dev
Instead of a dead end, the agent gets a map to every programmatic interface Sentry offers. MCP server, CLI, API — with setup instructions. The agent immediately knows how to interact with Sentry productively.
This is what good AEO looks like. Not just "can agents read your content" but "when agents arrive, do they know what to do next?"
Why Not llms.txt?
David Cramer's take is direct:
"llms.txt was a valuable idea, but it was the wrong implementation."
The insight behind llms.txt — that agents need a machine-readable overview of your site — is correct. But a single static file at your domain root has limitations:
- It's one-size-fits-all. Every page on your site is different. An agent visiting your docs needs different context than one hitting your pricing page. A static file can't adapt.
- It doesn't scale. For a site with thousands of pages, a single llms.txt either becomes massive (defeating the purpose) or stays too sparse to be useful.
- It's a new standard that requires adoption. Content negotiation via Accept headers is an existing HTTP standard. Every web server already supports it. No new spec needed.
Content negotiation gives you per-page optimization. The agent visits any URL, sends Accept: text/markdown, and gets a version of that specific page optimized for machine consumption. Same URL, different representation based on the client's needs. That's how HTTP was designed to work.
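To make the mechanics concrete, here's a minimal sketch (my illustration, not Sentry's actual code) of deciding whether a request prefers markdown. It honors the q-values that HTTP content negotiation defines, with one simplification: wildcard types only count toward HTML, on the assumption that a generic client wants the default representation.

```python
def prefers_markdown(accept_header: str) -> bool:
    """Return True if text/markdown outranks text/html in an Accept header."""
    scores = {}
    for part in accept_header.split(","):
        fields = part.strip().split(";")
        media_type = fields[0].strip().lower()
        q = 1.0  # default quality value per the HTTP spec
        for param in fields[1:]:
            name, _, value = param.strip().partition("=")
            if name.strip() == "q":
                try:
                    q = float(value)
                except ValueError:
                    pass  # malformed q-value: keep the default
        scores[media_type] = max(q, scores.get(media_type, 0.0))
    md = scores.get("text/markdown", 0.0)
    # Simplification: */* falls back to the HTML side, so generic
    # clients get the default representation.
    html = scores.get("text/html", scores.get("*/*", 0.0))
    return md > html

prefers_markdown("text/markdown")                       # agent: markdown wins
prefers_markdown("text/html,*/*;q=0.8")                 # browser: HTML wins
```

The same URL then branches on this check: markdown for agents, the regular HTML response for everyone else.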
Does that mean llms.txt is obsolete? No. They serve different purposes. llms.txt is a discovery mechanism — it tells agents your site exists and what it's about. Content negotiation is a delivery mechanism — it serves optimized content once agents arrive. The best setup uses both: llms.txt for the "what are you?" overview, and content negotiation for every individual page. More on llms.txt here.
The Three Optimization Axes
Cramer identifies three areas where agent-served content should differ from browser content:
1. Order of Content
Agents (and the LLMs behind them) are known to read the beginning of content more carefully than the end. Put the most important information first. For Sentry's docs index, that means leading with "what is Sentry" and "key features" before diving into the sitemap — the opposite of a typical docs sidebar that starts with "Getting Started."
2. Content Size
Frontier models and their agent wrappers have a known behavior: to avoid context bloat, they only read part of files. The first N lines, or bytes, or characters. Your agent-optimized content should front-load the critical information and keep total size manageable. A 50KB HTML page should become a 3KB markdown response.
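One way to enforce that size budget is a truncation guard on the agent-facing response. This is a hypothetical helper (the name and byte limit are my assumptions, not anything Sentry describes) that keeps the front-loaded content and cuts the tail:

```python
def truncate_for_agents(markdown: str, max_bytes: int = 4096) -> str:
    """Keep the front of the content, which agents read most carefully."""
    data = markdown.encode("utf-8")
    if len(data) <= max_bytes:
        return markdown
    # errors="ignore" drops any multi-byte character split by the cut
    head = data[:max_bytes].decode("utf-8", errors="ignore")
    return head + "\n\n(truncated)"
```

Because the critical information comes first, truncation costs the agent little — the tail it loses is the part it was least likely to read anyway.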
3. Depth of Nodes
Deeply nested content (subsection of a subsection of a tab panel) is hard for agents to navigate. Flattening the hierarchy and using clear heading levels (h1 → h2 → h3, no deeper) makes your content parseable in a single pass.
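Flattening can be automated when you generate the markdown. A small sketch (my own helper, under the assumption your agent content is markdown with `#`-style headings) that clamps heading depth to h3:

```python
import re

def clamp_heading_depth(markdown: str, max_level: int = 3) -> str:
    """Rewrite headings deeper than max_level up to that level."""
    def clamp(match: re.Match) -> str:
        hashes, title = match.group(1), match.group(2)
        return "#" * min(len(hashes), max_level) + " " + title
    # MULTILINE so ^ matches the start of every heading line
    return re.sub(r"^(#{1,6})\s+(.*)$", clamp, markdown, flags=re.MULTILINE)

clamp_heading_depth("##### Deeply nested")  # becomes "### Deeply nested"
```

Collapsing h4-h6 into h3 loses some nuance, so it works best when the surrounding sections are already short and self-contained.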
What This Means for Your Site
Sentry isn't a small project. It's one of the most widely used developer tools in the world. When they invest engineering effort into serving different content to AI agents, it's a signal that this matters.
Here's the uncomfortable question: what do agents see when they visit your site?
For most sites, the answer is: the same bloated HTML that browsers get. Navigation bars, cookie banners, JavaScript bundles, footer links — noise that burns through context windows and confuses agent parsing.
Sentry, Cloudflare, Stripe, and other tech leaders are already optimizing for agents. Every month they do and you don't, the AI-driven discovery gap between your site and theirs grows. This isn't a future concern — agents are making tool-selection decisions right now based on what they can read.
How to Start (Today)
You don't need Sentry's engineering team to get the basics right. Here's the priority order:
- Check what agents see now. Run curl -H "Accept: text/markdown" https://yoursite.com and look at the response. If it's the same HTML as a browser gets, you have work to do.
- Add an llms.txt file. Even if content negotiation is the better long-term solution, llms.txt is the easiest starting point. It takes 10 minutes. Here's how.
- Unblock AI crawlers. Check your robots.txt for rules blocking GPTBot, ClaudeBot, or PerplexityBot. If they can't crawl you, you don't exist in AI-generated answers.
- Add structured data. Schema.org JSON-LD on your key pages gives AI systems machine-readable context about your content without any server-side changes.
- Implement content negotiation. If you're on Cloudflare, they have a built-in solution for this. For custom setups, it's middleware that checks the Accept header and returns markdown. Most frameworks make this straightforward.
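The middleware step can be sketched in a few lines. This is an illustrative WSGI wrapper, not a real framework API — the MARKDOWN_PAGES mapping and the plain substring check on the Accept header are deliberate simplifications:

```python
# Pretend per-page markdown, e.g. pre-rendered at build time (hypothetical).
MARKDOWN_PAGES = {
    "/": "# Home\n\nAgent-friendly overview of the site.\n",
}

def markdown_negotiation(app):
    """Wrap a WSGI app; answer with markdown when the client asks for it."""
    def middleware(environ, start_response):
        accept = environ.get("HTTP_ACCEPT", "")
        path = environ.get("PATH_INFO", "/")
        # Simplified check: a production version would parse q-values.
        if "text/markdown" in accept and path in MARKDOWN_PAGES:
            body = MARKDOWN_PAGES[path].encode("utf-8")
            start_response("200 OK", [
                ("Content-Type", "text/markdown; charset=utf-8"),
                ("Content-Length", str(len(body))),
                ("Vary", "Accept"),  # caches must key on the Accept header
            ])
            return [body]
        return app(environ, start_response)  # fall through to normal HTML
    return middleware
```

Note the Vary: Accept header — without it, a shared cache could serve the markdown version to a browser (or vice versa) for the same URL.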
Check Your AEO Score — Free
See how your site handles AI agents across 7 checks — including content negotiation, llms.txt, structured data, and robots.txt rules.
Run Free AEO Check →
The Bigger Picture
We're in the early innings of a fundamental shift in how content gets discovered. For 25 years, the question was "what does Google see when it visits my site?" Now there's a second question: "what do agents see?"
Sentry's implementation isn't the final answer. Cramer himself notes that "how you do that is an ever-evolving subject." But the direction is clear: sites that serve optimized content to agents will be more visible, more useful, and more recommended by AI systems than sites that don't.
The tools to start are available now. Check your site, fix the basics, and iterate. The companies that figure this out early will have a compounding advantage as AI-mediated discovery becomes the norm.