We ran 7 automated AEO checks on the homepages of 20 major tech companies. The average score is 57/110. Only one company scored "Excellent." The company building AI agents? It scored "Poor."
We scanned the homepage of each company using AEO Checker's 7 automated checks, testing for the signals AI agents use to discover, understand, and recommend businesses:
| # | Company | Grade | Score |
|---|---|---|---|
| 1 | sentry.io | Excellent | 88/110 |
| 2 | linear.app | Good | 71/110 |
| 2 | shopify.com | Good | 71/110 |
| 2 | datadog.com | Good | 71/110 |
| 2 | twilio.com | Good | 71/110 |
| 6 | stripe.com | Good | 68/110 |
| 6 | vercel.com | Good | 68/110 |
| 6 | slack.com | Good | 68/110 |
| 9 | github.com | Fair | 63/110 |
| 10 | supabase.com | Fair | 61/110 |
| 11 | notion.so | Fair | 56/110 |
| 11 | zoom.us | Fair | 56/110 |
| 11 | gitlab.com | Fair | 56/110 |
| 14 | hubspot.com | Fair | 51/110 |
| 15 | figma.com | Fair | 49/110 |
| 15 | anthropic.com | Fair | 49/110 |
| 15 | cloudflare.com | Fair | 49/110 |
| 18 | atlassian.com | Poor | 43/110 |
| 19 | openai.com | Poor | 23/110 |
| 20 | hashicorp.com | Critical | 15/110 |
88/110 — the only "Excellent" score. Sentry gets perfect marks on Structured Data (20/20), llms.txt (15/15), Tool/API Description (15/15), and Markdown for Agents (10/10). They've actively invested in AI agent optimization, and it shows. When AI agents need an error tracking tool, Sentry is the one they'll recommend.
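What does a perfect Structured Data score look like in practice? Typically a JSON-LD block in the page head. The snippet below is an illustrative sketch of the kind of markup the check rewards, not Sentry's actual code; every value is a placeholder:

```html
<!-- Illustrative JSON-LD structured data; values are placeholders,
     not Sentry's actual markup. -->
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "SoftwareApplication",
  "name": "Example Error Tracker",
  "url": "https://example.com",
  "applicationCategory": "DeveloperApplication",
  "description": "Error monitoring and tracing for developers.",
  "offers": {
    "@type": "Offer",
    "price": "0",
    "priceCurrency": "USD"
  }
}
</script>
```

Markup like this lets an agent extract what the product is, who it's for, and how it's priced without parsing visual layout.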
OpenAI, the company building the AI agents that crawl the web, scored "Poor" on AI-readiness. Zero structured data. No llms.txt. No content structure. No API description on their homepage. Their only points: robots.txt (8/15) and fast servers (15/15). Anthropic fares slightly better at 49/110, but neither AI company has optimized its own site for the agents it builds.
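For context, the robots.txt check rewards explicitly welcoming AI crawlers rather than relying on the wildcard rule. A minimal sketch (GPTBot and ClaudeBot are real crawler tokens; the allow-everything policy shown is just an example, not a recommendation for every site):

```text
# Illustrative robots.txt that explicitly admits AI crawlers
User-agent: GPTBot
Allow: /

User-agent: ClaudeBot
Allow: /

User-agent: *
Allow: /

Sitemap: https://example.com/sitemap.xml
```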
The top 8 companies are almost all developer-facing: Sentry, Linear, Datadog, Twilio, Vercel, Stripe. They understand API-first thinking and structured data. Meanwhile, enterprise SaaS — Atlassian (43), HubSpot (51), Zoom (56) — lags significantly. The companies that need AI agent visibility most (enterprise B2B) are investing in it least.
Every company scoring "Good" or above has an llms.txt file; none of the companies scoring "Fair" or below does (Cloudflare partly compensates by scoring on markdown negotiation instead). The 15-point llms.txt check is the single biggest differentiator between leaders and laggards. Creating one takes about 30 minutes.
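A minimal llms.txt, following the proposed convention of a title, a one-line summary, and linked sections, might look like this (all names and URLs are illustrative):

```markdown
# Example Co

> One-line summary of what the company does, written for AI agents.

## Docs

- [Getting started](https://example.com/docs/start): Installation and setup
- [API reference](https://example.com/docs/api): REST endpoints and auth

## Optional

- [Blog](https://example.com/blog): Product updates and announcements
```

Served from the site root at `/llms.txt`, this gives an agent a curated map of the site instead of forcing it to crawl and guess.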
Only 4 of 20 companies score any points on the Markdown for Agents check: Sentry (10/10), Cloudflare (8/10), and Linear and Vercel (5/10 each). The other 16 score zero. Cloudflare built this capability for others, but it hasn't been widely adopted yet. This is the biggest untapped opportunity, and one of the easiest to implement.
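One way to test whether a site negotiates markdown for agents is to request the page with an `Accept: text/markdown` header and inspect the content type that comes back. A rough standard-library sketch (the helper names are ours, not AEO Checker's actual implementation):

```python
from urllib.request import Request, urlopen


def is_markdown_response(content_type: str) -> bool:
    """Return True if a Content-Type header value indicates markdown."""
    media_type = content_type.split(";")[0].strip().lower()
    return media_type in ("text/markdown", "text/x-markdown")


def check_markdown_negotiation(url: str) -> bool:
    """Request a page the way an agent might and test the negotiated type.

    Sends Accept: text/markdown; a site that supports content negotiation
    for agents should answer with a markdown body instead of HTML.
    """
    req = Request(url, headers={"Accept": "text/markdown"})
    with urlopen(req, timeout=10) as resp:
        return is_markdown_response(resp.headers.get("Content-Type", ""))


# Usage (hits the network):
#   check_markdown_negotiation("https://example.com/")
```

A site that ignores the `Accept` header and always returns `text/html` would score zero on this check under these assumptions.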
Sentry's 88/110 isn't an accident. They've invested in every layer of AI agent discoverability:
| Check | Sentry (88) | OpenAI (23) | Average |
|---|---|---|---|
| 🧱 Structured Data | 20/20 | 0/20 | 12.8/20 |
| 🤖 robots.txt | 8/15 | 8/15 | 7.7/15 |
| 📄 llms.txt | 15/15 | 0/15 | 7.5/15 |
| 🏗️ Content Structure | 10/20 | 0/20 | 8.1/20 |
| 🔌 Tool/API | 15/15 | 0/15 | 4.6/15 |
| ⚡ Performance | 10/15 | 15/15 | 13.5/15 |
| 📝 Markdown | 10/10 | 0/10 | 1.6/10 |
The pattern is clear: Sentry treats AI agents as first-class users. Their site isn't just for humans — it's structured for machines to read, understand, and recommend. OpenAI's site, by contrast, is purely human-oriented.
If the best tech companies in the world average 52% on AI-readiness, imagine where most businesses stand. The opportunity for early movers is massive.
Most of these changes take less than a day. The companies that act now will be the ones AI agents recommend in 6 months.
Run the same 7 checks on your own website. Free, instant, no signup.
Scan Your Site Now →