The AEO Paradox: AI Companies Are Just as Invisible to AI as Everyone Else

April 4, 2026 · Original data from scanning 36 YC S25 startups · 5 min read

There's a reasonable assumption floating around the AI industry: companies building AI tools must be optimized for AI discovery. They understand the technology. They know how agents parse content. They should be ahead of the curve.

We tested that assumption. It's wrong.

The Experiment

We scanned all 36 companies in Y Combinator's Summer 2025 batch — their newest cohort — using our AEO Checker. The tool measures seven dimensions of AI-readiness: structured data, robots.txt AI directives, llms.txt, content structure, tool descriptions, performance, and markdown-for-agents support.

Then we split the batch in half: 18 companies with "AI" explicitly in their product (AI governance, voice models, agent platforms, ML infrastructure) and 18 without (real estate, collaboration tools, fintech, developer tools).

The Result

AI companies: 43.9 average (out of 110)
Non-AI companies: 44.2 average (out of 110)

The difference is 0.3 points. Statistically meaningless. Companies whose entire business is artificial intelligence are exactly as invisible to AI agents as companies selling rental brokerage services.

The Leaderboard Doesn't Help Either

The top scorer in the entire batch? Lilac, at 95/110 — a hiring platform. Not an AI infrastructure company. Not a model training startup.

Meanwhile, at the bottom:

| Company | Product | Score | Rating |
| --- | --- | --- | --- |
| Lilac | Hiring platform | 95/110 | Excellent |
| ... | ... | ... | ... |
| truthsystems | AI trust verification | 16/110 | Critical |
| Nuntius | AI communications | 7/110 | Critical |

An AI trust verification platform that AI agents can't verify exists. An AI communications company that can't communicate with AI. The irony writes itself.

Why This Happens

Building AI products and optimizing for AI discovery are completely different skills. Knowing how GPT-4 works doesn't automatically mean your website tells GPTBot what your company does. It's the same reason backend engineers often have terrible personal websites — proximity to the technology doesn't transfer to applying it to yourself.

The specific gaps tell the story:

| What's Missing | % of YC S25 Startups |
| --- | --- |
| Markdown for agents | 89% |
| Tool/API descriptions | 75% |
| llms.txt signals | 64% |
| Content structure issues | 57% |
| robots.txt AI directives | 47% |

89% have no markdown-for-agents representation. 75% have no tool or API descriptions that AI agents can parse. These aren't exotic requirements — they're the equivalent of meta descriptions for the AI era.
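For readers who haven't seen one, an llms.txt file is just a small markdown document at the site root that orients AI agents: an H1 title, a blockquote summary, and link lists pointing to agent-friendly pages. A minimal sketch (the company name and URLs below are placeholders, not from any company in the batch):

```markdown
# Acme Analytics

> Acme Analytics is a real-time dashboard platform for product teams.

## Docs

- [Quickstart](https://example.com/docs/quickstart.md): install and build a first dashboard
- [API reference](https://example.com/docs/api.md): REST endpoints and authentication

## Optional

- [Blog](https://example.com/blog/index.md)
```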

The Broader Implication

If YC's newest batch — small teams, deeply technical, immersed in AI discourse — isn't optimizing for AI discovery, enterprise companies certainly aren't either. Our broader industry scan confirms this: fintech averages 33% llms.txt adoption, healthcare 20%, travel and consumer 0%.

The takeaway: The companies that move first on AEO won't be the ones building AI. They'll be the ones paying attention to how AI discovers things — regardless of what they're building.

One Bright Spot

53% of YC S25 companies have an llms.txt file. That's significantly higher than the 38% average across the 61 major websites we scanned in our industry study. YC founders are reading about llms.txt on Hacker News and implementing it in minutes.

But llms.txt alone doesn't make you visible. Without structured data, without proper content hierarchy, without AI bot directives in robots.txt — the llms.txt file sits there unread. It's necessary but not sufficient.
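The robots.txt piece is the smallest of these. A sketch that explicitly welcomes the major AI crawlers (GPTBot, ClaudeBot, and PerplexityBot are the crawlers' real user-agent tokens; the sitemap URL is a placeholder):

```
# Explicitly allow AI crawlers
User-agent: GPTBot
Allow: /

User-agent: ClaudeBot
Allow: /

User-agent: PerplexityBot
Allow: /

# Default rules for everyone else
User-agent: *
Allow: /

Sitemap: https://example.com/sitemap.xml
```

Note that most crawlers treat an empty or absent robots.txt as "allow all" anyway; the point of naming the bots is to signal intent explicitly and to keep AI crawlers allowed even if you later add restrictive rules for other agents.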

The 30-Minute Fix

The good news: none of this is hard. Lilac's 95/110 isn't the result of months of SEO work. It's five things done right:

  1. JSON-LD structured data describing what the product does
  2. llms.txt with clear content signals
  3. robots.txt explicitly allowing GPTBot, ClaudeBot, PerplexityBot
  4. Semantic HTML with proper heading hierarchy
  5. Clear, parseable product descriptions
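Item 1 on that list can be as small as one script tag in the page head. A sketch using schema.org's Organization type (the name, URLs, and description are hypothetical, not Lilac's actual markup):

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Organization",
  "name": "Acme Analytics",
  "url": "https://example.com",
  "description": "Real-time dashboards for product teams.",
  "sameAs": [
    "https://github.com/acme",
    "https://www.linkedin.com/company/acme"
  ]
}
</script>
```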

That's a morning's work. The gap between 28/110 and 95/110 isn't talent or budget — it's awareness.
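You can even script a rough self-audit. The sketch below (not our actual AEO Checker, just an illustration of two of the checks) inspects a robots.txt body for AI crawler directives and an HTML page for a valid JSON-LD block:

```python
import json
import re

# Real user-agent tokens for the major AI crawlers
AI_BOTS = ("GPTBot", "ClaudeBot", "PerplexityBot")


def robots_allows_ai(robots_txt: str) -> bool:
    """True if any known AI crawler is explicitly named in robots.txt."""
    lowered = robots_txt.lower()
    return any(bot.lower() in lowered for bot in AI_BOTS)


def has_json_ld(html: str) -> bool:
    """True if the page embeds at least one parseable JSON-LD block."""
    blocks = re.findall(
        r'<script[^>]*type="application/ld\+json"[^>]*>(.*?)</script>',
        html,
        flags=re.DOTALL | re.IGNORECASE,
    )
    for block in blocks:
        try:
            json.loads(block)
            return True
        except json.JSONDecodeError:
            continue  # malformed JSON-LD doesn't count
    return False
```

Point the two functions at your own fetched robots.txt and homepage HTML and you have the start of a checklist.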

Where Does Your Site Stand?

Scan your website in 30 seconds. See exactly what AI agents see — and what they miss.

Check Your AEO Score →

Frequently Asked Questions

Do AI companies have better AEO scores than non-AI companies?

No. Our scan of 36 YC S25 startups shows AI-focused companies average 43.9/110 while non-AI companies average 44.2/110. The 0.3-point difference is statistically meaningless. Building AI products doesn't translate to AI visibility.

What is the AEO Paradox?

The AEO Paradox is the counterintuitive finding that companies building AI products are no more visible to AI agents than companies in completely unrelated industries. Expertise in AI technology doesn't automatically transfer to AI discoverability optimization.

What AEO score do YC S25 startups average?

YC S25 startups average 44/110 on our AEO scoring system. 50% scored Poor or Critical. Only one company (Lilac, a hiring platform) scored Excellent at 95/110.

How long does it take to fix a low AEO score?

Most AEO improvements take under 30 minutes. Adding JSON-LD structured data, creating an llms.txt file, updating robots.txt for AI bots, and fixing content hierarchy are straightforward technical tasks. The gap between a score of 28 and 95 is awareness, not engineering effort.