Why 90% of AI Search Optimization Advice Is Cargo Cult Science


TL;DR

We ran a pre-registered study on 485 domains and found no correlation (r = 0.009) between generic SEO scores and AI citation rates. Instead of ignoring the data, we studied Google patents, RAG research papers, and Olaf Kopp's GEO framework to understand how AI actually selects sources. The key factors: heading-query alignment (AI skims headings first), self-contained paragraphs (RAG systems extract atomic "nuggets"), content clarity over authority signals, and cross-page brand consistency. We rebuilt our tool around these research-backed signals instead of industry guesswork.

The Generative Engine Optimization (GEO) industry is full of confident advice: add Schema.org markup, get on Reddit, optimize for E-E-A-T, and AI will cite you. But where is the evidence? We looked for it, could not find any, and decided to generate our own. Across 485 domains, 30 queries, and 90 Perplexity API runs, our pre-registered study found no correlation between generic structural SEO scores and AI citation rates (Pearson r = 0.009, p = 0.849).

Instead of ignoring the data, we went deeper. We studied Google patents on passage retrieval and entity extraction, academic papers on RAG system behavior, and the GEO framework developed by Olaf Kopp, one of the early pioneers of Generative Engine Optimization. What we found changed how we build our tool. Here is what the research actually says about how AI selects, processes, and cites your content.

How AI Actually Processes Your Content

Three mechanisms, documented in Google patents and academic research, determine whether your content gets cited in AI-generated answers. None of them are about keywords, backlinks, or domain authority.

AI Does Not Read Your Page — It Skims Your Headings First

A Google patent on passage-based answer scoring describes how AI systems evaluate content through a “heading vector” — the hierarchical path from the root heading (H1) down to the subheading (H2, H3) where a passage sits. The system first checks whether the heading text is semantically relevant to the query. Only if the heading matches does the system analyze the passage beneath it.

This means a page with excellent content under a generic heading like “Our Services” may never be evaluated, while a competitor with a clear heading like “How We Handle Emergency Plumbing Repairs” gets their passage scored and potentially cited. The heading is the gatekeeper. If it does not match the query, the content beneath it is invisible to AI.
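The gatekeeper behavior can be sketched in a few lines. This is an illustrative toy, not the patented system: simple token overlap stands in for the semantic similarity a production system would compute with embeddings, and the threshold, function names, and example texts are all assumptions.

```python
def overlap(query: str, text: str) -> float:
    """Fraction of query tokens that also appear in the candidate text."""
    q, t = set(query.lower().split()), set(text.lower().split())
    return len(q & t) / len(q) if q else 0.0

def score_passage(query: str, heading_path: list[str], body: str,
                  heading_threshold: float = 0.3) -> float:
    # Step 1: the heading vector (H1 > H2 > H3 path) acts as the gate.
    heading_text = " ".join(heading_path)
    if overlap(query, heading_text) < heading_threshold:
        return 0.0  # heading did not match the query: body is never evaluated
    # Step 2: only a passage behind a matching heading gets scored.
    return overlap(query, body)

query = "how do you handle emergency plumbing repairs"
body = "We handle emergency plumbing repairs day and night"
generic = score_passage(query, ["Acme Inc", "Our Services"], body)
specific = score_passage(
    query, ["Acme Inc", "How We Handle Emergency Plumbing Repairs"], body)
print(generic, specific)  # the generic heading scores 0.0: the body was never read
```

The same body text scores zero under the generic heading, because the gate is evaluated before the passage.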

Your Paragraphs Are Auditioned as Standalone Quotes

Modern AI search uses Retrieval-Augmented Generation (RAG), where a retrieval system finds relevant passages and feeds them to a language model for answer generation. The GINGER framework (Grounded Information Nugget-Based Generation of Responses) from Google Research takes this further: it breaks retrieved passages into atomic “information nuggets” — minimal, verifiable units of information. These nuggets are clustered, ranked for relevance, and synthesized into the final answer.

The practical implication: each paragraph on your page is evaluated as a potential standalone quote. If it depends on the previous paragraph for context (“As mentioned above” or “This approach” without defining what “this” refers to), it fails the nugget test.
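The nugget test can be approximated with a crude lint: flag paragraphs that open with an anaphoric reference and therefore cannot stand alone as extracted quotes. The trigger list below is a hypothetical illustration of the idea, not GINGER's actual extraction method.

```python
import re

# Openers that make a paragraph depend on prior context (illustrative list).
DEPENDENT_OPENERS = re.compile(
    r"^(as mentioned above|this approach|this method|these|it|they)\b",
    re.IGNORECASE,
)

def is_self_contained(paragraph: str) -> bool:
    """True if the paragraph does not open with an anaphoric reference."""
    return not DEPENDENT_OPENERS.match(paragraph.strip())

paragraphs = [
    "Tankless water heaters heat water on demand, cutting standby losses.",
    "This approach also saves space in small utility rooms.",
]
flags = [is_self_contained(p) for p in paragraphs]
print(flags)  # [True, False]: the second paragraph fails the nugget test
```

The second paragraph carries real information, but extracted alone, "this approach" refers to nothing.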

A separate Google user study on trust in RAG responses found that users judge AI answers based on clarity, actionability, and information coverage — not on the authority of the source. RAG systems that cite clear, self-contained passages generate more trusted answers. This means your content competes on clarity, not on brand recognition.

AI Builds a Brand Profile From Your Entire Site

A Google patent on entity data extraction using LLMs describes how AI systems navigate an entire website to build a characterization of the entity (business, person, or organization) behind it. The system crawls multiple page types — About Us, product pages, contact pages — extracts facts from each, and synthesizes them into a hierarchical graph of attributes. This is not a keyword scan. It is an interpretation of your brand based on everything your site says.

Recent LLM behavior confirms this: when processing a user query, models like GPT perform a two-step retrieval — first a generic search to identify relevant entities, then site-specific queries to gather context about each entity from its own domain. Your website is the primary source AI uses to understand who you are and what you offer.
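The cross-page synthesis can be pictured as merging per-page facts into one attribute map and surfacing contradictions. The page paths, attribute names, and conflict rule below are hypothetical, a sketch of the idea rather than the patent's mechanism.

```python
from collections import defaultdict

def build_entity_profile(page_facts: dict[str, dict[str, str]]):
    """Merge facts extracted per page; flag attributes with conflicting values."""
    profile = defaultdict(set)
    for page, facts in page_facts.items():
        for attr, value in facts.items():
            profile[attr].add(value)
    conflicts = [attr for attr, values in profile.items() if len(values) > 1]
    return {a: sorted(v) for a, v in profile.items()}, conflicts

site = {
    "/about":    {"name": "Acme Plumbing", "founded": "2009"},
    "/contact":  {"name": "Acme Plumbing", "city": "Austin"},
    "/services": {"name": "Acme Plumbing & Heating"},  # inconsistent naming
}
profile, conflicts = build_entity_profile(site)
print(conflicts)  # ['name']: the brand is described two different ways
```

An inconsistent brand name across pages muddies the entity profile the AI builds, which is exactly why cross-page consistency is one of the signals worth auditing.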

What the Research Says You Should Actually Optimize

The 7 Factors That Make Content Citable

Olaf Kopp, a GEO researcher and practitioner who has been working with entity-based search and knowledge graphs since 2013, developed a framework of seven LLM Readability factors based on his analysis of RAG and grounding-related patents. Each factor aligns with the patent-level mechanisms described above:

  • Natural Language Quality: clear, fluent writing without keyword stuffing (passage scoring: Language Model Score)
  • Content Structuring: clear heading hierarchy, lists, tables, short paragraphs (passage scoring: Heading Vector, Context Score)
  • Chunk Relevance: self-contained paragraphs with a clear focus, ≤400 characters for key answers (GINGER: atomic information nuggets)
  • User Intent Match: a direct response to the search intent (passage scoring: Query Term Match Score)
  • Information Hierarchy: answer first, then explanation, then evidence (passage scoring: Position Score; top content scores higher)
  • Context Management: a balanced context-to-information ratio that avoids "lost in the middle" (GINGER: mitigates the lost-in-the-middle problem)
  • Information Density: high value per sentence, with a sweet spot of 1,200–1,500 words per page (passage scoring: Passage Coverage Ratio)
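Two of these factors translate directly into mechanical checks. The sketch below audits Chunk Relevance (the ≤400-character bound from the table) and a crude Information Density proxy; the filler-phrase list and scoring are illustrative assumptions, not Kopp's formal criteria.

```python
def audit_chunk(chunk: str) -> dict[str, bool]:
    """Toy audit of two LLM Readability factors for one paragraph."""
    return {
        # Chunk Relevance: key answers should fit within 400 characters.
        "chunk_relevance": len(chunk) <= 400,
        # Information Density proxy: no filler phrases padding the sentences.
        "no_filler": not any(
            filler in chunk.lower()
            for filler in ("in today's world", "it goes without saying")
        ),
    }

chunk = ("Emergency plumbing repairs in Austin are dispatched within 60 minutes, "
         "24/7, with a flat call-out fee of $89.")
print(audit_chunk(chunk))
```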

Why Authority Matters Less Than You Think

One of the most persistent misconceptions in GEO is that E-E-A-T (Experience, Expertise, Authoritativeness, Trustworthiness) directly influences LLM behavior. As Kopp points out: LLMs are not trained to recognize authority the way Google’s search algorithm does. LLMs are context machines — they build associations through co-occurrence patterns, developing confidence through the frequency and consistency of those patterns across their training and grounding data.

This distinction matters because it explains our null finding. We were measuring structural signals that proxy for authority (Schema.org completeness, robots.txt configuration, authorship signals). But LLMs do not evaluate authority — they evaluate contextual clarity. A page with no Schema.org but crystal-clear prose can outperform a heavily marked-up page with vague content.

Important nuance: AI search systems like Google AI Overviews and Perplexity use a two-stage process — retrieval (where traditional authority signals like rankings and backlinks still matter) followed by generation (where LLM-style context signals take over). Both stages matter. Authority gets you into the candidate set; clarity gets you cited.
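The two-stage split can be sketched as a pipeline: authority-style signals gate the candidate set, then clarity-style signals pick the citation within it. The scores, URLs, and `top_k` cutoff below are invented for illustration.

```python
def cite(pages: list[dict], top_k: int = 3) -> str:
    """Pick a citation via the two-stage process: retrieval, then generation."""
    # Stage 1 (retrieval): authority decides who enters the candidate set.
    candidates = sorted(pages, key=lambda p: p["authority"], reverse=True)[:top_k]
    # Stage 2 (generation): within the set, clarity decides who gets cited.
    return max(candidates, key=lambda p: p["clarity"])["url"]

pages = [
    {"url": "bigbrand.com",  "authority": 0.9, "clarity": 0.4},
    {"url": "clearblog.com", "authority": 0.6, "clarity": 0.9},
    {"url": "spam.net",      "authority": 0.1, "clarity": 0.8},  # never retrieved
]
print(cite(pages, top_k=2))  # clearblog.com: in the candidate set, and clearest
```

Note how the clearest page overall (spam.net) never gets cited, because it failed the authority gate: both stages matter.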

Content Manipulation Works — and That Is the Problem

The CORE paper (Controlling Output Rankings in Generative Engines) demonstrated that appending strategically optimized content to product descriptions can push products higher in LLM recommendations with a 91.4% success rate for Top-5 placement. The researchers tested three content optimization approaches — string-based patterns, reasoning-based arguments, and review-based social proof — across four major LLMs (GPT-4o, Gemini-2.5, Claude-4, Grok-3) using a benchmark of 3,000 products in 15 categories.

This finding confirms that content quality and structure directly influence AI output rankings. It also shows that the system is manipulable. Our position: do not manipulate. Instead, understand what signals matter and make your content genuinely better along those dimensions. Reasoning-based content (“why this product solves your problem”) and authentic reviews are the strongest ethical levers the paper identifies.

How We Rebuilt Our Approach

Our original tool scored 26 structural characteristics of a website and produced a generic 0–100 score. When our own research showed this score had zero predictive power for AI citations, we pivoted. We studied the patents and papers described in this article and rebuilt our checks around the signals that AI systems actually use.

Our tool now evaluates heading-query alignment (do your headings match what people ask AI?), passage self-containedness (can each paragraph stand alone as a citation?), content relevance to specific queries (not generic scores), and brand context consistency across your site pages. These checks are grounded in the mechanisms documented in Google patents and RAG research — not in industry folklore.

What This Means for You

  1. Audit your headings. They are the gatekeeper for AI citation. Every H2 and H3 should read like a question your audience actually asks, not like an internal category label.
  2. Make every paragraph a standalone quote. Read each paragraph in isolation. If it depends on the previous one for context, rewrite it. AI will extract it alone.
  3. Optimize for clarity, not authority signals. Clean prose with specific facts beats a page loaded with Schema.org but vague in its descriptions.
  4. Use Brand Identity Blocks. Kopp’s concept: write a short block of plain-language brand description at the top of your About page and key pages. Subject-verb-object sentences. No pronouns. Both humans and AI can read it.
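Step 4 lends itself to a quick lint: short declarative sentences that do not open with a pronoun. The pronoun list and length cap below are illustrative assumptions, not Kopp's formal definition of a Brand Identity Block.

```python
import re

PRONOUNS = {"we", "our", "they", "it", "this", "these", "us"}

def lint_brand_block(text: str, max_words: int = 20) -> list[str]:
    """Flag sentences that start with a pronoun or run too long."""
    issues = []
    for sentence in re.split(r"(?<=[.!?])\s+", text.strip()):
        words = sentence.split()
        if not words:
            continue
        if words[0].lower() in PRONOUNS:
            issues.append(f"starts with pronoun: {sentence!r}")
        if len(words) > max_words:
            issues.append(f"too long ({len(words)} words): {sentence!r}")
    return issues

good = ("Acme Plumbing repairs residential pipes in Austin. "
        "Acme Plumbing was founded in 2009.")
print(lint_brand_block(good))            # []: subject-verb-object, no pronouns
print(lint_brand_block("We fix pipes."))  # flagged: opens with a pronoun
```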

Honest Limitations

  • Patents are not confirmed production systems. Google patents describe methods that may or may not be active in current search products. We treat them as strong signals of design intent, not as proven specifications.
  • Research evolves. The mechanisms described here reflect the state of published research as of early 2026. Our checks will continue to update as new evidence emerges.
  • AI search is probabilistic. As Kopp warns: generic statistics and blanket theories lead to dead ends. Effective optimization requires per-industry, per-query analysis — not one-size-fits-all blueprints.

Check Your Content Against Research-Backed Signals

Our free AI Search Readiness audit evaluates your site using checks grounded in the patent research and academic findings described in this article. No guesswork. No cargo cult.

Run a Free Audit

Sources & Methodology

This article synthesizes findings from: (1) our pre-registered empirical study on 485 domains (Pearson r=0.009, p=0.849); (2) Google patents on passage-based answer scoring, entity data extraction using LLMs, and phrase-based indexing; (3) the GINGER and CORE academic papers on RAG and generative engine ranking; (4) the Google user study on trust in RAG responses; and (5) the LLM Readability and Brand Context Optimization framework developed by Olaf Kopp (Aufgesang GmbH). Patent analysis does not confirm production deployment. All claims about AI system behavior are based on published research, not internal knowledge.

Frequently Asked Questions

What is the difference between traditional SEO and AI search optimization?

Traditional SEO optimizes for keyword matching and backlink authority to rank in search engine results pages. AI search optimization (GEO) focuses on making your content extractable and citable by large language models. The key difference: AI systems parse your content into passages and evaluate each one independently for clarity, self-containedness, and relevance to the query, not for keyword density or domain authority.

Do I still need Schema.org markup for AI search?

Yes, but it's not sufficient on its own. Schema.org helps AI systems identify entities and attributes on your pages, but our research showed that structured data alone doesn't predict citation rates. What matters more is how clearly your natural language text conveys entity-attribute relationships. Think of Schema.org as a supplement to clear writing, not a replacement for it.

How often should I audit my content for AI readiness?

AI systems are probabilistic: results vary between queries and over time. We recommend a quarterly audit of your key pages, with continuous monitoring of citation rates for your target queries. Major content updates or site redesigns should trigger an immediate re-audit.

Is AI search optimization relevant for B2B or only e-commerce?

It's relevant for any business that wants to be found through AI-assisted search. When a procurement manager asks ChatGPT "what are the best project management tools for remote teams," the same passage-scoring and entity-extraction mechanisms apply. B2B companies with clear, well-structured service descriptions and thought leadership content often perform well in AI citations.

What is the difference between being cited and being recommended by AI?

Being cited means AI quotes your content as a source (drives traffic). Being recommended means AI names your brand in its answer (builds awareness). GEO pioneer Olaf Kopp calls these two distinct disciplines: LLM Readability Optimization (citability) and Brand Context Optimization (recommendability). Both matter, and they require different optimization strategies.


Alexey Tolmachev

Senior Systems Analyst · AI Search Readiness Researcher

Senior Systems Analyst with 14 years of experience in data architecture, system integration, and technical specification design. Researches how AI search engines process structured data and select citation sources. Creator of the AI Search Readiness audit methodology.

Check Your AI Search Readiness

Get your free AI Search Readiness Score in under 2 minutes. See exactly what to fix so ChatGPT, Perplexity, and Google AI Overviews can find and cite your content.

Scan My Site — Free

No credit card required.
