playbook · execution

How to get cited by AI.

The five moves that turn a brand from invisible to cited on ChatGPT, Perplexity, Google AI Overviews, Gemini, and Claude — and the order to do them in.

The mental model: AI engines cite sources, not brands.

The first reframe: AI engines never cite a brand directly. They cite a source — a URL, a thread, a video, an entity record — that mentions or endorses the brand. Your job in GEO is to get your brand named inside sources AI engines already trust, and to ensure your own content is extractable enough to become one of those sources. 85% of brand mentions in AI responses come from third-party pages (AirOps, 2026).

That reframe changes the entire order of work. It is not “optimize my site for ChatGPT.” It is “make sure ChatGPT’s favorite sources about my category mention me correctly.”

Move 1 — Fix the on-page extraction layer.

Own-site extractability is still the foundation. AI engines will not cite a page they cannot parse. The three structural rules, in priority order:

  • Strict H1→H2→H3 hierarchy, no skipping. Cited pages do this 68.7% of the time; Google top-10 pages, only 23.9% (Opollo, 2026).
  • Answer the query in the first 30% of the page. 44.2% of LLM citations come from the first 30% of a cited page (seoClarity, 362K-query study).
  • Use lists and tables aggressively. Cited pages average 13.75 list sections vs 0.81 for uncited — a 17× gap (ALM Corp, 548K-page audit, 2026). 74.2% of AI citations come from structured list or comparison content (GenOptima Q1 2026 Benchmark Report).
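
A quick way to audit the first rule is to parse the rendered HTML and flag heading-level skips. A minimal sketch in Python, assuming BeautifulSoup is installed; the function name and example markup are illustrative, not from any of the studies cited above:

```python
# Sketch: flag heading-hierarchy skips (e.g. an H2 followed directly by an H4).
# Assumes rendered HTML and `pip install beautifulsoup4`.
from bs4 import BeautifulSoup

def heading_skips(html: str) -> list[str]:
    soup = BeautifulSoup(html, "html.parser")
    levels = [int(tag.name[1]) for tag in soup.find_all(["h1", "h2", "h3", "h4", "h5", "h6"])]
    return [
        f"h{prev} -> h{cur}"
        for prev, cur in zip(levels, levels[1:])
        if cur > prev + 1  # a jump of more than one level is a skip
    ]

sample = "<h1>GEO playbook</h1><h3>Move 1</h3><h2>Move 2</h2>"
print(heading_skips(sample))  # ['h1 -> h3']
```

The same pass can count ul, ol, and table elements against the 13.75-list-sections benchmark before a page ships.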

Move 2 — Raise claim density.

AI engines are statistical machines that reward numeric substance. Pages with 19+ named data points average 5.4 citations vs 2.8 for thinner content (Bartlett 200M-citation dataset, 2026). Rule of thumb: every 150–200 words of body copy should contain at least one sourced, dated numeric claim. Two is better.

Every number gets a source and a date inline — not in a footnote. AI engines parse the surrounding context: a bare number with no attribution gets weighted down, while the same claim with a bracketed source gets lifted verbatim into the answer.
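
One way to operationalize the 150–200-word rule of thumb is a rough claim-density check before publishing. A minimal sketch, assuming a "sourced claim" is a number followed within a few dozen characters by a parenthesized attribution like "(AirOps, 2026)"; the regex is a loose illustration, not a real citation parser:

```python
# Sketch: rough claim-density check for the one-sourced-claim-per-150-200-words rule.
# The regex is a deliberately loose approximation, not a real citation parser.
import re

def claim_density(body: str, words_per_claim: int = 175) -> dict:
    words = len(body.split())
    # "Sourced claim" here = a number followed (within ~60 chars) by an
    # attribution in parentheses ending with a four-digit year.
    sourced = re.findall(r"\d[\d.,%x×]*.{0,60}?\([A-Z][^)]*\d{4}\)", body)
    return {
        "words": words,
        "sourced_claims": len(sourced),
        "target": max(1, words // words_per_claim),
    }

sample = ("Cited pages average 13.75 list sections vs 0.81 for uncited "
          "(ALM Corp, 2026). Structure matters.")
print(claim_density(sample))  # {'words': 15, 'sourced_claims': 1, 'target': 1}
```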

Move 3 — Lock entity consistency.

Entities are the atomic unit AI engines use to disambiguate. If your brand is spelled differently on Wikipedia vs LinkedIn vs your own /about page, the engine will treat you as multiple weaker entities and cite whichever one has the most coherent external signal — or none of them. Lock:

  • A single canonical product/company name. One spelling. Everywhere.
  • A single Organization JSON-LD block on the homepage, with sameAs pointing to your LinkedIn, Crunchbase, GitHub, Wikipedia, and top directory profiles. This is the Knowledge Graph glue that carries across all five engines.
  • A single category label you use on every third-party page. (“AI search visibility platform” vs “GEO tool” vs “AI SEO software” — pick one, use it always.)
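
For the second item, a minimal sketch of what that Organization block might look like, written here as a Python dict a site template would serialize into a script type="application/ld+json" tag. Every name and URL below is a placeholder, not a real profile:

```python
# Sketch: Organization JSON-LD with the sameAs graph. All values are placeholders.
import json

organization = {
    "@context": "https://schema.org",
    "@type": "Organization",
    "name": "Example Brand",                         # one canonical spelling, everywhere
    "url": "https://www.example.com",
    "description": "AI search visibility platform",  # the single category label
    "sameAs": [                                      # the Knowledge Graph glue
        "https://www.linkedin.com/company/example-brand",
        "https://www.crunchbase.com/organization/example-brand",
        "https://github.com/example-brand",
        "https://en.wikipedia.org/wiki/Example_Brand",
    ],
}

jsonld_tag = (
    '<script type="application/ld+json">'
    + json.dumps(organization, indent=2)
    + "</script>"
)
print(jsonld_tag)
```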

Pages with 15+ Knowledge Graph entities per 1,000 words earn 4.8× selection lift in AI Overviews (Ziptie.dev, 2026). Consistency is what produces those entities reliably.

Move 4 — Ship the schema stack.

Schema markup alone correlates with roughly 3× more AI citations in controlled tests (Opollo AEO/GEO best practices, 2026). The stack that actually ships on top-cited pages in 2026:

Schema · When to use it · What it does for GEO

  • Organization · Homepage, globally via layout · Anchors the entity. Enables sameAs graph.
  • Product + Offer · Pricing page, landing pages · Triggers product-carousel citations in AI Overviews.
  • FAQPage · Every landing page with a real FAQ · Single highest-lift schema for ChatGPT and Gemini.
  • HowTo · Step-by-step guides · Triggers procedural AIO blocks.
  • Article · Blog posts and pillar pages · Adds datePublished / dateModified — the one reliable freshness signal.
  • BreadcrumbList · Every non-homepage · Helps engines understand the site hierarchy.
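
As one concrete example from the stack, a minimal FAQPage sketch in the same style: a Python dict the landing-page template would serialize. The question and answer text are placeholders, and only FAQs that actually appear on the page should be marked up:

```python
# Sketch: FAQPage JSON-LD for a landing page with a real, visible FAQ.
# Question and answer text are placeholders.
import json

faq_page = {
    "@context": "https://schema.org",
    "@type": "FAQPage",
    "mainEntity": [
        {
            "@type": "Question",
            "name": "What is generative engine optimization?",
            "acceptedAnswer": {
                "@type": "Answer",
                "text": "GEO is the practice of earning citations in AI-generated answers.",
            },
        },
    ],
}

print('<script type="application/ld+json">' + json.dumps(faq_page, indent=2) + "</script>")
```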

Move 5 — Run the off-site plays.

This is the one SEO agencies skip and the one GEO services earn their margin on. 56% of AI citations come from off-site (AirOps, 2026). The channels, ranked by 2026 citation-per-dollar:

Reddit

Up to 46.7% of Perplexity citations by category (BrightEdge). Also 44% of all social-media citations inside Google AI Overviews (ALM Corp, Jan 2026). Authentic participation only — build the account over 30 days, answer real questions, link only when the link answers the question.

YouTube

29.5% of Google AI Overviews cite YouTube — the single top source (Ahrefs, 2026). Transcripts are the currency: publish a detailed transcript alongside every video, and the citation lift lands on both YouTube and your own site.
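
If the transcript also lives on your own site, one way to make it machine-readable is a VideoObject block that carries the transcript text. A hedged sketch with placeholder values; schema.org does define a transcript property on VideoObject, but how much weight any engine gives it is an assumption, not something the studies above measure:

```python
# Sketch: VideoObject JSON-LD with the transcript published alongside the embed.
# All values are placeholders; the citation weight of `transcript` is an assumption.
import json

video = {
    "@context": "https://schema.org",
    "@type": "VideoObject",
    "name": "How to get cited by AI engines",
    "description": "Walkthrough of the five GEO moves.",
    "uploadDate": "2026-01-15",
    "thumbnailUrl": "https://www.example.com/thumb.jpg",
    "embedUrl": "https://www.youtube.com/embed/VIDEO_ID",
    "transcript": "Full transcript text goes here...",
}

print(json.dumps(video, indent=2))  # embed in an ld+json script tag, as in Move 3
```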

Wikipedia

47.9% of ChatGPT’s top-10 most-cited sources (Yext, 2026). Getting on Wikipedia is slow and political — start with a neutral entry in the category page, not a standalone brand page, and let third-party press build notability before proposing your own article.

LinkedIn

Top-3 AI-search citation platform (Search Engine Land, 2026). Employee-published long-form articles outperform company-page posts by roughly 3:1 on citation lift.

Directories

G2, Capterra, TrustRadius, Gartner Peer Insights, Product Hunt, and 15–30 category-specific directories. These are the easiest first wins — a complete directory profile with an updated description, current pricing, and 5+ recent reviews tends to produce its first citation inside 14 days.

PR pitches and HARO

Earned media inside high-freshness publications (TechCrunch, The Verge, Bloomberg, category-specific trade press) drops into Perplexity’s freshness gate within 24–72 hours of publication. One good quote in one strong outlet routinely outperforms 20 directory listings.

The sequence that wins.

  1. Audit. Identify the 50 queries that matter and which of them your competitors are already cited for.
  2. Fix on-page (Moves 1 + 4). Structure and schema are the cheapest first wins.
  3. Lock entities (Move 3). One spelling. One category. One canonical everywhere.
  4. Ship off-site assets on the two channels with the worst competitor coverage first (Move 5).
  5. Measure weekly. Iterate. Add channels as you exhaust gaps on the current ones.

Or let Cited run it for you.

Every move above is one of the seven agents in Cited’s pipeline (gap-analysis, content-strategist, content-writer, content-qa, distribution-planner, report, citation-probe) plus a human review gate before anything goes out the door. Start with the free 48-hour audit or see pricing.

◉ faq

What most people ask first.

What's the single highest-leverage move to get cited by AI?
Ship a new comparison-style asset on a third-party trust domain — a Wikipedia entry, a top-5 directory listing, or a well-sourced Reddit post in a category-appropriate subreddit. 56% of AI citations come from off-site, and new third-party content hits freshness + authority in one move. Most first citations appear on that kind of asset within 14 days.
How important is schema markup?
Significant but not sufficient. Schema markup correlates with roughly 3× more AI citations in controlled tests (Opollo AEO/GEO 2026). But schema without underlying claim density and off-site authority just makes a thin page easier for crawlers to ignore. Ship the structure, then ship the substance, then ship the schema.
Should I write for ChatGPT or for Perplexity?
Neither, exclusively. Only 11% of domains are cited by both — they're different ecosystems. Write a page that satisfies both sets of constraints: strict H1→H2→H3 hierarchy (ChatGPT), high claim density with 2025–2026 dates (Perplexity), sequential lists, and 15+ named entities per 1,000 words. That page wins in both and usually Gemini too.
Does Reddit actually work?
Yes, but only with authentic participation. Reddit drives up to 46.7% of Perplexity citations in some categories (BrightEdge). But bot-posted content gets detected and demoted fast, and a single shadowban removes your domain from Perplexity's Reddit graph for weeks. Build the account, participate for a month before you post anything commercial, and link only where it actually answers the question.
How long does it take to see real results?
First citations: 7–14 days on well-placed off-site content. Meaningful citation share (10%+ on your top 20 commercial queries): 60–90 days. Category dominance (30%+ citation share across 100+ queries): 6–12 months. AI search carries less latency than organic search because freshness is rewarded, not just accumulated authority.
What does Cited actually do here?
The full loop: probes your target queries across all five engines, runs gap analysis against competitor citations, drafts the off-site assets (directory listings, LinkedIn articles, comparison posts, PR pitches, Wikipedia-ready entity content), distributes them through vetted channels, and measures the lift. You approve each asset before it goes out. Pricing starts at $1,500/month.
◉ keep reading

Want Cited to run the audit for you?

50 target queries, 5 AI engines, competitor gap analysis. 48-hour turnaround. Free.

Get your free audit →