What Answer Engine Optimization Does and Does Not Have in Common with SEO
Court documents from the Google antitrust case revealed something that should have gotten more attention than it did. Google's AI Overviews run on a system called FastSearch, which is powered by a signal set called RankEmbed. Unlike the full organic ranking pipeline, FastSearch retrieves fewer documents and relies on lighter signals. RankEmbed measures semantic relationships between queries and documents: not backlinks, not PageRank-style authority, not page speed. It asks one question: how closely does this content align with the meaning of what someone asked?
That matters because it tells us something concrete about what answer engine optimization actually requires, and where it does and doesn't overlap with traditional SEO.
My company, Custom Legal Marketing, has been running large-scale studies across thousands of law firm ranking pages through our AI marketing platform, CLM Sequoia, testing which signals actually predict performance in both traditional search and AI-generated answers. What I found is that the overlap between SEO and AEO is real in some places, completely imaginary in others, and the conventional wisdom about which signals matter is wrong more often than it's right.
First, a quick level-set on what we're comparing
SEO targets organic search results. You're trying to rank a web page in Google's list of blue links or the map pack. The mechanics involve keyword targeting, content quality, backlinks, technical health, local signals, and hundreds of other ranking factors that every SEO professional has been rambling on about for decades (me included).
AEO targets AI-generated answers, and the way people interact with those answers is fundamentally different from how they use Google. Most people don't open ChatGPT and type "personal injury lawyer Denver" the way they'd type it into a search bar. They start by trying to understand their situation.
They ask if they have a case.
They ask what the law says.
They ask what mistakes to avoid and what timelines look like.
Only after they've worked through those questions do they ask for a lawyer recommendation.
By the time the recommendation appears, ChatGPT has already helped the user frame their problem. It has narrowed the practice area. It has often narrowed the jurisdiction. In many cases, it has filtered out people who don't have viable claims. The recommendation happens after the lead has been qualified by AI.
Getting your business mentioned in those answers is the mission behind answer engine optimization. The platforms that matter right now are ChatGPT, Google's AI Overviews, Perplexity, Claude, and Gemini. Even Grok has sent a stray click to some of our clients over the last few months.
Those are different outputs. But the inputs? More shared than most people realize.
Content architecture: where SEO and AEO share the most DNA
SEO practitioners have been building topic clusters for years: a pillar page targeting a broad subject, surrounded by supporting pages that cover specific subtopics, all interconnected through internal links. That structure signals topical authority to Google's crawlers.
AI retrieval systems interpret the same architecture. When a language model goes looking for material to build an answer, it doesn't just evaluate the single page it finds. It looks at what else exists on that domain around the same topic. Sites with comprehensive, interlinked coverage of a subject area get picked up more often than a standalone page with no supporting content around it.
We see this in our own client work. Law firms with deep content hubs on specific practice areas (10 to 20 interlinked pages covering every angle of, say, car accident law in a specific state) consistently outperform firms with thinner coverage in both Google rankings and AI chatbot recommendations. The architecture is doing the same job in both channels.
This maps directly to what the antitrust court documents revealed about RankEmbed. If the signal is measuring semantic relationships between queries and content, then a site with a dense web of semantically related pages is going to light up that signal more effectively than a site with scattered, unrelated content. Physical and virtual siloing isn't just good SEO hygiene. It's building the exact kind of topical coherence that RankEmbed is designed to detect.
Entity consistency fits here, too. The AEO world has a lot of fancy terminology around "entity signals" and "entity clarity." What those terms actually describe is something local SEO practitioners have been managing for over a decade: consistent business information across every platform where your business appears.
When an AI platform decides whether to recommend your business, it cross-references your website against directories, review platforms, and professional profiles. If your name, address, phone number, and credentials match everywhere, the AI has no reason to question your legitimacy. This is the exact same principle behind Google's local ranking algorithm, which has always weighted NAP consistency across citations. Businesses that have maintained clean directory listings for SEO have been building AEO trust signals for years without knowing it.
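That cross-referencing step is easy to approximate. Here is a toy sketch of a NAP consistency check, with hypothetical listings and deliberately naive normalization; real entity resolution across directories is far messier:

```python
import re

def normalize_nap(record):
    """Normalize a NAP (name, address, phone) record for comparison,
    stripping case, punctuation, and phone formatting."""
    def clean(s):
        return re.sub(r"[^a-z0-9]", "", s.lower())
    return (
        clean(record["name"]),
        clean(record["address"]),
        re.sub(r"\D", "", record["phone"])[-10:],  # keep last 10 digits
    )

def nap_consistent(listings):
    """True if every listing resolves to the same normalized NAP tuple."""
    return len({normalize_nap(r) for r in listings}) == 1

# Hypothetical listings: same firm, different formatting across platforms
listings = [
    {"name": "Acme Law, LLC", "address": "120 Main St., Suite 4",
     "phone": "(303) 555-0142"},
    {"name": "Acme Law LLC", "address": "120 Main St Suite 4",
     "phone": "303-555-0142"},
]
print(nap_consistent(listings))  # formatting differences alone don't break consistency
```

A real system would also need to handle abbreviation variants ("St." vs "Street") and suite-number drift, but the principle is the same one Google's local algorithm has applied to citations for years.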
Expert content: where the data gets surprising
The second layer of overlap is the content itself: expertise, depth, and quality. This is where SEO and AEO still share common ground, but the conventional wisdom about what "quality" means is shakier than most people realize.
We measured this directly. In February 2026, we pulled the top 5 organic Google results for 28 legal keywords across 24 major U.S. cities and ran every page through Winston AI to score the percentage of AI-generated content in law-related searches. The dataset covered 2,435 ranking appearances, 1,618 unique URLs, and 1,021 unique domains across 8 practice areas.
The Spearman correlation between AI content percentage and organic ranking position came back at r = 0.065, with a p-value of 0.138. That is statistically insignificant: effectively no relationship at all. The algorithm simply does not care whether AI wrote the page. But we also found a correlation of r = -0.233 (p < 0.0001) between AI content percentage and readability score. Pages with more AI-generated content were harder to read. And word count had a stronger statistical relationship with rankings than AI percentage did.
So AI-versus-human authorship doesn't matter. Readability matters a lot.
The implication for both SEO and AEO is the same. The origin of the content is not what gets rewarded or penalized; its quality, readability, and depth are. AI platforms evaluating whether to cite your page are looking at the same quality signals Google looks at. If your content reads like it was generated by a machine and never edited, both systems will pass you over for something better.
This is also where E-E-A-T does real work across both channels. Google formalized experience, expertise, authoritativeness, and trustworthiness as a quality framework years ago. AI answer engines enforce the same criteria, arguably more aggressively. When Perplexity picks a source to cite, it's not counting your backlinks. It's evaluating whether the page reads like it was written by someone with actual knowledge of the subject. Named authors, credentials on the page, citations to primary sources, and specificity that only comes from real practice experience. Those signals carry in both channels.
Machine-readability: one implementation, two audiences
Schema markup is where the third layer of overlap lives. FAQ schema, article schema, attorney profile schema, local business schema. These have been SEO best practices for years because they help Google understand what a page is about, who wrote it, and what questions it answers.
AI answer engines use the exact same structured data. When an AI model is deciding which content blocks to extract from a page, schema markup tells it where the answers are. FAQ schema is especially powerful because it explicitly labels question-and-answer pairs, which is exactly the format AI models are designed to consume.
If your schema is already in place for SEO, you've given AI platforms a machine-readable map of your content. One implementation, two payoffs.
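For illustration, here is a minimal FAQPage generator. The JSON-LD structure (`FAQPage`, `Question`, `acceptedAnswer`) follows the standard schema.org vocabulary; the question-and-answer text is invented:

```python
import json

def faq_jsonld(pairs):
    """Build a schema.org FAQPage JSON-LD block from (question, answer) pairs."""
    return {
        "@context": "https://schema.org",
        "@type": "FAQPage",
        "mainEntity": [
            {
                "@type": "Question",
                "name": question,
                "acceptedAnswer": {"@type": "Answer", "text": answer},
            }
            for question, answer in pairs
        ],
    }

# Hypothetical Q&A pair for a practice-area page
pairs = [
    ("How long do I have to file a car accident claim?",
     "Deadlines vary by state; many states allow two to three years "
     "from the date of the accident."),
]
print(json.dumps(faq_jsonld(pairs), indent=2))
```

The emitted JSON belongs inside a `<script type="application/ld+json">` tag on the page. Notice that the markup hands an AI model exactly what it wants: a labeled question and a clean, self-contained answer.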
But here's where machine-readability extends beyond traditional schema work for AEO: the content itself needs to be structured for extraction, not just for indexing. In SEO, Google evaluates the page as a whole. In AEO, the AI model is pulling specific answer blocks out of your page and reassembling them into someone else's response. That means each section of your content needs to function as a standalone, extractable unit. Direct answer in the opening sentences of each section. Headings phrased as questions real people ask, not generic labels. Content structured so an AI can grab a clean paragraph without needing the paragraphs before or after it for context.
Where the conventional wisdom is wrong
This is where it gets interesting. A lot of what SEO agencies sell as critical ranking factors turned out to have negligible impact on traditional rankings, and they matter even less for AEO.
PageSpeed: the emperor has no clothes in either channel
In early 2026, we analyzed 1,750 search engine results across 50 of the largest U.S. metro markets. We pulled the top 5 organic results for 350 keyword-city combinations covering personal injury keywords, then ran each URL through the Google PageSpeed Insights API. We measured eight performance metrics across 1,328 unique URLs from 653 distinct domains.
The Pearson correlation between PageSpeed performance score and ranking position was r = -0.0705.
For context: the famously absurd correlation between Nicolas Cage movie releases and swimming pool drownings is nearly ten times stronger. Our number, -0.0705, rounds to what it is: zero.
The data got worse the closer we looked. Of the 340 pages sitting in Position 1 across all searches, 64.7% received a "Poor" grade on Google's own Largest Contentful Paint metric. Only 14.7% met Google's "Good" threshold. The average PageSpeed score across all 1,750 top-five results was 64.9 out of 100. The difference between Position 1 (66.6 average) and Position 5 (64.1 average) was 2.5 points on a 100-point scale.
One page scored a perfect 100 on PageSpeed and sat at Position 5. Another scored 28 and held Position 1 for a comparable keyword. That is a 72-point gap that produced zero ranking advantage for the faster site.
Why does this matter for AEO? Because AI answer engines care even less about your load time than Google does. ChatGPT and Perplexity are not rendering your page in a browser. They are reading your content from an index or a live crawl. Whether your site takes 1.2 seconds or 4.8 seconds to render visually is irrelevant to a system that is parsing text, not loading a webpage.
Yet agencies continue to sell PageSpeed optimization as a primary SEO lever, and now some are rebranding it as an AEO lever too. The data says otherwise in both cases.
Where AEO genuinely diverges from SEO
Now for the differences that actually matter.
The unit of output is different
In SEO, the unit of output is a ranking position. You're trying to get your page to appear in a list, ideally near the top. Even if a searcher never clicks, you're still "ranking."
In AEO, the unit of output is an inclusion in a generated answer. There is no list. The AI either mentions you or it doesn't. It's binary in a way that SEO has never been. You're not fighting for Position 3 versus Position 7. You're fighting for existence in the response.
This changes the optimization calculus. In SEO, incremental improvements (moving from Position 8 to Position 5) have incremental value. In AEO, the gap between "mentioned" and "not mentioned" is the entire value.
Content extraction vs. content ranking
A page can rank well in Google while being poorly structured for AI extraction. We see this constantly. Pages that rank Position 1 for a legal keyword but bury the actual answer four paragraphs deep get passed over by ChatGPT in favor of pages that state the answer clearly up front, even if those pages rank lower in traditional search. In SEO, ranking is a matter of degree. In AEO, selection is binary. You either get cited or you don't.
Links matter differently
Backlinks are still one of the strongest ranking factors in traditional SEO. Our research consistently shows that the link profile gap between Position 1 and Position 10 is one of the most pronounced differentiators in competitive legal markets.
In AEO, links don't carry the same direct weight. AI models don't count your backlinks the way Google does. What they do is evaluate whether your brand and content are referenced across trusted sources on the web. That's related to link building, but it's not the same thing. A firm that gets mentioned in a legal directory, a bar association profile, a news article, and a legal blog has stronger AEO signals than a firm with 500 links from press release syndication sites that Google mostly ignores anyway.
The shift is from link quantity to reference quality. SEO already rewards this to a degree, but AEO makes it the primary signal.
Multi-platform optimization is a new discipline
In SEO, you're optimizing for one platform: Google. (Fine, technically also Bing, but let's be realistic about the traffic split.)
In AEO, you're optimizing for Google AI Overviews, ChatGPT, Perplexity, Claude, and Gemini simultaneously. Each platform retrieves, evaluates, and presents information differently. Google's AI Overviews run on a retrieval system called FastSearch, which uses semantic alignment signals that differ from traditional organic ranking factors. ChatGPT pulls from live web data and knowledge bases with its own trust evaluation. Perplexity shows its sources and links directly to pages it cites. Claude handles long-form, multi-step reasoning particularly well.
Optimizing for all of these at once requires a different strategic layer than SEO has traditionally demanded. You can't just write one page, rank it in Google, and assume AI platforms will find it.
Measurement is fundamentally different
SEO measurement is mature: rankings, organic traffic, click-through rates, conversions. The tools exist. The frameworks are established.
AEO measurement is new and rough. The core question, "When someone asked an AI about my practice area in my city, did my firm show up?" requires purpose-built monitoring tools. Most of these tools didn't exist 18 months ago. We built our own into the Sequoia platform because nothing on the market tracked citation frequency, sentiment, and competitive positioning across AI chatbot responses with the specificity we needed.
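The core of citation-frequency tracking can be sketched in a few lines, though a production system like Sequoia layers sentiment and competitive analysis on top. The AI responses and firm names below are invented, and the whole-phrase matching is deliberately crude:

```python
import re
from collections import Counter

def mention_counts(responses, brands):
    """Count how many AI answers mention each brand.
    Case-insensitive, whole-phrase match: one count per answer, not per mention."""
    counts = Counter({brand: 0 for brand in brands})
    for text in responses:
        for brand in brands:
            if re.search(re.escape(brand), text, re.IGNORECASE):
                counts[brand] += 1
    return counts

# Hypothetical answers collected from different AI platforms for one query set
responses = [
    "For a car accident in Denver, firms like Acme Law and Summit Injury Group...",
    "You may want to consult Summit Injury Group about the filing deadline.",
    "Acme Law is frequently recommended for motor vehicle claims in Colorado.",
]
brands = ["Acme Law", "Summit Injury Group", "Front Range Legal"]
for brand, n in mention_counts(responses, brands).most_common():
    print(f"{brand}: mentioned in {n} of {len(responses)} answers")
```

Run the same query set weekly across each platform and the counts become a trend line, which is the closest thing AEO currently has to a rank tracker.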
The real takeaway
SEO and AEO are not the same discipline, but they share enough foundation that the line between them is blurry. Content quality, content architecture, entity consistency, and structured data all serve both channels. The conventional metrics that agencies love to sell (PageSpeed, Domain Authority) don't meaningfully predict performance in either channel.
Where AEO diverges is in the mechanics of how content gets consumed. AI models are not ranking your page in a list. They are deciding whether to trust your content enough to put it in someone's answer, and that trust evaluation is more about verifiable reputation than technical signals. The firms winning in AI answers right now built strong reputations, published clear and specific content, and maintained consistent entity information long before AEO became a conference topic.
If you're spending money on SEO work that isn't also setting you up for AEO, you're building on a foundation that's already halfway obsolete. And if someone is selling you AEO as a completely separate discipline that requires starting from zero, they're ignoring the 80% of the work that carries over.
The smartest play is recognizing where the overlap is real, investing in the things that serve both channels, and saving your incremental budget for the genuinely new capabilities: answer-first content structuring, multi-platform monitoring, and the kind of verifiable reputation that machines trust enough to put in front of the people asking for help.