Optiview AEO + GEO Methodology
Optiview provides a complete AEO + GEO readiness assessment that combines a structural audit (21 weighted checks) with live citation testing across ChatGPT, Claude, Perplexity, and Brave Search.
Overview
Our dual-layer assessment evaluates both structural readiness and real-world citation performance:
- Structural Audit (21 checks): How well your site is optimized for training bots and answer engines
- Live Citation Testing: Whether your brand actually appears in LLM responses today
- GEO Adjusted Score: Combined metric that rewards both structure and real-world visibility
What We Measure
- AEO (Answer Engine Optimization): 11 checks for answer boxes, featured snippets, and AI Overviews
- GEO (Generative Engine Optimization): 10 checks for LLM citation readiness
- Live LLM Citations: Real queries against ChatGPT, Claude, Perplexity, and Brave
- Training Bot Access: How GPTBot, ClaudeBot, and PerplexityBot view your site
Scoring Framework
Each check is scored on a 0-3 scale:
| Score | Meaning | Weight Application |
|---|---|---|
| 3 | Strong | Full weight applied |
| 2 | Moderate | 66% of weight |
| 1 | Weak | 33% of weight |
| 0 | Poor/Missing | 0% of weight |
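For illustration, here is a minimal sketch of how the weight application above could roll up into a 0-100 category score. The check IDs, weights, and the normalization to 100 used below are assumptions for this sketch, not necessarily the exact production formula.

```python
# Multipliers for each 0-3 check score, mirroring the table above.
SCORE_MULTIPLIER = {3: 1.00, 2: 0.66, 1: 0.33, 0: 0.00}

def category_score(checks: dict[str, tuple[int, float]]) -> float:
    """Roll weighted check results up into a 0-100 category score.

    `checks` maps a check ID to (score_0_to_3, weight). Normalizing the
    weighted sum to a 0-100 scale is an assumption for this sketch.
    """
    earned = sum(weight * SCORE_MULTIPLIER[score] for score, weight in checks.values())
    available = sum(weight for _, weight in checks.values())
    return round(100 * earned / available, 1) if available else 0.0

# Hypothetical results for two checks with weights 15 and 12:
print(category_score({"check_a": (3, 15), "check_b": (1, 12)}))  # 70.2
```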
AEO Checks (11 total)
| Check | Weight | What We Measure |
|---|---|---|
| A1: Answer-First Design | 15 | Summary block, jump links, or tables above the fold |
| A2: Topical Cluster | 15 | Internal links to related content |
| A3: Site Authority | 15 | Organization and Author schema |
| A4: Originality & Effort | 12 | Tables, data, or 3+ outbound citations |
| A5: Schema Accuracy | 10 | Structured data (JSON-LD) |
| A6: Crawlability | 10 | Canonical tags, proper URLs |
| A7: UX & Performance | 8 | No CLS risk, fast load |
| A8: Discoverability | 6 | Sitemaps present |
| A9: Freshness | 5 | dateModified present |
| A10: AI Overviews | 4 | Citations block, chunkable structure |
| A11: Render Visibility | 10 | Content visible in static HTML (70%+ for score 3) |
GEO Checks (10 total)
| Check | Weight | What We Measure |
|---|---|---|
| G1: Citable Facts | 15 | Fact blocks, statistics, key takeaways |
| G2: Provenance | 15 | Author, publisher, citations, license schema |
| G3: Evidence Density | 12 | Citations block + 3+ outbound references |
| G4: AI Crawler Access | 12 | GPTBot, ClaudeBot, PerplexityBot allowed in robots.txt |
| G5: Chunkability | 10 | Clean HTML structure, semantic markup |
| G6: Stable URLs | 8 | Canonical fact URLs with anchors |
| G7: Dataset Links | 8 | Links to datasets, research, or sources hub |
| G8: Policy Transparency | 6 | License and reuse policy |
| G9: Update Hygiene | 7 | Changelog or version history |
| G10: Cluster Linking | 7 | Links to sources hub or related evidence |
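As an illustration of the G4 check, the sketch below uses Python's standard robots.txt parser to test whether the major AI crawlers may fetch a site's homepage. The function name is hypothetical, and how the result maps onto the 0-3 scale is not shown here.

```python
from urllib.robotparser import RobotFileParser

AI_CRAWLERS = ["GPTBot", "ClaudeBot", "PerplexityBot"]

def ai_crawler_access(site: str) -> dict[str, bool]:
    """Report whether each AI crawler may fetch the homepage per robots.txt."""
    rp = RobotFileParser()
    rp.set_url(f"{site.rstrip('/')}/robots.txt")
    rp.read()  # fetches and parses the live robots.txt
    return {bot: rp.can_fetch(bot, f"{site.rstrip('/')}/") for bot in AI_CRAWLERS}

# Example (performs a network request):
# print(ai_crawler_access("https://example.com"))
```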
Render Visibility Penalty
Sites with low render visibility (content only visible after JavaScript execution) receive penalties:
- AEO: -5 points if <30% of content in static HTML
- GEO: -5 to -10 points if <50% of content in static HTML
This reflects the reality that many AI crawlers (GPTBot, ClaudeBot, PerplexityBot) do not execute JavaScript.
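One rough way to approximate render visibility, assuming you already have both the static HTML and the fully rendered HTML (for example from a headless browser), is to compare the visible text in each. This stdlib-only sketch is an approximation for illustration, not our production renderer.

```python
from html.parser import HTMLParser

class _TextExtractor(HTMLParser):
    """Collects visible text, ignoring script, style, and noscript content."""
    def __init__(self):
        super().__init__()
        self._skip_depth = 0
        self.chunks = []

    def handle_starttag(self, tag, attrs):
        if tag in ("script", "style", "noscript"):
            self._skip_depth += 1

    def handle_endtag(self, tag):
        if tag in ("script", "style", "noscript") and self._skip_depth:
            self._skip_depth -= 1

    def handle_data(self, data):
        if not self._skip_depth and data.strip():
            self.chunks.append(data.strip())

def visible_text_length(markup: str) -> int:
    parser = _TextExtractor()
    parser.feed(markup)
    return len(" ".join(parser.chunks))

def render_visibility(static_html: str, rendered_html: str) -> float:
    """Share of the rendered page's visible text already present in static HTML (0-1)."""
    rendered_len = visible_text_length(rendered_html)
    if rendered_len == 0:
        return 1.0
    return min(1.0, visible_text_length(static_html) / rendered_len)
```

Under this approximation, a ratio below 0.30 would trigger the AEO penalty above, and a ratio below 0.50 the GEO penalty.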
Live Citation Testing
Beyond structural checks, we test how your brand appears in live LLM responses:
Query Generation
We use a context-aware prompt system to generate realistic branded and non-branded queries:
- Industry Detection: We classify your site's vertical using weighted signals, JSON-LD, and navigation taxonomy
- Branded Queries: 10 queries using your brand name and common aliases/nicknames
- Non-Branded Queries: 18 queries about your industry, products, and services (without brand name)
- Quality Gates: All queries are validated for brand leakage, relevance, and realism before testing (one gate is sketched below)
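As an example of a quality gate, a brand-leakage filter might reject any non-branded query that mentions the brand or one of its aliases. The function name and alias handling here are assumptions for illustration.

```python
import re

def passes_brand_leakage_gate(query: str, brand: str, aliases: list[str]) -> bool:
    """Reject non-branded queries that leak the brand name or an alias."""
    for term in [brand] + aliases:
        # Word-boundary match, case-insensitive, so "acme" doesn't match "macmillan".
        if re.search(rf"\b{re.escape(term)}\b", query, flags=re.IGNORECASE):
            return False
    return True

queries = ["best crm for small teams", "is Acme CRM worth it"]
non_branded = [q for q in queries if passes_brand_leakage_gate(q, "Acme", ["Acme CRM"])]
print(non_branded)  # ['best crm for small teams']
```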
LLM Sources Tested
- ChatGPT (OpenAI): GPT-4 with web browsing enabled
- Claude (Anthropic): Claude 3 with web search
- Perplexity: Real-time web search and citation
- Brave Search: AI-powered search results
Citation Metrics
For each source, we track:
- Citation rate (% of queries where your domain appears; see the sketch after this list)
- Branded vs non-branded performance
- Citation position and context
- Competitive gaps (queries where competitors appear but you don't)
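As a sketch, the citation rate per source is simply the share of test queries whose cited URLs include your domain. The data shape assumed below is illustrative, not our internal format.

```python
from urllib.parse import urlparse

def citation_rate(results: list[dict], domain: str) -> float:
    """Fraction of queries whose cited URLs include `domain`.

    Each result is assumed to look like:
    {"query": "...", "citations": ["https://www.example.com/page", ...]}
    """
    if not results:
        return 0.0

    def cites_domain(result: dict) -> bool:
        for url in result.get("citations", []):
            host = urlparse(url).hostname or ""
            if host == domain or host.endswith("." + domain):
                return True
        return False

    return sum(cites_domain(r) for r in results) / len(results)
```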
GEO Adjusted Score
The GEO Adjusted Score combines structural readiness with real-world citation performance:
- Base GEO Score (70% weight): Your structural score from the 10 GEO checks
- Citation Performance (30% weight): Weighted average of citation rates across ChatGPT, Claude, and Perplexity
- Formula: geo_adjusted = geo_raw × 0.7 + citation_bonus × 0.3
This rewards sites that are both structurally sound and actually appearing in LLM responses today.
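The formula translates directly into code. In the sketch below, the equal weighting of sources inside citation_bonus and the scaling to 0-100 are assumptions; the 70/30 split matches the description above.

```python
def geo_adjusted_score(geo_raw: float, citation_rates: dict[str, float]) -> float:
    """Combine the structural GEO score (0-100) with live citation performance.

    `citation_rates` holds per-source rates in 0-1, e.g.
    {"chatgpt": 0.20, "claude": 0.10, "perplexity": 0.35}.
    Assumption: sources are weighted equally and the bonus is scaled to 0-100.
    """
    if citation_rates:
        citation_bonus = 100 * sum(citation_rates.values()) / len(citation_rates)
    else:
        citation_bonus = 0.0
    return round(geo_raw * 0.7 + citation_bonus * 0.3, 1)

print(geo_adjusted_score(72, {"chatgpt": 0.20, "claude": 0.10, "perplexity": 0.35}))  # 56.9
```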
Bot Identity & Crawling
Our audit system respects website owners and follows best practices:
- User-Agent: OptiviewAuditBot/1.0 (+https://api.optiview.ai/bot)
- Robots.txt: We parse and respect all Allow/Disallow rules
- Crawl Delay: We honor Crawl-delay directives and implement exponential backoff
- Rate Limiting: Configurable delays between requests to avoid overloading servers
- Meta Robots: We respect noindex, nofollow, and noai tags
Full documentation: OptiviewAuditBot Documentation
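For site owners curious how the crawl delay and exponential backoff interact, here is a minimal illustrative fetch loop under those rules. The retry count, timeout, and default delay are assumptions; this is not the production crawler.

```python
import time
import urllib.request

BOT_UA = "OptiviewAuditBot/1.0 (+https://api.optiview.ai/bot)"

def polite_fetch(url: str, crawl_delay: float = 1.0, attempts: int = 4) -> bytes | None:
    """Fetch a URL with the audit bot's User-Agent, waiting `crawl_delay`
    seconds before each request and backing off exponentially on failures."""
    for attempt in range(attempts):
        time.sleep(crawl_delay * (2 ** attempt))  # 1x, 2x, 4x, 8x the base delay
        try:
            req = urllib.request.Request(url, headers={"User-Agent": BOT_UA})
            with urllib.request.urlopen(req, timeout=15) as resp:
                return resp.read()
        except OSError:
            continue  # retry with a longer delay
    return None
```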
Research Foundations
Our methodology is based on:
- Google's public guidance on AI Overviews
- Academic research on Generative Engines (arXiv:2404.16366)
- GPTBot user-agent documentation
- Cloudflare AI crawler documentation
- Live testing of thousands of queries across ChatGPT, Claude, Perplexity, and Brave Search
- Analysis of citation patterns across 18+ industry verticals
Audit Architecture
Our system combines multiple techniques for comprehensive assessment:
- Sitemap Discovery: Automatic detection from robots.txt and common locations (see the sketch after this list)
- Breadth-First Crawl: Intelligent link extraction prioritizing top-level navigation
- Dual-Mode Rendering: Static HTML + JavaScript rendering for SPA detection
- Schema Validation: JSON-LD parsing and validation against schema.org
- Industry Classification: Hybrid rule-based + AI embedding classifier (18+ verticals)
- Prompt Generation: LLM-native query generation with quality gates and fallback to industry templates
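As one concrete example of these steps, sitemap discovery from robots.txt with a fallback to common locations might look like the sketch below. The fallback paths and function name are assumptions for illustration.

```python
import urllib.request

COMMON_SITEMAPS = ["/sitemap.xml", "/sitemap_index.xml"]  # assumed fallback locations

def discover_sitemaps(site: str) -> list[str]:
    """Collect sitemap URLs declared in robots.txt, falling back to common paths."""
    base = site.rstrip("/")
    found = []
    try:
        with urllib.request.urlopen(f"{base}/robots.txt", timeout=10) as resp:
            for line in resp.read().decode("utf-8", errors="ignore").splitlines():
                if line.lower().startswith("sitemap:"):
                    url = line.split(":", 1)[1].strip()
                    if url:
                        found.append(url)
    except OSError:
        pass  # no robots.txt or unreachable: fall through to common locations
    return found or [f"{base}{path}" for path in COMMON_SITEMAPS]
```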
Test Corpus
We maintain a test corpus of reference pages to validate our scoring: aeo-geo-test-v1.csv
Privacy & Data Handling
We take data privacy seriously:
- We only analyze publicly accessible pages
- Audit data is tied to user accounts via magic link authentication
- We do not share audit results or citations data with third parties
- See our Privacy Policy for details
Content License
This methodology and our scoring guide are published under our content license for transparent reuse and citation.
Sources
For a complete list of references and citations, see our Sources Hub.