Optiview AI Visibility Methodology
Optiview provides a comprehensive AI visibility assessment combining structural audits (36 diagnostic checks) with live citation testing across ChatGPT, Claude, Perplexity, and Brave Search. Our methodology evaluates how well AI assistants can discover, understand, and cite your content.
Overview
Our assessment evaluates both structural readiness and real-world citation performance across 6 key categories:
- 36 Diagnostic Checks: 23 page-level and 13 site-level checks measuring AI visibility
- 6 Actionable Categories: Content & Clarity, Structure & Organization, Authority & Trust, Technical Foundations, Crawl & Discoverability, and Experience & Performance
- Live Citation Testing: Whether your brand actually appears in LLM responses today
- Weighted Scoring: Each check has an impact weight (1-15) reflecting its importance for AI visibility
What We Measure
- Content & Clarity: Clear, comprehensive content that answers user intent directly (FAQ coverage, Q&A scaffolds, CTA placement)
- Structure & Organization: Semantic markup, headings, internal linking, and entity graphs for AI understanding
- Authority & Trust: Signals of expertise, credibility, and entity authority (Organization schema, provenance)
- Technical Foundations: Core metadata, tags, and technical SEO elements (titles, descriptions, canonical tags, schema)
- Crawl & Discoverability: Sitemaps, robots.txt, AI bot access, and crawl efficiency
- Experience & Performance: Speed, mobile-readiness, and user experience metrics
- Live LLM Citations: Real queries using context-aware, industry-specific prompts against ChatGPT, Claude, Perplexity, and Brave
Scoring Framework
Optiview uses a weighted 0-100 scoring system where each check contributes based on its importance:
Check Structure
- Page-Level Checks (23 total): Run on every page analyzed, measuring content quality, structure, and technical implementation
- Site-Level Checks (13 total): Evaluate overall site properties like FAQ coverage, entity graph completeness, and crawl policies
Impact Levels
Each check is assigned an impact level that determines prioritization:
| Impact Level | Weight Range | Priority |
|---|---|---|
| High | 10-15 | Critical for AI visibility - address failures immediately |
| Medium | 6-9 | Important optimization opportunities |
| Low | 1-5 | Polish and refinement |
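The weight-to-impact mapping above is simple enough to express directly. The sketch below is a hypothetical illustration of that mapping, not Optiview's actual code:

```python
def impact_level(weight: int) -> str:
    """Map a check's impact weight (1-15) to its priority bucket,
    per the table above: 10-15 High, 6-9 Medium, 1-5 Low."""
    if weight >= 10:
        return "High"
    if weight >= 6:
        return "Medium"
    return "Low"
```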
Score Calculation
The overall score is calculated as:
- Weighted Sum: Each passing check contributes its full weight to the total
- Normalized to 100: Total points earned / Maximum possible points × 100
- Category Scores: Grouped into 6 categories for easier prioritization and actionable insights
- Pass/Warn Thresholds: Most checks use an 85% pass threshold and a 60% warn threshold
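The weighted-sum calculation can be sketched as follows. This is a minimal illustration of the normalization step, assuming each check reduces to a pass/fail flag and a weight; it is not Optiview's production implementation:

```python
def overall_score(checks: list[tuple[bool, int]]) -> float:
    """checks: list of (passed, weight) pairs.
    Each passing check contributes its full weight; the total is
    normalized to a 0-100 scale."""
    max_points = sum(weight for _, weight in checks)
    earned = sum(weight for passed, weight in checks if passed)
    return round(100 * earned / max_points, 1) if max_points else 0.0
```

For example, passing a weight-15 and a weight-10 check while failing a weight-8 check yields 25 of 33 possible points, roughly a 75.8 score.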
Check Types
Optiview employs four types of diagnostic checks, each optimized for different aspects of AI visibility:
| Check Type | Description | Example Checks |
|---|---|---|
| html_dom | Deterministic HTML analysis - fast, reliable structural checks | Title quality, meta descriptions, heading structure, FAQ presence, schema validation |
| llm | AI-assisted evaluation for semantic and content quality assessment | Topic depth & semantic coverage, content clarity analysis |
| aggregate | Site-level rollups that measure consistency and coverage across pages | FAQ coverage %, canonical correctness %, mobile-ready pages % |
| http | Robots.txt and sitemap validation via HTTP requests | Sitemap discoverability, AI bot access status |
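To make the `html_dom` check type concrete, here is a hypothetical single-H1 check built on Python's standard-library HTML parser. It illustrates the deterministic, structural style of check described above; the real checks are more involved:

```python
from html.parser import HTMLParser

class H1Counter(HTMLParser):
    """Count <h1> tags while streaming through an HTML document."""
    def __init__(self) -> None:
        super().__init__()
        self.h1_count = 0

    def handle_starttag(self, tag, attrs):
        if tag == "h1":
            self.h1_count += 1

def check_single_h1(html: str) -> str:
    """A deterministic html_dom-style check: exactly one H1 passes."""
    parser = H1Counter()
    parser.feed(html)
    return "pass" if parser.h1_count == 1 else "fail"
```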
Key Examples by Category
- Content & Clarity: Answer-first hero section (weight: 15), FAQ presence (8), Q&A scaffold (10), FAQ coverage site-wide (10)
- Structure & Organization: Single H1 tag (10), semantic heading structure (10), internal linking (7), H2 coverage ratio (8)
- Authority & Trust: Organization entity graph (10), entity graph adoption site-wide (10)
- Technical Foundations: Title quality (12), meta description (8), FAQPage schema (10), canonical correctness (8), OG tags (6)
- Crawl & Discoverability: No blocking robots directives (12), sitemap discoverability (6), AI bot access (10)
- Experience & Performance: Mobile viewport (8), page speed/LCP (7), Core Web Vitals hints (8)
For the complete list of all 36 checks with detailed documentation, examples, and how-to-fix guidance, see the Optiview Score Guide.
Live Citation Testing
Beyond structural checks, we test how your brand appears in live LLM responses:
Query Generation
We use a context-aware prompt system to generate realistic branded and non-branded queries:
- Industry Detection: Classify your site's vertical using weighted signals, JSON-LD, and navigation taxonomy
- Branded Queries: 10 queries using your brand name and common aliases/nicknames
- Non-Branded Queries: 18 queries about your industry, products, and services (without brand name)
- Quality Gates: All queries are validated for brand leakage, relevance, and realism before testing
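The brand-leakage gate on non-branded queries can be sketched as a simple substring screen. This is an assumed, simplified version of that quality gate (function name and signature are illustrative):

```python
def passes_brand_gate(query: str, brand: str, aliases: list[str]) -> bool:
    """Reject a non-branded query if the brand name or any known
    alias leaks into the query text (case-insensitive)."""
    q = query.lower()
    return not any(name.lower() in q for name in [brand, *aliases])
```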
LLM Sources Tested
- ChatGPT (OpenAI): GPT-4 with web browsing enabled
- Claude (Anthropic): Claude 3 with web search
- Perplexity: Real-time web search and citation
- Brave Search: AI-powered search results
Citation Metrics
For each source, we track:
- Citation rate (% of queries where your domain appears)
- Branded vs non-branded performance
- Citation position and context
- Competitive gaps (queries where competitors appear but you don't)
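Citation rate, the first metric above, reduces to a simple proportion. The sketch below assumes each query's result is a list of cited URLs; it is an illustration of the metric, not the production pipeline:

```python
from urllib.parse import urlparse

def citation_rate(results: list[list[str]], domain: str) -> float:
    """results: one list of cited URLs per query tested.
    Returns the percentage of queries whose citations include
    `domain` (or a subdomain of it)."""
    if not results:
        return 0.0
    hits = sum(
        any(urlparse(url).netloc.endswith(domain) for url in cited)
        for cited in results
    )
    return round(100 * hits / len(results), 1)
```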
Overall AI Visibility Score
Your overall score represents comprehensive AI visibility across all 36 checks:
- Structural Score (0-100): Weighted average across all 36 diagnostic checks
- Category Breakdown: Individual scores for each of the 6 categories help prioritize improvements
- Citation Performance: Tracked separately showing actual appearance in LLM responses
- Pass/Warn/Fail States: Clear status indicators for each check with actionable next steps
This holistic approach rewards sites that are both structurally sound and actually appearing in LLM responses today, while providing actionable insights for improvement in specific areas.
Bot Identity & Crawling
Our audit system respects website owners and follows best practices:
- User-Agent: `OptiviewAuditBot/1.0 (+https://api.optiview.ai/bot)`
- Robots.txt: We parse and respect all `Allow`/`Disallow` rules
- Crawl Delay: We honor `Crawl-delay` directives and implement exponential backoff
- Rate Limiting: Configurable delays between requests to avoid overloading servers
- Meta Robots: We respect `noindex`, `nofollow`, and `noai` tags
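Respecting `Disallow` rules and `Crawl-delay` directives can be done with Python's standard-library robots.txt parser, as in this minimal sketch (the robots.txt content and URLs here are made up for illustration):

```python
from urllib.robotparser import RobotFileParser

# A hypothetical robots.txt a site might serve to this bot.
ROBOTS_TXT = """\
User-agent: OptiviewAuditBot
Disallow: /private/
Crawl-delay: 5
"""

rp = RobotFileParser()
rp.parse(ROBOTS_TXT.splitlines())

agent = "OptiviewAuditBot/1.0 (+https://api.optiview.ai/bot)"
allowed = rp.can_fetch(agent, "https://example.com/public/page")   # True
blocked = rp.can_fetch(agent, "https://example.com/private/page")  # False
delay = rp.crawl_delay(agent)  # 5 seconds between requests
```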
Full documentation: OptiviewAuditBot Documentation
Research Foundations
Our methodology is based on:
- Google's public guidance on AI Overviews
- Academic research on Generative Engines (arXiv:2404.16366)
- GPTBot user-agent documentation
- Cloudflare AI crawler documentation
- Live testing of thousands of queries across ChatGPT, Claude, Perplexity, and Brave Search
- Analysis of citation patterns across 18+ industry verticals
Audit Architecture
Our system combines multiple techniques for comprehensive assessment:
- Sitemap Discovery: Automatic detection from robots.txt and common locations
- Breadth-First Crawl: Intelligent link extraction prioritizing top-level navigation
- Dual-Mode Rendering: Static HTML + JavaScript rendering for SPA detection
- Schema Validation: JSON-LD parsing and validation against schema.org
- Industry Classification: Hybrid rule-based + AI embedding classifier (18+ verticals)
- Prompt Generation: LLM-native query generation with quality gates and fallback to industry templates
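The sitemap-discovery step above can be sketched as a scan of robots.txt `Sitemap:` directives with a fallback to the conventional location. This is an assumed simplification of the actual discovery logic:

```python
def discover_sitemaps(robots_txt: str, base_url: str) -> list[str]:
    """Collect sitemap URLs declared in robots.txt; if none are
    declared, fall back to the conventional /sitemap.xml path."""
    declared = [
        line.split(":", 1)[1].strip()
        for line in robots_txt.splitlines()
        if line.lower().startswith("sitemap:")
    ]
    return declared or [base_url.rstrip("/") + "/sitemap.xml"]
```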
Test Corpus
We maintain a test corpus of reference pages to validate our scoring: aeo-geo-test-v1.csv
Privacy & Data Handling
We take data privacy seriously:
- We only analyze publicly accessible pages
- Audit data is tied to user accounts via magic link authentication
- We do not share audit results or citations data with third parties
- See our Privacy Policy for details
Content License
This methodology and our scoring guide are published under our content license for transparent reuse and citation.
Sources
For a complete list of references and citations, see our Sources Hub.