Audit Process & Scoring
Understand how our audits work, what the Visibility Score measures, and how to interpret your results across 36 diagnostic criteria.
What happens when I run an audit?
When you run an audit, Optiview performs a comprehensive analysis of your site's AI readiness:
- Discovery: We crawl your site's key pages (up to 120 by default), prioritizing important content from your sitemap and internal links
- Technical Analysis: We examine titles, metadata, canonical tags, mobile optimization, language tags, and crawl policies
- Structural Assessment: We analyze heading hierarchy, internal linking, FAQ presence, and entity graph completeness
- Content Evaluation: We check answerability, clarity, and question-based content structure
- Authority Signals: We verify organization schema, logo presence, and trust indicators
- Citation Testing: We query AI models (ChatGPT, Claude, Perplexity, Brave AI) with industry-specific questions to see if your content gets cited
- Scoring & Recommendations: We calculate your composite score and prioritize fixes by impact
Typical completion time: 30-60 seconds
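For readers who like to see the flow in code, here is a minimal sketch of how those stages could be chained. The stage functions, dictionary keys, and numbers below are illustrative placeholders, not Optiview's actual internals.

```python
# Minimal, illustrative sketch of the audit flow described above.
# Stage names, keys, and the hard-coded values are placeholders.

def discovery(state):   return {**state, "pages": ["/", "/pricing", "/faq"]}    # crawl up to 120 pages
def technical(state):   return {**state, "technical": 82}                       # titles, canonicals, viewport, ...
def structural(state):  return {**state, "structure": 74}                       # headings, links, FAQ, entity graph
def content(state):     return {**state, "content": 68}                         # answerability, clarity
def authority(state):   return {**state, "authority": 90}                       # schema, logo, trust signals
def citations(state):   return {**state, "cited_queries": 6}                    # ChatGPT, Claude, Perplexity, Brave AI
def scoring(state):     return {**state, "composite": 77, "fixes": ["add FAQ schema"]}

PIPELINE = [discovery, technical, structural, content, authority, citations, scoring]

def run_audit(domain: str) -> dict:
    state = {"domain": domain}
    for stage in PIPELINE:      # stages run in the order described above
        state = stage(state)
    return state

print(run_audit("example.com"))
```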
What are the main components of the Visibility Score?
Your Optiview Composite Score (0-100) combines 36 diagnostic criteria across 6 categories:
1. Technical Foundations (30%)
- Title tag quality and brand presence
- Meta description completeness
- Canonical URL correctness
- Mobile viewport configuration
- Language and region tags
- Open Graph tag coverage
2. Structure & Organization (25%)
- H1 presence and uniqueness
- Semantic heading hierarchy
- Internal linking health
- H2 coverage ratio
- Entity graph completeness
3. Content & Clarity (20%)
- Answer-first hero sections
- FAQ presence and structure
- Contact CTA placement
- Related questions coverage
- Topic depth and semantic richness
4. Authority & Trust (15%)
- Organization entity graph (JSON-LD)
- Logo and brand signals
- Social profile linking (sameAs)
5. Crawl & Discoverability (10%)
- Noindex/robots configuration
- Sitemap availability and quality
- AI bot access policies
6. Performance & Experience (10%)
- Core Web Vitals optimization hints
- Mobile responsiveness
- Page speed (LCP)
How it's calculated: Each of the 36 criteria is scored 0-100 based on detected attributes. Criteria are then weighted by category importance and impact level (High/Medium/Low) to produce your composite score.
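As a rough illustration, the sketch below shows one way such a weighted composite could be computed. The category weights mirror the list above; the impact multipliers and the normalization by total weight are assumptions, not Optiview's published formula.

```python
# Hedged sketch of assembling a composite score from per-criterion scores.
# Category weights follow the list above; impact multipliers are assumed.

CATEGORY_WEIGHTS = {
    "technical": 30, "structure": 25, "content": 20,
    "authority": 15, "crawl": 10, "performance": 10,
}
IMPACT_MULTIPLIER = {"High": 1.0, "Medium": 0.6, "Low": 0.3}  # assumed values


def composite_score(criteria: list[dict]) -> float:
    """criteria: [{"score": 0-100, "category": ..., "impact": ...}, ...]"""
    weighted_sum = 0.0
    total_weight = 0.0
    for c in criteria:
        weight = CATEGORY_WEIGHTS[c["category"]] * IMPACT_MULTIPLIER[c["impact"]]
        weighted_sum += c["score"] * weight
        total_weight += weight
    return round(weighted_sum / total_weight, 1) if total_weight else 0.0


print(composite_score([
    {"score": 90, "category": "technical", "impact": "High"},
    {"score": 60, "category": "content", "impact": "Medium"},
    {"score": 40, "category": "crawl", "impact": "Low"},
]))  # -> a 0-100 composite, weighted toward high-impact technical criteria
```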
What is a good Visibility Score?
Scores are calibrated across thousands of audits. Here's how to interpret yours:
90–100 Excellent
Frequently cited and well-structured. Your content is highly discoverable by AI models. Focus on maintaining quality and expanding coverage to new topics.
70–89 Strong
Well-optimized with room for depth or authority improvements. You're competitive but may lose citations to sites with better structure or more comprehensive answers.
50–69 Average
Needs clearer data or improved crawl structure. AI models can find your content but may struggle to interpret or trust it. Focus on structured data, schema, and answerability.
Below 50 Low Visibility
Limited AI crawl access or significant technical issues. Your content is largely invisible to LLMs. Address critical technical foundations first.
Context matters: A 75 score for a new startup is excellent. The same score for an established publisher might indicate missed opportunities. Compare against competitors in your industry and track improvement over time.
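In code, the bands above amount to a simple threshold lookup; the sketch below restates them, with the advice condensed from the descriptions.

```python
# Mapping from a composite score to the interpretation bands above.

def interpret(score: float) -> str:
    if score >= 90:
        return "Excellent: frequently cited and well-structured; maintain quality and expand coverage"
    if score >= 70:
        return "Strong: well-optimized; deepen content or authority to win more citations"
    if score >= 50:
        return "Average: improve structured data, schema, and answerability"
    return "Low visibility: fix crawl access and technical foundations first"


for s in (95, 75, 55, 35):
    print(s, "->", interpret(s))
```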
How often should I re-audit?
Recommended frequency: Monthly, or after major content updates.
Visibility scores can shift as:
- AI models refresh their training data (typically monthly or quarterly)
- New AI assistants emerge with different citation preferences
- Competitors improve their own AI optimization
- You publish new content or update existing pages
Pro tip: Run an audit before and after implementing fixes to measure the impact of your optimizations.
What's included in the Executive Summary Report?
Every audit generates a comprehensive Executive Summary Report designed for sharing with stakeholders, leadership, or clients. The report includes:
- Cover Page: Overall composite score, citation rate, audit date, and key metrics at a glance
- Score Breakdown: Category-by-category analysis with strengths, opportunities, and affected pages
- Priority Fixes: Top issues ranked by weighted impact, with specific recommendations and expected improvement
- Site-Level Diagnostics: Site-wide checks with status indicators (pass/warn/fail) and actionable guidance
- Successful Citations: 8-10 real queries where your content is cited by AI models, including source and cited URL
- Missed Opportunities: 8+ queries where you should appear but don't, with reasons and fix recommendations
- Page Insights: Top-performing pages, pages needing attention, and quick wins for immediate impact
- PDF Export: One-click download for offline sharing, presentations, or archiving
Professional & Actionable: Reports are designed to be client-ready and stakeholder-friendly, with clear explanations of technical issues and specific next steps. Perfect for justifying AI visibility investments or demonstrating progress to leadership.
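If you consume audit results programmatically, the report's sections map naturally onto a simple data structure. The sketch below only illustrates that shape; the field names and sample values are assumptions, not an Optiview export format.

```python
# Illustrative shape of the Executive Summary Report sections listed above.
# Field names mirror the report outline; values here are placeholders.

from dataclasses import dataclass, field


@dataclass
class ExecutiveSummaryReport:
    composite_score: float
    citation_rate: float
    audit_date: str
    score_breakdown: dict = field(default_factory=dict)       # category -> strengths / opportunities
    priority_fixes: list = field(default_factory=list)        # ranked by weighted impact
    site_diagnostics: list = field(default_factory=list)      # pass / warn / fail checks
    successful_citations: list = field(default_factory=list)  # 8-10 cited queries
    missed_opportunities: list = field(default_factory=list)  # 8+ queries with fix recommendations
    page_insights: dict = field(default_factory=dict)         # top pages, needs-attention, quick wins


report = ExecutiveSummaryReport(composite_score=77.0, citation_rate=0.42, audit_date="2025-01-15")
print(report.composite_score, report.audit_date)
```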
How are industry-specific citation queries generated?
Optiview uses 200+ industry taxonomies to generate realistic, relevant test queries that reflect how users actually search for information in your space.
How It Works:
- Industry Detection: We analyze your domain, content, and schema to classify your site using a hierarchical dot-slug taxonomy (e.g., health.pharma.brand, travel.air.commercial, retail.grocery)
- Template Selection: Based on your industry, we select appropriate query templates (branded and non-branded) that match real user intent
- Dynamic Placeholder Replacement: Templates include placeholders like {brand}, {product}, {category}, {city}, {competitor} that get replaced with your actual data (see the sketch after this list)
- Query Validation: Queries are filtered for realism, avoiding hallucinated products or nonsensical phrasing
- Multi-Source Testing: Each query is tested across ChatGPT, Claude, and Perplexity to measure citation coverage
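A hedged sketch of the template-and-placeholder step is shown below. The travel.air.commercial slug and the template strings come from this section's examples; the other slugs and the replacement logic are illustrative assumptions.

```python
# Hedged sketch of the template + placeholder mechanism described above.
# Taxonomy slugs and templates are illustrative, not Optiview's actual data.

TEMPLATES = {
    "retail.fashion": ["{brand} sizing guide", "Best {category} from {brand}", "{brand} return policy"],
    "travel.air.commercial": ["{brand} baggage policy", "{brand} customer service number"],
    "saas.b2b": ["{brand} pricing plans", "{brand} vs {competitor} features"],
}


def generate_queries(industry: str, site_data: dict) -> list[str]:
    queries = []
    for template in TEMPLATES.get(industry, []):
        try:
            queries.append(template.format(**site_data))  # fill {brand}, {category}, {competitor}, ...
        except KeyError:
            continue  # skip templates whose placeholders we can't fill (the validation step)
    return queries


print(generate_queries("retail.fashion",
                       {"brand": "Acme Apparel", "category": "denim jackets", "competitor": "Contoso"}))
```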
Industry Examples:
- Healthcare (Pharma): "prescribing information for {brand}", "{brand} side effects", "{brand} vs {competitor} comparison"
- E-commerce (Fashion): "{brand} sizing guide", "Best {category} from {brand}", "{brand} return policy"
- B2B SaaS: "{brand} pricing plans", "{brand} integrations", "{brand} vs {competitor} features"
- Travel (Airlines): "{brand} baggage policy", "{brand} customer service number", "Check-in for {brand} flights"
- Education (Higher Ed): "{brand} admissions requirements", "{brand} tuition costs", "{brand} vs {competitor} rankings"
Why Industry-Specific Matters: Generic queries like "tell me about {domain}" don't reflect real user behavior. Our taxonomy ensures you're tested on the exact queries your potential customers are actually asking AI assistants.
What check types does Optiview use?
Our 36 diagnostic criteria use four different check types, each optimized for specific aspects of AI visibility:
- html_dom Checks (23 page-level): Deterministic HTML analysis using DOM parsing. These checks examine your rendered HTML for specific elements like title tags, meta descriptions, H1 tags, FAQ markup, internal links, and more. Fast, accurate, and consistent.
- aggregate Checks (10 site-level): Site-wide rollups that combine page-level data to measure overall coverage. Examples include FAQ presence percentage, entity graph completeness, and H2 coverage ratio.
- http Checks (2 site-level): HTTP-level validation of robots.txt policies, sitemap availability, and crawl access for AI bots (GPTBot, Claude-Web, etc.).
- llm Checks (1 experimental): AI-assisted evaluation for subjective criteria that require human-like judgment. Currently used for answerability assessment.
Transparent & Reproducible: You can see exactly which check type was used for each criterion in the Score Guide. This transparency helps you understand how we arrive at each score and what you can do to improve.
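To make the check types concrete, here is a hedged, standard-library sketch of what an html_dom-style check and an http-style check could look like. It is an illustration only, not Optiview's actual check implementations.

```python
# Illustrative html_dom-style and http-style checks using only the Python
# standard library. Not Optiview's real checks.

from html.parser import HTMLParser
import urllib.robotparser


class TitleMetaCheck(HTMLParser):
    """html_dom-style check: does the page have a <title> and a meta description?"""

    def __init__(self):
        super().__init__()
        self.in_title = False
        self.title = ""
        self.has_meta_description = False

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "title":
            self.in_title = True
        if tag == "meta" and (attrs.get("name") or "").lower() == "description" and attrs.get("content"):
            self.has_meta_description = True

    def handle_endtag(self, tag):
        if tag == "title":
            self.in_title = False

    def handle_data(self, data):
        if self.in_title:
            self.title += data


def check_page(html: str) -> dict:
    parser = TitleMetaCheck()
    parser.feed(html)
    return {"title_present": bool(parser.title.strip()),
            "meta_description_present": parser.has_meta_description}


def check_ai_bot_access(robots_txt_url: str, page_url: str, bot: str = "GPTBot") -> bool:
    """http-style check: is the AI bot allowed to fetch the page per robots.txt?"""
    rp = urllib.robotparser.RobotFileParser()
    rp.set_url(robots_txt_url)
    rp.read()  # fetches and parses robots.txt over HTTP
    return rp.can_fetch(bot, page_url)


print(check_page("<html><head><title>Acme</title>"
                 "<meta name='description' content='Widgets'></head></html>"))
# check_ai_bot_access("https://example.com/robots.txt", "https://example.com/", "GPTBot")
```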