Audit Process & Scoring

Understand how our audits work, what the Visibility Score measures, and how to interpret your results across 36 diagnostic criteria.

What happens when I run an audit?

When you run an audit, Optiview performs a comprehensive analysis of your site's AI readiness:

  1. Discovery: We crawl your site's key pages (up to 120 by default), prioritizing important content from your sitemap and internal links
  2. Technical Analysis: We examine titles, metadata, canonical tags, mobile optimization, language tags, and crawl policies
  3. Structural Assessment: We analyze heading hierarchy, internal linking, FAQ presence, and entity graph completeness
  4. Content Evaluation: We check answerability, clarity, and question-based content structure
  5. Authority Signals: We verify organization schema, logo presence, and trust indicators
  6. Citation Testing: We query AI models (ChatGPT, Claude, Perplexity, Brave AI) with industry-specific questions to see if your content gets cited
  7. Scoring & Recommendations: We calculate your composite score and prioritize fixes by impact

Typical completion time: 30–60 seconds
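The discovery step above can be sketched as a prioritized merge of candidate URLs. This is a minimal illustration, not Optiview's implementation: the 120-page default cap and the sitemap-first ordering come from the text, while the function name and inputs are hypothetical.

```python
def select_pages(sitemap_urls, internal_link_urls, limit=120):
    """Merge candidate URLs for crawling, capped at `limit`.

    Sketch of the Discovery step: sitemap pages are treated as higher
    priority than pages found only via internal links, and duplicates
    are dropped. The 120-page default comes from the audit description;
    everything else here is illustrative.
    """
    seen = set()
    selected = []
    for url in list(sitemap_urls) + list(internal_link_urls):
        if url not in seen:
            seen.add(url)
            selected.append(url)
        if len(selected) == limit:
            break
    return selected

pages = select_pages(
    ["https://example.com/", "https://example.com/pricing"],
    ["https://example.com/pricing", "https://example.com/blog/post-1"],
)
# sitemap URLs come first; the overlapping /pricing URL appears once
```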

What are the main components of the Visibility Score?

Your Optiview Composite Score (0–100) combines 36 diagnostic criteria across 6 categories:

1. Technical Foundations (30%)
2. Structure & Organization (25%)
3. Content & Clarity (20%)
4. Authority & Trust (15%)
5. Crawl & Discoverability (10%)
6. Performance & Experience (10%)

How it's calculated: Each of the 36 criteria is scored 0–100 based on detected attributes. Criteria are then weighted by category importance and impact level (High/Medium/Low) to produce your composite score.
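The weighting described above can be sketched as a normalized weighted average. The category percentages are taken from the list above (and normalized so they sum to 1); the impact multipliers, category keys, and example criterion scores are hypothetical.

```python
# Category weights from the list above; normalized inside composite_score.
CATEGORY_WEIGHTS = {
    "technical": 30, "structure": 25, "content": 20,
    "authority": 15, "crawl": 10, "performance": 10,
}
# Impact multipliers are illustrative, not Optiview's actual values.
IMPACT_MULTIPLIER = {"High": 1.0, "Medium": 0.6, "Low": 0.3}

def composite_score(criteria):
    """criteria: list of (category, impact, score_0_to_100) tuples."""
    weighted = 0.0
    total_weight = 0.0
    for category, impact, score in criteria:
        w = CATEGORY_WEIGHTS[category] * IMPACT_MULTIPLIER[impact]
        weighted += w * score
        total_weight += w
    return round(weighted / total_weight) if total_weight else 0

score = composite_score([
    ("technical", "High", 90),   # strong technical foundations
    ("content", "Medium", 60),   # weaker answerability
])
# → 81: the high-impact technical criterion dominates the average
```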

What is a good Visibility Score?

Scores are calibrated across thousands of audits. Here's how to interpret yours:

90–100 Excellent

Frequently cited and well-structured. Your content is highly discoverable by AI models. Focus on maintaining quality and expanding coverage to new topics.

70–89 Strong

Well-optimized with room for depth or authority improvements. You're competitive but may lose citations to sites with better structure or more comprehensive answers.

50–69 Average

Needs clearer data or improved crawl structure. AI models can find your content but may struggle to interpret or trust it. Focus on structured data, schema, and answerability.

Below 50 Low Visibility

Limited AI crawl access or significant technical issues. Your content is largely invisible to LLMs. Address critical technical foundations first.

Context matters: A 75 score for a new startup is excellent. The same score for an established publisher might indicate missed opportunities. Compare against competitors in your industry and track improvement over time.
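The interpretation bands above reduce to a small threshold lookup, shown here as a sketch using the exact cutoffs from the text:

```python
def score_band(score):
    """Map a 0-100 Visibility Score to the interpretation bands above."""
    if score >= 90:
        return "Excellent"
    if score >= 70:
        return "Strong"
    if score >= 50:
        return "Average"
    return "Low Visibility"

score_band(75)  # → "Strong"
```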

How often should I re-audit?

Recommended frequency: Monthly, or after major content updates.

Visibility scores can shift as your content changes, your site structure evolves, and AI models update how they crawl and cite sources.

Pro tip: Run an audit before and after implementing fixes to measure the impact of your optimizations.

What's included in the Executive Summary Report?

Every audit generates a comprehensive Executive Summary Report designed for sharing with stakeholders, leadership, or clients.

Professional & Actionable: Reports are designed to be client-ready and stakeholder-friendly, with clear explanations of technical issues and specific next steps. Perfect for justifying AI visibility investments or demonstrating progress to leadership.

How are industry-specific citation queries generated?

Optiview uses 200+ industry taxonomies to generate realistic, relevant test queries that reflect how users actually search for information in your space.

How it works:

  1. Industry Detection: We analyze your domain, content, and schema to classify your site using hierarchical dot-slug taxonomy (e.g., health.pharma.brand, travel.air.commercial, retail.grocery)
  2. Template Selection: Based on your industry, we select appropriate query templates (branded and non-branded) that match real user intent
  3. Dynamic Placeholder Replacement: Templates include placeholders like {brand}, {product}, {category}, {city}, {competitor} that get replaced with your actual data
  4. Query Validation: Queries are filtered for realism, avoiding hallucinated products or nonsensical phrasing
  5. Multi-Source Testing: Each query is tested across ChatGPT, Claude, and Perplexity to measure citation coverage
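Step 3 above (dynamic placeholder replacement) can be sketched with Python's string formatting. The placeholder names mirror those mentioned in the text; the template wording and data values are hypothetical.

```python
def fill_template(template, data):
    """Replace {brand}-style placeholders with site-specific values.

    Unknown placeholders are left intact so a later validation pass
    (step 4) can filter out incomplete queries.
    """
    class _KeepMissing(dict):
        def __missing__(self, key):
            return "{" + key + "}"

    return template.format_map(_KeepMissing(data))

query = fill_template(
    "is {brand} {product} better than {competitor}?",
    {"brand": "Acme", "product": "CloudSync"},
)
# → "is Acme CloudSync better than {competitor}?"
```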


Why Industry-Specific Matters: Generic queries like "tell me about {domain}" don't reflect real user behavior. Our taxonomy ensures you're tested on the exact queries your potential customers are actually asking AI assistants.

What check types does Optiview use?

Our 36 diagnostic criteria use four different check types, each optimized for a specific aspect of AI visibility.

Transparent & Reproducible: You can see exactly which check type was used for each criterion in the Score Guide. This transparency helps you understand how we arrive at each score and what you can do to improve.