Methodology

How we collect, analyze, and rank AI visibility data

The AI Visibility Index is built on a rigorous, transparent methodology designed to accurately measure which brands and domains dominate conversational AI platforms. This page details our data collection processes, ranking calculations, and quality controls.

Overview & Metrics

Our index tracks two fundamental metrics that determine AI visibility:

Brand Mentions

We measure how frequently companies are recommended or referenced in ChatGPT responses across a range of user queries, capturing direct brand awareness and recommendation frequency in conversational AI.

  • Mention Frequency: Raw count of how many times a brand appears in responses
  • Visibility Percentage: The percentage of queries where a brand is mentioned
  • Ranking Position (coming soon): Average position in the response when a brand is mentioned
  • Context Quality (coming soon): Whether a brand is mentioned positively, neutrally, or with caveats
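To make the two live metrics concrete, here is a minimal sketch of how mention frequency and visibility percentage could be computed from a batch of collected responses. The function name and plain substring matching are illustrative only; the production pipeline uses entity extraction rather than text search.

```python
from collections import Counter

def brand_metrics(responses, brands):
    """Illustrative computation of the two core brand metrics.

    responses: list of response texts, one per query
    brands: list of brand names to track
    (Substring matching is a stand-in for real entity extraction.)
    """
    mentions = Counter()            # raw mention count per brand
    queries_with_brand = Counter()  # queries in which the brand appears at all
    for text in responses:
        lower = text.lower()
        for brand in brands:
            count = lower.count(brand.lower())
            if count:
                mentions[brand] += count
                queries_with_brand[brand] += 1
    total = len(responses)
    return {
        b: {
            "mention_frequency": mentions[b],
            "visibility_pct": round(100 * queries_with_brand[b] / total, 1),
        }
        for b in brands
    }
```

Note that mention frequency can exceed the number of queries (a brand may appear several times in one response), while visibility percentage is capped at 100%.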

Source Citations

We track which domains ChatGPT cites as authoritative sources when providing information. These citations influence the credibility and context of AI-generated responses.

  • Citation Frequency: How often a domain is cited as a source
  • Citation Impact: Percentage of queries influenced by citations from this domain
  • Domain Authority: Classification as publisher, business, social media, or other
  • Content Relevance (coming soon): How well citations align with query intent
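Citation metrics aggregate at the domain level, so cited URLs are first reduced to their domains. A rough sketch, assuming the collected data is a list of cited-URL lists (one list per query); the input shape is hypothetical:

```python
from collections import Counter
from urllib.parse import urlparse

def citation_metrics(cited_urls_per_query):
    """Domain-level citation frequency and citation impact.

    cited_urls_per_query: list of URL lists, one per query (hypothetical shape).
    Citation impact counts each domain at most once per query.
    """
    counts = Counter()          # total citations per domain
    queries_citing = Counter()  # queries citing the domain at least once
    for urls in cited_urls_per_query:
        domains = [urlparse(u).netloc.removeprefix("www.") for u in urls]
        for d in set(domains):
            queries_citing[d] += 1
        counts.update(domains)
    total = len(cited_urls_per_query)
    impact = {d: round(100 * n / total, 1) for d, n in queries_citing.items()}
    return counts, impact
```

Deduplicating per query (the `set(domains)` step) keeps citation impact from being inflated when one response cites several pages from the same domain.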

Together, these metrics reveal the complete picture of AI marketplace dominance: not just which brands are mentioned, but which sources shape the conversation around entire industries.

Data Collection

We collect data directly from the actual ChatGPT interface to ensure authentic, representative results.

Real ChatGPT Interface

Unlike competitors who use API approximations, we collect data using the actual ChatGPT web interface that real users access. This ensures our data reflects authentic user experiences, including:

  • The same response quality and formatting users see
  • Real-world citation patterns and source links
  • Actual brand mentions and recommendations

Query Group Design

Each category includes 100-200 carefully curated prompts representing genuine user intent:

  • Discovery Queries: "What are the best [category] tools?" - Broad exploration
  • Use Case Queries: "Best [category] for [specific need]" - Targeted recommendations
  • Problem-Solving Queries: "How to [solve X] with [category]" - Solution-oriented
  • Comparison Queries (coming soon): "Compare [brand A] vs [brand B]" - Direct comparisons
  • Feature Queries (coming soon): "[Category] with [specific feature]" - Feature-specific searches
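The query types above follow a templated structure, which could be sketched as a small expansion step. The template strings and example inputs here are hypothetical, not our actual prompt set:

```python
# Hypothetical templates mirroring the query types listed above.
TEMPLATES = {
    "discovery": "What are the best {category} tools?",
    "use_case": "Best {category} for {need}",
    "problem_solving": "How to {problem} with {category}",
}

def build_query_set(category, needs, problems):
    """Expand the templates into a category's prompt list."""
    queries = [TEMPLATES["discovery"].format(category=category)]
    queries += [TEMPLATES["use_case"].format(category=category, need=n)
                for n in needs]
    queries += [TEMPLATES["problem_solving"].format(category=category, problem=p)
                for p in problems]
    return queries
```

In practice the curated sets also include hand-written prompts, so the 100-200 queries per category are not purely template-generated.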

Geographic Localization

Data is collected from specific geographic locations to reflect regional variations:

  • Primary Region: United States (majority of data)
  • IP Geo-location: Requests originate from target region IPs
  • Language Settings: English (US) interface and queries
  • Future Expansion: UK, Canada, Australia, and other English-speaking markets coming soon

Data Integrity

Our infrastructure runs automated scans of all categories:

  • Scan Frequency: Categories scanned every 7 days
  • Time Distribution: Scans distributed throughout the day to avoid temporal bias
  • Session Management: Fresh sessions for each category to prevent cross-contamination
  • Rate Limiting: Responsible query pacing to respect platform guidelines
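The scheduling rules above (weekly cadence, times spread across the day) can be sketched as follows. This is an illustrative scheduler only; function names and the anchor timestamp are assumptions:

```python
import random
from datetime import datetime, timedelta

def schedule_scans(categories, start, period_days=7, seed=None):
    """Spread category scans across the period and across the day.

    Each category gets a day offset within the period and a random
    minute of the day, avoiding the temporal bias of a fixed scan hour.
    """
    rng = random.Random(seed)
    schedule = {}
    for i, cat in enumerate(categories):
        day_offset = i % period_days            # distribute across the week
        minute_of_day = rng.randrange(24 * 60)  # random time of day
        schedule[cat] = start + timedelta(days=day_offset, minutes=minute_of_day)
    return schedule
```

Fresh sessions per category and rate limiting would sit in the scan runner itself, not in the scheduler.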

Processing & Quality

We transform raw ChatGPT responses into structured, validated rankings through multiple processing and quality control stages.

Entity Extraction

We extract and identify all companies and solutions mentioned in ChatGPT responses:

  • Company Names: Every brand, product, and service recommended in responses
  • Source Domains: All URLs and domains cited in responses
  • Manual Review: Human verification to resolve ambiguous names and acronyms
  • Name Standardization: Consistent naming across variations (e.g., "HubSpot" vs "Hubspot")

Rolling 30-Day Window: All metrics represent the most recent 30 days of data, ensuring current relevance while smoothing daily volatility. As new data comes in, the oldest data ages out, creating a continuously fresh snapshot.
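The rolling window behaves like a simple cutoff filter over timestamped records; as `now` advances, old records age out on the next computation. The record shape here is a hypothetical simplification:

```python
from datetime import datetime, timedelta

def rolling_window(records, now, days=30):
    """Keep only records inside the trailing window.

    records: list of (timestamp, payload) tuples (hypothetical shape).
    Anything older than `now - days` ages out of the metrics.
    """
    cutoff = now - timedelta(days=days)
    return [(ts, payload) for ts, payload in records if ts >= cutoff]
```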

Transparency & Limitations

We maintain full transparency about our data collection process and acknowledge the boundaries of our methodology.

Our Transparency Commitments

Specifically, we commit to the following:

  • Last Update Timestamps: Every category shows exact last update time
  • Query Visibility: Users can view the full query set for each category
  • Configuration Display: Location, keywords, and scan settings shown on each page
  • Changelog: Public log of methodology changes and improvements
  • Citation Sources: Full domain attribution for all cited sources

Known Limitations

While we strive for comprehensive accuracy, users should be aware of certain limitations:

  • Single Model Focus: Currently tracks ChatGPT exclusively. Other AI platforms (Claude, Gemini, Perplexity) may show different results. We plan to expand to multiple models in 2025.
  • Query Selection Bias: Results reflect our curated query set, which may not capture every possible user intent in a category. We continuously refine queries based on user feedback and real search patterns.
  • Natural Language Variability: AI responses can vary based on phrasing, context, and conversation history, introducing some measurement variance. Our 30-day rolling window helps smooth these variations.
  • Brand Name Ambiguity: Common names or acronyms may occasionally be conflated with unrelated entities. We use manual disambiguation to minimize false positives.
  • Citation Attribution: This free index tracks domain-level citations. For specific page URLs and content attribution, upgrade to our paid platform.
  • Geographic Limitations: Current focus on US market. Results may differ in other regions due to localized AI training data and regional preferences.

Despite these limitations, our methodology provides the most comprehensive and actionable view of AI visibility available today. We continuously improve our processes and welcome feedback.

Future Roadmap

We're continuously evolving our platform to provide more comprehensive AI visibility insights.

Planned Enhancements

We're actively working on the following improvements:

  • Multi-Model Support: Tracking Claude, Gemini, Perplexity, and other AI platforms
  • International Expansion: UK, Canada, Australia, and non-English language markets
  • Sentiment Analysis: Measuring positive vs. negative brand mentions
  • Historical Trending: Long-term visibility trends and market share shifts
  • Daily Updates: Moving toward daily updates for top categories
  • API Access: Programmatic access to visibility data for enterprises

Questions About Our Methodology?

We're committed to transparency and continuous improvement. If you have questions about our data collection, ranking calculations, or quality controls, please reach out.

Contact Us