Methodology Changelog

AI Visibility Score uses a standardized, centrally versioned methodology. Historical runs retain the methodology version that was active at the time of execution.

v2.6

Coverage-first blended visibility scoring

Score affecting

Effective Mar 15, 2026

- Headline AI Visibility Score now blends coverage and mention quality.
- Formula: score = (0.70 * coverage + 0.30 * quality_when_mentioned) * 100.
- Coverage answers the primary question: does the brand show up?
- Mention quality remains a modifier instead of dominating the headline score.
- Competitor density signal softened from 1/(1+c) to 1/sqrt(1+c).
- Strict quality-adjusted score is still available for diagnostic detail.
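A minimal sketch of the v2.6 blend above, assuming per-answer records with a mention flag and a 0-1 mention-quality value; function and field names are illustrative, not the production implementation:

    def blended_visibility_score(answers):
        # answers: list of dicts with "mentioned" (bool) and "mention_quality" (0-1).
        if not answers:
            return 0.0
        mentioned = [a for a in answers if a["mentioned"]]
        coverage = len(mentioned) / len(answers)          # does the brand show up?
        quality_when_mentioned = (
            sum(a["mention_quality"] for a in mentioned) / len(mentioned)
            if mentioned else 0.0
        )
        # Coverage dominates the headline; mention quality acts as a modifier.
        return (0.70 * coverage + 0.30 * quality_when_mentioned) * 100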

v2.5

Rank-aware visibility weighting (list position signal)

Score affecting

Effective Feb 25, 2026

- Added list-rank extraction metadata for mention-level scoring context.
- Rank extraction uses deterministic parsing first; optional low-cost LLM fallback for ambiguous list-like responses.
- Rank confidence threshold: 0.80; a deterministically parsed rank with confidence below the threshold is treated as unknown.
- Unknown rank is stored as null (never 0) and does not zero out mention value (see the sketch after this list).
- When rank signal is enabled, mention quality blends rank + position + prominence + density + base mention value.
- If rank is unavailable, scoring falls back to legacy position/prominence behavior for continuity.
- Added rank backfill command support for historical runs and controlled cutover.
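A hedged sketch of how an extracted rank might be resolved under these rules; the function and argument names are assumptions, not the shipped implementation:

    RANK_CONFIDENCE_THRESHOLD = 0.80

    def resolve_rank(det_rank, det_confidence, llm_rank=None):
        # Returns an integer list rank, or None when the rank is unknown.
        # Unknown rank is stored as null (None), never 0, so it does not
        # zero out the mention value downstream.
        if det_rank is not None and det_confidence >= RANK_CONFIDENCE_THRESHOLD:
            return det_rank
        # Optional low-cost LLM fallback for ambiguous list-like responses.
        if llm_rank is not None:
            return llm_rank
        return None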

Formula details (v2.5)

score = (sum(m_i * w_i) / sum(w_i)) * 100

m_i = mention quality (0-1), w_i = prompt reliability weight.

Rank enabled

  • 20% rank
  • 15% position
  • 20% prominence
  • 25% competitor density
  • 20% base mention value

The blended mention quality is then multiplied by a sentiment factor.

Rank unavailable fallback

  • 30% position
  • 25% prominence
  • 25% competitor density
  • 20% base mention value

This fallback preserves scoring continuity when rank is missing.
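Putting the two weight sets together, a sketch of the per-mention blend; every component signal is assumed to be normalized to 0-1, and the rank signal is assumed to be derived from the extracted list rank (names are illustrative):

    def mention_quality(rank_signal, position, prominence, density, base, sentiment_factor):
        # All inputs except sentiment_factor are assumed normalized to 0-1.
        if rank_signal is not None:
            blended = (0.20 * rank_signal + 0.15 * position + 0.20 * prominence
                       + 0.25 * density + 0.20 * base)
        else:
            # Rank unavailable: legacy position/prominence fallback weights.
            blended = (0.30 * position + 0.25 * prominence
                       + 0.25 * density + 0.20 * base)
        return blended * sentiment_factor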

v2.4

Weighted visibility scoring (position/prominence/sentiment)

Score affecting

Effective Feb 23, 2026

- Introduced weighted visibility scoring for standard prompt answers.
- Formula: score_v2_raw = (sum(m_i * w_i) / sum(w_i)) * 100; the headline score is score_v2_raw rounded (see the sketch after this list).
- m_i uses mention_quality_score when available and 0 when the brand is not mentioned.
- If mention_quality_score is missing and cannot be recomputed, the fallback m_i is a binary mention indicator (1 or 0).
- w_i is the prompt reliability weight, clamped to configured bounds (default 1.0, min 0.5, max 1.5).
- Denominator includes all standard answers in the run.
- Custom prompt answers remain excluded from core score math.
- Added score components metadata for reproducibility/auditability.
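A minimal sketch of the v2.4 computation under the stated defaults; the helper and field names are hypothetical:

    def score_v2_raw(standard_answers, default_w=1.0, min_w=0.5, max_w=1.5):
        num = den = 0.0
        for a in standard_answers:                       # custom prompt answers excluded
            m = a.get("mention_quality_score")
            if m is None:
                m = 1.0 if a.get("mentioned") else 0.0   # binary mention fallback
            w = a.get("reliability_weight")
            if w is None:
                w = default_w
            w = max(min_w, min(max_w, w))                # clamp to configured bounds
            num += m * w
            den += w                                     # all standard answers in the run
        return (num / den) * 100 if den else 0.0

    # Headline score is the rounded raw score:
    # headline = round(score_v2_raw(standard_answers))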

v2.3

Brand confidence framing update

Non-score update

Effective Feb 19, 2026

- Added brand-level confidence framing for multi-run interpretation.
- Limited confidence-interval display to baseline single-run contexts.
- Reduced risk of over-interpreting sparse or mixed-run aggregate views.
- Goal: improve trust and readability for early-stage monitoring data.

v2.2

Exploratory prompt governance clarification

Non-score update

Effective Feb 19, 2026

- Clarified exploratory prompts are tracked separately from the core score.
- Improved prompt attachment lifecycle controls across categories.
- Improved visibility into exploratory coverage and execution conditions.
- Goal: preserve standardized scoring while supporting custom research workflows.

v2.1

Category input quality guardrails

Non-score update

Effective Feb 19, 2026

- Added deterministic guardrails for low-signal category names.
- Added warning-first quality guidance with override support.
- Standardized category validation behavior across creation and edit flows.
- Goal: improve category clarity and downstream prompt-pack relevance.

v2

Confidence framing and report standardization update

Non-score update

Effective Feb 19, 2026

- Standardized brand-level reporting with explicit Run Report vs Trend Report views.
- Added early-signal framing (model-era cohorts) for low-history brands before trend reporting unlocks.
- Added confidence-oriented context on brand views to reduce over-interpretation of sparse data.
- Clarified report provenance and methodology version display across exported artifacts.
- Core visibility scoring remains mention-based across completed prompt/model answers.

v1

Initial AI Visibility methodology

Score affecting

Effective Feb 19, 2026

Introduced standardized AI Visibility scoring based on brand mentions across completed prompt/model answers, with run-level methodology versioning for historical consistency.
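As a rough illustration only (the exact v1 formula is not restated here), a mention-rate reading of the initial methodology might look like:

    def v1_visibility_score(answers):
        # Share of completed prompt/model answers that mention the brand, on a 0-100 scale.
        completed = [a for a in answers if a.get("status") == "completed"]
        if not completed:
            return 0.0
        mentions = sum(1 for a in completed if a.get("mentioned"))
        return 100.0 * mentions / len(completed)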