Citany methodology for AI visibility measurement
Understand how prompts are grouped, how engine results are normalized, and how Citany turns observations into comparable reporting.
Methodology pillars
- Prompt sampling by intent and market
- Engine-specific response capture
- Brand mention and citation extraction
- Competitor comparison and time-series reporting
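These pillars imply a per-observation record that stays comparable across engines, markets, and time. The sketch below is a minimal, hypothetical model of such a record; the class and field names are illustrative assumptions, not Citany's actual schema.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Observation:
    """One captured engine response for one prompt in one market."""
    prompt_id: str          # stable ID so the same prompt is comparable over time
    intent: str             # e.g. "comparison", "how-to", "best-of"
    market: str             # e.g. "en-US", "de-DE" -- tracked separately, never pooled
    engine: str             # e.g. "chatgpt", "perplexity"
    answer_text: str        # normalized response text
    cited_urls: list[str] = field(default_factory=list)
    captured_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

def brand_mentioned(obs: Observation, brand_aliases: list[str]) -> bool:
    """Naive mention check: does any alias appear in the answer text?"""
    text = obs.answer_text.lower()
    return any(alias.lower() in text for alias in brand_aliases)
```

Keeping prompt_id, market, and engine on every record is what makes later comparison possible without flattening engine differences.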
Why methodology matters
AI answers are probabilistic and change over time. Reliable measurement requires consistent prompt sets, clear market segmentation, and a way to compare engines without flattening their differences.
What consistent measurement requires
- Stable prompt sets over time
- Separate tracking by market and language
- Comparable competitor groups
- Source-level review instead of score-only reporting
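As a concrete illustration, these requirements could be pinned down in a small tracking configuration. This is a hypothetical sketch; the keys and values are assumptions for illustration, not a Citany configuration format.

```python
# Hypothetical tracking configuration; keys and values are illustrative only.
TRACKING_CONFIG = {
    "prompt_set_version": "2025-q1",   # frozen prompt set; edits create a new version
    "markets": ["en-US", "en-GB", "de-DE"],            # tracked separately per market
    "competitors": ["acme.example", "rival.example"],  # fixed comparison group
    "reporting": {
        "level": "source",   # keep cited sources visible, not just an aggregate score
        "cadence_days": 7,   # re-run on a fixed schedule for comparable time series
    },
}
```

Versioning the prompt set matters because editing prompts mid-series would break comparability; a new version starts a new baseline.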
Frequently asked questions
Common questions about how Citany measures AI visibility.
Why can’t AI visibility be reduced to one universal score?
Because engines behave differently, markets behave differently, and citation patterns vary by prompt type. A single number is only useful when the underlying context stays visible.
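A hypothetical two-engine example shows how an aggregate can mislead. The numbers below are invented purely for illustration.

```python
# Illustrative numbers only: the same 50% overall rate can hide
# opposite outcomes per engine, which is why context must stay visible.
mention_rates = {
    "engine_a": {"mentions": 45, "samples": 50},  # 90% visible
    "engine_b": {"mentions": 5,  "samples": 50},  # 10% visible
}

overall = sum(e["mentions"] for e in mention_rates.values()) / \
          sum(e["samples"] for e in mention_rates.values())
print(f"overall: {overall:.0%}")  # 50% -- looks average, hides the split

for engine, e in mention_rates.items():
    print(f"{engine}: {e['mentions'] / e['samples']:.0%}")
```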
Why does repeated measurement matter?
Because one isolated answer can be noisy. Repeated measurement shows whether the same prompt cluster and source pattern are moving in a consistent direction.
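A rough sketch of why repetition helps: with repeated runs you can attach an error bar to a mention rate and see whether movement exceeds noise. The helper and the weekly numbers below are illustrative assumptions, and the error formula is the standard binomial approximation, not a Citany metric.

```python
import math

def mention_rate_with_error(mentions: int, runs: int) -> tuple[float, float]:
    """Mention rate over repeated runs, with a rough binomial standard error.
    A single run (runs=1) gives an error bar too wide to act on."""
    p = mentions / runs
    se = math.sqrt(p * (1 - p) / runs)
    return p, se

# Hypothetical weekly re-runs of the same prompt cluster.
for week, (mentions, runs) in enumerate([(3, 10), (5, 10), (7, 10)], start=1):
    p, se = mention_rate_with_error(mentions, runs)
    print(f"week {week}: {p:.0%} ± {se:.0%}")
```

Here the week-over-week rise only becomes credible once it clears the error bars; a single week's jump could be noise.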
Next step
Check your own brand against these patterns
If this page matches what you are seeing, run a free audit to review prompt coverage, competitor gaps, and the sources shaping AI answers in your category.