Research
Research, benchmarks, and methodology for AI visibility teams
Read the research on how answer engines behave, which sources they trust, and why some brands keep getting cited first.
Explore
Explore the pages most relevant to your situation
Browse the pages that explain the problem, the common mistakes, and the fixes that usually matter first.
Research
State of AI Visibility 2026
A benchmark view of how answer engines cite brands, which source types win, and where teams should focus their first month of AEO work.
Research
AI citation benchmark for ChatGPT, Perplexity, and Google AI
See the source patterns that drive citations in answer engines and learn how to improve your first-party and third-party citation mix.
Research
Localized search ecosystem benchmark for Kimi, Doubao, and DeepSeek
A cross-border benchmark focused on the answer engines that shape discovery in Chinese-language markets.
Insights
Read the long-form guides behind the numbers
Use these guides to understand the mechanics behind citations, engine behavior, and the fixes that usually matter first.
Research Report
What Is Your AI Monitoring Tool Actually Measuring?
Most AI visibility tools query a model API and call it 'monitoring ChatGPT.' That is not the same thing. Here is what each measurement mode actually captures — and why it matters for every claim on your dashboard.
Guide
The 5 Root Causes of AI Search Invisibility
Brands lose in AI search for five distinct reasons — and fixing the wrong one wastes months. A diagnostic framework for understanding exactly why your brand is not getting cited.
Research Report
The Cross-Border AI Visibility Gap: Why You Are Invisible in Kimi and Doubao
A brand can rank #1 in ChatGPT answers while being completely absent from Kimi, Doubao, and DeepSeek. This is not a translation problem. Here is why — and what to do about it.
Research Report
The Hidden Cost of AI Monitoring: Why Search Fees Change Everything
Search-enabled AI engines like Perplexity charge per web search, not just per token. At scale, that changes the entire economics of brand monitoring. A breakdown of real costs across 8 engines.
Research Report
Kimi, Doubao & DeepSeek: The AI Visibility Guide for Asian Markets
Every month, millions of users in distinct search ecosystems research purchases through localized models. Most global monitoring tools miss these ecosystems entirely. Here is what cross-border brands need to know.
Guide
AEO vs SEO: What's Actually Different in 2026
A practical comparison of Answer Engine Optimization vs traditional SEO. What changed, which metrics matter, and how to build a strategy that wins in both worlds.
Guide
GEO for Cross-Border Brands: The Complete 2026 Playbook
Generative Engine Optimization guide for cross-border sellers and DTC brands. How to get cited by global mainstream models and localized search ecosystems — and measure whether it is working.
Research Report
The Citation War: How AI Search Selects Winners in 2026
A deep-dive into the black-box citation mechanisms of ChatGPT, Perplexity, and Google AI Overviews.
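The per-search pricing point from "The Hidden Cost of AI Monitoring" above can be sketched as a simple cost model. Every price and volume below is a hypothetical placeholder, not a figure from the report; the point is only that the per-search fee scales with run volume on top of token spend.

```python
# Hypothetical cost model for monitoring search-enabled AI engines.
# All prices and volumes are illustrative placeholders, not real engine rates.

def monthly_cost(prompts_per_day, tokens_per_prompt, searches_per_prompt,
                 price_per_1k_tokens, price_per_search, days=30):
    """Estimate monthly monitoring spend for one engine."""
    runs = prompts_per_day * days
    token_cost = runs * tokens_per_prompt / 1000 * price_per_1k_tokens
    search_cost = runs * searches_per_prompt * price_per_search
    return token_cost + search_cost

# Same prompt volume, with and without per-search fees (placeholder prices).
token_only = monthly_cost(100, 2000, 0, 0.01, 0.0)       # token spend only
search_enabled = monthly_cost(100, 2000, 3, 0.01, 0.005)  # adds 3 searches/prompt
```

At these made-up rates the search fees nearly double the bill, which is the economic shift the report describes: costs stop being a pure function of token volume.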
How To Use Research
Use research to make fewer guesses
The useful output is not a statistic on its own. It is a clearer decision about what to fix, which sources matter, and which prompts are worth tracking.
Start with one prompt cluster
Do not try to react to every possible question at once. Start with the category, comparison, or risk prompts that shape real demand.
Separate source issues from content issues
Sometimes the missing piece is a better comparison page. Sometimes it is third-party proof or cleaner entity language.
Re-check after changes
Use the research as a baseline, then verify whether citations, prompt coverage, and competitor ordering actually move after you ship work.
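The re-check step above amounts to a before/after comparison per prompt cluster. A minimal sketch, assuming your monitoring tool exports cited-prompt counts by cluster; the cluster names and numbers here are hypothetical stand-ins.

```python
# Hypothetical baseline-vs-current citation check for prompt clusters.
# Cluster names and counts are illustrative, not real benchmark data.

def citation_delta(baseline, current):
    """Compare cited-prompt counts per cluster before and after shipping fixes."""
    report = {}
    for cluster, before in baseline.items():
        after = current.get(cluster, 0)
        report[cluster] = {"before": before, "after": after, "delta": after - before}
    return report

baseline = {"comparison": 2, "category": 5, "risk": 1}  # cited prompts before changes
current = {"comparison": 6, "category": 5, "risk": 0}   # cited prompts after changes
report = citation_delta(baseline, current)
```

A positive delta in the cluster you worked on, with flat numbers elsewhere, is the signal that the fix (rather than engine noise) moved citations.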