Most brands only track one question: “is my brand mentioned?” That is one question type out of six. The other five are where your competitors are winning — and where the real AI visibility gap lives. Here is what each type reveals, why it matters, and what to do if you are invisible in it.
1. Why prompt type matters more than mention rate
AI search is not one uniform behavior. A buyer researching a category for the first time is running different queries than a buyer who is close to making a purchase decision. The intent behind each query type is different, the AI engine's source selection is different, and the opportunity to appear — and the consequence of not appearing — is different.
A brand monitoring program that only tracks branded queries — queries that include the brand name — is measuring the smallest, most intent-qualified slice of AI search. It is equivalent to a traditional SEO program that only tracks branded keyword rankings. You are measuring how you perform among people who already know you exist. You are not measuring how you perform in the discovery moments where buyers form their shortlists.
2. The 6 prompt types
Brand-Direct
Example: “Is [Brand] good?” / “[Brand] pricing” / “How does [Brand] work?”
These are queries that include your brand name directly. Most monitoring programs start and stop here. The upside: these are the easiest to capture and the most directly indicative of brand health. A buyer asking “Is Citany good?” has already heard of Citany — the engine's answer will reinforce or undermine a prior impression.
The limitation: most buyers have not heard of you yet when they start researching. Brand-direct queries represent a small fraction of all AI queries that ultimately influence a purchase in your category. Monitoring only this type tells you how you perform among the already-aware, not among the much larger population of buyers in earlier stages.
Category Discovery
Example: “Best AI brand monitoring tools” / “Top SEO tools for agencies”
Category discovery prompts are the highest-volume early-funnel entry point in AI search. A buyer forming their initial shortlist is running prompts like these. The AI engine generates a short list of options — usually three to five — and the buyer leaves that query with a mental shortlist they will evaluate further. If you are not on that list, you are not evaluated further.
This is where brands are most often surprised. A company with strong brand-direct scores can be absent from category discovery prompts entirely — because the engine treats their brand as a known entity but not as the right recommendation for category-level questions. Winning category discovery prompts requires topical authority, not just brand recognition.
Comparison & Alternatives
Example: “[Brand] vs [Competitor]” / “Alternatives to [Competitor]” / “[Tool] alternatives 2026”
Comparison and alternatives prompts are late-funnel, high-intent queries. Buyers who ask these are close to a decision — they have narrowed their options and are doing final due diligence. Because the buyer is only a step or two from choosing, the AI engine's answer here carries outsized weight: it can flip a decision that earlier-funnel content merely influenced.
The dangerous scenario: your competitor is being recommended as an alternative to your own product. This means buyers who are unhappy with you — or considering leaving — are being directed to your competitor by AI engines, without you knowing it is happening. Monitoring your competitor's alternatives prompts (not just your own) is how you catch this pattern.
Problem-Solution
Example: “How do I track my brand on ChatGPT?” / “Why is my brand not showing up in AI search?”
Problem-solution prompts represent buyers who are pain-aware but not solution-aware. They are describing a problem — not yet looking for a specific category of tool or service. The brand that owns these prompts gets to make the first introduction: “here is what the problem is called, here is the category that addresses it, and here is a tool that does it.”
This is the prompt type that maps most directly to long-form problem/solution content and FAQ-structured pages. A well-structured guide that addresses the exact problem, names the solution category, and includes the brand as an example solution is the content format that wins these prompts.
Use-Case & Persona
Example: “Best AI monitoring tool for agencies” / “AI visibility tracking for DTC brands”
Use-case and persona prompts are niche-qualified queries from buyers who already know what they need and are looking for the specialist. “Best AI monitoring tool for cross-border e-commerce brands” is not an exploratory query — it is a qualified buyer describing their specific context and asking for the best fit.
Appearing consistently in persona-specific prompts builds a reputation as the specialist for that use case — which compounds. A brand that appears in “for agencies,” “for DTC brands,” and “for cross-border sellers” on the same topic becomes associated with versatility and category leadership, even if its total mention volume is lower than a generic competitor.
Reputation & Risk
Example: “[Brand] complaints” / “Is [Brand] trustworthy?” / “[Competitor] alternatives after bad experience”
Reputation prompts are systematically ignored by most AEO teams. They should not be. A buyer who had a bad experience with a competitor and is actively asking AI for alternatives is one of the most qualified buyers you will ever encounter. They are motivated, solution-ready, and frustrated with the competitor you are trying to displace. The brand that appears in those prompts wins the defection.
The second function of reputation monitoring: catching negative framing about your own brand before it compounds. If an AI engine is answering “Is [Your Brand] trustworthy?” with cautionary language — citing complaints or warning users to verify claims — that is an active reputational problem that needs diagnosis and a response strategy, not just more content.
3. Building a prompt set that covers all six
A practical monitoring prompt set for a brand should include at minimum two to three prompts of each type. At three per type, that is six types × three prompts = eighteen prompts per brand as a baseline — enough to cover the most important query intents without being impractically large.
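The baseline set can be sketched as a simple template expansion. This is an illustrative sketch, not Citany's actual prompt generator — the template wording, type names, and the `build_prompt_set` helper are all hypothetical, and the brand, category, and competitor names are placeholders.

```python
# Hypothetical per-type prompt templates; wording is illustrative only.
PROMPT_TEMPLATES = {
    "brand_direct": [
        "Is {brand} good?",
        "{brand} pricing",
        "How does {brand} work?",
    ],
    "category_discovery": [
        "Best {category} tools",
        "Top {category} tools for agencies",
        "{category} tools compared",
    ],
    "comparison_alternatives": [
        "{brand} vs {competitor}",
        "Alternatives to {competitor}",
        "{brand} alternatives",
    ],
    "problem_solution": [
        "How do I track my brand on ChatGPT?",
        "Why is my brand not showing up in AI search?",
        "How to measure AI search visibility",
    ],
    "use_case_persona": [
        "Best {category} tool for agencies",
        "{category} for DTC brands",
        "{category} for cross-border sellers",
    ],
    "reputation_risk": [
        "{brand} complaints",
        "Is {brand} trustworthy?",
        "{competitor} alternatives after bad experience",
    ],
}

def build_prompt_set(brand, category, competitor):
    """Expand every template with the brand's details; returns (type, prompt) pairs."""
    return [
        (ptype, tpl.format(brand=brand, category=category, competitor=competitor))
        for ptype, templates in PROMPT_TEMPLATES.items()
        for tpl in templates
    ]

# Placeholder names for illustration.
prompts = build_prompt_set("Citany", "AI brand monitoring", "AcmeWatch")
# Six types x three prompts = eighteen prompts, matching the baseline above.
```

Keeping the prompt type attached to each prompt matters: it is what makes the type-level analysis in the next step possible.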
For each type, anchor the prompt in language that actual buyers use, not marketing language. Category discovery prompts should use the terms buyers use to describe their problem, not the category name your marketing team invented. Problem-solution prompts should describe the problem symptom, not the solution.
Run the full prompt set, then analyze separately by type. A brand might have strong brand-direct scores and weak category discovery scores — that is a different diagnosis than a brand with strong category discovery but weak comparison scores. The type-level breakdown is where the actionable insight lives.
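The type-level breakdown can be computed with a few lines once each monitoring run records which prompt type it came from. A minimal sketch, assuming results arrive as `(prompt_type, mentioned)` pairs — the data shape and function name are hypothetical:

```python
from collections import defaultdict

def score_by_type(results):
    """results: iterable of (prompt_type, mentioned: bool) pairs from monitoring runs.
    Returns the mention rate per prompt type, so gaps show up by funnel stage
    instead of being averaged away in one overall score."""
    totals = defaultdict(int)
    hits = defaultdict(int)
    for ptype, mentioned in results:
        totals[ptype] += 1
        hits[ptype] += int(mentioned)
    return {ptype: hits[ptype] / totals[ptype] for ptype in totals}

# Illustrative data: strong brand-direct, weak category discovery --
# an authority gap among the not-yet-aware, not an awareness-stage problem.
runs = [
    ("brand_direct", True), ("brand_direct", True), ("brand_direct", True),
    ("category_discovery", False), ("category_discovery", True),
    ("category_discovery", False),
]
scores = score_by_type(runs)
```

A single blended mention rate would report 67% here and hide the diagnosis; the per-type split is what turns the numbers into an action plan.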
Build Your Prompt Set
Citany auto-generates all 6 prompt types for your brand
Enter your brand and category — Citany generates a complete monitoring prompt set covering all six types, in both English and Chinese, for all eight AI engines.