Research Report

AI Mentioned Your Brand — But Described a Company That Does Not Exist

The monitoring dashboard showed ✓ Mentioned. The actual AI response described an AutoML platform with GPU/TPU acceleration, TensorFlow support, and one-click model deployment. None of that is real. This is what brand hallucination looks like — and it is more dangerous than not being mentioned at all.

Citany Intelligence Lab
March 18, 2026 · 9 min read

We were running a standard monitoring test on Citany — querying AI engines with the prompt “Citany AI platform features” to see how each engine described the product. Gemini returned a response marked ✓ Mentioned / Neutral — which looked like a pass. But the response itself described a machine learning development platform with AutoML pipelines, custom model training, GPU/TPU infrastructure, and one-click deployment. A completely fabricated product. Gemini had encountered a brand name it did not have reliable data for, and filled the gap with a plausible-sounding description of what an “AI platform” probably does.

The monitoring system had no way to know the description was wrong. It just counted the mention.

1. The difference between being mentioned and being described correctly

Most AI brand monitoring tools operate on a binary: the brand name appears in the response, or it does not. A mention is a mention. The dashboard turns green, the mention rate goes up, the weekly report looks healthy.

This metric collapses two completely different outcomes into one. Consider what a “mentioned” response can actually contain:

Accurate mention

The AI describes what your product actually does, names real features, and positions it correctly relative to competitors. This is the outcome monitoring is meant to capture.

Hallucinated mention

The AI recognises the brand name but has insufficient training data to describe it accurately. It generates a plausible-sounding description based on the name, category, or adjacent brands — almost all of which is fabricated.

Misattributed mention

The AI associates your brand name with a different company — a competitor, a similarly named product, or a historical version of your product that no longer exists. The mention is real but the entity being described is not you.

Only the first outcome is useful. The other two are not neutral — they are active problems. But all three look identical on a standard monitoring dashboard.
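The collapse is easy to see in code. A naive monitor that only checks for the brand name scores an accurate, a hallucinated, and a misattributed response identically. A minimal Python sketch; the response texts and the `is_mentioned` helper are invented for illustration:

```python
# Naive mention check: the brand name appears in the response, or it does not.
# This is the binary metric most monitoring dashboards are built on.

def is_mentioned(response: str, brand: str) -> bool:
    return brand.lower() in response.lower()

# Three very different outcomes (texts invented for illustration):
responses = {
    "accurate":      "Citany monitors how AI engines mention and describe your brand.",
    "hallucinated":  "Citany is an AutoML platform with GPU/TPU acceleration.",
    "misattributed": "Citany, formerly a CRM tool, offers contact and email sync.",
}

for kind, text in responses.items():
    # Every one of these turns the dashboard green.
    print(kind, is_mentioned(text, "Citany"))  # all three print True
```

The accurate description and the fabricated one produce exactly the same signal, which is the whole problem.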

2. Why AI engines hallucinate new and small brands

Large language models do not look up facts in real time (unless they have a web search tool enabled). They generate text based on statistical patterns learned during training. When asked about a brand they have strong training signal on — Apple, Salesforce, HubSpot — they produce accurate descriptions because accurate descriptions appeared repeatedly in their training data.

For new brands, niche SaaS products, or companies that launched after the model’s training cutoff, the model faces a different situation: a brand name exists (it may have appeared in a Product Hunt listing, a press mention, or a user query) but a detailed, accurate description does not. The model resolves this tension by generating a description that is consistent with what this type of product would plausibly do.

In our case: “Citany” plus “AI platform” in the query primed Gemini to produce a description of what a generic AI platform does. The result was internally coherent, fluently written, and completely wrong. It described AutoML, GPU/TPU compute, model training pipelines — features of a machine learning infrastructure product, not a brand visibility monitoring tool.

What Gemini actually said

“Citany AI is a platform designed to help businesses leverage artificial intelligence for various tasks. Core capabilities include: Data Preparation & Preprocessing, Model Selection & Experimentation, Automated Machine Learning (AutoML), Custom Model Training with TensorFlow/PyTorch/scikit-learn, GPU/TPU Acceleration, and One-Click Deployment.”

Gemini response to “Citany AI platform features” — March 2026. None of this is accurate.

3. Why hallucinated mentions are more dangerous than zero visibility

Not being mentioned is a visibility problem. You can measure it, track it, and work toward fixing it by building up authoritative content and citations. The absence is visible in your data.

A hallucinated mention is a different class of problem, for three reasons.

It is invisible in standard monitoring

A ✓ Mentioned status looks like success. Teams reviewing dashboards do not go back and read the full AI response for every brand mention — especially when volumes increase. The hallucinated content circulates undetected.

It actively misleads buyers at the moment of evaluation

When a potential customer asks an AI engine about your product and receives a hallucinated description, they form a mental model of your product that is wrong. They may decide you are not a fit — not because of your actual product, but because of a product the AI invented. You lose the opportunity without knowing it happened.

It is harder to fix than zero visibility

Zero visibility is a content and authority problem — create accurate, structured content that AI engines can learn from, and the signal improves. Hallucinated content means the AI has a conflicting, inaccurate signal that must be displaced. You are not filling a gap; you are correcting a wrong impression that the model has already partially committed to.

4. The conditions that create brand hallucination risk

Not every brand faces the same hallucination risk. The conditions that increase exposure are predictable:

  • New domains with limited crawl history. AI training data skews toward well-established web properties. A domain launched in the last 12–18 months has minimal training signal, even if the product is excellent.
  • Generic or ambiguous brand names. Names that combine common words (especially words like “AI”, “cloud”, “hub”, “flow”) give the model more room to pattern-match against other products in the same naming space.
  • Niche categories with limited editorial coverage. A product in a well-documented category (CRM, email marketing) will have competitors and adjacent content that provides accurate framing. A product in a new category (AI brand monitoring) has less reference material for the model to anchor to.
  • Training data cutoff mismatch. If a product launched or significantly changed after the model’s training cutoff, older descriptions may dominate — either of the brand in a prior form, or of something the brand name was associated with before rebranding.

5. How to detect whether an AI mention is accurate or hallucinated

Detection requires reading the actual response text — not just noting that a mention occurred. Specifically, look for:

  • Features or capabilities you do not have: classic hallucination — the model is pattern-matching to similar products in its training data.
  • Correct category, wrong use case: partial hallucination — the model knows the rough domain but not the specific positioning.
  • Pricing or plan details that do not exist: the model is extrapolating from competitors or common SaaS pricing structures.
  • Competitor names used to describe your product: misattribution — your brand is being conflated with another product.
  • Vague, generic description with no specific differentiators: thin signal — the model has almost no real data and is producing filler.
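One way to operationalise these signals as a first-pass filter: compare each response against a list of capability terms you know you do not have, and flag any match for human review. A rough Python sketch; the term lists and the `flag_suspect_terms` helper are illustrative, and a real red-flag list would come from your own product docs and past hallucinations:

```python
# First-pass hallucination filter: flag responses that mention capability
# terms you know you do not have. Term lists here are illustrative only.

RED_FLAG_TERMS = {
    "automl", "gpu", "tpu", "model training",
    "one-click deployment", "tensorflow", "pytorch",
}

def flag_suspect_terms(response: str) -> set[str]:
    """Return the red-flag terms present in the response text."""
    text = response.lower()
    return {term for term in RED_FLAG_TERMS if term in text}

response = ("Citany AI offers AutoML pipelines, GPU/TPU acceleration, "
            "and one-click deployment.")
print(sorted(flag_suspect_terms(response)))
# → ['automl', 'gpu', 'one-click deployment', 'tpu']
```

A substring check like this is deliberately crude: it produces candidates for review, not verdicts, and it catches nothing the red-flag list does not anticipate. The human read remains the actual detection step.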

6. How to reduce hallucination risk over time

The root cause is a knowledge gap in the model’s training data. The fix is creating accurate, structured, and repeatedly crawled content that gives the model reliable signal to draw from.

Structured data on every key page

JSON-LD schema (Organization, Product, SoftwareApplication) explicitly tells crawlers and AI training pipelines what your product is, what it does, and what category it belongs to. This structured signal is harder for models to misinterpret than prose.
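As a sketch of that structured signal, a minimal `SoftwareApplication` JSON-LD block (embedded in a page’s `<head>` inside a `<script type="application/ld+json">` tag) might look like the following. All property values here are illustrative:

```json
{
  "@context": "https://schema.org",
  "@type": "SoftwareApplication",
  "name": "Citany",
  "applicationCategory": "BusinessApplication",
  "operatingSystem": "Web",
  "description": "AI brand visibility monitoring: tracks how AI engines mention and describe your brand.",
  "publisher": {
    "@type": "Organization",
    "name": "Citany"
  }
}
```

The `description` and `applicationCategory` fields are the ones doing the disambiguation work here: they state the category in your own specific terms rather than leaving it to pattern-matching.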

Third-party editorial coverage with accurate descriptions

AI training data weights third-party editorial sources heavily. A review on G2, a mention in a tech newsletter, or a case study on an industry blog that accurately describes your product gives the model a reference point it can anchor to — one that is harder to override with generic pattern-matching.

Consistent category language across all owned content

If your homepage, docs, and blog all describe your product in the same specific terms — “AI brand visibility monitoring” rather than “AI platform” — that consistent signal compounds across crawls and reduces ambiguity.

Monitor response text, not just mention status

Detection requires reading actual responses. Build a review process for any brand mention that flags descriptions containing features or positioning you do not recognise. A hallucination caught early can be tracked and corrected before it compounds.

7. What this means for how you interpret AI monitoring data

Mention rate is a leading indicator, not a quality indicator. A rising mention rate is worth tracking — but it is not informative on its own if you are not also reading what the AI is saying.

The useful questions are:

  • Is the mention describing our actual product, or a plausible-sounding substitute?
  • Is the category framing correct — are we being positioned as a monitoring tool, or as something else?
  • Are the competitors named alongside us actually our competitors, or is the model confusing our category?
  • Is the description consistent across engines, or is one engine hallucinating while another is accurate?

These questions can only be answered by reading response text — not by counting mentions. The metric dashboard is a navigation aid. The response text is the actual data.
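The cross-engine consistency question can get a cheap first pass in code as well. Pairwise text similarity cannot tell you which engine is right, but a low score signals that the descriptions diverge and deserve a human read. A sketch using Python’s standard library; the engine names and response texts are invented:

```python
# Cheap divergence check across engines: low pairwise similarity means
# at least one engine is describing a different product than the others.
from difflib import SequenceMatcher
from itertools import combinations

def similarity(a: str, b: str) -> float:
    """Rough 0..1 similarity between two response texts."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

responses = {
    "engine_a": "Citany monitors how AI engines mention and describe your brand.",
    "engine_b": "Citany tracks brand mentions and descriptions across AI engines.",
    "engine_c": "Citany is an AutoML platform with GPU/TPU model training.",
}

for (e1, t1), (e2, t2) in combinations(responses.items(), 2):
    score = similarity(t1, t2)
    status = "diverges" if score < 0.5 else "consistent"
    print(f"{e1} vs {e2}: {score:.2f} ({status})")
```

`SequenceMatcher` is a blunt instrument (it rewards shared phrasing, not shared meaning), so treat low scores as a routing signal that sends the responses to a reviewer, nothing more.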

Key takeaways

  • ✓ Mentioned does not mean ✓ Accurately described — AI engines hallucinate brand content when training signal is thin.
  • New brands, generic names, and niche categories face the highest hallucination risk.
  • Hallucinated mentions actively mislead buyers at the moment of evaluation — they are worse than zero visibility.
  • Detection requires reading response text, not just tracking mention counts.
  • Structured data, third-party editorial coverage, and consistent category language are the primary fixes.
  • Monitor for inaccurate descriptions as a separate quality metric alongside mention rate.

See what AI engines actually say about your brand

Monitor response text — not just mention counts

Citany shows you the full AI response alongside the mention signal — so you can catch hallucinations, misattributions, and category confusion before they mislead buyers. Covers ChatGPT, Perplexity, Gemini, Grok, Claude, DeepSeek, Kimi, and Doubao.