Examples make AI visibility work easier to explain. They show what weak answers actually look like, which evidence matters, and what actions usually follow a review round.
Each example starts from an answer pattern, then moves to evidence, diagnosis, and action. The goal is not to judge one model output in isolation, but to turn patterns into repeatable work.
In the first pattern, the AI gives a relevant answer but never mentions the brand, even when the prompt should reasonably surface it. Sometimes the answer lists only more established alternatives.
The model appears to know the brand exists, but it does not rely on the brand's own pages as evidence. That often means the public knowledge layer is too thin, too vague, or too hard to reuse.
This usually points to a positioning problem, not only a citation problem. The team may need a clearer product definition, a stronger boundary page, or more consistent wording across the public layer.
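The omission pattern above can be detected mechanically before a review round. A minimal sketch, assuming a hypothetical brand ("Acme Analytics", with invented aliases) and a plain dict of prompt-to-answer text standing in for whatever answer log the team actually collects:

```python
# Flag answers where a relevant prompt never surfaces the brand.
# BRAND_ALIASES and collected_answers are hypothetical stand-ins,
# not part of any real tool or dataset.

BRAND_ALIASES = {"acme", "acme analytics", "acmehq"}  # invented aliases

def mentions_brand(answer: str, aliases: set[str] = BRAND_ALIASES) -> bool:
    """True if any brand alias appears in the answer text (case-insensitive)."""
    text = answer.lower()
    return any(alias in text for alias in aliases)

def flag_omissions(answers: dict[str, str]) -> list[str]:
    """Return the prompts whose answers never mention the brand."""
    return [prompt for prompt, answer in answers.items()
            if not mentions_brand(answer)]

collected_answers = {
    "best log analytics tools": "Popular options include Splunk and Datadog.",
    "acme analytics pricing": "Acme Analytics offers a free tier and a pro plan.",
}
print(flag_omissions(collected_answers))  # → ['best log analytics tools']
```

Substring matching like this is deliberately crude: it only turns scattered answer reading into a consistent first-pass list, so reviewers spend their time on diagnosis rather than spotting.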