AI search does not give you keyword volume, ranking positions, or click-through data. Brands appear inside synthesized answers, either named in the text or linked as a source. To measure visibility in that environment, ALLMO runs a repeatable three-step methodology on your behalf.
1. Create a prompt dataset
Every report starts with a custom prompt dataset that reflects what you actually want to be discovered for: your products, use cases, buyer personas, and buying journey stages.
Coverage is the goal. A well-designed dataset covers the topics, intents, and languages your customers use when asking AI assistants questions in your category. You can build the dataset inside ALLMO, upload your own, or generate one with the free AI Search Prompt Generator.
The dataset is then run on a recurring schedule against the AI models you enable: ChatGPT, Perplexity, Claude, Gemini, Grok, and Mistral.
For details on building a good dataset, see How prompt monitoring works.
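To make the dataset-and-schedule idea concrete, here is a minimal sketch in Python. The field names (`prompt`, `tags`, `language`) and the `scheduled_run` helper are illustrative assumptions, not ALLMO's actual schema or API:

```python
# Hypothetical prompt dataset: each entry pairs a prompt with tags and a language.
# Field names are illustrative, not ALLMO's real schema.
prompt_dataset = [
    {"prompt": "What are the best project management tools for remote teams?",
     "tags": ["use-case:remote-work", "stage:awareness"], "language": "en"},
    {"prompt": "Compare two project management tools for small agencies",
     "tags": ["persona:agency-owner", "stage:consideration"], "language": "en"},
    {"prompt": "Welche Projektmanagement-Software eignet sich fuer Startups?",
     "tags": ["persona:founder", "stage:awareness"], "language": "de"},
]

# On each scheduled run, every prompt is sent to every enabled model.
enabled_models = ["chatgpt", "perplexity", "claude", "gemini", "grok", "mistral"]

def scheduled_run(dataset, models):
    """Yield one (prompt, model) job per combination:
    3 prompts x 6 models = 18 responses captured per run."""
    for entry in dataset:
        for model in models:
            yield entry["prompt"], model

jobs = list(scheduled_run(prompt_dataset, enabled_models))
print(len(jobs))  # 18
```

The tags attached here are what later let results be segmented by persona or buying journey stage.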
2. Extract brands and sources from every response
For each AI response captured, ALLMO extracts multiple signals to understand visibility and influence. The most important are:
Brand Mentions: which companies are named in the answer text
Domain Citations: which websites are cited or linked as grounding for the answer
Brands are attributed to the correct company entity through a multi-stage matching pipeline that handles name variations, aliases, and domain matching. See How ALLMO identifies company entities for the full mechanics.
The difference between what counts as a Mention and what counts as a Citation is explained in detail in Mention vs. Citation.
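As a rough illustration of the two signals, the toy function below pulls brand names from answer text (Mentions) and domains from linked URLs (Citations). This is a simplified sketch; ALLMO's real pipeline adds alias handling and entity matching, as described above:

```python
import re
from urllib.parse import urlparse

def extract_signals(response_text, known_brands):
    """Toy extraction: Mentions = known brand names found in the answer text,
    Citations = domains of URLs linked in the answer. Not ALLMO's real pipeline."""
    mentions = [b for b in known_brands
                if re.search(rf"\b{re.escape(b)}\b", response_text, re.IGNORECASE)]
    urls = re.findall(r"https?://\S+", response_text)
    citations = sorted({urlparse(u).netloc.removeprefix("www.") for u in urls})
    return mentions, citations

# Hypothetical AI answer with two linked sources.
answer = ("Asana and Trello are popular choices "
          "(see https://www.example.com/pm-tools and https://reviews.example.org/asana).")
mentions, citations = extract_signals(answer, ["Asana", "Trello", "Notion"])
print(mentions)   # ['Asana', 'Trello']
print(citations)  # ['example.com', 'reviews.example.org']
```

Note how "Notion" yields no Mention and each linked page is rolled up to its domain, which is the level at which Citations are counted.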
3. Aggregate results across prompts
Individual responses are noisy. The value comes from aggregation. ALLMO rolls every extracted Mention and Citation into two primary dashboards:
Visibility dashboard: Brand Mentions, Share of Voice, rank, and trends over time per company.
Domain Sources dashboard: Citations per domain, grouped by source type (Academic, Editorial Media, UGC, Directory/Review, Commercial, and more).
Every metric can be sliced by model, date range, language, country, and tag to isolate specific segments of your dataset. For a full reference of how each metric is calculated, see AI Search Metrics explained in detail.
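To show what aggregation means in practice, here is a sketch of a Share of Voice roll-up: each brand's mentions as a percentage of all mentions in the selected segment. The record format and rounding are assumptions for illustration, not ALLMO's exact calculation:

```python
from collections import Counter

def share_of_voice(mention_records):
    """Aggregate extracted mention records into Share of Voice:
    each brand's share of all mentions in the segment, as a percentage."""
    counts = Counter(r["brand"] for r in mention_records)
    total = sum(counts.values())
    return {brand: round(100 * n / total, 1) for brand, n in counts.most_common()}

# Hypothetical records, already filtered to one model and date range.
records = [
    {"brand": "Asana",  "model": "chatgpt"},
    {"brand": "Asana",  "model": "chatgpt"},
    {"brand": "Trello", "model": "chatgpt"},
    {"brand": "Notion", "model": "chatgpt"},
]
print(share_of_voice(records))  # {'Asana': 50.0, 'Trello': 25.0, 'Notion': 25.0}
```

Applying a model, language, or tag filter before this roll-up is what produces the sliced views in the dashboards.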
What you can do with the data
Once prompts, extraction, and aggregation are in place, ALLMO unlocks the workflows you cannot run with traditional SEO tools:
Compare your Share of Voice against other companies to see which of them are your biggest competitors in AI search.
Identify which of your pages are cited most often and which third-party sources drive your competitors' visibility.
Detect trend changes after model updates or optimization work.
Segment visibility by persona, region, or buying journey stage using prompt tags.
Prioritize optimization based on where coverage gaps overlap with commercial importance.
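The tag-based segmentation mentioned above can be sketched as a simple filter over mention records. The `segment` helper and field names are hypothetical, shown only to illustrate the idea:

```python
def segment(mention_records, **filters):
    """Hypothetical slicing helper: keep records whose fields match every filter,
    e.g. segment(records, model="chatgpt", tag="stage:awareness")."""
    return [r for r in mention_records
            if all(r.get(k) == v for k, v in filters.items())]

# Hypothetical extracted mentions carrying the prompt's tag.
records = [
    {"brand": "Asana",  "model": "chatgpt",    "tag": "stage:awareness"},
    {"brand": "Trello", "model": "perplexity", "tag": "stage:awareness"},
    {"brand": "Asana",  "model": "chatgpt",    "tag": "stage:consideration"},
]
print(len(segment(records, model="chatgpt")))                          # 2
print(len(segment(records, tag="stage:awareness")))                    # 2
print(len(segment(records, model="chatgpt", tag="stage:awareness")))   # 1
```

Combining filters this way is how a view like "awareness-stage prompts on ChatGPT only" is isolated from the full dataset.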
Methodology overview
| Step | What happens | Where you see it |
| --- | --- | --- |
| 1. Prompt dataset | Build a custom set of representative prompts | Prompts section of your report |
| 2. Extraction | Parse responses for brand mentions and URL citations | Runs automatically on every schedule |
| 3. Aggregation | Roll up into metrics with filters and trends | Visibility and Domain Sources dashboards |
This methodology is the foundation for everything else in ALLMO. The deeper articles in this section explain each layer in more detail.