Prompt Overview

Build, edit, and manage the prompt dataset ALLMO uses to track your AI search performance.

Written by Niclas Aunin
Updated today

1. What the Prompts section tracks

Prompts are the questions ALLMO asks AI models on your behalf to measure your brand's visibility. The Prompts section has two views:

  • Prompts List: every prompt in a report, with its models, run frequency, mention score, and citation score. This is also where you add, edit, and organize your prompt dataset.

  • Prompt Details: the analytics page for a single prompt, showing every AI response, which brands were mentioned, and which domains were cited.

Together they let you define what ALLMO monitors and then drill into the results.

2. Why managing your prompt dataset matters

Your prompts are your measurement surface. Every visibility score, citation count, and competitor comparison in ALLMO is derived from the prompts you track, so curating a strong, representative prompt set is the single most important setup decision in the product.

The Prompts page is where you:

  • Build the initial dataset (manually, via CSV, or from generated suggestions).

  • Keep it aligned with your buyer journey as your positioning and categories evolve.

  • Pause prompts that no longer reflect real user intent.

  • Expand coverage into new languages, regions, or personas.

A dataset that reflects how your buyers actually talk to AI assistants will give you meaningful insights. A stale or generic dataset won't, no matter how good the analytics look.

3. How prompt tracking works

  • Each prompt runs against the models you've selected, on the interval you set (e.g. weekly).

  • Every run produces one response per model, stored and analyzed by ALLMO.

  • Mentions (extracted brand names) and citations (cited source domains) are calculated per response and rolled up into the scores you see on the list view.

  • Search queries issued by the model to answer the prompt are captured where the model supports it (see the Query Fan-Out article for detail).

Mentions and citations are explained in depth in the Mentions & Citations article; start there if you want to understand exactly how those scores are computed.
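The exact scoring formula lives in the Mentions & Citations article, but the rollup described above can be pictured as a simple share-of-responses calculation. The sketch below is purely illustrative (the `Response` class, field names, and percentage formula are assumptions for this example, not ALLMO's implementation):

```python
from dataclasses import dataclass, field

@dataclass
class Response:
    """One model response to a tracked prompt (simplified, hypothetical shape)."""
    model: str
    mentioned_brands: set = field(default_factory=set)
    cited_domains: set = field(default_factory=set)

def mention_score(responses, brand):
    """Share of responses that mention the brand, as a percentage."""
    if not responses:
        return 0.0
    hits = sum(1 for r in responses if brand in r.mentioned_brands)
    return round(100 * hits / len(responses), 1)

def citation_score(responses, domain):
    """Share of responses that cite the domain, as a percentage."""
    if not responses:
        return 0.0
    hits = sum(1 for r in responses if domain in r.cited_domains)
    return round(100 * hits / len(responses), 1)

# Three runs of one prompt across three models
runs = [
    Response("model-a", {"ALLMO", "CompetitorX"}, {"allmo.ai"}),
    Response("model-b", {"CompetitorX"}, set()),
    Response("model-c", {"ALLMO"}, {"allmo.ai", "competitorx.com"}),
]
print(mention_score(runs, "ALLMO"))      # 66.7
print(citation_score(runs, "allmo.ai"))  # 66.7
```

The intuition to take away: a prompt's scores reflect how often your brand and domain appear across all stored responses, so adding models or runs changes the denominator as well as the numerator.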

4. How to add prompts in ALLMO.ai

4.1 Add a prompt

  1. Click Add Prompt on the Prompts page.

  2. Fill in the prompt text and configure:

    • AI models (multi-select)

    • Language

    • Country / region

    • Tracking interval (e.g. weekly)

    • Tags

  3. Defaults are prefilled (default models, language, location, weekly tracking) so you can add prompts quickly.

  4. Click Save. Prompt text and at least one selected model are required.

4.2 Import prompts via CSV

Click the CSV button to open a multi-step import wizard:

  1. Upload a .csv file (max 10 MB, max 1,000 rows).

  2. Map your CSV columns to ALLMO's prompt fields. Headers are fuzzy-matched automatically where possible. At minimum you must map a prompt column.

  3. Preview the data and check for validation errors.

  4. Import.

  5. A summary shows how many prompts were inserted, skipped, or errored.

System columns (id, created_at, etc.) are automatically ignored.

Note: Separate multiple tags with commas.
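A minimal file that would pass the mapping step might look like this. The column names here are illustrative, not ALLMO's required headers (the wizard fuzzy-matches whatever headers you use, and only a prompt column is mandatory); note that a tags cell containing commas must be quoted so it stays one CSV field:

```csv
prompt,language,country,interval,tags
"What is the best AI visibility tool?",en,US,weekly,"persona:marketer,funnel:top"
"How do I track brand mentions in AI answers?",en,GB,weekly,"funnel:mid"
```

Per standard CSV quoting rules (RFC 4180), the double quotes keep `persona:marketer,funnel:top` as a single cell, which ALLMO then splits into two tags on the comma.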

4.3 Review or generate suggestions

The page shows one of two buttons depending on your report state:

  • Review Suggestions if ALLMO has pending prompt suggestions waiting for you. Click to review and accept them.

  • Generate Suggestions if none are pending. Click to run the suggestion generation flow (keyword-based or natural-language-based).

5. Manage and edit existing prompts

5.1 Edit a prompt

Click the Edit icon on any prompt row. You can update the prompt text, models, language, location, tracking interval, tags, and status (active / paused / archived). The form preloads current values, so you only change what you need.

5.2 Bulk-edit multiple prompts

  1. Use row checkboxes to select prompts, or the header checkbox to select all visible.

  2. The Bulk Action Bar appears at the bottom of the screen.

  3. Apply shared updates across the selection: language, country, status, tracking interval, AI models, and tags (including persona and knowledge-gap tags).

  4. Save to push the changes to every selected prompt. Deselect to clear.

Bulk edit is the fastest way to reconfigure large datasets, for example switching a whole persona segment to a new interval or adding a model across 50 prompts at once.

Currently, bulk edit can only update existing settings (Language, Country, Status, Tracking Interval) and add parameters (AI Models, Tags).

5.3 Run new prompts

Use Run new prompts to execute any prompts that haven't been run yet. The button shows how many are eligible and why others are skipped (no credits, no models configured, etc.). The button is disabled if your visibility credits are at 0.

5.4 Filter, search, and sort

  • Search by prompt text, question, location, model, or tags.

  • Filter by model, language, region, date range, tags, and status.

  • Sort by created date, mentions, or citations.

6. Open Prompt Details

Click any prompt to open its analytics page. You'll see metadata at the top, a responses table, and expandable response views with the full markdown answer, mentioned companies, extracted search queries, and every cited URL.

7. How to turn prompt data into action

If your dataset feels generic: Generate or import prompts that reflect how your buyers actually phrase questions, not how your team describes the product internally. Use tags to organize by persona, funnel stage, or region.

If scores drop suddenly on a prompt: Expand the recent responses in Prompt Details. Either model behavior changed, a competitor published something new, or your page fell out of the index; the cause is usually visible in one or two responses.

If a prompt never mentions anyone you care about: It's probably too broad or not a commercial query. Archive or rewrite it. A small dataset of sharp, intent-driven prompts beats a large dataset of vague ones.

If competitors consistently outperform you on a prompt: Open their cited URLs. That's your content gap, mapped directly from live AI answers.

If you're expanding into a new market: Bulk-edit a duplicate set of prompts to the new language and country instead of rebuilding from scratch.

Revisit your prompt dataset at least once a quarter. Your product evolves, your buyers' language evolves, and your prompt set needs to evolve with them.
