Step 1: Define the brand and category
Before setting up tracking, define three things. First: the exact brand name and any variants AI might use. Second: the primary product category you want to appear in. Third: the market segment (enterprise, mid-market, SMB, consumer, etc.).
This matters because the same product can be positioned in multiple categories. A project management tool might appear in "project management software" and "work management platforms" - different queries, different competitor sets, different AI recommendation patterns. Pick one primary category to start.
Step 2: Build your query set
The query set is the foundation of your tracking. The goal is to capture the queries your target buyers actually ask AI when researching your category. A solid 30-query set covers:
10 category queries — format: "best [category] for [buyer segment]"
- "best project management software for agencies"
- "top project management tools for remote teams"
8 feature/use case queries — format: "[capability] software/tool"
- "task management software with time tracking"
- "project collaboration tools with client portal"
7 comparison queries — format: "[Brand] vs [Competitor]" or "[Brand] alternatives"
- "Asana vs Monday vs ClickUp"
- "Asana alternatives for small teams"
5 problem queries — format: "how to [solve problem your product solves]"
- "how to manage projects for a remote team"
- "best way to track team tasks and deadlines"
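The four templates above expand mechanically into the full query set. Here is a minimal sketch of that expansion; the category, segments, capabilities, and brand names are illustrative placeholders, not a fixed schema:

```python
# Sketch: expand Step 2's four query templates into a flat query set.
# All values passed in below are placeholder examples -- swap in your own.

def build_query_set(category, segments, capabilities, brand, competitors, problems):
    """Expand the category, feature/use case, comparison, and problem templates."""
    queries = []
    queries += [f"best {category} for {s}" for s in segments]       # category queries
    queries += [f"{cap} software" for cap in capabilities]          # feature/use case
    queries += [f"{brand} vs {c}" for c in competitors]             # comparison
    queries += [f"{brand} alternatives for {s}" for s in segments]
    queries += [f"how to {p}" for p in problems]                    # problem queries
    return queries

queries = build_query_set(
    category="project management software",
    segments=["agencies", "remote teams"],
    capabilities=["task management with time tracking"],
    brand="Asana",
    competitors=["Monday", "ClickUp"],
    problems=["manage projects for a remote team"],
)
```

Scale the input lists until the output hits your 10/8/7/5 split; keeping the templates in one function makes it trivial to regenerate the set when you change category or segment.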
Step 3: Configure competitors
Add 3–5 direct competitors. Choose the competitors that show up most frequently in your category queries when you test them manually - not necessarily your top revenue competitors, but the ones appearing in AI responses most often.
AI competitive positioning sometimes differs from Google SERP or analyst positioning. This surprises most agencies. Run 5 sample queries manually in ChatGPT before setting competitors - the brands appearing most often in responses are your real AI competitors.
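The manual test above can be tallied with a few lines of code: paste the raw responses in, count candidate brand mentions, and take the most frequent. This is a sketch with made-up response text, not a real API integration:

```python
from collections import Counter

def top_ai_competitors(responses, candidate_brands, n=5):
    """Count how often each candidate brand appears across raw AI response
    texts. The most frequently mentioned brands are your real AI competitors."""
    counts = Counter()
    for text in responses:
        lowered = text.lower()
        for brand in candidate_brands:
            if brand.lower() in lowered:
                counts[brand] += 1
    return [brand for brand, _ in counts.most_common(n)]

# Example with placeholder response snippets:
responses = [
    "Top picks are Asana and ClickUp.",
    "ClickUp and Monday lead the category.",
]
shortlist = top_ai_competitors(responses, ["Asana", "Monday", "ClickUp", "Trello"])
```

A brand on your sales battlecards that never shows up in `shortlist` is, for tracking purposes, not a competitor yet.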
Step 4: Establish the baseline
Run your full query set once and record the baseline metrics:
AI Visibility Score
Your composite score (0–100)
Mention rate
% of queries where you're mentioned
Framing distribution
% strong positive / positive / neutral / hedged
Competitor scores
Score for each tracked competitor
Top gap queries
Queries where competitors are mentioned and you aren't
Platform breakdown
Score by ChatGPT / Perplexity / AIO
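The per-query metrics above fall out of one pass over the run results. This sketch assumes a simple per-query record shape ({query, mentioned-brands map, framing label}); your tracker's actual output format will differ:

```python
from collections import Counter

def baseline_metrics(results, brand):
    """Compute mention rate, framing distribution, and gap queries from a run.

    `results` is assumed to be a list of dicts like:
      {"query": "...", "mentioned": {"Asana": True, "Monday": False}, "framing": "positive"}
    This record shape is illustrative, not a real tool's schema.
    """
    mentioned = [r for r in results if r["mentioned"].get(brand)]
    mention_rate = len(mentioned) / len(results)
    framing = Counter(r["framing"] for r in mentioned)
    framing_dist = (
        {label: count / len(mentioned) for label, count in framing.items()}
        if mentioned else {}
    )
    # Gap queries: at least one competitor mentioned, you absent.
    gaps = [
        r["query"] for r in results
        if not r["mentioned"].get(brand)
        and any(hit for b, hit in r["mentioned"].items() if b != brand)
    ]
    return {"mention_rate": mention_rate, "framing": framing_dist, "gap_queries": gaps}
```

Competitor scores come from calling the same function once per tracked competitor, which keeps your metric and theirs directly comparable.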
Step 5: First 30-day monitoring cadence
Week 1
Baseline established. Identify top 5 gap queries - where competitors appear and you don't. Note the framing driving competitor advantage on those queries.
Week 2
Prioritize the highest-impact fix from the gap analysis. If the gaps are review-driven, launch a G2 review campaign; if they're editorial, identify and pitch the specific articles.
Week 3
Second tracking run. Compare scores. Note which changes are meaningful vs. noise (ChatGPT responses have some variation - look for consistent directional change).
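One way to separate meaningful change from run-to-run noise is to require that every platform moved in the same direction and that the average move clears a threshold. The threshold here is an assumption; tune it to the variance you observe in back-to-back runs:

```python
def directional_change(run_a, run_b, threshold=0.05):
    """Classify the change between two runs' per-platform scores (0-1 scale).

    Counts as signal only when all platforms moved the same direction AND
    the average move exceeds `threshold`. The 0.05 default is a placeholder
    assumption, not a measured noise floor -- calibrate it on your own data.
    """
    deltas = [run_b[p] - run_a[p] for p in run_a]
    same_direction = all(d > 0 for d in deltas) or all(d < 0 for d in deltas)
    avg = sum(deltas) / len(deltas)
    if same_direction and abs(avg) >= threshold:
        return "up" if avg > 0 else "down"
    return "noise"
```

A score that rose on ChatGPT but dipped on Perplexity is classified as noise here, which matches the week-3 advice: act on consistent directional change, not single-platform wobble.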
Week 4 / Month end
First monthly report. Show: baseline score, current score, gap list with actions taken, timeline to expected improvement. Frame what you did, what the expected impact is, and when to expect it.
Common setup mistakes to avoid
Too-broad query set
Fix: Include the specific category, and ideally the buyer segment, in every query. Generic queries like "best software" are useless.
Wrong competitor selection
Fix: Pick the brands that appear most in AI responses for your tracked queries - not your top sales competitors.
Measuring too frequently
Fix: Daily tracking adds noise without signal. Weekly is the right cadence: day-to-day variation in responses is mostly sampling noise, and the underlying models and sources don't shift daily.
Not tracking Perplexity separately
Fix: Perplexity and ChatGPT respond to different signals on different timelines. Combine them and you lose the ability to attribute score changes correctly.
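In practice this just means keeping one score series per platform instead of one blended series. A minimal sketch of the attribution this enables, using made-up score histories:

```python
def per_platform_deltas(history):
    """Return each platform's change from first to latest tracked score.

    `history` maps platform name to its own score series, e.g.
      {"ChatGPT": [62, 64, 71], "Perplexity": [48, 55, 56]}
    (placeholder numbers). Keeping the series separate is what lets you
    attribute a composite-score move to the platform where it happened.
    """
    return {platform: scores[-1] - scores[0] for platform, scores in history.items()}
```

If the composite score rose 8 points, this tells you whether that came from a Perplexity jump (often citation-driven and fast) or a ChatGPT shift (slower), which is exactly the attribution a blended score destroys.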
Set up AI visibility tracking in minutes
ArtificialPulse handles query submission, response parsing, framing classification, and report generation automatically. Start with a free audit.