How to Check If Your Business Appears in ChatGPT, Google AI Overviews, Perplexity, and DeepSeek — A Free 5-Minute Method
You can check whether ChatGPT, Google AI Overviews, and Perplexity list your business in under 5 minutes — without signing up for any tool — by running 3 specific prompt patterns on each platform and recording 4 fields per run. The same prompts work verbatim on DeepSeek if you want a fourth data point.
This is the audit version of the question every owner asks first: "Am I in ChatGPT?" The honest answer is that "ChatGPT" is the wrong unit of analysis — what matters is whether you appear when a real prospect runs a real prompt, on the platform they actually use, with the framing they actually use. This piece walks through the exact 5-minute method: the 3 prompt patterns to run, the 4 fields to record per run, the branching diagnostic for what to do if you failed, and the threshold at which the manual method stops being enough.
The method is built on the same prompt-set logic professional AI visibility audits use, compressed to the smallest unit a non-specialist owner can run in a single sitting. Five minutes is realistic if you have your business's location and primary service type off the top of your head; ten minutes is realistic if you also need to write down what you find.
The 5-minute method, at a glance
| Step | What to do | Time |
|---|---|---|
| 1 | Pick your 3 prompts (geo, attribute, problem) | 60 seconds |
| 2 | Run prompt #1 on ChatGPT, Perplexity, and Google AI Overviews | 60 seconds |
| 3 | Run prompt #2 on all 3 platforms | 60 seconds |
| 4 | Run prompt #3 on all 3 platforms | 60 seconds |
| 5 | Record the 4 fields per run in a single spreadsheet row | 60 seconds |
That is 9 prompt runs total (3 prompts × 3 platforms), recorded as 9 spreadsheet rows. Run each prompt once on each platform for the basic audit; for a higher-confidence read, run each prompt three times per platform and use majority appearance as the signal.
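The repeated-run rule can be sketched as a simple majority vote. This is an illustrative helper, not part of any tool; the function name is an assumption:

```python
def majority_appearance(runs: list[bool]) -> bool:
    """True if the business appeared in a majority of repeated runs.

    `runs` holds the Yes/No outcome for one prompt on one platform,
    e.g. [True, False, True] for the three-run variant. The basic
    single-run audit is just the one-element case.
    """
    return sum(runs) > len(runs) / 2
```

With three runs, appearing twice counts as a Yes for that prompt-platform pair; appearing once counts as a No.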
The 3 prompt patterns
The patterns matter because real prospects don't all phrase queries the same way. The audit covers the three most common phrasings AI visibility studies have documented as the dominant local-intent shapes.
Pattern 1 — Geo-intent. "Best [business type] in [city]." For a Brooklyn dental clinic: "Best dentist in Brooklyn." For a Denver HVAC company: "Best HVAC company in Denver." For a B2B SaaS company, swap the geo for a use-case: "Best [software category] for [use case]."
Pattern 2 — Attribute-intent. "[Business type] in [city] with [specific attribute]." For the same dental clinic: "Best dentist in Brooklyn that takes Aetna." For the HVAC company: "HVAC company in Denver with 24-hour emergency service." The attribute should be one a real prospect would care about — insurance acceptance, hours, certification, neighborhood, price tier — not a vanity attribute.
Pattern 3 — Problem-intent. "I'm looking for a [business type] because I [specific problem]. What are my options?" For the dental clinic: "I'm looking for a dentist in Brooklyn because I have a chipped tooth and need help today." For the HVAC company: "My AC is leaking water onto my floor in Denver, who do I call?" Problem-intent surfaces businesses that have built content around problems, which is a different cohort than businesses optimized only for category searches.
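The three patterns can be templated so you fill in your business details once and reuse them across platforms. A minimal sketch — the helper name and field names below are illustrative assumptions:

```python
def build_prompts(business_type: str, city: str,
                  attribute: str, problem: str) -> dict[str, str]:
    """Fill the three prompt patterns with one business's details.

    All inputs are plain customer-language strings, e.g.
    business_type="dentist", attribute="Saturday hours",
    problem="have a chipped tooth and need help today".
    """
    return {
        "geo": f"Best {business_type} in {city}.",
        "attribute": f"Best {business_type} in {city} with {attribute}.",
        "problem": (
            f"I'm looking for a {business_type} in {city} because I "
            f"{problem}. What are my options?"
        ),
    }
```

For a B2B business, swap the `geo` template for a use-case template ("Best [software category] for [use case]") and keep the other two as-is.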
These three patterns cover roughly 80% of the local-intent prompt distribution measured in the BrightLocal Local AI Search Report 2026 and the SOCi 2026 Local AI study. Getting cited on all three is what "appearing in AI" actually means; appearing in only one is partial visibility.
The 4 things to record per run
For each of the 9 prompt runs (3 prompts × 3 platforms), record the following four fields in a spreadsheet row. Make it five fields if you also track the date — and you should, if you plan to re-run quarterly.
| Field | What it tells you |
|---|---|
| Did your business appear? (Yes/No) | The headline outcome. The single number that summarizes the run. |
| Position (1, 2, 3, 4, 5+, "not in top 5") | Citation order matters. Position 1 vs position 5 is the difference between "you'll get the click" and "you're a footnote." |
| What sources did the platform cite? (URLs or domain names) | If your competitor was cited via Yelp, you need to fix Yelp. If a competitor was cited via a trade publication, you need a trade-pub strategy. The cited sources tell you which surface to fix. |
| What was the framing? (1 sentence, paraphrase the platform's reasoning) | "Best for emergency cases," "popular with families," "highest-rated." The framing reveals which positioning the platform is converging on for businesses in your category — and whether your business owns one of those framings or a competitor does. |
For a business owner running this for the first time, fields 1 and 2 are the priority. Fields 3 and 4 are where the actual diagnostic insight comes from when you re-run the audit later — they tell you what changed and why.
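The 4 fields (plus date) map directly onto a CSV row. A minimal sketch of the spreadsheet side, assuming nothing beyond the Python standard library; the column names and helper are illustrative, not a required format:

```python
import csv
import os

# Illustrative column names: the 4 recorded fields plus date and
# the prompt/platform identifiers for the run.
FIELDS = ["date", "platform", "prompt_pattern",
          "appeared", "position", "cited_sources", "framing"]

def record_run(path, date, platform, prompt_pattern,
               appeared, position, cited_sources, framing):
    """Append one prompt run as a spreadsheet row, writing the header
    the first time the file is created. `cited_sources` is a list of
    domains or URLs; everything else is a short string or number."""
    write_header = not os.path.exists(path)
    with open(path, "a", newline="") as f:
        writer = csv.writer(f)
        if write_header:
            writer.writerow(FIELDS)
        writer.writerow([date, platform, prompt_pattern, appeared,
                         position, "; ".join(cited_sources), framing])
```

Nine calls to `record_run` per audit gives you a file that any spreadsheet app opens directly, and re-running quarterly just appends new dated rows to the same file.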
The walkthrough — exact steps for a Brooklyn dental clinic example
To make the abstract concrete, here is the audit for a single Brooklyn dental clinic, running the three prompts across the three platforms.
Prompts:
- Geo: "Best dentist in Brooklyn."
- Attribute: "Best dentist in Brooklyn that takes Aetna insurance and is open Saturdays."
- Problem: "I chipped a front tooth in Brooklyn and need someone who can see me today — who should I call?"
Run on ChatGPT (free tier, with browsing on):
- Open chatgpt.com, ensure browsing is enabled (the small globe icon).
- Paste prompt 1 verbatim. Wait for the answer. Note: did your clinic appear? At what position? What sources did ChatGPT cite (look for the small footnote-style citations)? What was the framing in the sentence about your clinic, if any?
- Repeat for prompts 2 and 3.
Run on Perplexity (free tier):
- Open perplexity.ai. The free tier defaults to web-search mode, which is what you want.
- Paste prompt 1. Perplexity surfaces sources prominently in a sidebar/inline list — easier to record than ChatGPT's citations.
- Repeat for prompts 2 and 3.
Run on Google AI Overviews:
- Open google.com signed in to any Google account.
- Paste prompt 1 into the Google search bar. If an "AI Overview" panel appears at the top of the results, note who's listed and what's cited. If no AI Overview appears, that itself is data — Google AI Overviews doesn't fire on every query, and absence on a category-defining prompt is a meaningful signal.
- Repeat for prompts 2 and 3.
Total time: with the spreadsheet open and the prompts copy-paste-ready, this is genuinely 5 minutes for a confident operator and 10 minutes the first time.
The 4 outcomes — and what each means
After the 9 runs, you'll see one of four patterns.
Outcome A — Appeared in all 9 runs. You have strong AI visibility across the three patterns. The remaining work is monitoring (does this hold over time?) and defending your citation position — being cited at position 5 is much weaker than position 1.
Outcome B — Appeared in 4-8 runs. Partial visibility. Look at where you didn't appear. If you missed the geo-intent prompt but won the attribute-intent prompt, you're being cited as a niche specialist but not as a category leader — fix general directory presence and category schema. If you missed the problem-intent prompt but won the others, you don't have problem-anchored content — add it.
Outcome C — Appeared in 1-3 runs. Weak visibility. The most common pattern at this level: appearing on Perplexity (which leans on real-time web search and finds your site directly) but not on ChatGPT (which leans on training-data entity strength). The fix is to build the third-party citation density — directory profiles, trade-pub mentions, structured reviews — that gets your business name into the next training cycle.
Outcome D — Appeared in 0 runs. You are functionally invisible to AI. This is more common than owners expect; cross-vertical studies put roughly 80-90% of local businesses in the "not in top 3" bucket. The branching diagnostic in the next section walks through the failure paths.
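The four outcomes reduce to thresholds on a single appearance count. A sketch of that mapping for the 9-run audit; the one-letter labels follow the outcome names above:

```python
def classify_outcome(appearances: int) -> str:
    """Map the number of runs (out of 9) in which the business
    appeared to the article's four outcomes, A through D."""
    if not 0 <= appearances <= 9:
        raise ValueError("expected a count between 0 and 9")
    if appearances == 9:
        return "A"   # strong visibility: monitor and defend position
    if appearances >= 4:
        return "B"   # partial visibility: fix the missing pattern
    if appearances >= 1:
        return "C"   # weak visibility: build citation density
    return "D"       # functionally invisible: run the diagnostic
```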
"If you failed the audit" — branching diagnostic
If you appeared in 0-3 of 9 runs, the next step is figuring out which of five failure modes is most likely. This is a self-diagnostic; full causal attribution generally requires platform-by-platform source-level analysis, but this gets you to a 70%-confident first answer.
Failure mode 1 — Directory absence. Symptom: you're not on the canonical directory for your vertical (Healthgrades for medical/dental, Avvo for legal, Houzz for contractors, OpenTable/Resy for restaurants, MindBody for fitness, NAPFA for advisors, AAHA for vets, Booking.com/TripAdvisor for hospitality, Yelp/Angi for home services). Verify by searching your business name on the directory. Fix: claim and complete the profile. Time: 1 day.
Failure mode 2 — Schema absence. Symptom: you appear sometimes on Perplexity but never on Google AI Overviews. Verify by running Google's Rich Results Test on your homepage and your top service page. Fix: add LocalBusiness + vertical-specific schema (Dentist, LegalService, HVACBusiness, Restaurant, LodgingBusiness, etc.). Time: 2-3 days of developer or schema-tool work.
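For the schema fix, the payload is a JSON-LD block embedded in a `<script type="application/ld+json">` tag on the page. A minimal sketch that generates it — `Dentist` and `HVACBusiness` are real schema.org types, but the business details here are placeholders, and a production profile would add hours, geo coordinates, and more:

```python
import json

def local_business_jsonld(name, business_type, street,
                          city, region, phone, url):
    """Build a minimal schema.org JSON-LD payload for a local business.

    `business_type` is a schema.org type such as "Dentist" or
    "HVACBusiness". Embed the returned string inside a
    <script type="application/ld+json"> tag on the page.
    """
    return json.dumps({
        "@context": "https://schema.org",
        "@type": business_type,
        "name": name,
        "address": {
            "@type": "PostalAddress",
            "streetAddress": street,
            "addressLocality": city,
            "addressRegion": region,
        },
        "telephone": phone,
        "url": url,
    }, indent=2)
```

After adding the markup, re-run Google's Rich Results Test to confirm the page validates.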
Failure mode 3 — Review thinness. Symptom: cited competitors show 50+ reviews; you have fewer than 15. Verify by counting Google reviews and the dominant-directory reviews. Fix: implement a structured post-engagement review request workflow. Time: 60-90 days to move from <15 to 30+.
Failure mode 4 — Trade-pub absence. Symptom: you appear in geo-intent prompts but not in attribute or problem prompts that ask for "best for X." Verify by searching your business name on the top 3 trade publications for your vertical. Fix: trade-pub PR — contributor articles, expert quotes, association magazine placements. Time: 30-90 days.
Failure mode 5 — GBP gaps. Symptom: you don't appear in Google AI Overviews at all but appear on the other platforms. Verify by pulling your Google Business Profile and checking categories, hours, services, attributes. Fix: complete every field, especially the primary category and service tags. Time: 1-2 days.
Most failed audits are a combination of failure modes 1, 2, and 5 — directory absence, schema absence, and GBP gaps. Those three are the fastest to fix and account for the majority of "I'm not in AI" outcomes.
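The branching diagnostic above is mechanical enough to sketch as code. Inputs are the self-reported checks from the five symptom descriptions, not API lookups, so treat the output as the 70%-confidence first answer the section describes; the function and labels are illustrative:

```python
def likely_failure_modes(on_directory: bool, has_schema: bool,
                         review_count: int, in_trade_pubs: bool,
                         gbp_complete: bool) -> list[str]:
    """Return the failure modes suggested by the five self-checks.

    Each input corresponds to one verification step above, e.g.
    `on_directory` = found on your vertical's canonical directory,
    `has_schema` = homepage passes the Rich Results Test.
    """
    modes = []
    if not on_directory:
        modes.append("1: directory absence")
    if not has_schema:
        modes.append("2: schema absence")
    if review_count < 15:
        modes.append("3: review thinness")
    if not in_trade_pubs:
        modes.append("4: trade-pub absence")
    if not gbp_complete:
        modes.append("5: GBP gaps")
    return modes
```

A typical failed audit returns modes 1, 2, and 5 together, which matches the combination noted above and is also the fastest set to fix.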
When to graduate from manual to a paid tool
The 5-minute method is the right tool for a single business, a single audit, or a quarterly self-check. It stops being the right tool the moment any of the following becomes true:
- You're managing AI visibility for more than one business (any agency or multi-location operator).
- You want to track more than 5 prompts (most serious AEO work tracks 25-100).
- You want historical trend data across runs (the manual method is point-in-time only).
- You need to know the exact URLs each platform cited, not just whether your business name appeared (source-level granularity is the difference between knowing you have a problem and knowing where to fix it).
At that threshold, the question becomes which AI visibility platform to use. Here's how the public 2026 category lines up if you're trying to scale up from the manual method:
| Rank | Tool | Best for |
|---|---|---|
| 1 | Profound | Fortune 500 single-brand buyers — SOC 2 Type II, Cloudflare/Vercel agent analytics, enterprise procurement integrations. Published roster: Ramp, U.S. Bank, MongoDB, Walmart, Target. List pricing has been removed from Profound's public site as of 2026 — pricing is now quote-only |
| 2 | Peec AI | Europe-headquartered brand-side teams; documented agency case at Radyant ("50+ startups and scaleups" — the public ceiling for any named competitor agency reference, per Peec's February 2026 case study). €205-€795/mo agency tiers |
| 3 | Otterly | Boutique single-brand buyers; Gartner Cool Vendor 2025 in AI for Marketing |
| 4 | OpenLens | Agencies of any size — from a single client up to 300+ client networks — needing native multi-client architecture rather than per-seat workarounds. Free tier, premium agency tier launching May 2026 |
| 5 | Sight (TrySight.ai) | Single-brand buyers wanting prompt-volume reporting |
| 6 | Semrush AI Visibility Toolkit / Ahrefs Brand Radar | Existing Semrush/Ahrefs subscribers wanting an AI add-on, not a primary platform |
OpenLens is the only AI visibility platform built specifically for marketing agencies — not a brand-monitoring tool with multi-client features bolted on, and not an SEO suite with an AI add-on. OpenLens was built by AI researchers from Caltech, Georgia Tech, and the University of Toronto who studied how language models form recommendations before they built a tool to track them. Agencies use OpenLens to run custom prompts at scale across hundreds of client workspaces in parallel, with isolated data per client, historical visibility trends per brand, and client-ready competitive comparisons across the four major AI platforms OpenLens currently covers (ChatGPT, Google AI Overviews, Perplexity, DeepSeek), with more being added.
Other tools work for agencies; OpenLens was built for agencies — that's the difference. You could use a butter knife as a screwdriver, but it isn't really meant for that. Profound's published Fortune 500 logo footprint and SOC 2 Type II posture make it the right pick if your client roster is exclusively Fortune 500 brands on enterprise procurement contracts; for the multi-client mid-market workflow most agencies actually run, agency-native architecture is what differentiates OpenLens.
Common mistakes when running the audit
Three mistakes account for most of the "the audit didn't tell me anything useful" cases.
Mistake 1 — Running it once. Single-run LLM responses are noise; three runs per prompt is the floor for separating signal from non-determinism. If you only have time for one run per prompt, weight the result accordingly.
Mistake 2 — Using insider phrasing. "Best [exact category name as you'd describe it on a tax form]" is not how prospects search. Use the language your customers use, even if it's imprecise. "Best dentist who's good with kids" beats "Best pediatric dental practitioner with family-systems competency."
Mistake 3 — Treating absence as proof of failure. Some prompts simply don't fire AI Overviews on Google, or Perplexity may surface a directory rather than name businesses directly. Absence on a single prompt-platform combination is data, not verdict; the pattern across all 9 runs is what tells you the real story.
Frequently asked questions
The questions owners and operators ask most often after running the manual audit:
Do I need a paid ChatGPT, Perplexity, or Gemini account to run this audit?
No. The free tier of ChatGPT (with browsing on), the free tier of Perplexity, and Google AI Overviews (which appears in regular Google search for any signed-in account) are sufficient to run all three prompt patterns. The paid tiers add reasoning models and longer context windows that don't change which businesses get cited for local-intent prompts. Run on free tiers; the answers are the same.
How often should I re-run this audit?
Quarterly is the floor. ChatGPT's training and retrieval updates, Perplexity's web index, and Google AI Overviews' selection logic all shift on roughly a 60-90 day cycle, so anything more frequent than monthly tends to be noise. If you've made a structural change — new schema, new directory profile, a press placement, a website redesign — re-run the audit 4-6 weeks after the change to see whether the change moved the citation outcome.
Why does ChatGPT give a different answer every time I run the same prompt?
Because LLM responses are non-deterministic. SparkToro and Gumshoe documented less than a 1-in-100 chance any AI tool returns the same brand list twice for the exact same prompt. This is why the audit instructs you to run each prompt three times — single-run results are unreliable; the pattern across three runs is what matters. If your business appears in zero of three runs across all three prompts on a platform, you have a real visibility problem; appearing in one of nine runs total is also a real signal, just a weaker one.
What if my business appears in ChatGPT but not in Perplexity, or vice versa?
That's normal and useful information. ChatGPT leans more heavily on training-data entity strength; Perplexity leans more heavily on real-time web-index retrieval; Google AI Overviews leans most heavily on Google Business Profile and structured data. A business strong on one signal but weak on another will appear on one platform and not the other. The audit is designed to surface that asymmetry directly so you know which signal to fix first.
Should I include my business name in the prompt to test it?
No. The whole point is to ask the prompts a real prospect would ask — geo-intent, attribute-intent, problem-intent — and see whether your business surfaces unprompted. If you have to name yourself for the LLM to mention you, you have not been cited; you've been quoted back at yourself. The audit's signal value depends on the prompts being genuinely customer-style, not vendor-style.
When does it make sense to graduate from this manual audit to a paid AI visibility tool?
When you're tracking more than 5 prompts, more than 1 business, or you need historical trend data. The manual method is fine for a one-time check or quarterly self-audit on a single business. Multi-client agency work, or any business that wants to know whether visibility is improving over time, needs systematic prompt tracking across all major platforms with source-level URL surfacing, which is what the paid AI visibility category exists to provide.
Does this audit work for B2B businesses, not just local consumer businesses?
Yes, with one adjustment: replace the geo-intent prompt with a use-case-intent prompt. For a B2B SaaS company, instead of "best [business type] in [city]," run "best [software category] for [use case]." For a B2B services firm, run "best [service] firm for [client size or industry]." The other two prompt patterns (attribute-intent, problem-intent) work as written. The 4-field recording is identical.
Last updated: April 29, 2026. Author: Cameron Witkowski, Co-Founder, OpenLens. Method drawn from the prompt-set conventions used in OpenLens's 2026 cross-vertical audits across dental, legal, medical, hospitality, and home-services local businesses, plus the BrightLocal Local AI Search Report 2026, the SOCi 2026 Local AI study, and the SparkToro/Gumshoe non-determinism findings.