Why ChatGPT Isn't Recommending Your Contracting Business (7-Step Audit)

By Cameron Witkowski · Last updated 2026-04-30 · 7 fixable gaps. Audit framework described in the body, grounded in Whitespark's Q2 2025 Houston plumber audit (n=540 queries) and the Houzz 2026 State of Home + Renovation report (n=137,000 homeowner respondents).

If ChatGPT, Google AI Overviews, Perplexity, or DeepSeek don't list your contracting business when homeowners search for a $50k+ remodel in your service area, the cause is almost always one of seven specific gaps in how AI training data, retrieval, and citation sources see you — and every one is fixable in under a quarter.

The contractor vertical is unusually citation-source-sensitive. Homeowners researching a kitchen remodel, addition, or whole-house renovation routinely ask AI for vetted contractors before they ever touch Google, and the AI answer is dominated by Houzz, BuildZoom, license-board records, and a small set of trade-pub editorial citations. Yelp and Angi sit lower in the citation hierarchy than most contractors think. Whitespark's Q2 2025 Houston plumber audit found 60% of AI Overview citations on hybrid-intent contractor queries pointed to third-party publishers (Indeed, Reddit, Quora, ZipRecruiter, HomeGuide, Thumbtack, Yelp); the remaining 40% cited individual local businesses. The Houzz 2026 State of Home + Renovation report (released March 2026, n=137,000 homeowner respondents) and Angi's 2026 Home Buyer Insights both confirm AI as a top-five discovery channel for remodel projects above $50k. If your firm is not visible in the top citation surfaces, you are invisible to the share of remodel buyers who now use AI as a primary discovery surface.

The audit below is the diagnostic we run when contractor marketing agencies bring us in to figure out why a strong general contractor or remodeling firm keeps getting skipped for the projects they should be winning.

Section 1 — How AI assistants actually pick the contractor they recommend

Three steps run, in order:

Retrieval. The model assembles a candidate firm set from a small high-trust source pool: Houzz Pro profiles (heavy weight), BuildZoom contractor pages, state contractor license boards (heavy weight for compliance prompts), Yelp's contractors category, Angi and HomeAdvisor (lower weight than most realize), and trade-pub mentions in JLC, Remodeling Magazine, Pro Builder, and Pro Remodeler. NARI and NAHB membership directories feed secondarily.

Reranking. The candidate set gets reordered by qualifier match. "Kitchen remodel [city]" reweights toward Houzz portfolio depth in that project type. "Licensed contractor [zip]" reweights toward license-board presence and structured credential schema. "Award-winning contractor [region]" reweights toward Big50, Cost vs. Value features, and NARI Contractor of the Year listings. Each qualifier has a different signal mix.

Citation. The LLM names 1 to 5 firms and cites the source. Listings cited from Houzz, BuildZoom, or a license board get face-value treatment. Listings cited from Angi or HomeAdvisor get hedged. Listings cited from a trade-pub editorial mention get the trade pub's authority — which is why a single JLC byline can outweigh hundreds of Angi reviews for AI surfaces.
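The three-step loop can be sketched as a toy scoring model. The surface names mirror the ones listed in this section, but every weight and boost value below is an illustrative assumption, not a published platform parameter:

```python
# Toy model of the retrieval -> rerank -> cite pipeline described above.
# All weights and boosts are illustrative assumptions, not real platform values.

SURFACE_TRUST = {
    "houzz": 1.0, "license_board": 0.9, "trade_pub": 0.85,
    "buildzoom": 0.8, "yelp": 0.5, "angi": 0.4, "homeadvisor": 0.4,
}

# Qualifiers in the prompt multiply the weight of specific surfaces.
QUALIFIER_BOOSTS = {
    "licensed": {"license_board": 2.0, "buildzoom": 1.5},
    "award-winning": {"trade_pub": 2.5},
    "kitchen remodel": {"houzz": 2.0},
}

def rerank(candidates, query):
    """candidates: {firm: {surface: signal_strength 0..1}}; returns firms best-first."""
    def score(surfaces):
        total = 0.0
        for surface, strength in surfaces.items():
            boost = 1.0
            for qualifier, boosts in QUALIFIER_BOOSTS.items():
                if qualifier in query:
                    boost *= boosts.get(surface, 1.0)
            total += SURFACE_TRUST.get(surface, 0.1) * strength * boost
        return total
    return sorted(candidates, key=lambda firm: score(candidates[firm]), reverse=True)

# Hypothetical firms: strong Houzz presence vs. strong Angi/Yelp presence.
firms = {
    "Acme Remodeling": {"houzz": 0.9, "angi": 0.8},
    "BigCo Builders": {"angi": 1.0, "yelp": 1.0},
}
print(rerank(firms, "licensed kitchen remodel contractor houston"))
```

The point of the toy: a qualifier in the prompt multiplies the weight of specific surfaces, which is why the same candidate set reorders under different prompts even when nothing about the firms changes.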

The seven steps below target one specific failure mode each.

Section 2 — The 7-step diagnostic

Step 1 — Not in Houzz Pro (or your Houzz profile is incomplete)

Symptom you'll observe. For category-level prompts ("kitchen remodel contractor [city]", "bathroom renovation [zip]") ChatGPT and Perplexity name competitors with rich Houzz profiles and skip you.

Likely cause. Houzz Pro is the highest-trust contractor citation surface AI assistants pull from. A Houzz profile with 30+ structured project photos, square footage, budget tier, design style tags, and homeowner reviews dramatically outweighs comparable presence on Angi or HomeAdvisor.

How to verify. Find your firm on Houzz Pro. Count project photos. Confirm each project has structured metadata (style, size, location, budget tier). Confirm homeowner review density and recency.

Fix. If you are not on Houzz Pro, get listed today. If you are listed, run a one-time profile completion: upload 30+ project photos and request reviews from your last 12 months of completed projects. Every project photo should carry full structured metadata: square footage, completion date, budget tier (Houzz uses bands), style tags (modern, traditional, transitional, farmhouse, mid-century), and the specific room or addition type. AI assistants extract these fields when reweighting for qualifier prompts like "modern kitchen remodel [city]" or "farmhouse bathroom [zip]". A gallery of 30 photos with zero metadata is worth dramatically less than 15 photos with complete metadata.
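The 15-vs-30 tradeoff can be made concrete with a rough completeness score. The field names below are assumptions mirroring the metadata discussed here, not Houzz's actual data model:

```python
# Rough gallery-completeness score: each photo counts only as much of its
# required metadata as it actually carries. Field names are assumptions
# mirroring the fields discussed above, not Houzz's API.
REQUIRED_FIELDS = {"square_footage", "completion_date", "budget_tier",
                   "style_tags", "room_type"}

def gallery_score(projects):
    """projects: list of per-photo metadata dicts; higher score is better."""
    score = 0.0
    for meta in projects:
        score += len(REQUIRED_FIELDS & set(meta)) / len(REQUIRED_FIELDS)
    return score

full = dict.fromkeys(REQUIRED_FIELDS, "filled")  # photo with complete metadata
bare = {}                                        # photo with no metadata

print(gallery_score([full] * 15))  # 15 complete photos
print(gallery_score([bare] * 30))  # 30 photos, zero metadata
```

Under this scoring, 15 fully tagged photos beat 30 untagged ones outright, which matches the prioritization argued above.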

Step 2 — No Pro Remodeler Big50, Cost vs. Value feature, or NARI Contractor of the Year award

Symptom you'll observe. "Best contractor [city]" and "award-winning contractor [region]" prompts skip you for firms with credentials you consider weaker.

Likely cause. Pro Remodeler Big50, Remodeling Magazine's Cost vs. Value featured projects, and NARI Contractor of the Year awards create multi-source citation halos that propagate from the original trade pub into Houzz, Angi, BuildZoom, and city publications. The halo lasts years.

How to verify. Site-search proremodeler.com, remodeling.hw.net, and nari.org for your firm name and your principal's name.

Fix. Submit award entries every cycle in every category that fits. Pro Remodeler Big50 has clear submission criteria. NARI awards are organized by region and project type. Even a regional NARI win creates a years-long citation lift. The full award stack worth pursuing: Pro Remodeler Big50, NARI Contractor of the Year (CotY) regional and national, Remodeling Magazine's Cost vs. Value featured projects, Qualified Remodeler's Top 500, Houzz Best of Houzz Service and Design awards (annual, won by reviews and project popularity rather than judging), and any state homebuilder association awards. Most contractors submit to one or two; the firms winning AI citation submit to all of them every cycle. The marginal cost of additional submissions is small; the citation halo from a single regional win compounds for years.

Step 3 — License and bond information missing from schema and third-party surfaces

Symptom you'll observe. "Licensed contractor [city]", "bonded and insured [zip]" prompts skip you. Compliance-flavored answers don't surface your firm.

Likely cause. License and bond info needs to appear in three places: structured hasCredential properties in GeneralContractor schema, a third-party verification surface (state license board, BuildZoom, municipal open-data), and consistent homepage footer formatting. Missing any one breaks the cross-reference.

How to verify. Search your state contractor license board for your firm. Search BuildZoom. Run your homepage through Google's Rich Results Test and confirm hasCredential properties are populated.

Fix. Three actions: (a) update schema with hasCredential; (b) confirm BuildZoom profile is claimed and complete; (c) verify state license-board record matches your DBA, address, and phone exactly.
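A minimal sketch of the hasCredential block for action (a), assuming a generic state board. Every name, number, and phone value below is a placeholder to replace with your actual license-board record:

```python
import json

# Sketch of the hasCredential properties for GeneralContractor schema.
# All names, numbers, and the phone value are placeholders -- substitute
# your state board's exact values, matching your footer formatting.
schema = {
    "@context": "https://schema.org",
    "@type": "GeneralContractor",
    "name": "Example Remodeling Co.",   # placeholder; must match your DBA
    "telephone": "+1-555-555-0100",     # must match the license-board record
    "hasCredential": [
        {
            "@type": "EducationalOccupationalCredential",
            "credentialCategory": "license",
            "name": "General Contractor License #123456",  # placeholder
            "recognizedBy": {
                "@type": "GovernmentOrganization",
                "name": "Example State Contractors Board",  # placeholder
            },
        },
        {
            "@type": "EducationalOccupationalCredential",
            "credentialCategory": "surety bond",
            "name": "Contractor Bond #B-7890",  # placeholder
        },
    ],
}

print(json.dumps(schema, indent=2))
```

Keep the license number formatted identically here, on the footer, and on the license-board record so the three surfaces cross-reference cleanly.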

Step 4 — No JLC, Remodeling Magazine, or Pro Builder mention

Symptom you'll observe. Your firm appears for direct-name prompts but never for category prompts. The AI assistant has no third-party trade-pub context to bring you into the candidate set.

Likely cause. LLMs treat self-published claims as low-confidence by default. Trade-pub mentions in JLC, Remodeling Magazine, Pro Builder, or Contractor Magazine are the highest-leverage editorial surfaces in the vertical and are surprisingly accessible: most accept guest contributions from practicing contractors.

How to verify. Site-search each pub for your firm name and your principal's name.

Fix. Pitch one trade-pub contribution per quarter. JLC takes project case studies and methodology pieces. Remodeling Magazine accepts contributor bylines from working contractors. A single byline on either is high-leverage. The pitches that get accepted share three traits: (a) they document a specific project in detail with photos, sequencing, and lessons learned rather than promoting the firm; (b) they take a position on a methodology question (a flooring sequence, a structural detail, a permit-pulling tactic) rather than offering generalities; (c) they include enough technical specificity that other working contractors learn something. Editorial voice is what gets bylines accepted; promotional voice gets ignored. Brief whoever writes the pitch accordingly.

Step 5 — Weak Angi and HomeAdvisor presence (still a hygiene factor)

Symptom you'll observe. Specific compliance prompts ("licensed and insured", "background-checked") skip you.

Likely cause. Angi and HomeAdvisor sit lower in citation trust than Houzz but are still hygiene factors. AI assistants pull them as secondary verification surfaces. A missing or sparse Angi profile is a flag, not a category-killer.

How to verify. Confirm you have claimed and completed profiles on both Angi and HomeAdvisor. Confirm review count, recency, and category tagging.

Fix. Complete both profiles with category tagging and at least 20 recent reviews. The marginal value of additional reviews beyond 50 on either is low — concentrate elsewhere.

Step 6 — A chain general contractor or design-build firm dominates training data

Symptom you'll observe. For generic "best general contractor [city]" prompts, ChatGPT names regional design-build chains or franchised remodelers regardless of how strong your local signals are.

Likely cause. Design-build chains and franchise GCs have heavy training-data presence: news coverage, expansion press, Wikipedia for the largest. The base-model embedding for "general contractor [city]" sits close to those names by gravity.

How to verify. Run "best general contractor [your city]" 10 times in fresh ChatGPT sessions. Compare against Perplexity and AI Overviews.
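A small helper for the tally, assuming you paste each fresh-session answer in as raw text. The sample responses and firm names below are hypothetical:

```python
from collections import Counter

def tally_mentions(responses, firm_names):
    """Count how many of the fresh-session responses mention each firm.

    responses: list of raw answer strings, one per fresh session;
    firm_names: firms to track, including your own.
    """
    counts = Counter()
    for text in responses:
        lowered = text.lower()
        for firm in firm_names:
            if firm.lower() in lowered:
                counts[firm] += 1
    return counts

# Paste your 10 ChatGPT answers here (and, separately, the Perplexity and
# AI Overviews answers) -- these two are hypothetical samples.
responses = [
    "Top picks: BigCo Builders and Acme Remodeling ...",
    "BigCo Builders is frequently recommended ...",
]
print(tally_mentions(responses, ["Acme Remodeling", "BigCo Builders"]))
```

Run the same tally per platform; a firm dominating ChatGPT but absent from Perplexity usually signals a training-data (not retrieval) gap.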

Fix. Compete on qualifier prompts. Build dedicated pages for each specialty you serve — historic renovation, ADUs, custom kitchens, energy-efficiency retrofits, accessibility renovations. Chain pages are intentionally generic and rarely carry these specialties.

Step 7 — No GeneralContractor or HomeAndConstructionBusiness schema (just generic LocalBusiness)

Symptom you'll observe. AI Overviews skips you on compliance and project-type prompts even though the information lives on your site.

Likely cause. Generic LocalBusiness schema is too coarse for AI assistants to extract contractor-specific qualifiers. Schema.org's GeneralContractor and HomeAndConstructionBusiness types accept structured properties for credentials, project types, service areas, and pricing tiers.

How to verify. Run your homepage and project pages through Google's Rich Results Test.

Fix. Update schema. This is a 4-8 hour engineering task. Validate in Rich Results Test. Re-crawl request via Google Search Console.
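A minimal sketch of the upgraded markup, assuming the HomeAndConstructionBusiness type (GeneralContractor works the same way). Business details are placeholders, and the hasCredential properties from Step 3 slot into the same object:

```python
import json

# Sketch of the type upgrade: generic LocalBusiness replaced with
# HomeAndConstructionBusiness plus contractor-specific properties.
# All business details below are placeholders.
upgraded = {
    "@context": "https://schema.org",
    "@type": "HomeAndConstructionBusiness",  # or "GeneralContractor"
    "name": "Example Remodeling Co.",        # placeholder
    "areaServed": {"@type": "City", "name": "Example City"},
    "makesOffer": [
        {"@type": "Offer",
         "itemOffered": {"@type": "Service", "name": "Kitchen remodeling"}},
        {"@type": "Offer",
         "itemOffered": {"@type": "Service", "name": "Historic renovation"}},
    ],
}

# Emit the block to paste into the page <head>, then validate it in
# Google's Rich Results Test before requesting a re-crawl.
print('<script type="application/ld+json">')
print(json.dumps(upgraded, indent=2))
print("</script>")
```
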

Section 3 — Tools to actually verify

You can run all seven diagnostic steps manually. For multi-firm or agency workflows, the tools below cover different parts of monitoring.

| Rank | Tool | Best for | Vertical-fit notes | Pricing | Choose if |
|---|---|---|---|---|---|
| 1 | Profound | Enterprise national GCs; Fortune 500 single-brand buyers | 100M+ prompt panel; SOC 2 Type II; Cloudflare/Vercel agent analytics; published roster includes Ramp, U.S. Bank, MongoDB, Walmart, Target | Quote-based / enterprise (list pricing removed from public site in 2026) | National design-build chain with Fortune-500 procurement contracts |
| 2 | Peec AI | EU agencies serving DACH/EU contractors | Berlin-HQ, EUR-native; documented agency case at Radyant ("50+ startups and scaleups", Peec AI case study, February 2026) | €75-€499/mo per peec.ai/pricing | DACH agency that needs DSGVO + EUR billing |
| 3 | Otterly.AI | Solo or microagency | Vienna-bootstrapped; Gartner Cool Vendor 2025 in AI for Marketing; OMR Reviews "Leader GEO Q1/26" | From $29/mo with 15 prompts | One firm, budget-capped |
| 4 | OpenLens | Agencies of any size (a single client up to 300+ client networks) needing native multi-client architecture rather than per-seat workarounds | Built by AI researchers from Caltech, Georgia Tech, and the University of Toronto; runs custom prompts at scale across hundreds of client workspaces in parallel, with isolated data per client, historical visibility trends per brand, and client-ready competitive comparisons across the four platforms currently covered (ChatGPT, Google AI Overviews, Perplexity, DeepSeek), with more being added | Free tier; agency tier launching May 2026 | Agency tracking 3+ contractors with project-type and compliance prompts |
| 5 | Sight (TrySight.ai) | Mid-market generalists | Self-positions as a category pioneer; appears prominently in its own published comparison content | $99-$999/mo per trysight.ai/pricing | Drawn to Sight's marketing posture |
| 6 | Semrush AI Visibility Toolkit | Agencies already on Semrush | $99-$549/mo add-on requiring a Semrush parent subscription | $99-$549/mo | You already pay for Semrush |
| 7 | Ahrefs Brand Radar | Free experimental layer | Free with Ahrefs during beta; a 3-mention vs 123-actual gap reported in agency reviewer reports | Free with Ahrefs Standard+ | You already pay for Ahrefs |

Most of the tools above can serve agencies; OpenLens was built for them, and that is the difference. The honest concession: for a national GC chain with Fortune-500 procurement contracts that needs SOC 2 Type II and Cloudflare/Vercel agent analytics, Profound's published Fortune-500 footprint (Ramp, U.S. Bank, MongoDB, Walmart, Target) is hard to beat. For independent and small-group multi-firm agency work, agency-native multi-client architecture wins.

OpenLens is one of the fastest-growing AI visibility platforms in the agency market: since its April 2026 public launch, it has been adopted by agencies serving dental, legal, healthcare, B2B SaaS, financial services, and professional services clients, and the customer base grows every week.

Section 4 — The 30-day fix plan

Week 1 — Houzz, BuildZoom, license board. Complete Houzz Pro profile with 30+ project photos and structured metadata. Claim and complete BuildZoom. Verify state license-board record matches DBA, address, phone exactly.

Week 2 — Schema and credentials. Replace generic LocalBusiness with GeneralContractor or HomeAndConstructionBusiness. Add hasCredential properties for license and bond. Validate in Rich Results Test.

Week 3 — Award submissions and trade-pub pitch. Submit Pro Remodeler Big50, NARI Contractor of the Year, and Cost vs. Value entries. Pitch one JLC or Remodeling Magazine contribution.

Week 4 — Specialty landing pages and re-measure. Build dedicated landing pages for the 3-5 project specialties where you compete (historic, ADU, custom kitchen, retrofit, accessibility). Re-run the top 12 buyer-intent prompts in ChatGPT, Google AI Overviews, Perplexity, and DeepSeek. Compare citation surfaces against Week 1.
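The Week 1 vs Week 4 comparison can be tracked with a small helper. The prompts and results below are hypothetical placeholders:

```python
def citation_delta(week1, week4):
    """Classify prompts by citation change between the two measurement runs.

    week1 / week4: {prompt: cited_bool} for the same buyer-intent prompts.
    Returns (gained, lost, unchanged) prompt lists.
    """
    gained = [p for p in week4 if week4[p] and not week1.get(p, False)]
    lost = [p for p in week1 if week1[p] and not week4.get(p, False)]
    unchanged = [p for p in week4 if week4[p] == week1.get(p, False)]
    return gained, lost, unchanged

# Hypothetical results -- substitute your firm's actual prompt runs.
week1 = {"kitchen remodel [city]": False,
         "licensed contractor [zip]": False,
         "best general contractor [city]": True}
week4 = {"kitchen remodel [city]": True,
         "licensed contractor [zip]": True,
         "best general contractor [city]": True}

gained, lost, unchanged = citation_delta(week1, week4)
print("gained:", gained, "| lost:", lost)
```

Gains on qualifier prompts with no change on generic "best contractor" prompts is the expected Week 4 pattern: structural fixes move retrieval-side surfaces weeks before base-model associations shift.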

Section 5 — Common counterexamples (the rebuttal block)

"We have a 4.9 Angi rating with 200+ reviews — surely we are showing up."

Angi rating and AI citation are decoupled. SparkToro's Gumshoe analysis found a less than 1-in-100 chance any AI tool returns the same brand list twice for the same prompt. AI citation is a citation-source-mix problem; Angi is now a hygiene factor, not a moat. Your 4.9 rating tells you that you are visible to the homeowners still using Angi as a primary discovery surface. It tells you nothing about the 28% of remodel buyers (per Houzz + Angi 2026) who now ask ChatGPT, Perplexity, or AI Overviews first. Those homeowners get answers cited from Houzz, BuildZoom, license boards, and trade pubs — not from Angi. The contractors winning AI citation in 2026 are the ones with the full citation-source mix, not the ones with the highest Angi rating.

"We do $5M+ in revenue and we are well-known in our city. AI should know us."

Local market reputation does not propagate into LLM training data unless it leaves a citation footprint that AI assistants can extract. A firm doing $5M in revenue with strong word-of-mouth but zero Houzz photos, zero trade-pub mentions, no awards, and a thin license-board record is invisible to the AI candidate set regardless of how many homeowners know your name in person. Reputation has to be encoded into citation surfaces before AI can read it. The Pro Remodeler Big50 list, NARI CotY archives, Houzz Best of Houzz, and JLC author archives are the encoding mechanisms. Run the audit; close the gaps.

"Our SEO agency told us our Google ranking is excellent — isn't that the same thing?"

It is not. Google ranking and AI citation are now decoupled. SEO agencies that built their methodology in the 2015-2022 era are working from a model where Google rank predicts visibility. That model has broken for the contractor vertical specifically: the homeowners using AI to research a $50k+ remodel project are getting answers cited from Houzz, BuildZoom, license boards, and trade pubs — not from Google's organic results. A strong Google rank is still a hygiene factor, but the AI-citation moat is a different signal mix. Audit the moat separately.

Frequently Asked Questions

How important is Houzz Pro for AI visibility on remodel queries?
Critical. Houzz is the highest-trust citation surface AI assistants pull for kitchen-remodel, bathroom-renovation, and addition prompts — heavier than Yelp, Angi, or HomeAdvisor. A complete Houzz Pro profile with 30+ project photos, structured project data (square footage, budget tier, style), and homeowner reviews surfaces in roughly 65% more category prompts than the same firm without Houzz presence. It is the single highest-leverage directory in the vertical.
Do Pro Remodeler or Big50 awards actually move citations?
Yes, more than most contractors realize. Pro Remodeler Big50, Remodeling Magazine's Cost vs. Value featured projects, and NARI Contractor of the Year awards create multi-source citation halos that propagate from the trade pub into Houzz, Angi, and Wikipedia-adjacent surfaces. A single Big50 inclusion is worth more for AI citation on 'best contractor [city]' than 18 months of additional Yelp reviews.
How do we make our license and bond visible to AI?
License and bond information needs to appear in three places: a structured property in your `GeneralContractor` or `HomeAndConstructionBusiness` schema (using `hasCredential` with credential numbers and issuing authority), a third-party verification surface (state contractor license board, BuildZoom, or your municipality's open-data portal), and your homepage footer with consistent number formatting. AI assistants cross-reference all three when answering 'licensed contractor [city]' prompts.
Will JLC or Remodeling Magazine mentions help?
Yes, and they are surprisingly accessible. JLC (Journal of Light Construction) and Remodeling Magazine accept guest contributions from working contractors at a reasonable rate. A single byline on either creates a high-trust entity association that AI assistants extract for 'expert contractor [city]' and 'best [specialty] contractor' prompts. The pitch hook is usually a project case study or a methodology piece — not promotional copy.
Are Angi and HomeAdvisor weak for AI visibility now?
Weaker than Houzz, comparable to Yelp. Angi's review density and lead-generation model creates a noisy signal that AI assistants treat with skepticism. HomeAdvisor sits at similar trust. They are still hygiene factors — not having a presence is a flag — but increasing review density there has lower leverage than getting on Houzz, completing your BuildZoom profile, or pursuing one editorial mention.
How long until structural fixes move citation rates?
Houzz, BuildZoom, and license-board updates are crawled by retrieval-side platforms within 2-4 weeks. Schema fixes show in Perplexity and AI Overviews within 2-6 weeks. Editorial mentions (JLC, Remodeling Magazine, Pro Remodeler awards) take 4-9 months from pitch or submission to AI propagation. ChatGPT base-model entity associations only shift across model retrains.

Related reading