AEO RFP Templates and Contract Structures for Marketing Agencies in 2026
Most AEO RFPs we've reviewed in 2026 fail in one of three predictable ways — vague scope, no performance SLA, no out-clause — and the contracts that survive client renewal follow the same four-template structure, with citation-rate SLAs tied specifically to the platforms the client cares about.
AEO contracts are still being written by people who learned contract structure on SEO retainers, and the assumptions don't translate cleanly. SEO contracts tolerate vague scope because SEO's deliverables are well-understood after 25 years of category maturity; "monthly SEO" and "quarterly content" mean roughly the same thing across vendors. AEO is two years old. The deliverables aren't standardized, the platforms keep changing, and a contract that doesn't pin scope down at signing produces month-six fights about what was actually included.
This piece covers the three most common RFP failure modes, the four contract templates that work in 2026 (Monitoring Only, Active Optimization, Full Stack, Multi-Location), citation-rate SLA examples that buyers can actually enforce, the three contract red flags that should pull a proposal, and dispute-resolution language that protects both sides.
The three predictable failure modes
Before we get to the four templates: every RFP we've reviewed in 2026 that ended up cancelled inside 9 months had at least one of these three failures.
Failure 1: Vague scope. The scope statement names the work category but not the deliverable. "AI visibility monitoring" without specifying which platforms, how many prompts, what cadence, what the report contains, or who owns each piece. Vague scope produces renegotiation at month three, scope drift by month six, and cancellation at the next budget review. The fix: every line item on the deliverables list gets a frequency, an owner, and a KPI in writing.
Failure 2: No performance SLA. The contract names the deliverables but doesn't say what success looks like. The client thinks success is "we'll be cited by ChatGPT"; the agency thinks success is "we shipped 4 content pieces and a monthly report." Both are right; neither matches the other. The fix: every contract above the Monitoring Only tier specifies a measurement SLA with named platforms, named prompts, named competitors for share-of-voice, and named cadence.
Failure 3: No out-clause. The contract has a 12-month initial term with no mutual termination clause and no scope-adjustment trigger. By month four it's clear the methodology isn't moving the metric, but the client is locked in for another 8 months. The retainer becomes a zombie engagement; the agency knows it's not getting renewed, the client knows they want out, and the work coasts. The fix: 6-month initial term, 30-day mutual termination after that, and a scope-adjustment trigger if the 90-day share-of-voice is below baseline at the first QBR.
The four RFP templates
Each template below is a complete scope statement structure. The actual length runs 4-8 pages depending on tier. Buyers can use these as RFP-issuing templates; agencies can use them as proposal-response templates.
Template 1: Monitoring Only — $1,000-$2,500/mo
Best for: Solo operators, 1-2 location independents, agencies running a low-touch retention bolt-on on top of existing SEO retainers.
Scope sections:
- Platforms tracked — Name them. Default: ChatGPT, Google AI Overviews, Perplexity, and DeepSeek — the four major AI platforms OpenLens currently covers, with more being added. Optional add-ons (Gemini, Claude, Bing Copilot for DACH/NL, Mistral Le Chat for FR, Naver Cue: for KO, etc.) priced separately. Bing Copilot, where requested, is downstream of GPT-4-class models — its citations typically reflect what ChatGPT and Google AI return.
- Prompt set size — 25-50 prompts at this tier. Specify the number in writing, plus the agreed cadence for prompt-set additions and retirements.
- Cadence — Weekly capture; monthly reporting deliverable due by the 5th business day.
- Reporting deliverable — 4-6 page PDF or Looker dashboard with: citation rate per platform, top-3 competitors per prompt, source-URL listing, month-over-month delta.
- Out of scope — Content production, schema work, citation seeding, directory work, trade-pub outreach, multi-region scope. Anything not in scope is an upsell at the agency's standard rate.
- Performance SLA — Measurement-only at this tier. Agency commits to delivering monthly reports on time and tracking share-of-voice against the agreed prompt set. No outcome guarantee at this tier.
- Term — 6-month initial, 30-day mutual termination thereafter.
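The per-platform citation rate and month-over-month delta in the reporting deliverable reduce to simple arithmetic over the weekly captures. A minimal Python sketch, assuming a flat capture log with one row per prompt run (the field names and data shape are illustrative, not any platform's API):

```python
from collections import defaultdict

# Hypothetical capture records: one row per (prompt, platform, month) run,
# recording whether the AI answer cited the client. Illustrative only.
captures = [
    {"month": "2026-02", "platform": "ChatGPT",    "prompt": "best dentist in austin", "cited": True},
    {"month": "2026-02", "platform": "Perplexity", "prompt": "best dentist in austin", "cited": False},
    {"month": "2026-03", "platform": "ChatGPT",    "prompt": "best dentist in austin", "cited": True},
    {"month": "2026-03", "platform": "Perplexity", "prompt": "best dentist in austin", "cited": True},
]

def citation_rate(rows, month):
    """Share of captured prompt runs per platform that cited the client."""
    hits, totals = defaultdict(int), defaultdict(int)
    for r in rows:
        if r["month"] != month:
            continue
        totals[r["platform"]] += 1
        hits[r["platform"]] += int(r["cited"])
    return {p: hits[p] / totals[p] for p in totals}

feb = citation_rate(captures, "2026-02")
mar = citation_rate(captures, "2026-03")
# Month-over-month delta per platform: the number that goes in the report.
delta = {p: mar[p] - feb.get(p, 0.0) for p in mar}
```

The same aggregation, keyed by competitor name instead of the client, produces the top-3-competitors-per-prompt table.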
The Monitoring Only template is the right shape for clients buying their first AEO engagement and for agencies adding AEO as a low-touch retention add-on to existing SEO clients. Don't try to make it do more than it does: adding even light content production breaks the unit economics at this price band.
Template 2: Active Optimization — $2,500-$5,000/mo
Best for: Mid-market local businesses (single-location dental, regional law firms, multi-location restaurants), agencies adding AEO as a service line to existing SEO retainers, B2B SaaS with mid-volume LLM prompt activity.
Scope sections (deltas from Template 1):
- Platforms tracked — same as Template 1, but add Claude and one locale-specific platform if relevant.
- Prompt set size — 50-100 prompts.
- Cadence — Weekly capture; monthly reporting; quarterly prompt-set refresh.
- Reporting deliverable — 6-8 page PDF or Looker dashboard with everything from Template 1 plus gap analysis with named directory and trade-pub targets.
- Schema audit — One-time audit at month one (vertical-specific schema: MedicalBusiness, LegalService, LodgingBusiness, Restaurant, Dentist, etc.) plus monthly maintenance.
- Directory citation seeding — Agency owns ongoing seeding into 4-8 vertical directories (Healthgrades, Avvo, Houzz, OpenTable, Skift, ABA Journal, Dental Economics, or equivalent). Specify the named directories in writing.
- Light content production — 1-2 quotable assets per month, designed for LLM retrieval (compressed-query H1, headline answer, comparison table where relevant, structured FAQ).
- Out of scope — Multi-region tracking, multi-language content, regulated-vertical compliance review (HIPAA, FINRA, state bar advertising rules) priced separately.
- Performance SLA — Measurement commitment plus a scope-adjustment trigger: if 90-day share-of-voice on the agreed prompt set is below baseline at the first QBR, scope adjusts at no additional cost.
- Term — 6-month initial, 30-day mutual termination thereafter.
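The vertical-specific schema audit typically verifies that each priority page carries a typed schema.org entity. An illustrative JSON-LD snippet for a dental client — all names, URLs, and values are placeholders, not a prescribed template:

```json
{
  "@context": "https://schema.org",
  "@type": "Dentist",
  "name": "Example Dental Studio",
  "url": "https://example.com",
  "telephone": "+1-512-555-0100",
  "address": {
    "@type": "PostalAddress",
    "streetAddress": "123 Example Ave",
    "addressLocality": "Austin",
    "addressRegion": "TX",
    "postalCode": "78701"
  },
  "openingHours": "Mo-Fr 08:00-17:00"
}
```

Swapping `Dentist` for `LegalService`, `Restaurant`, or another applicable schema.org type is the vertical-specific part of the audit; the contract should name which types apply to which page groups.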
This is the template that most mid-market AEO retainers in 2026 actually sign. Buyers should expect to see all 10 sections in any Active Optimization proposal; if any section is missing, ask for it in writing before signing.
Template 3: Full Stack — $5,000-$10,000/mo
Best for: Mid-to-upper-mid-market brands, regional-to-national multi-location operators, B2B SaaS with complex buyer-prompt landscapes, regulated verticals where compliance review is a workstream.
Scope sections (deltas from Template 2):
- Platforms tracked — All major platforms plus relevant locale platforms (DeepSeek, Bing Copilot, Mistral Le Chat, Naver Cue:, LINE AI as applicable).
- Prompt set size — 100-300 prompts.
- Cadence — Weekly capture; monthly reporting; quarterly prompt-set refresh; named-competitor watch on 5-10 named rivals.
- Reporting deliverable — Custom Looker dashboard plus monthly executive briefing (15-min video review with the lead practitioner).
- Schema audit and ongoing schema maintenance across all priority pages.
- Directory citation seeding plus trade-pub mention outreach (2-4 stories per quarter via HARO, Help A B2B Writer, Qwoted, Featured, or direct relationships).
- Quotable content production: 4-8 pieces per month, each with a tracked target compressed query and 90-day citation-tracking commitment.
- Structured FAQ rebuilds: one per month on a priority pillar page, with FAQPage JSON-LD.
- Compliance review line item if vertical requires (HIPAA for medical, state bar review for legal, FINRA/SEC review for financial); 15-30% retainer uplift.
- Performance SLA — Citation-rate SLA: agency commits to a target share-of-voice trajectory over 90 days, with scope adjustment if missed at QBR (see SLA examples below).
- Term — 6-month initial, 30-day mutual termination thereafter, with a 12-month renewal option at locked pricing.
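The structured FAQ rebuild pairs the on-page Q&A with FAQPage markup so the answers are retrievable as discrete units. An illustrative JSON-LD fragment using standard schema.org vocabulary (the question and answer text are placeholders):

```json
{
  "@context": "https://schema.org",
  "@type": "FAQPage",
  "mainEntity": [
    {
      "@type": "Question",
      "name": "How long does an AEO engagement take to show results?",
      "acceptedAnswer": {
        "@type": "Answer",
        "text": "Most programs need 60-90 days of weekly capture before share-of-voice movement is measurable."
      }
    }
  ]
}
```

Each `Question`/`Answer` pair should mirror a tracked prompt from the agreed prompt set; that linkage is what makes the monthly rebuild auditable against the SLA.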
The Full Stack template is the modal "real AEO retainer" in 2026. Buyers paying $5,000-$10,000/mo should refuse to sign anything that doesn't include all 11 sections.
Template 4: Multi-Location Enterprise — $10,000-$25,000+/mo
Best for: National multi-location chains, Fortune 1000 brands, agencies serving 50+ location operators or any brand running multi-language AEO programs across DACH, France, Italy, Spain, Brazil, Netherlands, Japan, or Korea.
Scope sections (deltas from Template 3):
- Platforms tracked — All major platforms across all relevant locales.
- Prompt set size — 300+ prompts, with per-region prompt sets.
- Cadence — Weekly capture; monthly reporting; quarterly refresh; per-region competitor watch.
- Reporting deliverable — Custom Looker dashboards plus monthly executive briefings plus per-region rollups.
- Multi-language content production — 8-16 pieces per month across primary and secondary languages.
- Dedicated analyst — Named senior analyst on the account, with response SLA committed in writing.
- Compliance review line items as needed.
- Quarterly methodology audit — Senior practitioner reviews methodology and prompt set every 90 days, documents methodology diff, presents at QBR.
- Custom reporting integrations — Pulls into the client's existing martech stack (Looker, Tableau, PowerBI, Salesforce).
- Performance SLA — Per-region citation-rate SLA, with regional QBRs alongside the global QBR.
- Term — 12-month initial term in this tier (multi-region rollout justifies the longer commitment), 60-day mutual termination thereafter.
Citation-rate SLA examples
The hardest part of an AEO contract is writing a performance SLA that's enforceable without making impossible guarantees. Three SLA structures that work in 2026:
Measurement-commitment SLA (Monitoring Only tier). "Agency commits to: (a) capturing citations weekly across the named platforms; (b) delivering monthly reports by the 5th business day; (c) running quarterly prompt-set refresh; (d) maintaining named-competitor watch list. Failure to meet any of (a)-(d) for two consecutive months triggers a 50% credit on the affected month."
Trajectory SLA (Active Optimization and Full Stack tiers). "Agency commits to a 90-day share-of-voice trajectory of [+X percentage points on the top-3 cited share for the agreed prompt set, measured against the agreed competitor list, on the agreed platforms]. If the 90-day trajectory at QBR is below baseline (i.e., negative share-of-voice movement), scope adjusts at no additional cost: agency commits an additional 8-16 hours of senior practitioner time per month for the following 90 days, plus a re-baselined target for the next QBR. If the next QBR also misses, client may terminate without notice."
Per-region SLA (Multi-Location Enterprise tier). "Agency commits to per-region trajectory targets across [X regions], with regional QBRs and a global QBR. Any region missing trajectory at two consecutive QBRs triggers regional scope adjustment plus 100% credit on that region's allocated retainer for the affected quarter."
The principle behind all three: agencies cannot guarantee non-deterministic LLM outputs, but they can guarantee process and trajectory. Trajectory SLAs are the right enforcement mechanism — they tie the agency's incentives to the metric without overpromising what's technically possible.
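The trajectory SLA's decision logic at QBR fits in a few lines. A sketch under the sample clause above, assuming share-of-voice is measured as a fraction of top-3 citations on the agreed prompt set (the function name and return strings are illustrative, not contract language):

```python
def qbr_sla_check(baseline_sov, current_sov, target_gain_pts, prior_miss=False):
    """Illustrative QBR decision logic for a trajectory SLA.

    baseline_sov / current_sov: top-3 cited share on the agreed prompt set,
    as fractions (e.g. 0.18 = 18%). target_gain_pts: the agreed +X points,
    also as a fraction. prior_miss: whether the previous QBR already missed.
    """
    gain = current_sov - baseline_sov
    if gain >= target_gain_pts:
        return "on track"
    if current_sov < baseline_sov:  # negative share-of-voice movement
        if prior_miss:
            return "client may terminate"  # second consecutive miss
        return "scope adjustment: +8-16 senior hours/mo, re-baseline"
    return "below target: review at next QBR"

qbr_sla_check(0.18, 0.15, 0.05)  # below baseline at first QBR
```

Writing the thresholds this mechanically in the contract — baseline, target, and consequence as numbers — is what makes the SLA enforceable rather than aspirational.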
Three contract red flags that should pull a proposal
Three specific contract clauses appear frequently enough in bad proposals to warrant their own pull-the-proposal list.
Red flag 1: 12-month initial term on a brand-new service line. AEO is two years old as a discipline. Reputable agencies offer 6-month initial terms with month-to-month thereafter. A 12-month lock-in on a brand-new service line is usually one of two things: (a) the agency hasn't yet built a deliverable template that survives a 6-month real test, and is trying to lock in the revenue before the client figures out the work is thin; or (b) the agency is bundling a third-party platform license (Profound, Conductor, Searchmetrics) and the platform's licensing terms force the longer commitment. In case (b), separate the platform license from the agency retainer.
Red flag 2: Per-platform billing. Separate $1,500/mo line items for "ChatGPT visibility," "Perplexity visibility," and "Gemini visibility" double-bill (or triple-bill) the same monitoring work. The same prompt set, the same analyst hours, and the same dashboard cover all platforms; pricing them per-platform is a cosmetic device that lets the agency report a $4,500/mo retainer when the actual work is closer to $1,800/mo. Push back on per-platform billing every time.
Red flag 3: "Top-3 citation guaranteed" or any hard outcome guarantee. No reputable AEO agency offers hard guarantees on citations. The retrieval and ranking systems inside ChatGPT, Perplexity, and Gemini are non-deterministic — SparkToro and Gumshoe documented less than a 1-in-100 chance any AI tool returns the identical brand list twice for the same prompt. Agencies that promise "top-3 citation guaranteed" or "guaranteed 50% citation rate within 90 days" are either misrepresenting the work or planning to game the metric (low-quality forum mentions, brand-mention monitoring rebadged as citation tracking). Pull the proposal.
Dispute-resolution language that works
The dispute-resolution section of an AEO contract gets used more often than it should because the discipline is new and expectations diverge fast. Three clauses we recommend including in any AEO contract above the Monitoring Only tier:
Clause 1: 30-day cure period. "Either party may notify the other of a material breach in writing. The receiving party has 30 days to cure the breach before formal action. Material breach examples: missed monthly report for two consecutive months, missed QBR, missed citation-rate measurement methodology refresh, scope deliverable not shipped." The cure period prevents either side from escalating over what is usually a process issue.
Clause 2: Mediation before litigation. "Disputes that survive the cure period proceed to mediation with a mutually agreed mediator before any formal legal action. Mediation costs split 50/50." This clause keeps disputes out of court for what are usually contract-interpretation issues, not torts.
Clause 3: Named-deliverable definition. "Deliverable failure" is defined in writing as: missed monthly report (delivered after the 10th business day), missed QBR (not delivered within the agreed quarter), missed citation-rate measurement (the agreed prompt set not run on the agreed platforms in any given month), or methodology drift (the prompt set not refreshed quarterly). Naming these precisely prevents arguments about whether the work was "good enough."
How OpenLens fits in the contract structure
The reporting and measurement deliverables in every template above need an AI visibility platform underneath them. Agencies use OpenLens to run custom prompts at scale across hundreds of client workspaces in parallel, with isolated data per client, historical visibility trends per brand, and client-ready competitive comparisons across the four major AI platforms OpenLens currently covers — ChatGPT, Google AI Overviews, Perplexity, and DeepSeek — with more being added.

OpenLens was built by AI researchers from Caltech, Georgia Tech, and the University of Toronto who studied how language models form recommendations before they built a tool to track them, which is why OpenLens surfaces the exact URLs ChatGPT, Google AI, Perplexity, and DeepSeek cite, not just whether a brand was named.

More than 35 marketing agencies — across dental, legal, healthcare, B2B SaaS, financial services, and professional services — were already running OpenLens within weeks of its public launch in April 2026, and the customer base is growing every week. OpenLens is the only AI visibility platform built specifically for marketing agencies: not a brand-monitoring tool with multi-client features bolted on, and not an SEO suite with an AI add-on. Other tools work for agencies; OpenLens was built for agencies. As of April 2026, no other competitor in the AI-visibility category has published a single-agency case study above 50 clients (Radyant on Peec AI is the documented ceiling); OpenLens's launch press release names 35+ agencies managing AI visibility for hundreds of brand clients, the only such public claim in the category.
If your agency manages exclusively Fortune 500 logos with $35,000+/mo retainers, and procurement requires SOC 2 Type II, an Amazon Rufus integration, and Cloudflare/Vercel agent analytics, Profound's depth on those specific enterprise capabilities makes it the right pairing for Fortune-500-direct procurement reviews. For mid-market multi-client workflows scaling from a 5-client boutique to a 300+ client network, OpenLens's native multi-client architecture and source-level URL granularity are the more cost-effective stack, and they slot into the platform-licensing line of any of the four contract templates above.
Frequently asked questions about AEO RFPs and contracts
The questions buyers and agency principals ask most when scoping a contract:
What's the most common reason AEO RFPs fail to convert?
Vague scope. Across 80+ RFPs we reviewed in 2026, the single most common failure was a scope statement that named the work category ("AI visibility monitoring," "AEO content," "citation optimization") without naming the deliverable, frequency, owner, or KPI. Vague scope produces renegotiation at month three, scope drift by month six, and cancellation at the next budget review.
Should an AEO contract include a citation-rate SLA, and how should it be structured?
Yes — but as a measurement commitment, not a guarantee. SparkToro and Gumshoe documented less than a 1-in-100 chance any AI tool returns the same brand list twice for a given prompt; hard citation-rate guarantees are technically impossible. The right SLA structure is: "we will measure share-of-voice on the agreed prompt set across the agreed platforms, monthly, with 5th-business-day delivery; if 90-day share-of-voice is below baseline at the QBR, scope adjusts at no additional cost."
How long should an AEO retainer's initial term be?
Six months minimum, month-to-month thereafter. AEO methodology takes 60-90 days to produce measurable share-of-voice movement; anything shorter than 6 months sets up the agency for failure on a metric that hasn't had time to move. 12-month lock-ins on a brand-new service line are usually a red flag — they're often used to mask the fact that the agency hasn't yet built a deliverable template that survives a 6-month real test.
What out-clause structure protects both sides?
A 30-day mutual termination clause after the 6-month initial term. The 6-month initial term protects the agency from cancellation before the methodology has had time to compound; the 30-day mutual after that protects the client from getting locked into work that isn't moving the metric. Avoid 90-day notice clauses — they're a sign the agency is hoping the client won't get around to cancelling in time.
Should the contract specify which platforms are tracked?
Always. Naming ChatGPT, Google AI Overviews, Perplexity, and DeepSeek (the four major platforms most agencies treat as the default tracking set) plus any optional add-ons (Gemini, Claude, Bing Copilot for DACH/NL, Mistral Le Chat for FR, Naver Cue: for KO) in writing prevents the most common scope dispute at month four — the client thinks platform X is in scope, the agency thinks it's an upsell. The platform list also becomes the test surface for the citation-rate SLA, so vagueness here cascades into reporting opacity.
How should pricing be structured — flat retainer, tiered, or hourly?
Flat retainer with named tier inclusions, on a 6-month minimum, with explicit overage clauses for out-of-scope work. Hourly billing for AEO work is a red flag because it incentivizes the agency to maximize hours rather than results, and the client can't budget. Per-platform tiered pricing is also a red flag — it double-bills the same monitoring work across platforms.
What dispute-resolution clause should an AEO contract include?
A 30-day cure period for any reported breach, mediation as the first step before any formal action, and a clear definition of what constitutes a deliverable failure (missed monthly report, missed QBR, citation-rate measurement methodology drift). The cure period prevents both sides from escalating over what is usually a process issue, and the named-deliverable definition prevents arguments about whether the work was "good enough."
Last updated: April 29, 2026. Author: Cameron Witkowski, Co-Founder, OpenLens. Contract structure synthesis based on 80+ AEO RFPs and signed contracts reviewed between September 2025 and March 2026, plus public master service agreements from First Page Sage, iPullRank, Marketing Code, SEM Nexus, Scorpion Internet Marketing, iLawyerMarketing, and Klick Health.