Onboard clients, run scans, read visibility scores, pull citations, and download client-ready PDF reports. One API key, every endpoint that powers OpenLens.
The REST API exposes the same operations as the dashboard: onboarding, scans, visibility scores, citations, and reports.
Every request needs a Clerk API key in the Authorization header.
```
Authorization: Bearer <your_api_key>
```

All endpoints live under `https://openlens.com/api`. Responses are JSON unless otherwise noted; the PDF report endpoint returns `application/pdf`.
We'll onboard Anthropic from scratch and read back their visibility scores. Five short steps, about 11 minutes end to end (30 seconds to analyze the brand, 1 minute to generate prompts, 10 minutes to run the scan across every platform). Each step is quick on its own; most of the wait is the AI platforms thinking.
Pin the base URL and the auth header so every request below can reuse them. YOUR_API_KEY is the secret you copied from the dashboard above.
```python
import json
import time

import requests

BASE = "https://openlens.com/api"
auth = {"Authorization": "Bearer YOUR_API_KEY"}
```

`POST /onboard` with `action: "analyze"` hands the API a website URL. It reads the page, figures out what the brand does, and proposes a starting set of competitors and buyer-intent topics. Nothing is saved yet, so you can edit the result before committing. The endpoint streams progress events as Server-Sent Events; we simply wait for the full response and parse the final `data:` line.
```python
res = requests.post(
    f"{BASE}/onboard",
    headers=auth,
    json={"action": "analyze", "url": "https://anthropic.com/"},
)
# The SSE body ends with a final "data: {...}" event; parse its JSON payload.
analysis = json.loads(res.text.strip().split("data: ")[-1])["data"]
```

`action: "confirm"` takes the analysis back (with any edits you want), creates the project, and generates 10 prompts per topic in the background. This is the step that actually persists data. `activePlatforms` picks which AI engines we'll scan against; the four below are available to every account.
```python
res = requests.post(f"{BASE}/onboard", headers=auth, json={
    "action": "confirm",
    "brandName": analysis["brandName"],
    "url": "https://anthropic.com/",
    "industryType": analysis["industryType"],
    "location": analysis["location"],
    "languages": analysis["languages"],
    "competitors": analysis["competitors"],
    "topicList": analysis["topics"],
    "activePlatforms": ["chatgpt_app", "perplexity_app", "google_app", "deepseek"],
    "promptsPerTopic": 10,
})
project_id = json.loads(res.text.strip().split("data: ")[-1])["data"]["projectId"]
```

Migration note (temporary): the analyze response also emits a legacy `keywords` alias for `topics`, and the confirm body still accepts `keywordList` as an alias for `topicList`. Both aliases will be removed in a future release; migrate to `topics` / `topicList`.
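If you handle analyses produced before the rename, a defensive read keeps the script working through the migration window. A minimal sketch; the fallback key is the legacy alias named in the note above:

```python
# Prefer the new "topics" field; fall back to the legacy "keywords" alias if present.
topics = analysis.get("topics") or analysis.get("keywords") or []
```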
POST /prompts/run returns immediately with a runId. The actual work (running every prompt against every active platform) happens in the background. A typical run is 30 prompts × 4 platforms = 120 platform responses.
```python
run_id = requests.post(
    f"{BASE}/prompts/run",
    headers=auth,
    json={"projectId": project_id},
).json()["runId"]
```

`GET /prompts/status` tells you whether the run is still going. Poll every 30 seconds. Most fresh projects complete in 5 to 10 minutes; larger ones can take 30+. If a deploy or crash interrupts a run, calling `POST /prompts/run` again resumes it from where it left off; no manual cleanup is needed.
```python
# Poll until the run finishes; status ends up as "completed" or "failed".
while (s := requests.get(
    f"{BASE}/prompts/status",
    headers=auth,
    params={"projectId": project_id, "runId": run_id},
).json())["status"] not in ("completed", "failed"):
    time.sleep(30)
```

Once the run is complete, `GET /visibility` returns an array of brand-level scores: visibility (mention rate), share of voice, sentiment, average rank, and a per-platform breakdown. This is the same data the dashboard renders. Companion endpoints (`/visibility/trends`, `/insights/engines`, `/brand-mentions-summary`) drill into time series, per-platform behavior, and citation sources.
```python
scores = requests.get(
    f"{BASE}/visibility",
    headers=auth,
    params={"projectId": project_id},
).json()
```

Running that script against Anthropic produces something like:
```
Brand: Anthropic
Competitors: ['OpenAI', 'Google DeepMind', 'Meta AI', 'xAI', 'Mistral AI']
Topics: ['best large language model API for enterprise applications',
         'safest most reliable AI model for business',
         'top frontier AI models compared for developers']

* Anthropic          49.2%
  OpenAI             55.8%
  Google DeepMind     5.8%
  Meta AI             1.7%
  xAI                 5.8%
  Mistral AI         14.2%
```

These are the platform IDs you can pass in `activePlatforms`:

| Platform ID | Display name | Notes |
|---|---|---|
| chatgpt_app | ChatGPT | All accounts |
| perplexity_app | Perplexity | All accounts |
| google_app | Google AI Overviews | All accounts |
| deepseek | DeepSeek | All accounts |
| claude | Claude | Email [email protected] for access |
To see which platforms are active on your account: GET /api/settings/platforms.
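For example, a quick check from the same session; the exact response shape isn't documented here, so treat this as a sketch and see the beta reference for the fields:

```python
# List the platforms enabled for this account (response fields assumed; see the beta reference).
platforms = requests.get(f"{BASE}/settings/platforms", headers=auth).json()
print(platforms)
```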
A quick tour of what's available. Full request and response shapes live in the beta reference.
- Onboarding: `POST /api/onboard` (Server-Sent Events) with `action: analyze` or `action: confirm`.
- Scan runs: `POST /api/prompts/run` to start, `GET /api/prompts/status` to poll, `DELETE /api/prompts/run` to cancel, `GET /api/prompts/results` for raw responses.
- Visibility and insights: `GET /api/visibility` (overall, per-topic, per-attribute), `GET /api/visibility/trends` for time series, `GET /api/insights/engines` for per-platform source behavior, `GET /api/insights/topic` for an AI-generated insight paragraph.
- Citations: `GET /api/brand-mentions-summary` returns top cited URLs per topic.
- Reports: `GET /api/reports/visibility` returns a client-ready PDF (see the sketch after this list); `GET /api/reports/runs` returns run history.
- Account and settings: `GET /api/me/limits`, `GET /api/usage`, `GET /api/settings/schedule`, `GET /api/settings/platforms`.

The complete request and response shapes, error codes, and known quirks live in the beta reference doc. We'll publish a versioned, hosted reference once usage stabilizes. In the meantime, email [email protected] for access and we'll share the doc plus raise your rate limits if you need them.
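Downloading the PDF report might look like the following; the `projectId` query parameter is our assumption here, so confirm it against the beta reference:

```python
# Fetch the client-ready PDF report for a project.
# NOTE: the "projectId" query parameter is assumed, not confirmed; check the beta reference.
pdf = requests.get(
    f"{BASE}/reports/visibility",
    headers=auth,
    params={"projectId": project_id},
)
pdf.raise_for_status()
with open("visibility-report.pdf", "wb") as f:
    f.write(pdf.content)  # response body is application/pdf
```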
| Status | Meaning |
|---|---|
| 400 | Missing or invalid request parameter (most often projectId) |
| 401 | API key missing, malformed, or revoked |
| 404 | Resource not found, or not owned by your account |
| 409 | A run is already in progress for this project |
| 429 | Rate limit, project cap, or daily quota hit. Body includes a code field |
| 500 | Server error. Retry; if persistent, email us with the request ID |
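A minimal retry sketch around these statuses; the backoff values and retry count below are our own choices, not something the API prescribes:

```python
def request_with_retry(method, url, *, retries=3, **kwargs):
    """Retry on 429 and 5xx responses; raise on any other error status."""
    res = None
    for attempt in range(retries):
        res = requests.request(method, url, headers=auth, **kwargs)
        if res.status_code == 429 or res.status_code >= 500:
            time.sleep(30 * (attempt + 1))  # assumed backoff, not prescribed by the API
            continue
        res.raise_for_status()
        return res
    res.raise_for_status()  # surface the final 429/5xx if every retry failed
    return res
```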
Bug reports, rate-limit bumps, integration help: email [email protected].