Model Context Protocol Server
Connect any AI client to Admaxxer in 30 seconds
Issue a workspace-scoped token. Paste it into Claude Desktop, ChatGPT Desktop, Cursor, Windsurf, OpenClaw, Cline, Zed, or Claude Code. Your AI subscription. Our data. Zero lock-in on the model.
What is the Model Context Protocol?
The Model Context Protocol (MCP) is an open standard that lets AI clients — Claude Desktop, ChatGPT Desktop, Cursor, Windsurf, OpenClaw, Cline, Zed, and Claude Code — connect to external data sources through a uniform tool-calling interface. Instead of every AI vendor inventing its own integrations, MCP defines a single contract: the server exposes tools, the client calls them, and the model sees the results inside its context window.
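Under the hood, MCP tool calls are JSON-RPC 2.0 messages: the client sends a `tools/call` request naming a tool and its arguments, and the server replies with the result. As a rough sketch (transport and framing are the client's concern; this only shows the request shape the spec defines):

```python
import json

def build_tool_call(tool_name: str, arguments: dict, request_id: int = 1) -> str:
    """Build a JSON-RPC 2.0 request in the shape MCP uses for tool calls."""
    payload = {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    }
    return json.dumps(payload)

# The client serializes a request like this; the server runs the tool and
# returns a result the host application places into the model's context.
request = build_tool_call("list_campaigns", {"platform": "meta", "status": "active"})
print(request)
```

The point of the uniform contract is exactly this: every client emits the same message shape, so one server works with all of them.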
For an analytics platform, MCP is the missing piece. It means you can sit inside the AI client you already pay for — the one with your favorite reasoning model, the one with your saved prompts, the one your team has standardized on — and pull live revenue, cohort, MER, LTV, and campaign data from your Admaxxer workspace as if those numbers lived inside the chat. No copy-paste. No screenshots. No exporting CSVs to feed back into the model.
Why this matters for DTC analytics
The dominant DTC analytics tools (Triple Whale, Northbeam, Hyros) ship their own AI agents tied to their own model choice — usually a single fine-tune they fund and bill you for. You get one model, one vendor, one inference budget. If the model's reasoning isn't strong enough for the question you're asking, your only options are "wait for the vendor's next release" or "leave the platform and take the data with you."
Admaxxer is the inverse. The data lives here. The intelligence lives wherever you want — Anthropic's Claude Sonnet 4.6, OpenAI's GPT-5, Google's Gemini 2.5 Pro, xAI's Grok 4, DeepSeek R2, Mistral Large, or any model that ships an MCP-capable client. You pay the model vendor directly at their list price. We don't mark up inference. We don't lock you into a model choice. The MCP server is the layer that makes that possible.
The six read-only tools
The MCP server exposes six tools — all read-only, all workspace-scoped, all audit-logged. Destructive campaign actions (pause, scale, launch) are deliberately not exposed via MCP; those stay inside the Admaxxer app behind explicit confirmation. See the ad operator vs. ads CLI doc for the reasoning.
1. list_connections
Returns the ad platform connections in the workspace and their health. Useful as the agent's first call to orient itself — "which accounts can I see?"
{ "tool": "list_connections" }
// → [{ platform: "meta", account_id: "act_123", status: "active", last_sync: "..." },
// { platform: "google", account_id: "987-654-3210", status: "active", last_sync: "..." }]
2. list_campaigns
Lists campaigns across the connected ad platforms, with name, status, daily budget, and lifetime spend. Filter by platform or by date range.
{ "tool": "list_campaigns", "args": { "platform": "meta", "status": "active" } }
3. get_campaign_insights
Returns the insights timeseries for a single campaign — spend, impressions, clicks, conversions, ROAS, CPA — bucketed by day. Honors the workspace's primary attribution model (default: 7d-click + 1d-view).
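A call might look like the following (the `campaign_id` argument name is illustrative — check the tool's argument schema in your client's tool picker):

```json
{ "tool": "get_campaign_insights", "args": { "campaign_id": "123456", "window": "30d" } }
```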
4. get_account_insights
Same shape as get_campaign_insights but at the account level — useful for "what's our blended Meta ROAS this week" without enumerating every campaign.
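An illustrative call (argument names shown here are assumptions — consult the tool's schema):

```json
{ "tool": "get_account_insights", "args": { "platform": "meta", "window": "7d" } }
```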
5. run_analytics_queries
The headline tool. Calls one of the 33+ Tinybird pipes documented in the metric glossary — revenue, cohort LTV, blended MER, CAPI match rate, MMM contribution, forecast, incrementality. The tool enforces an allowlist (PIPE_ALLOWLIST in server/lib/tinybird/) so the agent can only read pipes that are safe to expose; experimental pipes are blocked at the server boundary.
{ "tool": "run_analytics_queries", "args": { "pipe": "blended_mer", "window": "30d" } }
6. read_workspace_context
Returns the operator-defined context document — your brand voice, your forbidden creative angles, your target CAC, your seasonal moments. The agent reads this before answering substantive questions so its recommendations land in your business reality, not in a generic "here's what the data says" vacuum.
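Like list_connections, this tool takes no arguments:

```json
{ "tool": "read_workspace_context" }
```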
Token model and security posture
Every MCP token is workspace-scoped. There is no "personal" or "global" token — the token belongs to a workspace, and a workspace's owner can revoke it at any time from Settings → Integrations → MCP. Revocation is immediate (cache TTL is 30 seconds). Tokens are stored encrypted at rest using AES-256-GCM with the same ENCRYPTION_KEY the rest of the platform uses (see the BYOK encryption doc).
Every tool call is audit-logged: timestamp, tool name, arguments hash, response size, and originating IP are written to the workspace's audit log. The owner can review every MCP call the same way they review every UI action. There is no "stealth mode" for AI access.
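For illustration only (the platform's actual hashing scheme is internal and not documented here), an argument hash like the one in the audit log could be computed by hashing a canonical JSON encoding of the tool arguments:

```python
import hashlib
import json

def argument_hash(args: dict) -> str:
    """Hash a canonical (sorted-key, compact) JSON encoding of tool arguments.

    Logging a hash instead of the raw arguments keeps the audit trail
    comparable and tamper-evident without retaining argument values verbatim.
    """
    canonical = json.dumps(args, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

# Key order does not change the canonical encoding, so equivalent calls
# produce the same hash and can be grouped in the audit log.
a = argument_hash({"pipe": "blended_mer", "window": "30d"})
b = argument_hash({"window": "30d", "pipe": "blended_mer"})
print(a == b)  # True
```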
We deliberately do not expose destructive tools via MCP. The Claude Agent inside the Admaxxer app exposes five destructive-gated campaign tools (among them update_campaign, create_campaign, and pause_all_low_roas) behind an explicit confirmed: true user confirmation. Those tools are not in the MCP surface because the trust model of MCP — a token in a remote client we don't control — is fundamentally weaker than a logged-in session in our own UI. A leaked MCP token can read your data. It cannot pause your campaigns or launch new ones. That is a deliberate ceiling, not an oversight.
Supported AI clients
Setup guides for each client live under /documentation/connect-any-ai/. Eight clients have first-class support — meaning we test the full handshake, the auth flow, and the tool surface against the latest stable build of each client before every Admaxxer release:
- Claude Desktop — setup guide
- ChatGPT Desktop — setup guide
- Cursor — setup guide
- Windsurf — setup guide
- OpenClaw — setup guide
- Cline — setup guide
- Zed — setup guide
- Claude Code — setup guide
Any other MCP-spec-compliant client should work; we just don't run it through CI. If your client is on the official MCP clients list and the handshake fails, file an issue — we usually have a fix shipped within a week.
Setup in 30 seconds
- Sign in to Admaxxer and open Settings → Integrations → MCP.
- Click Generate token. The token is shown once; copy it immediately.
- In your AI client's MCP config, add a server entry pointing at https://admaxxer.com/mcp with the token in the Authorization: Bearer <token> header.
- Restart the client. The six tools should appear in the tool picker.
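Client config formats differ, and key names vary between clients — treat the following as one illustrative shape and check your client's setup guide for the exact format it expects:

```json
{
  "mcpServers": {
    "admaxxer": {
      "url": "https://admaxxer.com/mcp",
      "headers": {
        "Authorization": "Bearer <token>"
      }
    }
  }
}
```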
Learn more
- Connect any AI — overview
- AI ad operator vs. ads CLI — design philosophy
- Metric glossary — every pipe the agent can call
- Bring Your Own Key AI — pair with the in-app chat agent
Methodology
As of: April 30, 2026.
All product claims on this page reflect the live behavior of the Admaxxer platform on the date above. Where a metric is cited (e.g., the 92% sustained CAPI match rate), the measurement window, sample size, and source are stated inline next to the figure. Pricing tiers and plan inclusions are documented on the pricing page; if you find a discrepancy between this page and the pricing page, the pricing page is canonical and we'd like to know — please email hello@admaxxer.com.
Comparisons to third-party products (Triple Whale, Northbeam, Hyros) reflect publicly documented behavior of those products as of the as-of date. We do not maintain inside knowledge of competitor roadmaps; if a competitor has shipped a feature that changes the comparison, please let us know and we will update this page.
Key Benefits
- BYO AI — use the model you already pay for — Anthropic Claude, OpenAI GPT, Google Gemini, xAI Grok, DeepSeek, Mistral — any MCP-capable client works. We do not mark up inference and we do not lock you into a single model.
- Zero data extraction risk — read-only by design — Six tools, all read-only. Destructive campaign actions stay inside the Admaxxer app behind explicit confirmation. A leaked MCP token cannot pause or launch campaigns.
- Workspace-scoped tokens — Every token belongs to a single workspace. Owners can revoke any token from Settings in under 30 seconds, and revocation propagates immediately.
- Audit-logged, every call — Timestamp, tool, argument hash, response size, and originating IP for every tool call. Reviewable in the workspace audit log alongside every UI action.
- No destructive risk surface — The MCP surface is intentionally a strict subset of the in-app agent. Pause, scale, and launch live behind a session-authenticated confirmation flow in the Admaxxer UI only.
- Eight first-class client integrations — Claude Desktop, ChatGPT Desktop, Cursor, Windsurf, OpenClaw, Cline, Zed, and Claude Code are tested every release. Any spec-compliant MCP client should work.
Frequently Asked Questions
What is the Model Context Protocol (MCP)?
MCP is an open standard for connecting AI clients to external data sources through a uniform tool-calling interface. The server exposes tools, the client calls them, and the model sees results inside its context. Admaxxer ships an MCP server that exposes six read-only tools over the platform data.
How is this different from your public REST API?
Same data, different consumer. The REST API at /api/v1/* is for code — your scripts, your warehouse jobs, your custom dashboards. The MCP server is for AI clients — Claude Desktop, Cursor, etc. — and packages the data with tool descriptions, argument schemas, and error semantics designed to be consumed by an LLM rather than a programmer.
What data can the AI see through MCP?
Connections (which ad accounts), campaigns (names, status, budgets, spend), campaign and account insights (spend, ROAS, CPA, conversions, by day), the 33+ analytics pipes (revenue, MER, LTV, MMM, forecast, CAPI match rate, incrementality), and the workspace context document. It can see anything a logged-in workspace member can see in the dashboard — nothing more.
Can the AI pause campaigns or launch new ones through MCP?
No. Destructive tools are deliberately not exposed via MCP. The trust model of an MCP token sitting in a remote client is fundamentally weaker than a session-authenticated UI action, so we keep destructive actions inside the Admaxxer app behind explicit confirmation. A leaked MCP token can read your data; it cannot change it.
How do I revoke a token?
Settings → Integrations → MCP → Revoke. Revocation propagates within 30 seconds (the server-side cache TTL). You can also rotate by generating a new token and revoking the old one — there is no limit on the number of tokens per workspace.
Does this require a special plan?
MCP access is included on every paid plan starting at $9/mo. There is no per-seat or per-call surcharge for MCP usage; the only cost is what your AI client charges you for inference, which Admaxxer does not touch.
What if my AI client is not on the supported list?
Any client that implements the MCP spec should work — the supported list is the set we run through CI every release. If your client is on the official MCP clients list and the handshake fails, file an issue and we will usually ship a fix within a week.
Is the token shared across the team?
Tokens are workspace-scoped, not user-scoped, so any workspace member with the token can use it. Most teams generate one token per AI client per person, label them in Settings, and revoke individually when someone rotates clients or leaves the team.