Admaxxer · Documentation · AI Agent

Claude as your ad-ops co-pilot.

The agent at /chat reads live Meta and Google data, flags anomalies before you ask, and — with explicit confirmation — pauses, scales, or launches campaigns. Distinct from the read-only analytics chat at ⌘J.

Open the chat Jump to tools

Overview

The Admaxxer AI agent is an agentic tool-use loop that bridges the Anthropic Messages API to live Meta Ads and Google Ads accounts. The model is claude-sonnet-4-6, output is capped at 4096 tokens per API call, and the loop is hard-capped at 10 iterations per turn, so a runaway loop can never drain your budget.

The first iteration of every turn streams via Server-Sent Events when an onToken callback is supplied — the user sees text deltas land in real time. Subsequent iterations (after the first tool_use) use non-streaming messages.create(), because the UI has already shown a "thinking / calling tool" state and streaming the later passes adds no user-visible value.

The agent is intentionally distinct from the read-only Analytics chat surfaced at ⌘J. Same model, different toolsets: the analytics chat queries pixel data; the ad-ops agent at /chat can mutate campaigns under explicit user confirmation.

Architecture

The implementation lives at server/ads/ClaudeAgentService.ts. A singleton Anthropic client is constructed once at module load — there is no per-request client churn. The system prompt and tools array are sent with cache_control: { type: "ephemeral" } so the second and subsequent requests inside a 5-minute ephemeral window are billed at roughly 10% of input cost. Target prompt-cache hit rate is greater than 80% across an active session.
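
To make the caching concrete, here is a minimal sketch of how such a request body might be assembled (the shapes and the buildRequest helper are illustrative, not the actual Admaxxer source). Marking the system block and the last tool with cache_control makes the whole stable prefix cache-eligible:

```typescript
// Illustrative sketch: assemble a Messages API request so the system prompt
// and tools array form a cacheable prefix. Names and shapes are assumptions.
type Tool = {
  name: string;
  description: string;
  input_schema: object;
  cache_control?: { type: "ephemeral" };
};

function buildRequest(systemPrompt: string, tools: Tool[], messages: object[]) {
  // cache_control on the LAST tool caches the entire tools array prefix.
  const cachedTools = tools.map((t, i) =>
    i === tools.length - 1 ? { ...t, cache_control: { type: "ephemeral" as const } } : t
  );
  return {
    model: "claude-sonnet-4-6",
    max_tokens: 4096, // per-call output cap
    system: [
      { type: "text", text: systemPrompt, cache_control: { type: "ephemeral" as const } },
    ],
    tools: cachedTools,
    messages,
  };
}
```

Because the cached prefix is keyed on exact bytes, the system prompt and tools array must never reorder between requests — any reordering would miss the cache and re-bill full input cost.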

Conversation persistence uses two tables. chat_sessions holds one row per conversation, scoped to workspaceId and userId. chat_messages holds every turn — role ('user' | 'assistant' | 'tool'), content, toolCalls JSON when the assistant invoked tools, and createdAt as the ordering key. On reload, the last 20 turns rehydrate; tool rows replay as role:"user" messages with raw tool_result content blocks so the model sees previous exchanges verbatim and never hallucinates a tool that wasn't actually called.
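
The rehydration step can be sketched as a pure mapping over stored rows (row shapes assumed; the real query lives in the service). The key detail is that tool rows replay as role:"user" messages carrying their raw tool_result blocks:

```typescript
// Illustrative rehydration sketch. StoredRow mirrors a chat_messages row;
// the shape is an assumption for this example.
type StoredRow = { role: "user" | "assistant" | "tool"; content: string };

function rehydrate(rows: StoredRow[]) {
  return rows.slice(-20).map((row) =>
    row.role === "tool"
      ? // Tool rows replay as user messages with raw tool_result blocks,
        // so the model sees prior tool exchanges verbatim.
        { role: "user" as const, content: JSON.parse(row.content) }
      : { role: row.role, content: row.content }
  );
}
```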

The tool-use loop runs up to 10 iterations: call messages.create, append the assistant response to the history, capture the last text block as finalText, and break on stop_reason !== "tool_use". Otherwise, dispatch each tool_use block through executeTool(), push the results back as a role:"user" message containing tool_result blocks, and continue. Real tasks complete in 2 to 4 iterations.
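
The loop described above can be sketched as follows (a simplified illustration with an injected client, not the production code — the Client interface and runLoop name are assumptions):

```typescript
// Minimal sketch of the 10-iteration tool-use loop. Shapes are simplified.
type Block = { type: string; text?: string; id?: string; name?: string; input?: unknown };
type Response = { stop_reason: string; content: Block[] };
type Client = { create(params: { messages: object[] }): Promise<Response> };

async function runLoop(
  client: Client,
  messages: object[],
  executeTool: (name: string, input: unknown) => Promise<string>
): Promise<string> {
  let finalText = "";
  for (let i = 0; i < 10; i++) {                     // hard cap: 10 iterations
    const res = await client.create({ messages });
    messages.push({ role: "assistant", content: res.content });
    const text = res.content.filter((b) => b.type === "text").pop();
    if (text?.text) finalText = text.text;           // capture last text block
    if (res.stop_reason !== "tool_use") return finalText;
    const results: Array<{ type: string; tool_use_id?: string; content: string }> = [];
    for (const b of res.content) {
      if (b.type !== "tool_use") continue;
      results.push({
        type: "tool_result",
        tool_use_id: b.id,
        content: await executeTool(b.name!, b.input), // dispatch each tool call
      });
    }
    messages.push({ role: "user", content: results }); // feed results back
  }
  return "(Tool-use loop hit safety cap of 10 iterations.)";
}
```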

All 6 tools

Three read-only and three destructive. Read-only tools execute freely. Destructive tools are gated by an explicit user confirmation enforced server-side (see the next section).

list_campaigns [read-only]

Discover campaigns and their basic metadata (name, status, objective, daily_budget, externalId) for a connected Meta or Google ad account.

Input schema. platform (required, 'meta' | 'google'), connection_id (required, UUID of ad_platform_connections), status_filter (optional — ACTIVE / PAUSED / ARCHIVED). Result is capped at 200 rows per call to stay under the Anthropic prompt-budget ceiling.

Output. JSON array of { id, name, status, objective, daily_budget, externalId } objects. The agent uses this output to look up the matching campaign_id before invoking any insight, update, or pause tool.

When the agent uses it. Always the first call when the user references campaigns by name. The agent uses it to map the human-friendly name to the platform-native campaign ID needed by every other tool.

Example user prompts:

  • List my active Meta campaigns.
  • What campaigns are paused on my Google account?
  • Show me every ARCHIVED campaign on Meta.
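
From the schema above, the tool definition might look roughly like this (descriptions abbreviated; the actual definition lives in ClaudeAgentService.ts and may differ in wording):

```typescript
// Illustrative list_campaigns tool definition, following the Messages API
// tool format: name, description, and a JSON-Schema input_schema.
const listCampaignsTool = {
  name: "list_campaigns",
  description:
    "List campaigns and basic metadata for a connected ad account. Capped at 200 rows per call.",
  input_schema: {
    type: "object",
    properties: {
      platform: { type: "string", enum: ["meta", "google"] },
      connection_id: { type: "string", description: "UUID of ad_platform_connections" },
      status_filter: { type: "string", enum: ["ACTIVE", "PAUSED", "ARCHIVED"] },
    },
    required: ["platform", "connection_id"], // status_filter is optional
  },
};
```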

get_campaign_insights [read-only]

Pull spend, impressions, clicks, conversions, ROAS, CTR, and CPC for a single campaign over an arbitrary date range, with optional day-level breakdown.

Input schema. platform (required), connection_id (required), campaign_id (required), date_from (optional, ISO YYYY-MM-DD, default = 7 days ago), date_to (optional, default = today), breakdown (optional — pass 'day' for time-series).

Output. Aggregated metrics object plus an optional daily array when day-level breakdown is requested. Currency is the account currency returned by the platform — never converted by the agent.

When the agent uses it. After locating a campaign with list_campaigns. Pair with proactive flagging to surface CPA spikes, ROAS dips, or budget overruns relative to the trailing window.

Example user prompts:

  • Why did my CPA spike on the Spring Sale campaign last week?
  • Show me daily spend for campaign 12345 over the last 14 days.
  • Compare ROAS on the Mother's Day test for the last 7 days.
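
The date-range defaults above (7 days ago through today) might be resolved by a small helper like this (the helper name and injectable clock are assumptions for illustration):

```typescript
// Illustrative default-date logic: date_from falls back to 7 days ago,
// date_to to today, both as ISO YYYY-MM-DD strings.
function resolveDateRange(dateFrom?: string, dateTo?: string, now = new Date()) {
  const iso = (d: Date) => d.toISOString().slice(0, 10);
  return {
    from: dateFrom ?? iso(new Date(now.getTime() - 7 * 86_400_000)), // 7 days back
    to: dateTo ?? iso(now),
  };
}
```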

get_account_insights [read-only]

Account-wide aggregates with optional breakdowns by day, campaign, platform, or none.

Input schema. platform (required), connection_id (required), date_from (optional ISO date), date_to (optional ISO date), breakdowns (optional — ['day'] | ['campaign'] | ['platform'] or omitted for a single roll-up).

Output. Roll-up metrics for the date range (spend, impressions, clicks, conversions, revenue, ROAS, CTR, CPC). When a breakdown is supplied, returns one row per dimension value.

When the agent uses it. First call for portfolio-level questions: 'how is the whole account doing this week?'. Cheaper than iterating through list_campaigns + get_campaign_insights, and the agent prefers it whenever the user does not name a specific campaign.

Example user prompts:

  • What's my blended ROAS across both platforms this week?
  • Break down spend by day for the last 30 days.
  • How much did I spend on Meta vs Google last month?

create_campaign [destructive]

Create a new campaign on Meta or Google. Status defaults to PAUSED — you opt in to ACTIVE explicitly. Requires confirmed:true to actually fire.

Input schema. platform (required), connection_id (required), name (required), objective (required, platform-native enum such as OUTCOME_SALES on Meta or SEARCH on Google), daily_budget (required, number in account currency minor units), status (optional, PAUSED default), confirmed (must be true to fire).

Output. On confirm: { id, name, status, objective, daily_budget } from the platform. On the first call without confirmed:true: a confirmRequired envelope describing the planned action — no API request fires.

When the agent uses it. When the user explicitly asks to launch a new campaign and has supplied all required fields. The agent must echo every value back to the user before firing and wait for explicit confirmation.

Example user prompts:

  • Create a paused Meta campaign called 'Mother's Day Test' with a $50/day budget targeting OUTCOME_SALES.
  • Launch a new Google Search campaign 'Brand-Defense' with $100/day budget — keep it paused.

update_campaign [destructive]

Partial update to a single campaign — name, status, or daily_budget. Whatever you don't pass stays untouched. Requires confirmed:true.

Input schema. platform (required), connection_id (required), campaign_id (required), updates (required object with at least one of: name, status, daily_budget), confirmed (must be true to fire).

Output. On confirm: the updated campaign row from the platform. On first call: a confirmRequired envelope listing current vs proposed values so the user can see exactly what will change.

When the agent uses it. Budget changes, pausing or resuming a single campaign, or renaming. The agent always shows current vs proposed values before firing, then waits for an explicit user confirmation.

Example user prompts:

  • Pause campaign 67890 — it's burning budget on dead audiences.
  • Bump the Spring Sale daily budget to $250.
  • Rename campaign 11223 to 'Q2 Retargeting v2'.

pause_all_low_roas [destructive]

Bulk safeguard — pause every campaign whose ROAS is below threshold AND whose spend is at least min_spend over the lookback window. Requires confirmed:true.

Input schema. platform (required), connection_id (required), roas_threshold (required, strictly less-than to qualify), min_spend (required, floor in account currency to avoid pausing tiny tests), lookback_days (optional, 1–90, default 7), confirmed (must be true to fire).

Output. On confirm: { paused: [{ id, name, roas, spend }], skipped: [...] }. On the first call: a confirmRequired envelope listing every campaign that would be paused with its actual ROAS and spend, so the user sees the exact blast radius before agreeing.

When the agent uses it. Friday-afternoon hygiene. The agent always names every campaign that would be paused, with its actual ROAS and spend, then asks 'confirm to proceed?' before firing the bulk action.

Example user prompts:

  • Pause every Meta campaign with ROAS below 1.0 and at least $100 spend in the last 7 days.
  • Sweep my Google account for low-ROAS waste — threshold 1.5, min spend $200, lookback 14 days.
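
The qualification rule — strictly below roas_threshold AND at least min_spend — can be sketched as a pure filter (shapes and the helper name are illustrative):

```typescript
// Illustrative blast-radius selection for pause_all_low_roas.
type CampaignStats = { id: string; name: string; roas: number; spend: number };

function selectLowRoas(campaigns: CampaignStats[], roasThreshold: number, minSpend: number) {
  const paused: CampaignStats[] = [];
  const skipped: CampaignStats[] = [];
  for (const c of campaigns) {
    // Strictly below the ROAS threshold AND at/above the spend floor qualifies;
    // the spend floor avoids pausing tiny tests.
    (c.roas < roasThreshold && c.spend >= minSpend ? paused : skipped).push(c);
  }
  return { paused, skipped };
}
```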

Confirmation flow

Every destructive tool — create_campaign, update_campaign, pause_all_low_roas — passes through a server-side gate before any Meta or Google API call fires. The check happens inside executeTool(), not in the system prompt, so the gate is the actual security boundary rather than a soft instruction.

  1. First call. The agent emits the tool_use. executeTool() sees the tool name in the destructive set and the confirmed arg is missing or false. It returns a { confirmRequired: true, tool, action, args, summary, nonce } envelope. No platform API request fires.
  2. UI render. The chat panel intercepts the confirm_required SSE event and renders an inline confirm/cancel card with the human-readable summary — campaign IDs, the deltas being applied, the count of campaigns about to be paused, etc.
  3. User accepts. The client posts a follow-up message instructing the agent to re-invoke the same tool with the same arguments plus confirmed: true. The agent re-calls. executeTool() sees confirmed === true, runs the platform call, returns the real result.
  4. Audit. A row is written to ad_sync_logs with the tool name as action, the status, any error message, and the raw input payload. This is the source of truth for "what did the AI actually do".
  5. User declines. The client sends a "do not proceed" message. The agent acknowledges and moves on — it does not retry.
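
The server-side gate in step 1 can be sketched like this (a simplified illustration — the envelope fields beyond those named in the docs, and the gate function itself, are assumptions):

```typescript
// Illustrative confirmation gate for destructive tools. Returning the
// envelope instead of null means NO platform API request fires.
const DESTRUCTIVE = new Set(["create_campaign", "update_campaign", "pause_all_low_roas"]);

type ToolArgs = Record<string, unknown> & { confirmed?: boolean };

function gate(tool: string, args: ToolArgs) {
  if (DESTRUCTIVE.has(tool) && args.confirmed !== true) {
    return {
      confirmRequired: true,
      tool,
      args,
      summary: `Confirm before running ${tool}`,
      nonce: Math.random().toString(36).slice(2), // server-issued, bound server-side
    };
  }
  return null; // proceed to the real platform call
}
```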

The system prompt also instructs the agent to describe-then-confirm in plain English. But the prompt rule is defense in depth — the server-side gate is the real boundary. A "yes" smuggled into a prompt-injection payload doesn't move the LLM-controlled confirmed arg past the gate, because the nonce is issued and bound on the server.

Workspace isolation

Every executeTool dispatch resolves the connection_id argument through loadConnection(connectionId, workspaceId). Two checks happen on every call: the connection row must exist and belong to the caller's workspaceId, and its stored credentials must be readable. Either failure returns an error before any platform API is touched.

Tool failures return { error: message } rather than throwing, so the model sees the error and recovers gracefully. The agent loop never crashes mid-conversation because of a bad tool call — it explains and proposes the next step.
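
Both checks and the error-return convention can be sketched together (an in-memory stand-in for the database query; shapes are assumptions):

```typescript
// Illustrative loadConnection: ownership and credential checks, returning
// { error } rather than throwing so the model can recover gracefully.
type Connection = { id: string; workspaceId: string; credentials: string | null };

function loadConnection(connections: Connection[], connectionId: string, workspaceId: string) {
  const conn = connections.find((c) => c.id === connectionId);
  // Ownership check: the row must exist AND belong to the caller's workspace.
  if (!conn || conn.workspaceId !== workspaceId) {
    return { error: "Connection not found in this workspace" };
  }
  // Credential check: an unreadable token fails before any platform call.
  if (!conn.credentials) {
    return { error: "credentials_unreadable — reconnect the ad account" };
  }
  return { connection: conn };
}
```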

Quotas & limits

Chat quota by plan:

  • Starter ($29) — 100 chat messages / month; on exceed: HTTP 402 + upsell URL to /pricing
  • Pro ($79) — 1,000 chat messages / month; on exceed: HTTP 402 + upsell URL
  • Agency ($199) — 10,000 chat messages / month; on exceed: HTTP 402 + upsell URL

The quota check happens before the first model call of every turn. Exceeded workspaces receive a structured 402 response with an upsell URL the UI uses to render an upgrade CTA inline. Each turn counts as one chat message regardless of how many tool iterations the loop ran — your conversation cost scales with intent, not with how chatty the agent is internally.
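
A minimal sketch of that pre-turn check (plan limits from the table above; the function name and response shape are assumptions):

```typescript
// Illustrative quota check, run before the first model call of every turn.
const QUOTAS: Record<string, number> = { starter: 100, pro: 1000, agency: 10000 };

function checkQuota(plan: string, usedThisMonth: number) {
  const limit = QUOTAS[plan] ?? 0;
  if (usedThisMonth >= limit) {
    // Structured 402 the UI turns into an inline upgrade CTA.
    return { status: 402, error: "chat_quota_exceeded", upsellUrl: "/pricing" };
  }
  return { status: 200, remaining: limit - usedThisMonth };
}
```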

Token tracking

Every assistant turn persists three counters — input tokens, output tokens, and cache-read tokens — on the matching chat_messages row, summed from the per-iteration response.usage objects that the Anthropic SDK returns.

Total cost per turn is roughly input * full_rate + cache_read * 0.10 * full_rate + output * output_rate. The agent's job is to keep cache_read high — which is why the system prompt and tools array never reorder.
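
The summing and the cost formula can be sketched together (usage field names follow the Anthropic SDK's response.usage object; the helper names and rate parameters are illustrative):

```typescript
// Illustrative per-turn token accounting and cost estimate.
type Usage = { input_tokens: number; output_tokens: number; cache_read_input_tokens?: number };

function sumUsage(perIteration: Usage[]) {
  return perIteration.reduce(
    (acc, u) => ({
      input: acc.input + u.input_tokens,
      output: acc.output + u.output_tokens,
      cacheRead: acc.cacheRead + (u.cache_read_input_tokens ?? 0),
    }),
    { input: 0, output: 0, cacheRead: 0 }
  );
}

// Rough cost model from the docs: cache reads bill at ~10% of the input rate.
function estimateCost(
  t: { input: number; output: number; cacheRead: number },
  inputRate: number,
  outputRate: number
) {
  return t.input * inputRate + t.cacheRead * 0.1 * inputRate + t.output * outputRate;
}
```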

Proactive flagging

When the agent observes an anomaly in tool results — a CPA spike, a ROAS dip, or a budget overrun relative to the trailing window — it surfaces it before answering the user's literal question. Flags are framed as "Heads up:" followed by the metric and a one-line recommendation.

Proactive flagging is what makes the agent feel like a competent operator rather than a search engine. The user asks "what was my ROAS this week?" and the agent answers — then immediately flags the two campaigns that are bleeding budget without the user having to ask.

Safety guarantees

Every guarantee below is enforced in code, not just in the prompt:

  • Destructive tools (create_campaign, update_campaign, pause_all_low_roas) never fire without the server-side confirmed:true gate in executeTool().
  • The tool-use loop is hard-capped at 10 iterations per turn and output at 4096 tokens per API call.
  • Every connection_id resolves against the caller's workspaceId — cross-workspace access fails with an error, never a platform call.
  • Every tool call, success or error, writes an audit row to ad_sync_logs.
  • Tool failures return { error } instead of throwing, so a bad call never crashes the conversation.

Frequently asked

How is the AI agent at /chat different from the analytics chat at ⌘J?
The /chat agent is your ad-ops co-pilot — it can list campaigns, fetch insights, and (with explicit confirmation) update or pause campaigns on Meta and Google. The ⌘J Analytics chat is read-only across pixel data — revenue, sessions, MER, LTV — and never touches your ad accounts. Same Sonnet model under the hood; different toolsets, different permissions.
What model powers the agent and why?
claude-sonnet-4-6. Strong tool-use accuracy, low latency, and aggressive prompt caching keep per-turn cost predictable. Maximum 4096 output tokens per single API call so a runaway response can't blow your monthly quota.
Can the agent secretly pause my campaigns?
No. Every destructive tool (create_campaign, update_campaign, pause_all_low_roas) has a server-side gate that returns a confirmRequired envelope on the first call — no platform API request fires. The UI surfaces a confirm/cancel card; only after the user clicks confirm does the agent re-call the tool with confirmed:true and the action actually happens. A 'yes' inside an injected user prompt is not enough — the gate is enforced at the executeTool layer, not just the system prompt.
How does prompt caching keep costs down?
The system prompt and the tools[] array are both sent with cache_control: { type: 'ephemeral' }. The first request in a 5-minute window pays full price; every subsequent request hits the cache at roughly 10% of input cost. Target hit rate is greater than 80% across an active session.
What happens if a tool errors mid-conversation?
The agent receives the error as a tool_result and explains it plainly to the user — it never retries blindly. Common errors: 422 credentials_unreadable (re-paste your token), 429 rate-limited (back off), or workspace mismatch (the connection doesn't belong to your workspace).
Are conversations persisted across sessions?
Yes. The chat_sessions and chat_messages tables persist every turn — including the raw tool_use and tool_result blocks — so reloading the page rehydrates the full thread and the model can see prior tool exchanges verbatim.
What's the iteration safety cap?
Ten. If the model is still emitting tool_use blocks at iteration 10, the loop exits and the assistant reply is replaced with '(Tool-use loop hit safety cap of 10 iterations.)'. Real tasks complete in 2 to 4 iterations — the cap is a hard ceiling against runaway loops, not a typical limit.
Where is every action logged?
Every executeTool call — success or error — writes one row to ad_sync_logs with action, status, errorMessage, and the input payload. That table is the source of truth for 'what did the AI actually do' post-hoc audits.

Ready when you are

Open the agent and ask one real question. Start with "What's my blended ROAS this week?" — the agent will find the answer in one tool call, then volunteer the next thing you ought to know.

Open chat Back to Documentation