
# AI Agents in Paid Media: Where They Help, Where They Hurt

Where AI agents earn their keep (data retrieval, pattern flagging, copy drafts) and where they hurt (launching campaigns, chasing ROAS, unsupervised destructive actions).

By Admaxxer Team • April 23, 2026 • 10 min read
Admaxxer is a DTC analytics platform with built-in Meta + Google ad ops. The honest take on AI agents in paid media: they're genuinely good at some things, genuinely bad at others, and the line between the two is mostly about **whether the action is reversible**. This post names the use cases that work, the ones that don't, and why we designed Admaxxer's Claude agent the way we did.

## TL;DR

- **Good**: data retrieval ("what's my MER by campaign this week?"), pattern flagging, creative drafts.
- **Bad**: unsupervised campaign launches, unsupervised budget shifts, chasing short-term ROAS without brand guardrails.
- **Hard rule**: destructive actions (pause, update, launch) require **explicit user confirmation**.
- **Design principle**: an agent is a read-first tool; writes are gated; any write that spends money or changes state is gated twice.

## Where AI agents earn their keep

### Data retrieval

"What was my blended MER by campaign last week?" "Show me the 5 ad sets with the biggest frequency increase." "What's my CAPI match rate by pixel?"

These are the sweet spot for AI in paid media. The agent queries known data sources (Shopify, Meta, Google, Tinybird pipes), formats the answer, and surfaces it conversationally. No state changes, no risk of destroying work. If the agent gets it wrong, the downside is re-running the query.

Admaxxer's Claude agent has a `query_metrics` tool that exposes read-only Tinybird pipes (`visitors_by_source`, `revenue_by_channel`, `ad_level_ltv`, `mer_by_campaign`, and more). See [the Claude agent docs](/features/claude-agent) for the full tool list.

### Pattern flagging

"Look at my last 30 days. Tell me which ad sets are fatiguing."

A good agent can scan campaign performance, identify the campaigns matching the [restructure/kill framework](/blog/restructure-vs-kill-campaign-framework), and flag them with reasoning. The agent doesn't take action — it surfaces a recommendation for a human to approve.
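To make the shape of that flagging pass concrete, here is a minimal sketch in Python. The metric names, thresholds, and `flag_fatigue` helper are illustrative, not Admaxxer's actual framework; the point is that the output is a list of recommendations with reasoning, and nothing in it mutates campaign state.

```python
from dataclasses import dataclass

@dataclass
class AdSetStats:
    name: str
    frequency_start: float  # avg frequency at the start of the window
    frequency_end: float    # avg frequency at the end of the window
    ctr_start: float
    ctr_end: float

def flag_fatigue(ad_sets, freq_rise=0.5, ctr_drop=0.20):
    """Flag ad sets whose frequency climbed while CTR fell.

    Returns (ad_set, reason) pairs for a human to review --
    this is a read-only pass; no campaign state is changed.
    """
    flagged = []
    for a in ad_sets:
        rising = a.frequency_end - a.frequency_start >= freq_rise
        falling = a.ctr_start > 0 and (a.ctr_start - a.ctr_end) / a.ctr_start >= ctr_drop
        if rising and falling:
            drop_pct = (a.ctr_start - a.ctr_end) / a.ctr_start
            flagged.append((a, f"frequency {a.frequency_start:.1f} -> {a.frequency_end:.1f}, "
                               f"CTR down {drop_pct:.0%}"))
    return flagged
```

The thresholds are exactly the kind of thing a human should own; the agent's job is to apply them consistently across every ad set, every day.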
This is high-value because the agent can look across every campaign at once, every day, without the fatigue that leads humans to miss slow trends.

### Creative drafting

"Write 5 headline variants for this product description, in the voice of the last 10 ads that performed well."

The agent produces options; the human picks and deploys. The agent's false positives are cheap (ignore bad drafts); its false negatives are free (the human writes their own if none work).

### Summarization and triage

Weekly reviews, anomaly triage, Slack-ready updates. All lossless operations on data the human would have looked at anyway. Agents are ~10× faster than humans at this, at the cost of occasional misclassifications the human catches.

## Where AI agents hurt

### Unsupervised campaign launches

Launching a new campaign changes spend, targeting, and creative simultaneously. An agent that launches campaigns without human review will occasionally launch something that spends $2,000 before a human notices and kills it. The cost of that mistake is always larger than the benefit of the speed.

### Unsupervised budget shifts

Moving $10k/day from Google to Meta based on a short-term ROAS signal is the kind of thing agents get wrong because they don't know why the signal shifted. Was there a platform-level attribution change? A creative burn-out? A competitor discount event? Humans know; agents guess.

### Chasing short-term ROAS

An agent optimizing for 7-day ROAS will find ways to boost it (discount-heavy creative, retargeting-heavy audiences) that hurt 90-day LTV and brand health. Without brand guardrails and LTV-aware objectives, pure ROAS-chasing agents degrade the business over time.

### Destructive actions without confirmation

Pausing a campaign, updating budgets, deleting audiences — these are operations where the cost of an agent hallucination is real money or a real audience reset.
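One way to enforce such a gate is at the tool-dispatch layer. This is a sketch, not Admaxxer's actual dispatcher: the tool names match the ones described in this post, but the `dispatch` function, handler signatures, and exception type are illustrative.

```python
READ_ONLY = {"list_campaigns", "get_campaign_insights", "get_account_insights", "query_metrics"}
DESTRUCTIVE = {"update_campaign", "create_campaign", "pause_all_low_roas"}

class ConfirmationRequired(Exception):
    """Raised so the chat UI can surface the proposed action and ask the user."""

def dispatch(tool_name, args, handlers):
    """Run a tool call. Reads run freely; writes require args['confirmed'] is True."""
    if tool_name not in READ_ONLY | DESTRUCTIVE:
        raise ValueError(f"unknown tool: {tool_name}")
    if tool_name in DESTRUCTIVE and args.get("confirmed") is not True:
        raise ConfirmationRequired(f"{tool_name} requires explicit confirmation")
    return handlers[tool_name](args)
```

Putting the check in the dispatcher rather than in each handler means the model cannot talk its way around it: even if the model emits a destructive tool call, the call fails closed until the user has clicked confirm and `confirmed: true` is attached.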
We designed the Admaxxer Claude agent with **`confirmed: true` required** for every destructive tool (`update_campaign`, `create_campaign`, `pause_all_low_roas`). The agent cannot take a destructive action without the user explicitly saying "yes, do it."

## How we designed it

Admaxxer's Claude agent has **7 tools**:

- **Read-only**: `list_campaigns`, `get_campaign_insights`, `get_account_insights`, `query_metrics` — these retrieve and summarize, never change state.
- **Destructive (gated)**: `update_campaign`, `create_campaign`, `pause_all_low_roas` — these change state, and require explicit `confirmed: true` in the tool call arguments.

The rule: **read tools just run; write tools ask for permission first.** When a user says "pause my worst ad set," the agent surfaces the candidate ad set and its reasoning, then asks for confirmation. The user clicks confirm; the agent runs the tool. The confirmation is logged in `chat_messages` for audit.

This design is boring on purpose. The goal is an agent that never surprises the user with an action they didn't approve, while still saving real time on retrieval and analysis.

## What to do about it

1. **Use AI for retrieval and triage.** Weekly review, anomaly detection, creative brief drafting — these are wins.
2. **Don't use AI for autonomous campaign management.** Approval gates cost very little; missing them costs real money.
3. **Pick tools that gate destructive actions.** If a product claims an agent can "run your ad account" without explicit confirmation on writes, be skeptical.
4. **Audit your agent's logs.** Every tool call should be inspectable after the fact.

## Caveats

This framing is conservative. Some of the writes that are gated today will be safely automatable in 1–2 years as guardrails and eval frameworks mature. For now, the conservative design is the right one, because the cost of a bad write is large and the cost of an extra confirmation click is tiny.

Also: "AI agents hurt here" is not the same as "AI hurts here."
Most of these bad cases are specifically about **unsupervised autonomy**. An AI-assisted human is almost always better than a human alone; an autonomous AI acting on the account is almost always worse than either.

## FAQs

**Q: Can the Admaxxer agent actually change my campaigns?**
A: Yes — but only with explicit `confirmed: true` on each destructive tool call. The agent cannot pause, update, or create campaigns without your approval, and every action is logged.

**Q: Why not just fully automate?**
A: Because the cost of a hallucinated write is real money or a reset audience, and the cost of a confirmation click is < 1 second. The ratio doesn't justify full automation today.

**Q: What about read-only agents that never write?**
A: Those are strictly good. Admaxxer's analytics chat uses only read-only tools; the campaign agent is where we gate writes.

---

Start your 7-day free trial — no credit card required. [See Admaxxer pricing](/pricing).