Analytics documentation — every metric Admaxxer ships, decomposed
Admaxxer is a DTC analytics platform with built-in Meta + Google ad ops. This page documents the full analytics surface: how the first-party pixel feeds Tinybird, what each of the 33+ pipes returns, and how we compute revenue attribution, blended MER, cohort LTV, MMM contribution, forecast bands, incrementality lift, and CAPI match rate. It also documents the Analytics AI Chat — the read-only Claude drawer that lets you query everything in natural language.
1. Pixel and event tracking
The Admaxxer pixel is a first-party, cookieless-compatible tracker — the foundation every other analytics surface in the product is built on. It captures page_view, add_to_cart, begin_checkout, purchase, and arbitrary custom events, stamps each one with a stable first-party visitor id, and ships it to Tinybird — the columnar analytics backend that powers our dashboards.
You install the pixel by dropping a single <script> tag into your site's <head>. Admaxxer ships dedicated guides for 20 platforms including Shopify, WooCommerce, WordPress, Webflow, Next.js, Wix, Squarespace, Magento, BigCommerce, Google Tag Manager, and a universal snippet for any site where you can edit <head>. Average install time is under three minutes; full platform coverage is documented at /documentation/install.
Cookieless-compatible means the pixel uses first-party storage on your own domain — a single 1p cookie plus localStorage — so ITP / ETP browsers, consent-mode rejections, and third-party cookie deprecation don't break your data. When a user does opt out, we still measure aggregate session volume; we just don't write the visitor id. The pixel can be paired with the Conversions API for server-side reinforcement, which is the basis for the CAPI match rate metric documented below.
Tinybird is the analytics backend: every event the pixel captures is materialized into a Tinybird datasource within ~3 seconds of fire-time, then queried by the dashboards and the Claude AI chat through pipe abstractions. There is no parallel ETL path to a warehouse — Tinybird is the single source of truth. See /documentation/install for platform-specific guides and the developer API for documentation on event payload schemas.
2. Tinybird pipes (33+)
Every metric Admaxxer shows is backed by a Tinybird pipe — a parameterized SQL query against ClickHouse-backed columnar storage that returns sub-second even on hundreds of millions of events. The pipes are organized into eleven categories; the table below lists what each category returns.
| Category | What it returns |
|---|---|
| Visitors | Sessions, uniques, returning vs new visitors, bounce rate, time-on-site. |
| Revenue | Total revenue and revenue split by channel, product, campaign, and creative. |
| Cohorts | Daily and weekly cohorts with retention curves. |
| MER | Blended marketing efficiency ratio rolled up daily, weekly, and monthly. |
| LTV | 7/30/90-day cohort lifetime value with per-cohort revenue accrual. |
| MMM | Channel decomposition (Meta, Google, organic, baseline) with adstock-transformed spend. |
| Forecast | p10 / p50 / p90 revenue bands at 30, 60, and 90-day horizons. |
| Incrementality | Two-proportion z-test on paid-vs-organic cohort conversion uplift. |
| Creative grid | Datafast-style cross-platform pivot of creatives by spend, conversions, and ROAS. |
| CAPI match rate | Pixel-vs-CAPI deduplication rate per pixel, per ad account, hourly. |
| Ad-level LTV | Per-ad rollup of 7/30/90-day cohort revenue feeding the creative grid. |
Pipes are governed by an explicit PIPE_ALLOWLIST. Any tool that queries Tinybird — the dashboards, the Analytics AI chat, the public /api/v1 endpoints — can only call pipes the allowlist permits. This is how Admaxxer keeps the read surface small and auditable while still letting the AI agent freely explore your data: the agent has the same access a logged-in user does, no more, no less.
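The allowlist gate described above can be sketched in a few lines. This is an illustrative Python sketch, not Admaxxer's implementation — the pipe names and the `query_tinybird_pipe` helper are hypothetical stand-ins:

```python
# Hypothetical allowlist gate: any pipe name not explicitly listed is refused
# before a query is ever sent to Tinybird. Names here are illustrative.
PIPE_ALLOWLIST = frozenset({
    "revenue_by_channel",
    "blended_mer_daily",
    "cohort_ltv",
})

def call_pipe(pipe_name: str, params: dict) -> dict:
    """Refuse any pipe that is not explicitly allowlisted."""
    if pipe_name not in PIPE_ALLOWLIST:
        raise PermissionError(f"pipe not allowlisted: {pipe_name}")
    return query_tinybird_pipe(pipe_name, params)  # hypothetical HTTP call
```

The key design property is that rejection happens on the pipe *name*, before any parameters reach the query layer — which is why there is no arbitrary-SQL surface for the dashboards or the AI agent.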
Pipe SQL lives in the repository under tinybird/pipes/ and is deployed via npm run tb:push. Production deploys do not push pipe SQL automatically, so pipe changes must be deployed explicitly (see /documentation/developer for the deploy contract).
3. Revenue attribution
Attribution is the per-conversion question: which ad, campaign, or creative gets credit for this specific order? Admaxxer ships a cross-platform creative grid that lets you slice every order by Meta ad set, Google campaign, organic referrer, and direct — in the same view — so you can see where revenue is actually coming from without flipping between three platforms.
We support multiple attribution windows out of the box: 7-day click + 1-day view (the Meta default), 28-day click, 30-day all-touch, and a custom window for advanced users. Attribution models supported include last-click (deterministic), time-decay (exponential weighting), position-based (40/20/40 first/middle/last), and even (uniform across touchpoints).
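The credit-splitting logic behind those models can be sketched as follows. The 40/20/40 position-based split comes from the text; the 168-hour (7-day) time-decay half-life and the 50/50 two-touch case are assumed defaults for illustration, not documented Admaxxer behavior:

```python
import math

def position_based(touchpoints: list[str]) -> dict[str, float]:
    """40/20/40 split: 40% first touch, 40% last, 20% shared by the middle."""
    n = len(touchpoints)
    if n == 1:
        return {touchpoints[0]: 1.0}
    if n == 2:
        return {touchpoints[0]: 0.5, touchpoints[1]: 0.5}  # assumed edge case
    credit: dict[str, float] = {}
    mid_share = 0.2 / (n - 2)
    for i, tp in enumerate(touchpoints):
        w = 0.4 if i in (0, n - 1) else mid_share
        credit[tp] = credit.get(tp, 0.0) + w
    return credit

def time_decay(hours_before_conversion: list[float],
               half_life: float = 168.0) -> list[float]:
    """Exponential weights: a touch half_life hours older counts half as much."""
    raw = [0.5 ** (h / half_life) for h in hours_before_conversion]
    total = sum(raw)
    return [w / total for w in raw]
```

With four touchpoints, position-based gives the two middle touches 10% each; with time-decay, a touch exactly one half-life before conversion gets half the weight of a touch at conversion time.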
The grid is also where ad-level LTV lives — every row shows 7/30/90-day cohort revenue per ad, so you can identify creatives that look weak on day-1 ROAS but compound into strong long-term performers. Click any row to drill from ad → ad set → campaign → individual session and order.
Attribution data is read-only from the Claude AI chat through the query_attribution tool — see the AI Chat section below. Mutations to campaigns (pausing, budget changes, launches) live in a different surface, the ad-ops Claude agent at /chat.
4. Blended MER
MER (Marketing Efficiency Ratio) is total revenue divided by total ad spend. Blended means we sum spend across every paid channel — Meta, Google, TikTok, etc. — and divide it into the revenue your pixel actually saw on-site. The formula is simple and deliberately so:
MER = total_revenue / (meta_spend + google_spend + tiktok_spend + ...)
Why blended MER and not platform-specific ROAS? Because platform ROAS is structurally inflated — Meta and Google each over-credit themselves when conversion paths overlap, so the revenue they each claim frequently sums to more than 100% of the revenue you actually earned. Blended MER cuts through that: it is pixel-side revenue (the truth) over total spend (also the truth), which makes it the most honest single number you can put in front of a CFO.
Admaxxer rolls MER up daily, weekly, and monthly, with a 7-day moving average overlay to dampen weekend volatility. Below the headline number we show platform-level ROAS breakdowns so you can still see where each marginal dollar is going — but the headline metric is always blended.
The pipe behind blended MER joins the Tinybird revenue datasource (pixel-side) with the spend rollup pulled from the Meta Marketing API and Google Ads API by the connection workers. It is recomputed every hour and emits a delta vs the prior 7-day average — a 15% week-over-week MER drop is the kind of thing the dashboard surfaces with a red badge.
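The MER formula and the 7-day smoothing described above reduce to a few lines. A minimal sketch (function names are illustrative, not Admaxxer's API):

```python
def blended_mer(total_revenue: float, spend_by_channel: dict[str, float]) -> float:
    """MER = pixel-side revenue / total spend summed across all paid channels."""
    total_spend = sum(spend_by_channel.values())
    if total_spend == 0:
        raise ValueError("no paid spend in window")
    return total_revenue / total_spend

def moving_average(daily_mer: list[float], window: int = 7) -> list[float]:
    """Trailing moving average used to dampen weekend volatility."""
    out = []
    for i in range(len(daily_mer)):
        chunk = daily_mer[max(0, i - window + 1): i + 1]
        out.append(sum(chunk) / len(chunk))
    return out
```

For example, $30K of pixel-side revenue against $8K Meta + $4K Google spend is a blended MER of 2.5, regardless of how the two platforms split the credit between themselves.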
5. Cohort LTV (7/30/90 day)
Cohort LTV (Lifetime Value) answers: of everyone who first purchased in week W, how much revenue have they cumulatively generated by day 7, 30, or 90? Admaxxer computes three windows because they each tell you something different:
- LTV-7 — early signal. Tells you whether a cohort has high repeat-purchase intent right out of the gate. Useful for replenishment-skewed brands (consumables, nutrition).
- LTV-30 — the operating metric. Most DTC brands run their CAC ratio against LTV-30 because the signal is mature enough to trust without being so long that you can't act on it.
- LTV-90 — the strategic metric. Used for budget allocation and LTV:CAC payback modeling — most cohorts are 80%+ of their eventual 12-month LTV by day 90.
Cohort LTV also rolls up to ad-level — every Meta ad and Google ad in your account gets its own 7/30/90-day cohort attached, so you can score creatives not just on day-1 ROAS but on the long tail of repeat revenue they produce. This is one of the surfaces that distinguishes Admaxxer from platform-native reporting, where ad-level LTV is essentially absent.
Cohort assignment is based on the visitor's first purchase event; subsequent purchases are attributed back to the original cohort. We do not currently support multi-buy cohort splits (cohort by 2nd-purchase, etc.) — that's a v1.5 follow-up.
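The cohort-assignment rule — first purchase defines the cohort, subsequent purchases accrue back to it — can be sketched like this (a simplified illustration; real pipes run this in ClickHouse SQL, not Python):

```python
from datetime import date

def cohort_ltv(orders: list[tuple[str, date, float]],
               window_days: int) -> dict[date, float]:
    """
    orders: (visitor_id, order_date, revenue) tuples.
    Assigns each visitor to the cohort of their first purchase, then accrues
    every order landing within window_days of that first purchase.
    """
    # First pass: earliest order date per visitor defines the cohort.
    first_purchase: dict[str, date] = {}
    for visitor, d, _ in sorted(orders, key=lambda o: o[1]):
        first_purchase.setdefault(visitor, d)

    # Second pass: accrue revenue back to the cohort within the window.
    ltv: dict[date, float] = {}
    for visitor, d, revenue in orders:
        cohort_day = first_purchase[visitor]
        if (d - cohort_day).days < window_days:
            ltv[cohort_day] = ltv.get(cohort_day, 0.0) + revenue
    return ltv
```

A repeat order on day 4 lands in the LTV-7 number; a repeat order on day 45 only shows up once you widen the window to 90 days.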
6. CAPI match rate
CAPI (Conversions API) is Meta and Google's server-side event ingestion endpoint — the browser-independent reinforcement to your pixel. Every pixel-side event should be twinned with a CAPI event so the ad platform's optimization model has dedup-friendly, identity-rich signal even when browser tracking is blocked.
Match rate is the percentage of pixel-side events that successfully reconcile with their CAPI twin — same event_id, same user, same timestamp window. Hyros pioneered surfacing this metric prominently because it is the single best leading indicator of attribution quality.
- 90%+ match rate
- Healthy. Meta's optimization model has full signal — both browser and server events are arriving, deduplicating cleanly, and being attributed to ad clicks.
- 70–90% match rate
- Degrading. Investigate dedup keys (event_id, fbp, fbclid). Common causes: missing event_id on server-side events, clock skew between browser and server, or missing fbclid capture on landing pages.
- Less than 70% match rate
- The optimization model is flying partially blind. Expect rising CPAs and degrading ROAS. This is the "fix it now" tier.
Admaxxer computes match rate hourly per pixel, per ad account, and surfaces it on the dashboard with a delta vs last week — so a sudden drop is impossible to miss. We also emit a Slack/email alert when match rate crosses the 80% threshold downward.
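At its core the match-rate computation is a set intersection. This sketch keys only on event_id; the real reconciliation described above also checks fbp, fbclid, and a timestamp window:

```python
def capi_match_rate(pixel_event_ids: set[str], capi_event_ids: set[str]) -> float:
    """Share of pixel-side events that found a server-side twin by event_id."""
    if not pixel_event_ids:
        return 0.0
    matched = pixel_event_ids & capi_event_ids
    return len(matched) / len(pixel_event_ids)
```

Four pixel events with three CAPI twins is a 75% match rate — already inside the "investigate dedup keys" tier.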
7. Forecasting
Admaxxer ships a v0.1 revenue forecast built on OLS regression with weekly seasonality regressors. The model trains on at least 30 days of pixel-side daily revenue and predicts forward at 30, 60, or 90-day horizons.
revenue_t = β₀ + β₁·trend_t + Σ βᵢ·dow_i + ε_t
Where trend_t is a linear time index and dow_i is a one-hot day-of-week dummy (Monday through Saturday, with Sunday as the baseline). We emit p10 / p50 / p90 prediction bands — the p10 is your soft-cap downside, the p90 is your stretch upside, and the p50 is the median expected path.
v0.1 is intentionally simple. It is good enough to set weekly cash-flow expectations and feed into the MMM budget-allocation suggestions documented in the next section. v1.5 will upgrade to Prophet for additive holiday modeling, automatic changepoint detection, and uncertainty intervals tuned to your historical volatility.
Note: we will not return a forecast if you have less than 30 days of pixel history. The dashboard shows a "collecting signal" state until you cross the threshold. If you have at least 30 days but high variance, the prediction bands will widen accordingly — that is the model honestly communicating uncertainty rather than overfitting a noisy series.
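The v0.1 model above — linear trend, six day-of-week dummies with Sunday as baseline, normal-residual prediction bands — can be sketched with numpy's least-squares solver. This is an illustrative reconstruction from the formula, not Admaxxer's source; the z value 1.2816 is the standard normal 90th percentile used for p10/p90 bands:

```python
import numpy as np

def fit_forecast(daily_revenue: np.ndarray, start_weekday: int):
    """OLS: revenue_t = b0 + b1*trend + DOW dummies (Sunday = baseline).
    start_weekday: 0=Monday .. 6=Sunday for the first observation."""
    n = len(daily_revenue)

    def design(t0: int, length: int) -> np.ndarray:
        X = np.zeros((length, 8))  # intercept, trend, 6 DOW dummies
        for i in range(length):
            t = t0 + i
            X[i, 0] = 1.0
            X[i, 1] = float(t)
            dow = (start_weekday + t) % 7  # 0=Mon .. 6=Sun
            if dow < 6:                    # Sunday (6) is the baseline
                X[i, 2 + dow] = 1.0
        return X

    X = design(0, n)
    coef, *_ = np.linalg.lstsq(X, daily_revenue, rcond=None)
    resid = daily_revenue - X @ coef
    sigma = float(resid.std(ddof=8)) if n > 8 else float(resid.std())
    return coef, design, sigma

def forecast_bands(coef, design, sigma, n_history: int, horizon: int):
    """p10 / p50 / p90 bands under a normal residual assumption."""
    Xf = design(n_history, horizon)
    p50 = Xf @ coef
    z = 1.2816  # standard normal 90th percentile
    return p50 - z * sigma, p50, p50 + z * sigma
```

Note how the band width is driven entirely by residual variance: a noisy history widens p10–p90 automatically, which is the "honest uncertainty" behavior the note below describes.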
8. Marketing Mix Modeling (MMM)
MMM decomposes total revenue into channel contributions over time so you can answer the budget question: if I shift $10K from Google to Meta, what happens to revenue?
Admaxxer's v0.1 MMM is OLS regression with geometric adstock. Adstock models the lagged effect of advertising — an ad impression today doesn't only drive conversions today, it contributes to conversions for several days afterward, with the effect decaying geometrically. Default carryover is 0.5 (each day's adstock equals that day's spend plus 50% of the previous day's accumulated adstock), which is in the middle of the 0.3–0.7 range typical for performance channels.
adstock_t = spend_t + 0.5 · adstock_{t-1}
revenue_t = β₀ + Σ β_channel · adstock_channel_t + ε_t
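The adstock recursion above is a one-line loop. A minimal sketch of the transform applied to each channel's daily spend before the OLS fit:

```python
def geometric_adstock(spend: list[float], carryover: float = 0.5) -> list[float]:
    """adstock_t = spend_t + carryover * adstock_{t-1}, with adstock_{-1} = 0."""
    out: list[float] = []
    prev = 0.0
    for s in spend:
        prev = s + carryover * prev
        out.append(prev)
    return out
```

A single $100 spend day decays to $50 of carried effect the next day and $25 the day after — which is exactly why a channel's regression coefficient captures more than same-day conversions.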
Channels modeled in v0.1: Meta, Google, organic, baseline. The model emits a channel decomposition chart — a stacked area showing how much revenue each channel contributed each day — and a budget reallocation suggestion based on marginal channel coefficients.
What v0.1 does NOT include: hill saturation curves (or Michaelis–Menten transforms), Bayesian priors with credible intervals, geo-level holdouts, or cross-channel interaction terms. A simple OLS without saturation will overstate the marginal return at high spend levels — we surface this caveat on the dashboard alongside the channel decomposition.
The v1.5 upgrade ships a Robyn-style Bayesian MMM with hill saturation, channel-level credible intervals, and seasonality decomposition baked in. v1.5 also adds adstock per-channel (instead of a single global parameter) so high-frequency channels (Meta) and low-frequency channels (CTV) can be modeled with appropriate decay characteristics.
9. Incrementality
Incrementality is the question MMM and attribution can't answer alone: how much of this revenue would have happened anyway, without paid?
Admaxxer's v0.1 incrementality test uses a two-proportion z-test on paid-vs-organic cohorts. We split visitors who arrived via paid (Meta or Google) and organic, compute conversion rates for each, and calculate whether the difference is statistically significant at p<0.05. The output is a lift estimate — for example, "paid is converting 18% better than organic, p=0.02" — with a 95% confidence interval.
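The pooled two-proportion z-test described above is standard; a self-contained sketch (an illustrative reconstruction, not Admaxxer's source):

```python
import math

def two_proportion_ztest(conv_paid: int, n_paid: int,
                         conv_org: int, n_org: int):
    """Pooled two-proportion z-test. Returns (lift, z, two-sided p-value)."""
    p1, p2 = conv_paid / n_paid, conv_org / n_org
    pooled = (conv_paid + conv_org) / (n_paid + n_org)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_paid + 1 / n_org))
    z = (p1 - p2) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))  # two-sided normal tail
    lift = (p1 - p2) / p2  # relative uplift of paid over organic
    return lift, z, p_value
```

A paid cohort converting at 5.9% against an organic cohort at 5.0% is an 18% relative lift; whether the p-value clears 0.05 then depends entirely on the sample sizes.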
Important caveat: this is observational, not causal. We always surface this disclaimer alongside the lift number. Paid and organic cohorts differ on many dimensions other than the ad exposure (intent, demographics, device, prior brand awareness), and a z-test on observational data cannot distinguish ad effect from selection bias.
v1.5 ships geo-lift holdout tests — Admaxxer pauses campaigns in a randomized subset of geos for two weeks, compares against a synthetic control, and emits a proper causal lift estimate. That is the gold standard for incrementality measurement and what serious DTC brands eventually graduate to. Geo-lift requires meaningful spend volume per geo (typically $50K+ monthly per channel) so it is a Pro / Agency tier feature.
10. Cross-platform creative grid
Inspired by Datafast's pivot table, the cross-platform creative grid lets you slice creatives across Meta and Google in a single view — one row per creative concept, columns for spend, impressions, clicks, conversions, ROAS, and 7/30/90-day cohort LTV.
You can pivot the grid by any combination of: campaign, ad set / ad group, ad creative (image or video hash), audience segment, placement, or date range. Sorting and filtering are sub-second because the grid queries Tinybird directly rather than aggregating client-side.
The killer feature: a creative that exists in both Meta and Google (same image asset, different platforms) is rolled up into a single row with platform-split columns, so you can see at a glance which creative concepts work cross-channel versus which are platform-specific. This eliminates the manual reconciliation work most DTC teams do in a Sunday-night spreadsheet.
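The cross-platform rollup described above is a group-by on the creative's asset hash. A simplified sketch — field names are illustrative, and the real grid does this in Tinybird rather than application code:

```python
def rollup_by_creative(rows: list[dict]) -> dict:
    """Group per-platform rows sharing a creative_hash into one grid row
    with blended totals plus a per-platform split."""
    grid: dict = {}
    for r in rows:
        g = grid.setdefault(r["creative_hash"], {
            "spend": 0.0, "revenue": 0.0, "conversions": 0, "by_platform": {},
        })
        g["spend"] += r["spend"]
        g["revenue"] += r["revenue"]
        g["conversions"] += r["conversions"]
        p = g["by_platform"].setdefault(r["platform"],
                                        {"spend": 0.0, "revenue": 0.0})
        p["spend"] += r["spend"]
        p["revenue"] += r["revenue"]
    for g in grid.values():
        g["roas"] = g["revenue"] / g["spend"] if g["spend"] else 0.0
    return grid
```

The blended ROAS on the rolled-up row is what tells you a concept works cross-channel; the per-platform split underneath tells you where it works.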
The grid is the dashboard companion to the AI agent — once you've identified the row that needs action, you can hand it off to the Claude ad-ops agent at /chat with a sentence like "pause every Meta ad in the 'Cool Mom' creative concept under $1.50 ROAS for the last 7 days." The agent reads the same data and confirms the action before firing.
11. Ad-level LTV
Most platform-native reporting shows you day-1 ROAS and stops there. Ad-level LTV closes the loop by attaching cohort LTV to every individual ad — so an ad that looks weak on day 1 but drives 2.4× cohort revenue at day 30 is no longer invisible in your reporting.
Admaxxer computes three windows per ad: 7-day, 30-day, and 90-day cohort LTV. The 7-day window catches replenishment buyers; the 30-day window is your operating metric (the one to optimize ad budget against); the 90-day window is for strategic budget allocation across creative concepts.
Ad-level LTV plugs into the creative grid (above) so the same concept-level rollups carry through. It also feeds back into the attribution dashboard, where you can sort ads by 30-day LTV instead of day-1 ROAS to see which creatives actually deserve more budget. For consumables brands this routinely flips the leaderboard — the highest day-1 ROAS ad is often not the highest LTV-30 ad once repeat purchases kick in.
12. Analytics AI Chat (⌘J)
The Analytics AI Chat is a global drawer — press ⌘J anywhere in the app to open it — that lets you query your analytics in natural language. It runs on claude-sonnet-4-6 with prompt caching on the system block and tools array, streamed via SSE for sub-second time-to-first-token.
The chat has access to 8 read-only tools that wrap our Tinybird pipes:
| Tool | Returns |
|---|---|
| query_revenue | Total and by-channel revenue with date filters. |
| query_mer | Blended MER with daily / weekly / monthly rollups. |
| query_ltv | 7/30/90-day cohort LTV by date or by ad. |
| query_attribution | Cross-platform creative grid slices. |
| query_capi_match | CAPI match rate per pixel, per ad account. |
| query_forecast | p10 / p50 / p90 revenue forecast bands. |
| query_mmm | Channel-level MMM decomposition output. |
| query_incrementality | Paid-vs-organic z-test results with confidence interval. |
Every tool call is gated by the PIPE_ALLOWLIST mentioned in the Tinybird section. The agent can only invoke pipes the allowlist permits — there is no SQL injection surface, no arbitrary query path, no escape hatch.
The Analytics AI Chat is read-only. It cannot pause, scale, or launch campaigns. That is a deliberate scope split — the ad-ops Claude agent at /chat handles destructive campaign actions and requires explicit confirmed:true before firing any of its 5 destructive tools (update_campaign, pause_all_low_roas, etc.). The Analytics chat is your always-available data assistant; the ad-ops agent is your campaign operator. Same Sonnet model, different tool surfaces, different scopes. See /documentation/ai-agent for the ad-ops agent specification.
Both surfaces share the same prompt-caching scaffolding so token cost stays predictable as conversations grow.
Frequently asked
- What does CAPI match rate measure?
- CAPI match rate is the percentage of pixel-side conversion events that successfully reconcile with their server-side (Conversions API) twin via shared event_id, fbp, and fbclid keys. A high match rate (90%+) means Meta and Google have the deduplicated, identity-rich signal they need to optimize delivery; a low match rate means the optimization model is flying partially blind. Admaxxer surfaces match rate per pixel, per ad account, every hour — and emits an alert when it crosses the 80% threshold downward.
- How is MMM (Marketing Mix Modeling) computed in Admaxxer?
- Admaxxer's v0.1 MMM uses ordinary-least-squares (OLS) regression with a geometric adstock transform (default carryover 0.5) to attribute revenue to channels (Meta, Google, organic, baseline). It does not yet include hill saturation curves, Bayesian priors with credible intervals, or geo-level holdouts. v1.5 will upgrade to a Robyn-style Bayesian MMM with hill saturation, channel-level credible intervals, and seasonality decomposition.
- Which Tinybird pipes power the analytics dashboard?
- Admaxxer ships 33+ pipes covering visitors, sessions, revenue (total / by channel / by product / by campaign), cohorts, blended MER (daily/weekly/monthly), cohort LTV (7/30/90-day), MMM channel decomposition, forecasting (p10/p50/p90 bands), incrementality tests, the cross-platform creative grid, CAPI match rate, and ad-level LTV. Every dashboard tile and every Analytics AI chat tool reads from the same pipes — there is no parallel data path.
- What is the forecast model?
- OLS regression with weekly seasonality regressors (one-hot day-of-week dummies plus a linear trend term), trained on at least 30 days of pixel-side daily revenue. We emit p10 / p50 / p90 prediction bands at 30, 60, and 90-day horizons. v1.5 will move to Prophet for additive holiday and changepoint modeling with uncertainty intervals tuned to historical volatility.
- What's the difference between attribution and MMM?
- Attribution assigns credit for a specific conversion to a specific touchpoint — last-click, time-decay, position-based — and answers 'which ad drove this order?'. MMM works at the aggregate level: it decomposes total revenue into channel contributions over time and answers 'how much of last month's revenue would have happened anyway?'. Use attribution for daily ad ops, MMM for budget allocation across channels.
- Is the Analytics AI Chat the same as the Claude ad-ops agent?
- No. The Analytics Chat (⌘J global drawer) is read-only and uses 8 PIPE_ALLOWLIST-guarded tools to answer questions about your data — query_revenue, query_mer, query_ltv, query_attribution, query_capi_match, query_forecast, query_mmm, query_incrementality. The ad-ops Claude agent at /chat has 6 tools, 5 of which are destructive (pause, scale, launch) and require explicit confirmed:true before firing. Different surfaces, different scopes, same Sonnet model.
- Does the Admaxxer pixel work without third-party cookies?
- Yes. The Admaxxer pixel is first-party and cookieless-compatible by design. It uses first-party storage on your domain (a single 1p cookie plus localStorage), falls back gracefully when storage is blocked, and can be paired with the Conversions API for server-side reinforcement. ITP/ETP browsers and consent-blocked sessions are still measured at the aggregate level, even when the visitor id can't be persisted.
- How accurate is the v0.1 incrementality model?
- The v0.1 model is observational, not causal — we explicitly surface this disclaimer alongside every lift number. It uses a two-proportion z-test on paid-vs-organic cohort conversion rates, which can identify directional uplift but cannot rule out selection bias (paid and organic visitors differ on intent, demographics, prior brand exposure). v1.5 ships geo-lift holdout tests, which randomize ad pause across geographic regions and produce a proper causal lift estimate.
Next steps
Now that you know what every analytics surface does, the fastest paths from here are:
- Connect your platforms — paste Meta + Google tokens and start pulling spend in 60 seconds.
- Claude AI agent — the destructive-gated ad-ops agent that pauses and scales campaigns.
- Developer API — REST endpoints under /api/v1 for pixel events, analytics, and ads.
- Install the pixel — 20 dedicated install guides plus a universal snippet.
- Back to documentation hub — index of every documentation surface.