
The Case for Blended Attribution Over Multi-Touch

Blended MER is honest and hard to game; MTA looks precise but is a model of models. Anchor on blended, use MTA as a directional tiebreaker.

By Admaxxer Team • April 23, 2026 • 10 min read
Admaxxer is a DTC analytics platform with built-in Meta + Google ad ops.

**Blended MER is the most honest number in paid media. Multi-touch attribution is a model of models.** That doesn't make MTA useless — it makes it a tool you use the right way, not a source of truth. This post walks through the argument.

## TL;DR

- Blended MER is **directly observable**: total revenue / total spend.
- MTA is **a model built on top of platform models**, each with its own assumptions.
- MTA looks precise, but the precision is often **false** at small scale.
- MTA is useful for **directional channel-mix decisions at scale**, not for per-ad optimization.
- The synthesis: **anchor on blended MER, use MTA as a tiebreaker**.

## What blended MER actually measures

Blended MER is brutally simple: total revenue across every channel, divided by total paid media spend, over a defined window. That's it. There is no model, no attribution-window assumption, no "how do we credit YouTube view-throughs" debate. It's a ratio of two numbers you can pull from your payment processor and your ad accounts.

The honesty of MER comes from this simplicity. If MER is 3.2 and you increase spend 30%, you can watch whether the ratio holds, drifts, or collapses. You do not need to know which channel is "really" driving the revenue — you need to know whether your paid spend as a whole is profitable. In most cases, that's the only decision you actually need to make.

## What MTA actually does

Multi-touch attribution takes a user's journey — a sequence of touchpoints like "saw Meta ad → clicked Google ad → opened email → purchased" — and distributes credit across those touchpoints using a weighting model (linear, time-decay, position-based, data-driven Shapley). Each step of that process rests on assumptions:

1. **The touchpoints are correctly identified.** In reality, iOS ATT, Safari's anti-tracking, and cross-device behavior mean a significant percentage of touchpoints are missing from the sequence.
2. **The platform data feeding the model is accurate.** But Meta and Google each supply their own attribution-model outputs, which are themselves models.
3. **The weighting scheme is correct.** There is no "right" answer here — any scheme is a choice.

The result is a model of models of models. The precision of "Meta contributed 34.2% of revenue this week" is a number produced by stacked assumptions, not a measurement. Compare that to blended MER, which is a measurement.

## When MTA helps

MTA earns its keep in specific situations:

- **Channel-mix decisions at scale.** If you're spending $5M/month across 6 channels and trying to decide whether to shift 10% of budget from Google to CTV, MTA (especially data-driven Shapley-style models) gives you a directional answer.
- **Identifying under-credited channels.** MTA can surface patterns where last-click is systematically missing a channel's contribution — email is a common example.
- **Medium-term validation.** Over 6–12 weeks, MTA outputs that contradict the direction of blended MER deserve investigation.

## When MTA hurts

- **Per-ad optimization.** MTA is too noisy at the ad level for most DTC brands. Use ad-level incrementality tests and cohort LTV for this.
- **Small-scale decisions.** If you're spending $50k/month, MTA precision is below the noise floor. Use blended MER and common sense.
- **False confidence.** The worst failure mode is treating MTA numbers as measured rather than modeled. "Google contributed 28% last week" sounds like a fact; it's an output.

## The synthesis: anchor and tiebreaker

The right way to use both:

- **Anchor** on blended MER. It's the ratio you report to leadership, the number you plan against, and the number that determines whether the business is healthy.
- **Tiebreak** with MTA. When two channel-mix choices look equally good on MER, use MTA to pick the direction.
- **Use incrementality tests** for high-stakes decisions.
Geo-lift, on/off tests, conversion-lift studies — these give you measured answers where MTA gives you modeled ones.

See our [CAPI match rate guide](/features/capi-match-rate), the [new-customer MER breakdown](/guides/new-customer-mer), and [forecast methodology](/features/forecasting) for how Admaxxer weights each of these inputs.

## What about incrementality?

Incrementality testing (geo-split, holdout groups, conversion-lift) is the actual gold standard. It answers the question MTA tries to model by running a proper experiment. The catch: incrementality tests are expensive to run (you turn something off), slow (2–4 weeks), and only answer one question at a time. The right hierarchy:

1. **Incrementality tests** for the biggest channel-mix decisions, 2–4 times per year.
2. **Blended MER** as the weekly anchor.
3. **MTA** as a directional input between incrementality tests.
4. **Per-platform ROAS** as a sanity check only — never as a primary signal.

## What to do about it

1. **Pick your anchor.** Make blended MER the single headline metric in your weekly paid media review.
2. **Don't report MTA numbers as facts.** Label them as modeled estimates in dashboards and reviews.
3. **Run one incrementality test per quarter.** Even a crude geo-split is better than no experimental evidence.
4. **Ignore per-platform ROAS as a primary signal.** It's a sanity check, not a north star.

## Caveats

Blended MER is not a perfect metric either. It can be gamed by pushing up AOV via bundles while hiding margin compression, it masks channel-level problems until they're large enough to affect the aggregate, and it tells you nothing about the new-vs-returning revenue mix. The right supplement to MER is **new-customer MER**, which isolates the part of MER that depends on new acquisition.

Also: if you're a B2B or long-sales-cycle business, blended MER loses much of its power, because revenue lags spend by months. In those cases, MTA and pipeline-stage attribution earn more of their keep.
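The caveats above are easier to see with numbers in hand. Here is a minimal sketch of computing blended MER alongside new-customer MER; the `Order` shape and all dollar figures are hypothetical, not Admaxxer's data model:

```python
from dataclasses import dataclass

@dataclass
class Order:
    revenue: float
    is_new_customer: bool

def blended_mer(orders: list[Order], total_spend: float) -> float:
    """Blended MER: total revenue across every channel / total paid spend."""
    return sum(o.revenue for o in orders) / total_spend

def new_customer_mer(orders: list[Order], total_spend: float) -> float:
    """New-customer MER: first-order revenue only, same spend denominator."""
    return sum(o.revenue for o in orders if o.is_new_customer) / total_spend

# Hypothetical week: $100k spend, $320k revenue, $140k of it from new customers.
orders = [Order(180_000, False), Order(140_000, True)]
spend = 100_000.0

print(round(blended_mer(orders, spend), 2))       # 3.2
print(round(new_customer_mer(orders, spend), 2))  # 1.4
```

A blended MER of 3.2 can look healthy while a new-customer MER of 1.4 shows acquisition barely paying for itself — which is exactly why the aggregate needs the supplement.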
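And to make the "any weighting scheme is a choice" point concrete: the same journey yields different channel credit under different schemes. A toy sketch with a hypothetical journey — real MTA products layer time decay, data-driven weights, and identity resolution on top of this:

```python
def linear_credit(journey: list[str]) -> dict[str, float]:
    """Linear model: equal credit to every touchpoint."""
    w = 1 / len(journey)
    credit: dict[str, float] = {}
    for ch in journey:
        credit[ch] = credit.get(ch, 0.0) + w
    return credit

def position_based_credit(journey: list[str]) -> dict[str, float]:
    """U-shaped model: 40% first touch, 40% last, 20% split across the middle."""
    n = len(journey)
    if n == 1:
        weights = [1.0]
    elif n == 2:
        weights = [0.5, 0.5]
    else:
        weights = [0.4] + [0.2 / (n - 2)] * (n - 2) + [0.4]
    credit: dict[str, float] = {}
    for ch, w in zip(journey, weights):
        credit[ch] = credit.get(ch, 0.0) + w
    return credit

journey = ["meta", "google", "email", "google"]
print(linear_credit(journey))          # {'meta': 0.25, 'google': 0.5, 'email': 0.25}
print(position_based_credit(journey))  # {'meta': 0.4, 'google': 0.5, 'email': 0.1}
```

Same journey, and email's credit swings from 25% to 10% purely on the modeler's choice of scheme — that swing is the "output, not fact" problem in miniature.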
## FAQs

**Q: What's the difference between MER and ROAS?**
A: ROAS is platform-specific (Meta ROAS, Google ROAS). MER is blended: total revenue divided by total paid spend across all channels.

**Q: Should I stop looking at platform ROAS entirely?**
A: No — use it as a sanity check. If Meta ROAS drops from 3.2 to 1.8 week over week, something is wrong, even if blended MER looks stable.

**Q: How often should I run incrementality tests?**
A: Quarterly at minimum for channels above 15% of spend. Yearly is too infrequent; monthly is usually too costly.

---

Start your 7-day free trial — no credit card required. [See Admaxxer pricing](/pricing).