Lexicon entry · Measurement · Decision model

Attribution

The rule set that decides which ad touchpoints get credit for a conversion. The model you choose determines which campaigns look like they are working — and which you are about to wrongly pause.

Direct answer

Attribution is the rule set that determines which ad touchpoints get credit for a conversion. Last-click gives all credit to the final interaction; data-driven distributes credit across the path. The model you choose determines which campaigns look like they are working — and which you are about to wrongly pause.

A decision model, not a measurement tool.

Attribution is the answer to the question: when a conversion happens, which ad deserves the credit? The answer is never obvious, and the rule you pick changes what you do next.

Every conversion on a modern digital campaign involves multiple touchpoints: a Display impression seeds awareness, a paid-social ad drives consideration, a branded Search click closes. Attribution is the framework that decides how credit is allocated across that path. There is no “correct” attribution model in an absolute sense — each model embeds a specific assumption about how ads cause conversions, and each produces a different view of which campaigns are worth running.

The commercial consequence is that the attribution choice drives budget allocation. A brand running last-click attribution typically under-invests in upper-funnel media (Display, YouTube, paid social) because those touchpoints rarely get credit — and then puzzles over declining overall performance when upper-funnel demand dries up and branded Search volume falls. Data-driven models correct for this but introduce their own assumptions about the incremental value of each touchpoint.

The attribution models, ranked by assumption weight.

The models differ in how much credit they give to the first touchpoint, the last touchpoint, and the touchpoints between. Understanding the assumption each model embeds is the difference between using attribution to inform decisions and using it to justify them.

Last-click attribution gives 100% of the credit to the final ad touchpoint before conversion. It is the default in many measurement systems because it is the simplest to implement, and it is the most commonly misused because it systematically under-credits upper-funnel activity. A last-click model will always make branded Search and retargeting look like the most efficient channels, because those are the channels closest to the click.

First-click attribution gives 100% of the credit to the first ad touchpoint. It is the mirror image of last-click and produces the opposite distortion: upper-funnel ads look highly efficient; lower-funnel conversion-driving ads look wasteful. First-click is rarely used as a primary model but is useful in paired analysis with last-click to bound the true contribution of different touchpoints.

Linear attribution splits credit equally across every touchpoint in the path. A four-touch path gives 25% credit to each ad. Linear is the most “democratic” model and corrects for both first- and last-click bias, but it assumes every touchpoint contributes equally — which is rarely true in practice.

Time-decay attribution gives more credit to touchpoints closer to the conversion, decaying the credit given to earlier touches on an exponential curve (typically a seven-day half-life). Time-decay corrects for last-click bias while still recognising that later touchpoints are often more causal. It is the most commonly recommended rule-based model.
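The decay rule can be sketched as a simple weight calculation. The seven-day half-life follows the text above; the path timings in the example are illustrative, not taken from any real account:

```python
def time_decay_credit(days_before_conversion, half_life_days=7.0):
    """Split one conversion's credit across touchpoints, halving a
    touch's weight for every `half_life_days` it sits before the
    conversion, then normalising so the credit sums to 1."""
    raw = [0.5 ** (d / half_life_days) for d in days_before_conversion]
    total = sum(raw)
    return [w / total for w in raw]

# A three-touch path: Display 14 days out, paid social 7 days out,
# branded Search on the day of conversion.
print(time_decay_credit([14, 7, 0]))  # → roughly [0.14, 0.29, 0.57]
```

The normalisation step is what makes this a credit model rather than a scoring model: however steep the decay, each conversion still distributes exactly one conversion's worth of credit.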

Position-based attribution (also called U-shaped) gives 40% of the credit to the first touchpoint, 40% to the last, and splits the remaining 20% evenly across middle touchpoints. It embeds the assumption that the first and last touches matter most, with the middle serving a reinforcement role. It is useful in campaigns where awareness and closing are explicitly differentiated.
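The 40/40/20 split can be written out directly. The handling of one- and two-touch paths below is an assumption for completeness, not a documented rule:

```python
def position_based_credit(n_touches, first=0.4, last=0.4):
    """U-shaped split: 40% to the first touch, 40% to the last,
    and the remaining 20% spread evenly over the middle touches."""
    if n_touches == 1:
        return [1.0]          # single touch takes all credit (assumed)
    if n_touches == 2:
        return [0.5, 0.5]     # no middle to share 20% with (assumed)
    middle = (1.0 - first - last) / (n_touches - 2)
    return [first] + [middle] * (n_touches - 2) + [last]

# Five-touch path: first and last get 0.4 each;
# the three middle touches share the remaining 0.2.
print(position_based_credit(5))
```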

Data-driven attribution uses machine learning to assign credit based on the conversion-path data in the account itself. Rather than applying a fixed rule, Google’s data-driven model estimates the incremental lift of each touchpoint by comparing conversion-path patterns between converters and non-converters. It became the default for Google Ads in 2021 and is now available in all accounts with sufficient volume. Its assumption — that the observed data contains the causal structure — is the strongest of any model, and it is usually the most accurate.
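Google's actual model is proprietary, but the underlying idea — comparing paths that include a touchpoint against paths that do not — can be illustrated with a toy lift calculation. The channel names and path data here are invented for the example:

```python
def channel_lift(paths, channel):
    """Toy lift estimate: conversion rate of paths containing the
    channel minus the conversion rate of paths without it.
    paths: list of (touchpoints, converted) pairs."""
    def rate(outcomes):
        return sum(outcomes) / len(outcomes) if outcomes else 0.0

    with_ch = [conv for touches, conv in paths if channel in touches]
    without = [conv for touches, conv in paths if channel not in touches]
    return rate(with_ch) - rate(without)

# Invented conversion paths, purely illustrative.
paths = [
    (["display", "search"], True),
    (["display", "social", "search"], True),
    (["search"], False),
    (["social"], False),
]
print(channel_lift(paths, "display"))  # → 1.0 on this toy data
```

Even this crude version shows why volume matters: with only a handful of paths, a single conversion flips the estimate, which is why data-driven models need substantial conversion volume before their lift estimates stabilise.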

The budget-allocation consequence.

The attribution model determines which channels look like they are working. When an operator pauses a campaign because it “isn’t converting,” they are pausing it according to a specific attribution model. The model may be wrong — and if it is, the pause is a mistake the model itself will never surface.

Insurance. Insurance brokers who run last-click attribution consistently under-invest in YouTube, Display, and upper-funnel paid social, because those channels rarely get last-click credit. When the broker pauses them to “focus on what’s working,” branded Search volume drops eight to twelve weeks later — at which point the broker has no way to connect the cause (paused upper-funnel) to the effect (declining branded-search demand). The 2024 IPA Databank case analysis shows this pattern in at least 60% of B2C financial-services advertisers using short-term attribution.

Retail. Retail operators face a different attribution failure: they often treat Shopping as primarily last-click-converting and Display as primarily brand-building, when in practice Shopping assists Display conversions and Display assists Shopping conversions through retargeting. A retailer running last-click attribution on both will typically overweight Shopping and underweight Display, despite the channels being more interdependent than the model suggests. Moving from last-click to data-driven attribution in retail accounts typically shifts 10–25% of budget allocation and produces measurable improvements in blended ROAS within a quarter.

Attribution does not change what is actually happening in the market. It changes what you can see about what is happening. The model you choose determines which blind spots you are comfortable tolerating.

Four things operators get wrong.

Myth

Attribution and measurement are the same thing.

Fact

Measurement is the count of what happened (how many conversions, from which source). Attribution is the rule that assigns credit to touchpoints along the conversion path. The same measurement data produces different attribution results depending on which rule is applied. A channel can be “driving conversions” in measurement terms while receiving zero attribution credit under last-click.

Myth

Data-driven attribution is always right.

Fact

Data-driven attribution is usually the most accurate model available in Google Ads, but its accuracy depends on having enough conversion volume for the model to train properly (typically 3,000+ conversions per account per month for reliable lift estimates). Accounts below that threshold should use time-decay or position-based as a bridge until volume supports data-driven.

Myth

Incrementality testing is the same as attribution.

Fact

Attribution assigns credit under assumptions about causality. Incrementality testing isolates the actual causal effect of a channel by comparing exposed versus unexposed audiences. Incrementality is the gold standard but is expensive and slow; attribution is the daily working tool. They answer related but distinct questions: attribution says “which touchpoint gets credit,” incrementality says “what would have happened without this ad at all.”

Myth

Moving from last-click to data-driven will always show more conversions.

Fact

Moving attribution models redistributes credit; it does not generate new conversions. The total number of conversions in the account stays the same; the distribution across campaigns and channels changes. Campaigns that looked efficient under last-click may look less efficient under data-driven and vice versa. The decision to move models should be made before, not after, a budget reallocation.
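The redistribution claim is easy to sanity-check: compute credit under two models on the same paths and compare totals. Channel names and paths are illustrative:

```python
from collections import Counter

# Three invented conversion paths; the final touch closes each one.
paths = [["display", "social", "search"], ["display", "search"], ["search"]]

def last_click(paths):
    """All credit to the final touchpoint of each path."""
    credit = Counter()
    for p in paths:
        credit[p[-1]] += 1.0
    return credit

def linear(paths):
    """Equal credit to every touchpoint of each path."""
    credit = Counter()
    for p in paths:
        for ch in p:
            credit[ch] += 1.0 / len(p)
    return credit

lc, ln = last_click(paths), linear(paths)
# Per-channel credit differs sharply, but both totals are 3.0
# (up to float rounding): three conversions either way.
print(lc, sum(lc.values()))
print(ln, sum(ln.values()))
```

Under last-click, Search takes all three conversions; under linear, Display and Social receive partial credit. The account-level total never moves.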

Attribution, answered.

What is last-click attribution?

Last-click attribution assigns 100% of the conversion credit to the final ad touchpoint before the conversion. It is the simplest model to implement and the most commonly misused, because it systematically under-credits upper-funnel advertising. A last-click model always makes branded Search and retargeting look like the most efficient channels, because those are the touchpoints closest to the click. Google deprecated last-click as its default attribution model in 2021.

What is data-driven attribution?

Data-driven attribution uses machine learning to assign credit based on the actual conversion-path data in the account. Rather than applying a fixed rule, Google’s data-driven model estimates the incremental lift of each touchpoint by comparing the conversion-path patterns of converters against non-converters. It became the default for Google Ads in 2021 and is available in all accounts with sufficient volume to train the model reliably.

Why does attribution matter?

Attribution determines which campaigns look like they are working, which in turn determines budget allocation. An operator using last-click attribution will typically under-invest in upper-funnel channels (YouTube, Display, paid social) because those channels rarely receive last-click credit. When those channels are paused for appearing “inefficient,” branded Search volume often declines eight to twelve weeks later — an effect the attribution model was not equipped to predict.

Which attribution model should I use?

For accounts with 3,000 or more monthly conversions, data-driven attribution is usually the most accurate and should be the default. For smaller accounts, time-decay attribution (exponentially weighting touchpoints closer to conversion) is the best rule-based alternative. Accounts still using last-click are typically under-crediting upper-funnel activity by 20–40% and over-crediting branded Search and retargeting by an equivalent amount.

What is the difference between attribution and incrementality?

Attribution assigns credit to touchpoints under assumed causal rules. Incrementality isolates actual causal effect by comparing audiences exposed to a channel against comparable audiences who were not. Incrementality is the more rigorous test but is expensive, slow, and requires hold-out populations. Attribution is the daily working tool; incrementality tests are the occasional validation of whether the attribution picture matches reality.

Why did my conversions change when I changed attribution models?

Changing attribution models redistributes credit across campaigns; it does not generate new conversions. The total number of conversions in the account stays the same, but the distribution changes. Campaigns that looked efficient under last-click often look less efficient under data-driven because the earlier touchpoints in the path now receive partial credit. This is working as intended — the new view is usually more accurate, not the old one.

Where this definition comes from.

Referenced in this entry
  1. Google Ads Help. About attribution models. 2025. support.google.com/google-ads/answer/6259715
  2. Google Ads Help. Data-driven attribution. 2025. support.google.com/google-ads/answer/9483017
  3. Binet, L. & Field, P. Media in Focus. IPA, 2017.
  4. IAB. Cross-Media Measurement and Attribution Framework. 2024 update.

Get a diagnosis

If your attribution model is telling you to pause upper-funnel campaigns and something in your gut is telling you otherwise, Chris Gardner reads every audit personally. No templates. No generic recommendations. A diagnostic built on your account data.