
Lexicon entry · Effectiveness · Measurement window

6-Month Cusp

The time-frame at which rational/activation campaigns stop dominating sales effects and emotional/brand campaigns begin to dominate — and the measurement problem disguised as a strategy debate.

Direct answer

The 6-month cusp is the time-frame at which rational/activation campaigns stop dominating sales effects and emotional/brand campaigns begin to dominate. In the IPA Effectiveness Databank — covering 996 campaigns across 80+ categories — short-term rational campaigns produce the stronger sales response in the first six months. After roughly six months, emotional campaigns overtake them and continue to widen the gap for as long as the databank tracks them.

A measurement problem disguised as a strategy debate.

The crossover is not a sharp line; the realistic range is three to twelve months, but six months is the working anchor.

The cusp is empirical, derived from longitudinal analysis of the IPA Effectiveness Databank by Les Binet and Peter Field. Rational/activation campaigns — the kind built around price, promotion, immediate offer, or task-stage capture — produce measurable sales response in the first six months and then taper. Emotional/brand campaigns — the kind built around fame, mental availability, and category-entry-point salience — produce limited sales response in the first six months but compound thereafter, continuing to widen the gap beyond the cusp.

The cusp is a measurement problem disguised as a strategy debate. Any campaign evaluated on a window shorter than six months will systematically favour rational/activation work — and discriminate against brand work that is generating more long-run profit. When a prospect says “we tried brand and it didn’t work,” the first question is the measurement window. If the answer is under six months, the prospect did not measure brand performance; they measured a window where activation is structurally favoured.

Most SMB and mid-market reporting cadences are calibrated against month-end ROAS or quarterly revenue. Both windows sit below the cusp. The result is a near-universal pattern: brand investment is approved, gets measured before its effects exist, gets cut before the second half of its return arrives, and then gets blamed for not working. The measurement window, not the channel mix, is the load-bearing decision.

Audit the window before the channel mix.

A short measurement window is a strategy-level constraint that biases all downstream decisions toward activation, regardless of stated brand intent. The audit sequence has to start with the reporting cadence — not with the budget allocation.

The cusp test

1. What is the longest time horizon on the prospect’s standard report?
2. Does any metric on that report take more than six months to materialise?
3. If “brand didn’t work” — over what window was that evaluated?

If the standard report covers a window shorter than six months and contains no leading-indicator brand metric (Share of Search, branded search volume, prompted awareness), the methodology is structurally unable to see brand performance. The measurement scorecard has to change before the budget conversation can land usefully.
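The three questions above can be reduced to a screening rule. A minimal sketch follows; the function name, field names, and verdict strings are all illustrative, not drawn from Binet & Field or any real reporting tool:

```python
# Hypothetical sketch of the cusp test as a screening function.
# CUSP_MONTHS and the metric names below are illustrative assumptions.

CUSP_MONTHS = 6
LEADING_BRAND_METRICS = {"share_of_search", "branded_search_volume", "prompted_awareness"}


def cusp_test(report_window_months: int, report_metrics: set) -> str:
    """Rough verdict on whether a reporting setup can see brand effects."""
    has_leading_indicator = bool(report_metrics & LEADING_BRAND_METRICS)
    if report_window_months >= CUSP_MONTHS:
        return "window covers the cusp: brand effects can appear in the data"
    if has_leading_indicator:
        return "short window, but leading brand indicators give partial visibility"
    return "structurally blind to brand: fix the scorecard before the budget"


# Example: a month-end ROAS report with no brand metrics
print(cusp_test(1, {"roas", "revenue"}))
```

The ordering matters: window length is checked before metrics, because a sub-cusp window with no leading indicator is the failure mode the entry describes — brand work evaluated where it cannot yet register.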

Where this definition comes from

Referenced in this entry
  1. Binet, L. & Field, P. The Long and the Short of It: Balancing Short and Long-Term Marketing Strategies. Institute of Practitioners in Advertising (IPA), 2013. Based on 996 IPA Effectiveness Awards case studies, 1980–2010.
  2. Binet, L. & Field, P. Media in Focus: Marketing Effectiveness in the Digital Era. IPA, 2017.
  3. Field, P. The B2B Effectiveness Code. LinkedIn B2B Institute, 2021. (For the B2B-specifier amendment.)

Get a diagnosis

If your reports stop at the month-end mark and your brand investment keeps getting cut before its returns show up, the measurement scorecard is the constraint — not the budget. Chris Gardner audits measurement-window mismatches personally. Findings translate into strategy — execution runs through LocaliQ when you’re ready.