Tomer Shadi
Senior Product Manager
The first post in the Skai Blog’s Measuring Up series examined the pressure advertisers face to quickly identify the products and methods that deliver the most impactful business outcomes, surveying use cases for widely adopted attribution strategies as well as incrementality testing. In this installment, we’ll discuss flaws inherent to attribution measurement: as valuable as attribution is, it falls short in some scenarios and is often best accompanied by complementary tools. We’ll also look at how incrementality testing can overcome the shortcomings of common attribution methods, to help you pick the strategies that best meet your measurement needs!
Attribution uses discrete models to correlate advertising investments with resulting conversions. A huge advantage of attribution modeling is data continuity, meaning that a brand doesn’t need to alter marketing plans mid-stream in order to achieve actionable insights—a well-formulated model can account for regular fluctuations and realignments. Data can be narrowed down to individual keywords and ads, allowing for refinement at different points in the conversion funnel and across cross-channel consumer journeys.
While attribution has proven a largely scalable solution across verticals, it also poses a new set of reporting challenges. Today we’ll review five common pitfalls of attribution, and consider ways to combat them with incrementality testing.
Advertisers must be able to account for the full consumer journey in order to associate ads with their impact. But consumer journeys can be difficult to track, especially those which span multiple channels and devices! An ideal attribution model identifies individual users across devices and channels; when that isn’t feasible, data association is compromised. With incomplete measurement, modeling is inaccurate at best, and walled-garden channels that refuse to expose individual data points create blind spots that break attribution chains entirely.
Many advertisers have reason to believe that mobile advertising is, in fact, more effective than their reporting indicates, and is undervalued due to insufficient tracking. Linear measurement often overlooks a large volume of conversions which begin on mobile but finish on desktop devices. This imprecision is particularly evident with mobile-heavy publishers such as Facebook, Snap, Pinterest, and Twitter. Similarly, a lack of sufficient data for measuring video impact has led marketers to believe that video promotions are regularly undervalued.
In these scenarios, incrementality testing proves particularly advantageous: by focusing on an individual investment and directly measuring its impact on overall business results, it eliminates the need to identify and measure each step of the consumer journey. Incrementality testing delivers cause-and-effect measurement, eliminating guesswork!
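To make the cause-and-effect arithmetic concrete, here is a minimal sketch of a conversion-rate lift calculation. All audience sizes and conversion counts below are hypothetical, for illustration only:

```python
# Minimal incrementality arithmetic: compare conversion rates between a
# randomly assigned test group (shown the ad) and a holdout control group.
# All figures are hypothetical, for illustration only.

test_users, test_conversions = 100_000, 2_300        # exposed to the campaign
control_users, control_conversions = 100_000, 2_000  # held out

test_rate = test_conversions / test_users            # 2.30%
control_rate = control_conversions / control_users   # 2.00%

# Absolute lift: conversions caused by the ad, per exposed user
absolute_lift = test_rate - control_rate             # 0.30 percentage points

# Relative lift: how much the campaign grew the baseline
relative_lift = absolute_lift / control_rate         # +15%

# Incremental conversions attributable to the campaign
incremental = absolute_lift * test_users             # ~300 conversions

print(f"Absolute lift: {absolute_lift:.2%}, relative lift: {relative_lift:.1%}, "
      f"incremental conversions: {incremental:.0f}")
```

Because the control group experiences everything except the ad, the difference between the two groups isolates the ad’s causal contribution—no journey reconstruction required.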
Any path to conversion with multiple steps creates uncertainty about how, and how much, each step contributed to the final purchase. Competing attribution models tell conflicting stories about how each step ties to the end action. For example, first-click and last-click models will produce completely different measurements for the same journey! Today, many companies even recruit AI teams to develop in-house attribution models based on machine learning. While this can yield greater efficiency and more robust data sets, it also leads to even more subjective models. Moreover, these models can be considered black boxes, given their sophisticated, opaque logic.
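A quick sketch shows just how differently common rule-based models split credit for the same conversion. The touchpoint path and conversion value below are invented for illustration:

```python
# How three common rule-based attribution models credit the same
# hypothetical journey. The path and conversion value are invented.

path = ["display", "paid_search", "social", "email"]  # touchpoints, in order
conversion_value = 100.0

def first_click(path, value):
    """All credit to the first touchpoint."""
    return {path[0]: value}

def last_click(path, value):
    """All credit to the final touchpoint before conversion."""
    return {path[-1]: value}

def linear(path, value):
    """Equal credit to every touchpoint."""
    share = value / len(path)
    credits = {}
    for channel in path:
        credits[channel] = credits.get(channel, 0) + share
    return credits

print("first-click:", first_click(path, conversion_value))  # display gets 100
print("last-click:", last_click(path, conversion_value))    # email gets 100
print("linear:", linear(path, conversion_value))            # 25 each
```

Same journey, three very different verdicts on which channel “worked”—and that’s before any machine-learned weighting enters the picture.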
As advertisers seek the newest and most advanced attribution modeling available, older models quickly become obsolete. In recent years, the breadth of attribution models available has given data teams additional challenges as they struggle to coordinate disparate reporting across industries, companies, teams, and even time periods!
In instances in which attribution’s subjectivity is a cause for concern, incrementality testing lends additional assurance. By testing a specific investment in the customer journey, incrementality testing can measure its direct impact and halo effects on the ecosystem of investments without making assumptions based on black box modeling. The result is both transparent and empirical!
A key benefit of attribution is that it accounts for ongoing business investments and fluctuations such as budget changes, holidays, and special events. However, those same fluctuations inevitably affect the measured business outcomes. This holistic analysis, assessing an array of inputs, leaves room for ambiguity regarding the actual value of individual investments and impressions, and tends to mistake correlation for causation. For companies and industries subject to seasonality, how can advertisers determine which growth results from their campaigns rather than from external factors?
This gap frequently manifests in upper-funnel actions and initial user interactions. For instance, attribution often credits a high volume of impressions to display ads, which can result in an exaggerated evaluation of their efficacy. In 2017, The New York Times reported that Chase saw nearly indistinguishable results between a promotion that served display ads on 400,000 websites and another that served ads on only 5,000 sites! That 400,000 sites could have negligible additional impact compared to 5,000 suggests a disconnect between attributed credit and the actual effect of those display ads.
Incrementality testing can help combat this ambiguity by isolating individual investments and standardizing extraneous parameters, such that measurement is agnostic of influences like seasonality, geography, and cross-marketing.
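One common way to standardize those parameters is a matched-market design: pair each test geography with a control geography whose baseline behavior already tracks closely, so that seasonality and regional effects hit both sides equally and cancel out. A minimal pairing sketch, with invented pre-period sales figures, might look like this:

```python
# Matched-market pairing: choose the control geo whose pre-period baseline
# most closely tracks the test geo, so shared factors (seasonality, macro
# trends) affect both sides equally. All sales figures are invented.

pre_period_sales = {
    "Denver": [120, 131, 118, 140],
    "Portland": [122, 129, 121, 138],
    "Austin": [210, 225, 205, 240],
    "Nashville": [208, 222, 209, 236],
}

def distance(a, b):
    """Sum of squared differences between two weekly sales series."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def best_match(test_geo, candidates):
    """Pick the candidate geo with the most similar pre-period baseline."""
    return min(candidates, key=lambda g: distance(pre_period_sales[test_geo],
                                                  pre_period_sales[g]))

test_geo = "Denver"
controls = [g for g in pre_period_sales if g != test_geo]
print(test_geo, "->", best_match(test_geo, controls))  # Portland
```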
Successful attribution requires comprehensive data for engagement and conversion actions in order to correlate the domain in which the advertisement is delivered with the domain in which the conversion occurs. This is often feasible for fully online conversion funnels, but funnels with offline components, especially traditional formats such as TV, radio, and billboards, are complicated by the inability to measure impressions and engagement. In these cases, whatever statistical insights can be extracted cannot be conclusively connected to specific consumers, making it virtually impossible to isolate an offline promotion’s true impact.
The resulting gap often means that conversions are inadequately attributed to offline ads, understating the value of offline channels. In this case, incrementality testing makes for a strong supplementary tactic because it allows advertisers to measure the difference between a test group, which has been exposed to the offline ad, and a control group, which has not.
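For instance, after running a holdout for an offline campaign, a standard two-proportion z-test can indicate whether the exposed group’s conversion rate is meaningfully higher than the control’s. The counts in this sketch are hypothetical:

```python
import math

# Two-proportion z-test on hypothetical holdout results: did the group
# exposed to the offline ad convert at a significantly higher rate?

exposed_n, exposed_conv = 50_000, 1_150  # saw the TV/radio/billboard ad
control_n, control_conv = 50_000, 1_000  # held out

p1 = exposed_conv / exposed_n  # 2.30%
p2 = control_conv / control_n  # 2.00%

# Pooled proportion under the null hypothesis of "no lift"
p_pool = (exposed_conv + control_conv) / (exposed_n + control_n)
se = math.sqrt(p_pool * (1 - p_pool) * (1 / exposed_n + 1 / control_n))

z = (p1 - p2) / se
print(f"lift: {p1 - p2:.2%}, z-statistic: {z:.2f}")
# |z| > 1.96 corresponds to p < 0.05 (two-sided): evidence of real lift
```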
Attribution modeling attempts to identify causality between conversion actions and preexisting investments. However, as new publishers and advertising methods enter the consumer funnel, marketers need a way to evaluate investments before committing to them. Because attribution models are historical in nature and therefore lack sufficient data to perform these calculations, advertisers are left to make investments and measure the impact afterward rather than in advance.
In 2017, Skai launched full support for Pinterest campaigns. While some advertisers adopted it immediately, others questioned the value of investing in a new channel. But by using small budgets to test efficacy within established statistical criteria, clients were able to quantify the effect of introducing a new platform into their consumer journeys! Using an incrementality test, Skai clients Belk and iCrossing determined that Pinterest advertising increased their online ROAS by 2.9x and in-store ROAS by 31.4x!
The power of incrementality testing for future investments stems from its ability to assess a new investment’s value with statistical confidence while using only a small budget.
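As a back-of-the-envelope check, the standard sample-size formula for comparing two proportions shows roughly how small a test can be while still detecting a given lift with confidence. The baseline rate and target lift below are hypothetical:

```python
import math

# Rough sample size per group needed to detect a conversion-rate lift
# with 95% confidence and 80% power. Baseline and lift are hypothetical.

baseline = 0.020  # 2.0% control conversion rate
expected = 0.023  # 2.3% with the new channel (a 15% relative lift)

z_alpha = 1.96    # two-sided 95% confidence
z_beta = 0.84     # 80% power

variance = baseline * (1 - baseline) + expected * (1 - expected)
n = (z_alpha + z_beta) ** 2 * variance / (baseline - expected) ** 2

print(f"~{math.ceil(n):,} users per group")  # roughly 37,000 per group
```

At typical digital CPMs, reaching tens of thousands of users per group is well within “small budget” territory, which is exactly why channel pilots like the Pinterest tests above are feasible.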
Conscious of these gaps, advertisers tasked with making decisions based on attribution face even more questions. How can attribution be directly translated to user actions? Can models be adjusted during the course of a promotion? Can advertisers determine which scenarios justify switching models?
We have seen a shift over the last three years in the way marketers evaluate attribution modeling and results. Whereas a decade ago marketers considered full funnels measurable and data accurate and actionable, today they feel comparatively limited, with mobile in particular disrupting measurement and necessitating new solutions. Proactive advertisers today will often combine attribution with incrementality testing. By employing both philosophies, marketers can use incrementality testing to validate attribution measurement and adjust modeling to better measure true ad efficacy.
During our annual K8 conference, we introduced Skai Testing Services to help marketers enter the world of incrementality testing supported by our experienced team of research marketers. Our team consists of data scientists who are experts in digital advertising across verticals, with experience scaling incrementality testing across platforms and strategies. If your team faces challenges related to testing, attribution, or measurement, we’re eager to help and would love to connect!
Want to learn more about incrementality testing in the meantime? Check out our case studies for more info.
Stay tuned to the Skai Blog for the next post in the Measuring Up series where we’ll dive into the world of incrementality testing to better understand differences between incrementality and A/B testing, and how to increase confidence levels!