Moti Radomski
VP, Product @ Skai
I’ve been here at Skai for some time now talking to marketers about the industry’s ongoing data deprecation issues, the challenges of multi-touch attribution, and, of course, how incrementality is the cookieless measurement solution marketers need. Let’s examine the subject in depth.
Incrementality testing is a testing methodology where a marketer makes a specific change to a Test Group in order to determine the incremental value of a marketing strategy or tactic.
Incrementality testing software empowers you to conduct tests with ease, delivering results in a matter of weeks with just a few clicks. The self-service nature of the solution means that you have complete control to run tests according to your specific needs and preferences.
A very simple example of an incrementality test would be:
Testing the value of Facebook ads on Search ad performance
That’s a very basic example. Incrementality can be used to test a variety of channels, tactics, and elements. In my very first post here at Skai back in January, Marketing Measurement Cannot Be Reliant on Individual Customer Journeys: Incrementality is the Fix, I went into a bit more detail:
Incrementality is the experimental process, and resulting measurement, that allows a business to identify the composition of outcomes that were the direct cause of an advertising tactic, and those that would have occurred even without it as a result of other tactics or external factors.
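The Test/Control arithmetic behind this definition can be sketched in a few lines. This is a simplified illustration with invented numbers, not Skai's methodology:

```python
# Minimal sketch of the Test/Control math behind incrementality.
# All numbers are hypothetical illustrations, not real campaign data.

def incremental_lift(test_conversions: float,
                     control_conversions: float,
                     test_size: int,
                     control_size: int) -> float:
    """Return the fraction of test-group conversions attributable
    to the tactic, after scaling the control baseline to the
    test group's size."""
    # Scale the control group's conversions to the test group's size
    # so the two are directly comparable.
    expected_baseline = control_conversions * (test_size / control_size)
    incremental = test_conversions - expected_baseline
    return incremental / test_conversions

# Example: 1,200 conversions in a 100k-person test group exposed to the
# tactic vs. 500 conversions in a 50k-person holdout control group.
lift = incremental_lift(1200, 500, 100_000, 50_000)
print(f"{lift:.0%} of test conversions were incremental")  # → 17%
```

The key idea is the counterfactual: the control group estimates what would have happened anyway, and only the gap above that baseline is credited to the tactic.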
While most marketers understand the principles of incrementality measurement’s Test/Control methodology in theory, many still have trouble wrapping their heads around what to test in the first place.
With measurement solutions like multi-touch attribution (MTA), most of the work is upfront to connect platforms and then let the MTA tool tell you the value of each of your channels. But, with Incrementality, the marketer has to figure out an area within their program to better understand, translate that question into an incrementality test, and then run the test & analyze the results.
I took a stab at explaining how to think about testing design in my post, Putting Incrementality Measurement Into Action: How to Figure Out What to Test, which is a good read for those of you that want to get into the nitty-gritty.
But, today, I wanted to take a more practical approach to explain incrementality testing. Below, you will find 12 easy-to-follow types of incrementality tests that should better illustrate how marketers can use them to answer difficult measurement questions.
1. New Tactic Test: Quantifying the value of adding a new channel, tactic, or partner into your media mix.
How can we better manage the testing of new display DSP partners?
How can we understand the impact of adding upper funnel video into the media mix?
What is the true impact of investing in Google’s latest product offering?
2. Existing Tactic Test (Holdout Test): Quantifying the value of your existing media investments.
What impact do our current Facebook investments have on category sales?
Does retargeting help grow our business? Would we have gotten those customers anyway?
How can we better tie our upper funnel investments to revenue metrics?
3. Growth Test: What would be the impact if we increased our activity/investment?
How would increasing our Online Video budget by 50% impact registration volume?
How many more leads would we get if we doubled our Instagram spend?
Would traffic to our site be impacted if we spent more on paid search?
4. Reduction Test: What would be the impact if we decreased our activity/investment?
What would happen to repeat purchase volume if we cut back on our Display Retargeting investments?
If we slashed our upper funnel video budgets, would lead volume be affected?
How would our RoAS change if we invested 30% less in our ABM programs?
5. Investment-Level Test: What is the right level of budget for maximum impact & efficiency?
How much should we be investing in Non-Brand Search to maximize our ROI?
What is the right level of investment in Snapchat to reach 100k new customers this year?
How can I figure out what my CTV budget should be next year to achieve our target of net-new customers?
6. Cross-Platform Tests: How does one channel’s activity/investment impact another channel?
How can we better understand the impact of Facebook & Pinterest on total revenue?
Does my new out-of-home marketing program increase email’s ability to drive more traffic to our website?
How much should I spend on YouTube to best increase organic search conversions?
7. Cross-Channel Tests: Measuring the impact of multiple channels working together to achieve an objective.
How can we better understand the value of our online and offline shopper marketing?
What is the interplay of organic and paid search to drive revenue on my site?
Does spending more on both email and direct mail increase conversions from my digital advertising?
8. Partner Tests: Investigate which partners work best for your programs.
How can we quantify the efficiency of our various DSP partners?
Which social advertising provider generates leads more efficiently: Pinterest or Snapchat?
Should I switch from one programmatic data vendor to another to get better results?
9. Audience Tests: Analyze your audience strategy effectiveness.
Do we need to be investing in both prospecting & retargeting? Which is adding greater value for us?
Is this segment targeted best with email or with social advertising?
Is the additional cost of data segmentation offset by increased results?
10. Creative Tests: Discover which creative and copy directions work best.
Which of Pinterest’s creative formats drives the greatest sales from new customers?
How well does my new promotional campaign work to generate sales?
What call-to-action is most effective at driving customers beyond the initial inquiry?
11. Online-to-Offline Tests: Measuring how online activity drives offline outcomes, such as store visits and inbound calls.
How can we better understand the impact of Google & Microsoft shopping on both online and in-store sales?
Do my app marketing ads drive calls to my call center?
What is the impact of my Facebook ads on in-store visits?
12. Offline-to-Online Tests: Measuring how offline media drives online outcomes, such as site registrations and app orders.
What is the impact of our direct mail campaigns on my website’s new registrations?
Which radio spots lift sales at my online retail partners?
What is the value of my out-of-home advertising on my app orders?
As more and more data deprecation challenges impact the ability of digital marketers to track advertising effectiveness, multi-touch attribution (MTA) continues to look like it will not be the measurement solution for the future.
Incrementality testing’s cookieless approach helps brand and agency marketers solve these new data challenges.
Once integrated, the MTA tool does all of the rest. It tracks all exposures, connects the dots of the customer journeys, does the number crunching, and spits out the new attributed metrics. Under MTA, the analyst spends most of their time simply figuring out the complicated puzzle of building integrations and maintaining them.
This isn’t the role that your math-adept, analytically creative professionals should be doing!
But, with incrementality testing, your best and brightest will spend much more time analyzing the strategic puzzles rather than solving how to best integrate one ad tech tool into another. They will be able to spend more time figuring out how best to test, measure, and analyze results to figure out what is working—and not working—with your advertising investments.
To help better illustrate this point, let’s check out a fictional day in the life of an agency marketer, Mary, who is helping to drive client value with an incrementality testing practice.
Mary has just come back from her run around her neighborhood, brews a cup of coffee, and sits down to start her day. The very first thing Mary does every morning is to check in on how her current incrementality tests are doing. She wants to make sure that nothing looks wonky. She knows that plans change, ad tech can have hiccups, marketers can accidentally change their campaign settings, etc.
Having checked her live tests and seeing that everything looks in order, she begins checking out the initial results. Sometimes the agency leads on their accounts like to check in and see if there’s anything interesting to share with their brand clients.
Test #1 is trying to determine the incremental value of a brand’s YouTube video ads on the overall program. These ads are generally very upper funnel, and their effectiveness is often hard to determine; even MTA has trouble tracking them back to conversion activity. But, with incrementality testing, individual customer journeys don’t matter. The test is already showing that YouTube is performing much better than assumed.
Test #2 is a critical one for an app-based gaming client whose sales have been negatively impacted by the iOS 14 changes earlier this year. Facebook has been an essential channel for years, but now the agency team is having a lot of trouble figuring out which of the new creative directions are actually working, and has even been discussing doing the unthinkable: reducing their investment in iOS users. Right now, the Earn a Free Badge campaign is outperforming the others, and iOS users are proving to be quite profitable.
Mary shoots off a quick email to the lead on the account to let them know that even though the five-week test isn’t over yet, if they need to make a fast decision, they should push that campaign right now—but not in the handful of regions where the tests are running. If Mary is correct, the client will get some much-needed revenue this week.
Test #3 is an offline/online experiment to determine if the CPG client should be pushing customers to stores or online retailer partners. This is very important because the uncertainty around the next six months of the pandemic creates some hesitation on which way to go.
However, if the extensive fall marketing program can effectively drive ecommerce sales, it makes sense to keep investing in that direction given the challenges of selling in-store. This test just launched a few days ago, so the results are inconclusive right now. The client is on the cusp of having a tough quarter, and this test may be their last chance to save the year.
Mary meets with some of her agency colleagues interested in running their first incrementality test for a large travel client. The client has started seeing traction recently, and its CMO has gotten the green light to make a big marketing push. However, before they implement the plan, the team thought it would be a good idea to do a few tests to figure out where best to spend the entire budget.
This is the first time the team has done an incrementality test, but they’ve heard about great results from other groups at the agency, so they are ready to try it. They’ve invited the CMO and his team to the Zoom call to learn more. Mary presents her Incrementality Testing 101 deck and fields questions from the client. The meeting is very encouraging. The CMO understands the need for a cookieless measurement approach and has been reading about incrementality testing for the last few months. He’s ready to get started and was happy to learn that his agency offers it.
One of the most critical parts of Mary’s role is to help translate business problems into incrementality tests. She listens as the CMO and her agency lead explain some of the existing challenges they face and the essential questions they have about their marketing programs. Like many companies, they struggle with quantifying the online/offline connection, measuring upper-funnel effectiveness, and other common issues that MTA could never fully solve.
Mary listens closely and makes notes. Then, when it’s her turn to speak, she asks the questions that will help her zero in on the business metrics and desired outcomes they want to achieve with their upcoming marketing push. After fully understanding these variables, Mary promises to have some ideas for the team to review by the end of the day. Translating business problems to incrementality tests can be tricky, but if done correctly, a few monthly tests can offer critical insights to help guide planning and optimization.
After a quick lunch in her home, Mary walks her dog and contemplates the upcoming monthly measurement meeting. She makes it back home right before the Zoom call begins.
The monthly measurement meeting is attended by the other analysts at the agency and some of its key leaders: various Senior Account Directors (ADs), the VP of Client Services, and their key stakeholder, the agency’s holding company’s SVP of Strategy.
The meeting’s agenda is the same each time. First, they discuss any significant issues facing the agency and how their incrementality practice can help solve those challenges. The VP of Client Services lets them know that their big, global retail client has become a bit unhappy with the recent results of their latest promotional campaign.
To make sure they can nail the next campaign, one of Mary’s fellow analysts offers some ideas on how a few incrementality tests could be run to determine the best creative strategy across channels as the recent new direction doesn’t seem to be resonating with customers. The team agrees and tasks the analyst with developing that program in conjunction with the agency team working on that client.
In the next part of the meeting, each analyst quickly presents a slide with recent learnings from the previous months’ tests. Some of the ADs ask probing questions as they’re still getting up to speed with how incrementality works and how to best train their teams to explain the ins and outs to clients.
The remainder of the meeting is spent discussing upcoming tests, both so that everyone knows what’s going on across the agency as it relates to the new measurement practice and so that they can pick each other’s brains and offer suggestions on how to keep improving their programs.
The meeting lasts an hour and a half, and the SVP of Strategy closes with some words of encouragement and a note that she will report the solid progress at the holding company leadership team meeting next week.
Mary finishes a few critical emails and joins her next meeting on time. Some of the other agencies in the holding company have been curious about what her agency has been doing with its incrementality program, so Mary’s boss “volunteered” her to meet with some of their senior analysts to discuss the topic.
Data deprecation and its impact on measurement have been on everyone’s minds for the last few years, but each agency has been dealing with it differently.
Mary offers her advice when asked and is interested to learn more about where the other agencies are finding challenges. Then, she makes a few notes to share with her team lead at her next one-on-one weekly meeting on Friday.
Mary races to the end of her day. Checking her list, she remembers she needs to brainstorm some test ideas based on the business challenges the client shared in the 11:00 am meeting.
One of the critical things the CMO has struggled with for years is the online/offline connection. The brand knows that these channels must influence each other, but without proper measurement in place, they continue to operate in silos and cannot take advantage of the synergy. Even worse, the CMO worries that their disparate strategies are not just wasting precious budget that could go elsewhere, but that their siloed online and offline marketing might actually be working against each other.
So, Mary decides this is where she should start.
Mary uses Skai’s Impact Navigator solution to analyze the client’s historical marketing performance data to develop three geo-split pairings so they can run the next set of incrementality tests in paired Test and Control markets.
Of course, in all of the Control Groups, advertising spending will continue as already planned, without any changes. For each test cell, Mary will measure the impact of the shift in digital media investment on total sales and revenue. Although the client asked many questions about measuring success, she thinks the impact on total revenue is an excellent place to start; they can design further tests to dig even deeper once they have initial results.
Because the tests will only run for a month or slightly longer, there’s limited investment and disruption to the media plan. Mary knows that the “test budgets” will also drive awareness, traffic, and sales so that dollars won’t be wasted simply for testing.
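Once a test like this concludes, the readout for a single test cell might resemble a difference-in-differences calculation, where the control market's growth supplies the counterfactual baseline for the test market. All figures below are invented for illustration:

```python
# Hedged sketch of a geo test's revenue readout: the control market
# forecasts the test market's baseline, and any gap during the test
# window is credited as incremental. All dollar figures are invented.

def geo_incremental_revenue(test_pre: float, test_during: float,
                            control_pre: float, control_during: float) -> float:
    """Difference-in-differences style readout: scale the test market's
    pre-period revenue by the control market's growth, then compare."""
    control_growth = control_during / control_pre
    expected_test = test_pre * control_growth  # counterfactual baseline
    return test_during - expected_test

# The test market grew from $500k to $640k while the control market
# grew from $480k to $528k (a 10% seasonal rise that would have
# happened anyway).
incremental = geo_incremental_revenue(500_000, 640_000, 480_000, 528_000)
print(f"Incremental revenue: ${incremental:,.0f}")  # → $90,000
```

The point of the scaling step is to strip out seasonality and market-wide trends: only revenue above what the control market's trajectory predicts gets attributed to the media change.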
After finishing her analysis, Mary sends an email to her agency colleague with a quick explanation so he can share it with the client. It’s time to break for the day and enjoy an early dinner with her family.
Skai’s VP of Retail Media and Head of Industry Leads, Kevin Weiss, shared key insights he presented at the recent Path to Purchase Institute event. The conversation revolved around crucial topics such as optimizing retail media investments for maximum incrementality, ambitious growth objectives, and the latest trends in retail media capabilities. Additionally, Kevin provided valuable insights into the metrics that truly matter and the cutting-edge methodologies for conducting retail media incrementality testing.
Many advertisers will find it easy to prove incrementality, not because they have a magic solution but because they have a subjective opinion on the confidence level required to satisfy their definition of proof. Other advertisers will find it impossible to prove incrementality, not because they aren’t equipped to do so but because they cannot meet the data science rigor that incrementality testing imposes on marketers.
I suggest looking at retail media as an imperfect environment for testing. In that sense, scrutinize the claims you hear, educate yourself on what incrementality means to each stakeholder, seek out new data sources, form hypotheses, conduct tests with as much scientific rigor as possible, and ensure that test-and-learn spend is included in your budgets as its own line item.
In an ecosystem like retail media, where FOMO runs high, the pros often seem to outweigh the cons when there’s a debate about using new data sources.
In the past, advertisers grappled with limited access to data, hindering their ability to gain a comprehensive understanding of the customer journey. However, as per the findings from our recent 2023 State of Retail Media survey, it seems that many advertisers are now satisfied with the capabilities offered by various retail media networks.
Imagine being able to demonstrate that one of the touchpoints in your customer’s journey is driving brand interest and accurately measure its contribution. This is precisely where incrementality on Amazon with Amazon Marketing Cloud (AMC), Amazon’s data clean room, becomes indispensable.
AMC shifts away from the limited perspective of last-touch attribution, which tends to overemphasize lower-funnel ad expenditures. Instead, it provides a holistic view of the shopper’s journey.
Incrementality testing can span channels, tactics, and business outcomes. The following are just some examples of what you can test with an incrementality program. If you don’t see a marketing element that you wish to measure, just ask us. Chances are it can be part of an incrementality test.
| Channel | Example Partners & Publishers |
| --- | --- |
| Search | Google, Bing, Yahoo!, Skai, SA360, Adobe, Acquisio, WordStream, AdMedia |
| Social | Facebook, Instagram, Twitter, Snap, LinkedIn, Pinterest, Smartly.io, 4C, TikTok, Reddit |
| Display | The Trade Desk, MediaMath, Adobe, DataXu, Amobee, Google, Verizon, Criteo, TripleLift, Kargo, Nativo, Amazon DSP |
| Video | YouTube, Teads, YuMe, BrightRoll, Adobe, Videology, MediaMath, Amobee, DataXu, Tremor, The Trade Desk, Amazon DSP |
| Apps | Facebook, Google, Apple, Skai, Bidalgo, AppsFlyer, Singular, Branch, Adjust |
| CTV / OTT | Hulu, YouTube, Roku, Sling TV, Verizon, Comcast, AT&T, Charter, Sky, Dish, Sony, Adobe, The Trade Desk |
| Offline | Out of Home, Direct Mail, Audio |
| Other Channels | Email, Affiliate, Mobile Navigation Apps (Waze) |
Skai’s Impact Navigator measures the real-world effectiveness, or incrementality, of a marketing tactic in the only place that matters: the real world, with real people, as part of a real marketing test measuring business outcomes like revenue impact, customer acquisition, and brand engagement.
Leave the guesses and hunches to your competition and speed ahead with your growth strategy powered by real consumer insights.
“With Skai’s Impact Navigator, we can measure the real impact in the marketplace before starting significant investment, and focus on where there’s really incrementality.”
Marina Casas, Advertising Specialist – Privalia [Read the case study]
For more information or to see a brief demo of Impact Navigator, reach out today.