I’ve been here at Skai for some time now talking to marketers about the ongoing industry data deprecation issues, the challenges of multi-touch attribution, and of course, about how incrementality is the cookieless measurement solution marketers need. Let’s examine the subject in depth.
TABLE OF CONTENTS
- What is Incrementality Testing?
- What are Examples of Incrementality Testing?
- Day in the Life of a Measurement Marketer
- Examples of Marketing Programs To Measure
What is Incrementality Testing?
Incrementality testing is a methodology in which a marketer makes a specific change for a Test Group, while leaving a matched Control Group unchanged, in order to determine the incremental value of a marketing strategy or tactic.
What is an Incrementality Testing Example?
A very simple example of an incrementality test would be:
Testing the value of Facebook ads on Search ad performance
- Two equal sets of markets are chosen:
  - Control Group: a set of markets with only Paid Search running (the campaign as it is today)
  - Test Group: a set of markets with Paid Search and Facebook ads running (the campaign with Facebook added to the mix)
- A test period is set (typically 3-4 weeks)
- Once the test is over, it is determined that the Test Group performed X% better when running with Facebook ads
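The lift calculation behind a test like this can be sketched in a few lines of Python. This is a simplified illustration with made-up numbers, not Skai’s actual methodology:

```python
# Hypothetical weekly conversions per market group during a 4-week test.
# Control: Paid Search only; Test: Paid Search + Facebook ads.
control_conversions = [1200, 1150, 1230, 1180]  # invented example data
test_conversions = [1390, 1340, 1410, 1365]     # invented example data

control_total = sum(control_conversions)
test_total = sum(test_conversions)

# Incremental outcomes: conversions beyond what the Control Group suggests
# would have happened anyway, expressed as a percentage of the control.
incremental = test_total - control_total
lift_pct = 100 * incremental / control_total

print(f"Incremental conversions: {incremental}")
print(f"Lift: {lift_pct:.1f}%")
```

In practice the test and control markets must be matched on historical performance for this comparison to be meaningful; the subtraction itself is the easy part.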
That’s a very basic example. Incrementality can be used to test a variety of channels, tactics, and elements. In my very first post here at Skai back in January, Marketing Measurement Cannot Be Reliant on Individual Customer Journeys: Incrementality is the Fix, I went into a bit more detail:
Incrementality is the experimental process, and resulting measurement, that allows a business to identify the composition of outcomes that were the direct cause of an advertising tactic, and those that would have occurred even without it as a result of other tactics or external factors.
Incrementality requires testing. How do you choose what to test?
While most marketers understand the principles of the Test/Control methodology of incrementality measurement in theory, many still have trouble deciding what to test in the first place.
With measurement solutions like multi-touch attribution (MTA), most of the work is upfront to connect platforms and then let the MTA tool tell you the value of each of your channels. But, with Incrementality, the marketer has to figure out an area within their program to better understand, translate that question into an incrementality test, and then run the test & analyze the results.
What are Examples of Incrementality Testing?
I took a stab at explaining how to think about testing design in my post, Putting Incrementality Measurement Into Action: How to Figure Out What to Test, which is a good read for those of you that want to get into the nitty-gritty.
But, today, I wanted to take a more practical approach to explain incrementality testing. Below, you will find 12 easy-to-follow types of incrementality tests that should better illustrate how marketers can use them to answer difficult measurement questions.
Testing the Impact of One Media Decision
1. New Tactic Test: Quantifying the value of adding a new channel, tactic, or partner into your media mix.
How can we better manage the testing of new display DSP partners?
How can we understand the impact of adding upper funnel video into the media mix?
What is the true impact of investing in Google’s latest product offering?
2. Existing Tactic Test (Holdout Test): Quantifying the value of your existing media investments.
What impact do our current Facebook investments have on category sales?
Does retargeting help grow our business? Would we have gotten those customers anyway?
How can we better tie our upper funnel investments to revenue metrics?
3. Growth Test: What would be the impact if we increased our activity/investment?
How would increasing our Online Video budget by 50% impact registration volume?
How many more leads would we get if we doubled our Instagram spend?
Would traffic to our site be impacted if we spent more on paid search?
4. Reduction Test: What would be the impact if we decreased our activity/investment?
What would happen to repeat purchase volume if we cut back on our Display Retargeting investments?
If we slashed our upper funnel video budgets, would lead volume be affected?
How would our ROAS change if we invested 30% less in our ABM programs?
Testing the Impact of Multiple Media Decisions
5. Investment-Level Test: What is the right level of budget for maximum impact & efficiency?
How much should we be investing in Non-Brand Search to maximize our ROI?
What is the right level of investment in Snapchat to reach 100k new customers this year?
How can I figure out what my CTV budget should be next year to achieve our target of net-new customers?
6. Cross-Platform Tests: How does one channel’s activity/investment impact another channel?
How can we better understand the impact of Facebook & Pinterest on total revenue?
Does my new out-of-home marketing program increase email’s ability to drive more traffic to our website?
How much should I spend on YouTube to best increase organic search conversions?
7. Cross-Channel Tests: Measuring the impact of multiple channels working together to achieve an objective.
How can we better understand the value of our online and offline shopper marketing?
What is the interplay of organic and paid search to drive revenue on my site?
Does spending more on both email and direct mail increase conversions from my digital advertising?
8. Partner Tests: Investigating which partners work best for your programs.
How can we quantify the efficiency of our various DSP partners?
Which social advertising provider generates leads more efficiently: Pinterest or Snapchat?
Should I switch from one programmatic data vendor to another to get better results?
9. Audience Tests: Analyzing the effectiveness of your audience strategy.
Do we need to be investing in both prospecting & retargeting? Which is adding greater value for us?
Is this segment targeted best with email or with social advertising?
Is the additional cost of data segmentation offset by increased results?
10. Creative Tests: Discovering which creative and copy directions work best.
Which of Pinterest’s creative formats drives the greatest sales from new customers?
How well does my new promotional campaign work to generate sales?
What call-to-action is most effective at driving customers beyond the initial inquiry?
Testing Omnichannel Impact
11. Online-to-Offline Tests.
How can we better understand the impact of Google & Microsoft shopping on both online and in-store sales?
Do my app marketing ads drive calls to my call center?
What is the impact of my Facebook ads on in-store visits?
12. Offline-to-Online Tests.
What is the impact of our direct mail campaigns on my website’s new registrations?
Which radio spots lift sales in my online retail partners?
What is the value of my out-of-home on my app orders?
As more and more data deprecation challenges impact the ability of digital marketers to track advertising effectiveness, multi-touch attribution (MTA) continues to look like it will not be the measurement solution for the future.
Incrementality testing’s cookieless approach helps brand and agency marketers solve these growing data challenges.
Marketing measurement professionals are going to love incrementality testing vs. MTA
With MTA, the analyst’s work is front-loaded: most of their time goes into the complicated puzzle of building platform integrations and maintaining them. Once integrated, the MTA tool does the rest. It tracks exposures, connects the dots of the customer journeys, crunches the numbers, and spits out the new attributed metrics.
This isn’t the role that your math-adept, analytically creative professionals should be doing!
But, with incrementality testing, your best and brightest will spend much more time analyzing the strategic puzzles rather than solving how to best integrate one ad tech tool into another. They will be able to spend more time figuring out how best to test, measure, and analyze results to figure out what is working—and not working—with your advertising investments.
To help better illustrate this point, let’s check out a fictional day in the life of an agency marketer, Mary, who is helping to drive client value with an incrementality testing practice.
Day in the Life of a Measurement Marketer
9:00 am: Morning check-in
Mary has just come back from her run around her neighborhood, brews a cup of coffee, and sits down to start her day. The very first thing Mary does every morning is to check in on how her current incrementality tests are doing. She wants to make sure that nothing looks wonky. She knows that plans change, ad tech can have hiccups, marketers can accidentally change their campaign settings, etc.
Having checked her live tests and seeing that everything looks in order, she begins checking out the initial results. Sometimes the agency leads on their accounts like to check in and see if there’s anything interesting to share with their brand clients.
Test #1 is trying to determine the incremental value of a brand’s YouTube video ads on the overall program. These ads are generally very upper funnel, and their effectiveness is often hard to determine; even MTA has trouble tracking them back to conversion activity. But, with incrementality testing, individual customer journeys don’t matter. The test is already showing that YouTube is performing much better than assumed.
Test #2 is a critical one for an app-based gaming client whose sales have been negatively impacted by the iOS14 changes earlier this year. Facebook has been an essential channel for years, but now the agency team is having a lot of trouble figuring out which of the new creative directions are actually working, and it has even discussed doing the unthinkable: reducing investment in iOS users. Right now, the Earn a Free Badge campaign is outperforming the others, and iOS users are proving to be quite profitable.
Mary shoots off a quick email to the lead on the account to let them know that even though the five-week test isn’t over yet, if they need to make a fast decision, they should push that campaign right now—but not in the handful of regions where the tests are running. If Mary is correct, the client will get some much-needed revenue this week.
Test #3 is an offline/online experiment to determine if the CPG client should be pushing customers to stores or online retailer partners. This is very important because the uncertainty around the next six months of the pandemic creates some hesitation on which way to go.
However, if the extensive fall marketing program can effectively drive ecommerce sales, it makes sense to keep investing in that direction given the challenges of selling in-store. This test just launched a few days ago, so the results are inconclusive right now. The client is on the cusp of having a tough quarter, and this test may be their last chance to save the year.
11:00 am: Client meeting
Mary meets with some of her agency colleagues interested in running their first incrementality test for a large travel client. The client has started seeing traction recently, and its CMO has gotten the green light to make a big marketing push. However, before they implement the plan, the team thought it would be a good idea to do a few tests to figure out where best to spend the entire budget.
This is the first time the team has done an incrementality test, but they’ve heard about great results from other groups at the agency, so they are ready to try it. They’ve invited the CMO and his team to the Zoom call to learn more. Mary presents her Incrementality Testing 101 deck and fields questions from the client. The meeting is very encouraging. The CMO understands the need for a cookieless measurement approach and has been reading about incrementality testing for the last few months. He’s ready to get started and was happy to learn that his agency offers it.
One of the most critical parts of Mary’s role is to help translate business problems into incrementality tests. She listens as the CMO and her agency lead explain some of the existing challenges they face and the essential questions they have about their marketing programs. Like many companies, they struggle with quantifying the online/offline connection, measuring upper-funnel effectiveness, and other common issues that MTA could never fully solve.
Mary listens closely and makes notes. Then, when it’s her turn to speak, she asks the questions that will help her zero in on the business metrics and desired outcomes they want to achieve with their upcoming marketing push. After fully understanding these variables, Mary promises to have some ideas for the team to review by the end of the day. Translating business problems to incrementality tests can be tricky, but if done correctly, a few monthly tests can offer critical insights to help guide planning and optimization.
1:00 pm: Monthly measurement meeting
After a quick lunch in her home, Mary walks her dog and contemplates the upcoming monthly measurement meeting. She makes it back home right before the Zoom call begins.
The monthly measurement meeting is attended by the other analysts at the agency and some of its key leaders: various Senior Account Directors (ADs), the VP of Client Services, and their key stakeholder, the agency’s holding company’s SVP of Strategy.
The meeting’s agenda is the same each time. First, they discuss any significant issues facing the agency and how their incrementality practice can help solve those challenges. The VP of Client Services lets them know that their big, global retail client has become a bit unhappy with the recent results of their latest promotional campaign.
To make sure they can nail the next campaign, one of Mary’s fellow analysts offers some ideas on how a few incrementality tests could be run to determine the best creative strategy across channels as the recent new direction doesn’t seem to be resonating with customers. The team agrees and tasks the analyst with developing that program in conjunction with the agency team working on that client.
In the next part of the meeting, each analyst quickly presents a slide with recent learnings from the previous months’ tests. Some of the ADs ask probing questions as they’re still getting up to speed with how incrementality works and how to best train their teams to explain the ins and outs to clients.
The remainder of the meeting is spent discussing upcoming tests, both so that everyone knows what’s going on across the agency’s new measurement practice and so that they can pick each other’s brains for suggestions on how to keep improving their programs.
The meeting lasts an hour and a half, and the SVP of Strategy offers some closing words of encouragement, noting that she will report the solid progress at the holding company leadership team meeting next week.
3:00 pm: Knowledge sharing
Mary finishes a few critical emails and joins the meeting on time. Some of the other agencies in the holding company have been curious about what her agency has been doing with its incrementality program, so Mary’s boss “volunteered” her to meet with some of their senior analysts to discuss the topic.
Data deprecation and its impact on measurement have been on everyone’s minds for the last few years, but each agency has been dealing with it differently:
- Agency A has been working hard to keep their MTA programs working by cobbling together some data workarounds with big Universal ID (UID) vendors. While it helped create some stopgaps early on, the agency has been feeling the impacts of increasing data deprecation and is ready to start testing incrementality as a potential solution.
- Agency B started its incrementality program even before Mary’s agency. Their measurement lead reports that things are going well and even points to a few significant client renewals driven by the new practice.
- Agency C is a pure-digital performance agency that thought it could avoid the data issues altogether by going “all-in” with the big digital publishers. Early on, their fixes worked, but they too are starting to hit a wall as their clients are now asking some concerning questions about cross-platform measurement and as the industry blogosphere heats up on this issue. Without satisfying answers to tell their clients, they know they need a real solution and are interested in hearing how the other agencies are solving it before deciding which direction to go.
- Agency D is way behind. Historically, it has been more focused on creative and branding, so the measurement hasn’t been as scrutinized by its clients as other agencies. However, the lead analyst knows that it’s only a matter of time before they lose clients over this issue and even sees measurement as an opportunity to drive innovation. They are interested in the resource costs involved and vendors they should meet to start thinking long-term.
Mary offers her advice when asked and is interested to learn more about where the other agencies are finding challenges. Then, she makes a few notes to share with her team lead at her next one-on-one weekly meeting on Friday.
4:30 pm: Brainstorming
Mary races to the end of her day. Checking her list, she remembers she needs to brainstorm some test ideas from the business challenges she heard with the client in the 11:00 am meeting.
One of the critical things that the CMO has struggled with for years has been the online/offline connection. The brand knows that these channels must be influencing each other, but without the proper measurement in place, they continue to operate in silos and cannot take advantage of the synergy. Even worse, the CMO worries that their disparate strategies are not just wasting precious budget that could go elsewhere, but that their siloed online and offline marketing might actually be working against each other.
So, Mary decides this is where she should start.
Mary uses Skai’s Impact Navigator solution to analyze the client’s historical marketing performance data to develop three geo-split pairings so they can run the next set of incrementality tests in paired Test and Control markets.
Of course, in all of the Control Groups, advertising spending will continue as already planned, without any changes. For each test cell, Mary will measure the impact of the shift in digital media investment on total sales and revenue. Although the client asked many questions about measuring success, she thinks the impact on total revenue is an excellent place to start, and they can design further tests to dig even deeper once they have initial results.
Because the tests will only run for a month or slightly longer, there’s limited investment and disruption to the media plan. Mary knows that the “test budgets” will also drive awareness, traffic, and sales so that dollars won’t be wasted simply for testing.
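Once a paired geo test like this concludes, evaluating it amounts to comparing each test market against its matched control. Here is a rough sketch in Python, using invented market names and revenue figures purely for illustration (this is not Impact Navigator’s actual model):

```python
# Hypothetical paired geo markets: each test market is matched with a
# control market of similar historical performance. All figures invented.
geo_pairs = [
    # (test_market, test_revenue, control_market, control_revenue)
    ("Denver", 520_000, "Portland",  455_000),
    ("Austin", 610_000, "Nashville", 540_000),
    ("Tampa",  430_000, "Charlotte", 400_000),
]

# Per-pair lift vs. the matched control market.
for test_mkt, test_rev, ctrl_mkt, ctrl_rev in geo_pairs:
    pair_lift = 100 * (test_rev - ctrl_rev) / ctrl_rev
    print(f"{test_mkt} vs {ctrl_mkt}: {pair_lift:+.1f}%")

# Overall lift across all pairs.
total_test = sum(pair[1] for pair in geo_pairs)
total_control = sum(pair[3] for pair in geo_pairs)
overall_lift = 100 * (total_test - total_control) / total_control
print(f"Overall incremental revenue lift: {overall_lift:+.1f}%")
```

Reading the per-pair numbers alongside the overall figure helps flag cases where a single outlier market, rather than the media change itself, is driving the apparent lift.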
Once finishing her analysis, Mary shoots off an email to her agency colleague with a quick explanation so he can share it with the client. It’s time to break for the day and enjoy an early dinner with her family.
Examples of Marketing Programs To Measure
Incrementality testing can span channels, tactics, and business outcomes. The following are just some examples of what you can test with an incrementality program. If you don’t see a marketing element that you wish to measure, just ask us. Chances are it can be part of an incrementality test.
Channels, Platforms, & Tactics
| Channel | Platforms & Partners |
| --- | --- |
| Search | Google, Bing, Yahoo!, Skai, SA360, Adobe, Acquisio, WordStream, AdMedia |
| Social | Facebook, Instagram, Twitter, Snap, LinkedIn, Pinterest, Smartly.io, 4C, TikTok, Reddit |
| Display | The Trade Desk, MediaMath, Adobe, DataXu, Amobee, Google, Verizon, Criteo, TripleLift, Kargo, Nativo, Amazon DSP |
| Video | YouTube, Teads, YuMe, BrightRoll, Adobe, Videology, MediaMath, Amobee, DataXu, Tremor, The Trade Desk, Amazon DSP |
| Apps | Facebook, Google, Apple, Skai, Bidalgo, AppsFlyer, Singular, Branch, Adjust |
| CTV / OTT | Hulu, YouTube, Roku, Sling TV, Verizon, Comcast, AT&T, Charter, Sky, Dish, Sony, Adobe, The Trade Desk |
| Offline | Out of Home, Direct Mail, Audio |
| Other Channels | Email, Affiliate, Mobile Navigation Apps (Waze) |
Tactics & Audiences
- Brand, Non-Brand, Competitive, Shopping
- Prospecting, Retargeting, Existing Customer, Lookalike
- Investment Level
- Pricing & Promotions
- Bidding/Optimization Algorithms, Frequency Capping
- Creative Formats & Types
- High/low performing stores or markets
- High/low value/profit products
Business Outcomes (KPIs) to Measure
- Revenue / Purchases:
  - Online, in-store, omnichannel
  - Key Products & Categories
  - Purchase Funnel: Adds to cart, new customer, repeat customer, upsell
- Brand Awareness (Search Query Volume)
- Signups, Registrations, Openings
- Leads, Applications, & Calls
Get started with incrementality testing now to build your best-in-class measurement practice
Skai’s Impact Navigator measures the real-world effectiveness, or incrementality, of a marketing tactic in the only place that matters: the real world, with real people, as part of a real marketing test. It measures the business responses that matter, like revenue impact, client acquisition, and brand engagement.
Leave the guesses and hunches to your competition, and speed ahead with your growth strategy powered by real consumer insights.
“With Skai’s Impact Navigator, we can measure the real impact in the marketplace before starting significant investment, and focus on where there’s really incrementality.”
Marina Casas, Advertising Specialist – Privalia [Read the case study]
For more information or to see a brief demo of Impact Navigator, reach out today.