As data deprecation increasingly impairs digital marketers’ ability to track advertising effectiveness, multi-touch attribution (MTA) looks less and less like the measurement solution of the future.
Incrementality testing’s cookieless approach helps brand and agency marketers solve these new data challenges.
In recent posts, I’ve shared some of the initial best practices I’ve learned that early adopters of incrementality testing have shared with me, including:
- The 4 Ingredients of a Best-in-Class Incrementality Measurement Practice
- Putting Incrementality Measurement Into Action: How to Figure Out What to Test
- Life Without Multi-Touch Attribution: How Your Marketing Measurement Program Will Change Under Incrementality
But there’s one exciting outcome from moving from MTA to incrementality testing that many may not have been prepared for when they started their transition…
Marketing measurement professionals are going to love incrementality testing vs. MTA
As I talk to more and more marketing measurement professionals trying out incrementality testing, I’m sensing that this methodology challenges them in new and fulfilling ways.
Why? With multi-touch attribution, measurement pros and analysts spent most of their time connecting their ad tech to their MTA vendor’s platform. Once integrated, the MTA tool does the rest. It tracks all exposures, connects the dots of the customer journeys, does the number crunching, and spits out the new attributed metrics. Under MTA, the analyst is largely occupied with the complicated puzzle of building integrations and maintaining them.
This isn’t the work that your math-adept, analytically creative professionals should be doing!
With MTA, we put a great deal of faith and reliance on the MTA vendor. Does their technology unify consumer journeys correctly? Do their algorithms accurately attribute marketing exposures to conversions?
But, with incrementality testing, your best and brightest will spend much more time analyzing strategic puzzles rather than figuring out how to integrate one ad tech tool with another. They will be able to spend more time determining how best to test, measure, and analyze results to find out what is working—and not working—with your advertising investments.
To help better illustrate this point, let’s check out a fictional day in the life of an agency marketer, Mary, who is helping to drive client value with an incrementality testing practice.
Part 1: A Day in the Life of Mary, the Marketing Measurement Pro
9:00 am: morning check-in
Mary has just come back from her run around her neighborhood, brews a cup of coffee, and sits down to start her day. The very first thing Mary does every morning is to check in on how her current incrementality tests are doing. She wants to make sure that nothing looks wonky—she knows that plans change, ad tech can have hiccups, marketers can accidentally change their campaign settings, etc.
Having checked her live tests and seen that everything looks in order, she begins reviewing the initial results. Sometimes the agency leads on their accounts like to check in and see if there’s anything interesting to share with their brand clients.
Test #1 is trying to determine the incremental value of a brand’s YouTube video ads to the overall program. These ads are generally upper funnel, and their effectiveness is often hard to determine—even MTA has trouble tracking them back to conversion activity. But with incrementality testing, individual customer journeys don’t matter. The test is already showing that YouTube is performing much better than assumed.
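Why don’t individual journeys matter? An incrementality test compares an aggregate business metric between regions (or audiences) that saw the ads and a matched holdout that didn’t. A minimal sketch of that readout, with purely illustrative region counts (the function name and numbers are assumptions, not Skai’s methodology):

```python
# Hypothetical sketch of a geo-holdout incrementality readout:
# compare aggregate conversions in exposed (test) regions vs. held-out
# (control) regions -- no user-level tracking or cookies required.

def incremental_lift(test_conversions, control_conversions):
    """Percent lift of the exposed regions over the holdout baseline."""
    test_avg = sum(test_conversions) / len(test_conversions)
    control_avg = sum(control_conversions) / len(control_conversions)
    return (test_avg - control_avg) / control_avg * 100

# Illustrative numbers: conversions per matched region during the test
exposed = [1240, 1180, 1310]   # regions that saw the YouTube campaign
holdout = [1000, 980, 1020]    # regions where the campaign was withheld

print(f"Incremental lift: {incremental_lift(exposed, holdout):.1f}%")
# prints "Incremental lift: 24.3%"
```

A real test would also account for sample size and statistical significance, but the core idea is this simple comparison of aggregates rather than stitched-together customer journeys.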
Test #2 is a critical one for an app-based gaming client whose sales have been negatively impacted by the iOS 14 changes earlier this year. Facebook has been an essential channel for years, but now the agency team is having a lot of trouble figuring out which of the new creative directions are actually working and has even been discussing doing the unthinkable and reducing its investment in iOS users. Right now, the Earn a Free Badge campaign is outperforming the others, and iOS users are proving to be quite profitable.
Mary shoots off a quick email to the lead on the account to let them know that even though the five-week test isn’t over yet, if they need to make a fast decision, they should push that campaign right now—but not in the handful of regions where the tests are running. If Mary is correct, the client will get some much-needed revenue this week.
Test #3 is an offline/online experiment to determine whether the CPG client should be pushing customers to stores or to online retailer partners. This is very important because the uncertainty around the next six months of the pandemic creates some hesitation about which way to go.
However, if the extensive fall marketing program can effectively drive ecommerce sales, it makes sense to keep investing in that direction given the challenges of selling in-store. This test just launched a few days ago, so the results are inconclusive right now. The client is on the cusp of having a tough quarter, and this test may be their last chance to save the year.
11:00 am: client meeting
Mary meets with some of her agency colleagues interested in running their first incrementality test for a large travel client. The client has started seeing traction recently, and its CMO has gotten the green light to make a big marketing push. However, before they implement the plan, the team thought it would be a good idea to do a few tests to figure out where best to spend the entire budget.
This is the first time the team has done an incrementality test, but they’ve heard about great results from other groups at the agency, so they are ready to try it. They’ve invited the CMO and his team to the Zoom call to learn more. Mary presents her Incrementality Testing 101 deck and fields questions from the client. The meeting is very encouraging. The CMO understands the need for a cookieless measurement approach and has been reading about incrementality testing for the last few months. He’s ready to get started and was happy to learn that his agency offers it.
One of the most critical parts of Mary’s role is to help translate business problems into incrementality tests. She listens as the CMO and her agency lead explain some of the existing challenges they face and the essential questions they have about their marketing programs. Like many companies, they struggle with quantifying the online/offline connection, measuring upper-funnel effectiveness, and other common issues that MTA could never fully solve.
Mary listens closely and makes notes. Then, when it’s her turn to speak, she asks the questions that will help her zero in on the business metrics and desired outcomes they want to achieve with their upcoming marketing push. After fully understanding these variables, Mary promises to have some ideas for the team to review by the end of the day. Translating business problems to incrementality tests can be tricky, but if done correctly, a few monthly tests can offer critical insights to help guide planning and optimization.
Read Part 2 now!
Skai Impact Navigator, the incrementality testing platform
Impact Navigator measures the real-world effectiveness, or incrementality, of a marketing tactic in the only place that matters: the real world, with real people, as part of a real marketing test—measuring business responses that matter, like revenue impact, customer acquisition, and brand engagement.
Leave the guesses and hunches to your competition, and speed ahead with your growth strategy powered by real consumer insights.
Want to learn more about how Impact Navigator can benefit your brand or agency?
Please reach out today with questions or to schedule a brief demo.