Moti Radomski
VP, Product @ Skai
Using marketing measurement merely to report campaign results is really just accounting.
But the real value that marketers need from a solid measurement practice is to find out what is and isn’t working so that they can improve campaign performance.
For digital advertising, this usually falls into two buckets: in-flight optimization and post-campaign optimization.
But perhaps the best framework for marketers is to think about measurement in three ways: pre-campaign, in-flight, and post-campaign.
This framework fills the gaps in today’s measurement philosophy and offers a complete, end-to-end approach that maximizes the impact optimization can have on campaign performance.
With in-flight measurement, practitioners launch a program and give it enough time to accumulate results. How long? It depends on the conversion window. If most of the conversions happen immediately after an ad click, then marketers only need a few days to begin optimization.
However, with many conversion types it can take days or weeks (or longer) to see post-click conversions, so marketers shouldn’t make any changes before they have solid results to optimize against. Simply put, if the majority of your conversions generally take 10 days to come in, you don’t want to optimize on day 5: it will look like your media isn’t working when your buyers just haven’t had time to buy. You might actually want to wait 20 or 30 days to get a much more accurate look at how your campaigns are performing.
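To make the waiting math concrete, here is a minimal Python sketch. It assumes you have click-to-conversion lag data from a prior, similar campaign; the lag values below are invented for illustration.

```python
# A minimal sketch, assuming historical click-to-conversion lags (in days)
# from a prior campaign are available. The data below is hypothetical; the
# point is deciding how long to wait before trusting in-flight results.
conversion_lags_days = [1, 2, 2, 3, 5, 7, 8, 10, 10, 11, 12, 14, 18, 25, 31]

def days_to_capture(lags, coverage=0.9):
    """Smallest number of days after a click by which `coverage`
    (e.g. 90%) of conversions have typically arrived."""
    ordered = sorted(lags)
    index = max(int(len(ordered) * coverage) - 1, 0)
    return ordered[index]

wait_days = days_to_capture(conversion_lags_days, coverage=0.9)
print(f"Wait roughly {wait_days} days before optimizing on conversion data.")
```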
With digital advertising, in-flight optimizations are generally changes in creative, bids, budgets between channels, keywords, targeting, etc. Marketers use measurement to know if they need to change, pause, or increase activity on various elements of their programs. And then, once they make those campaign optimizations, they use measurement techniques to see if those changes worked.
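As a rough illustration of those in-flight decisions, here is a hypothetical sketch in Python. The keyword stats, field names, and thresholds are all assumptions for the example, not a prescribed methodology.

```python
# Hypothetical keyword-level stats and illustrative thresholds only.
keywords = [
    {"keyword": "running shoes", "spend": 420.0, "conversions": 21},
    {"keyword": "buy sneakers",  "spend": 310.0, "conversions": 2},
    {"keyword": "trail shoes",   "spend": 150.0, "conversions": 0},
]

TARGET_CPA = 25.0  # illustrative cost-per-acquisition goal

for kw in keywords:
    cpa = kw["spend"] / kw["conversions"] if kw["conversions"] else float("inf")
    if kw["conversions"] == 0 and kw["spend"] > 100:
        action = "pause"        # spending with nothing to show for it
    elif cpa <= TARGET_CPA:
        action = "raise bid"    # efficient: double down
    else:
        action = "lower bid"    # converting, but too expensive
    print(f"{kw['keyword']}: CPA={cpa:.2f} -> {action}")
```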
Post-campaign optimization happens after a campaign is over. Marketers can’t impact the performance of a campaign that has ended; instead, they use insights about what did and didn’t work to build best practices for planning and running future campaigns.
For example, an eight-week social advertising campaign is analyzed, and it turns out that targeting older audiences wasn’t as effective as targeting younger groups. So, in the next campaign, the optimization would be to allocate more budget to younger audiences or to try a different messaging strategy with older ones.
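A hedged sketch of that post-campaign analysis might look like the following; the segment numbers are invented, and comparing cost-per-acquisition (CPA) by audience is just one simple way to surface where the next campaign’s budget should lean.

```python
# Invented segment stats for illustration only.
segments = {
    "18-24": {"spend": 8000.0, "conversions": 640},
    "25-34": {"spend": 9000.0, "conversions": 600},
    "55+":   {"spend": 7000.0, "conversions": 175},
}

# Cost per acquisition for each audience segment.
cpa = {name: s["spend"] / s["conversions"] for name, s in segments.items()}

for name in sorted(cpa, key=cpa.get):
    print(f"{name}: CPA ${cpa[name]:.2f}")

best = min(cpa, key=cpa.get)
print(f"Next campaign: weight budget toward {best}; test new messaging elsewhere.")
```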
In-flight optimization works. We know that in a 10-week paid search campaign, for example, if the marketer goes in each week and finds ways to stop wasting budget while doubling down on things that are working really well, KPIs generally do improve. Marketers will tell you that the first week of the campaign is usually the “worst-performing” week, because once in-flight optimization begins, each subsequent week sees incremental gains.
But what if your campaign is just one week? You don’t have the time to progressively improve results. In-flight optimization doesn’t work on extremely short campaigns — and marketers often have a lot of short campaigns.
Another challenge with in-flight optimization is that it is almost always done at the channel level. Paid search marketers optimize paid search campaigns. Social advertisers optimize social campaigns. Programmatic marketers, email marketers, and everyone else optimize their campaigns in a bubble. But we know that channels impact other channels. So maybe in week three of a campaign, a channel practitioner raises bids on things that worked in the first two weeks, not realizing that the only reason those things were working was that another channel was assisting those conversions. Now the “optimization” backfires, and results drop in week four.
That’s a simple example, but things like this happen all the time. Especially for large companies investing heavily in many different channels, the interplay between channels is constantly impacting the rest of the media plan. This is why in-flight optimizations sometimes work and sometimes don’t. “Results may vary,” some might say.
With post-campaign optimization, the goal is to analyze a campaign as soon as it finishes and get those insights to the media team immediately so that they can build a better campaign next time. However, we all know that’s more of an ideal scenario than reality. What really happens is that it can take weeks to get the post-campaign numbers, aggregate them, analyze them, and derive insights.
And the bigger the marketing organization, the slower this process is. For example, there might be a separate analyst team outside the media team. So not only does it take time to do the number-crunching and insight-building, but then a meeting needs to be scheduled to relay the information to the media folks. Armed with those insights, the media team must use what is now basically second-hand information to plan and buy better.
If there’s an agency involved, or just more layers to an in-house organization of people who need to see and sign off on insights before they are shared with the media buyers, it pushes the timeline out even further. Sometimes there’s even back-and-forth discussion over the best way to interpret and utilize the insights, which adds even more time to the chain.
Thus, the problem sometimes with post-campaign optimization is that by the time the insights get to the people who plan campaigns, the next campaign has already started and so the insights are [hopefully] applied to the campaign that follows. In the case of long campaigns, marketers might not get to use the post-campaign insights from the Spring program until the Fall program because the Summer campaign will have already been planned and begun by the time the insights get to them.
Even though I’ve pointed out the issues with in-flight and post-campaign optimization, the truth is that some optimization is usually better than no optimization. I’m not arguing that these optimizations shouldn’t be applied; they absolutely should be. But what I’m seeing from the most successful marketers is a shift in thinking: measurement before a campaign might actually be the most effective way to fuel performance optimization.
Pre-campaign measurement in the form of experiments and testing can help marketers optimize campaigns before they run.
Let’s say you have a new creative direction that you plan to launch in conjunction with a huge new Facebook advertising campaign. Why not carve out a small portion of the budget and run some tests to gauge the performance of these new ads? Test them with different audiences, different bids, different formats, and so on. You can also run tests to see how your Facebook ads perform while certain levels of spend are running concurrently on other channels.
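One simple way to read such a test is a two-proportion z-test on conversion rates, sketched below in Python. The click and conversion counts are hypothetical, and the 1.96 cutoff is just one illustrative confidence threshold.

```python
from math import sqrt

# Hypothetical results from a small pre-campaign test budget.
a_clicks, a_conversions = 5000, 150   # current creative
b_clicks, b_conversions = 5000, 205   # new creative direction

p_a = a_conversions / a_clicks
p_b = b_conversions / b_clicks

# Pooled conversion rate and standard error for the difference.
p_pool = (a_conversions + b_conversions) / (a_clicks + b_clicks)
se = sqrt(p_pool * (1 - p_pool) * (1 / a_clicks + 1 / b_clicks))
z = (p_b - p_a) / se

print(f"Creative A CVR: {p_a:.1%}, Creative B CVR: {p_b:.1%}, z = {z:.2f}")
if z > 1.96:  # roughly 95% confidence, one illustrative threshold
    print("Launch the big campaign with creative B.")
else:
    print("No clear winner yet; keep testing before committing the budget.")
```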
Using those insights, social advertisers might switch to a new creative strategy because the pre-campaign tests reveal that the ads they were going to use weren’t working very well. With pre-campaign measurement, a campaign can launch in a much more optimized state, performing better right out of the gate than an un-optimized campaign would.
Pre-campaign measurement may actually save what could have been a disastrous program before it begins. Or even turn what would have been an average campaign into a highly lucrative one. Why not go from good to great with just a few weeks of testing before launch?
Yes, it will take more planning by marketers to institute a pre-campaign experiment phase for every campaign, but the return on their advertising investment should warrant the additional time.
What do you think? Are you ready to add a pre-campaign measurement process to your campaign strategy?