A/B testing is a core component of marketing, and well-run experiments are key to strong performance. In this article, we'll examine the key testing rules you should follow to ensure a fair and successful test.
A/B testing is a concept that gets kicked around quite a bit in digital advertising. The inconvenient truth is that most agencies and direct advertisers launch tests without considering the most important factors of an experiment: Why are you running the test? What should you consider? And how will you decide on a single success metric?
A/B Testing step-by-step
The first step you should take before launching any A/B test is to decide on one success metric and, if necessary, one constraint. Common success metrics are revenue, conversions, profit, or clicks. A constraint is a KPI the winner must still satisfy, for example ROAS, CPA, CPC, or spend. If you do not decide on the success metric and constraint ahead of time, you run the risk of subjective feelings coming into play.
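The "one success metric, one constraint" rule can be made concrete as a small decision function. This is an illustrative sketch with hypothetical numbers (the function name and the $50 CPA cap are my own assumptions, not from the article): the winner is the group with the best success metric among those that still satisfy the constraint.

```python
# Illustrative sketch: pick a winner on ONE success metric, subject to
# ONE constraint, both decided before the test launches.
def pick_winner(groups, success_metric, constraint_metric, constraint_max):
    """Return the group with the best success metric among those that
    satisfy the constraint, or None if no group qualifies."""
    eligible = [g for g in groups if g[constraint_metric] <= constraint_max]
    if not eligible:
        return None
    return max(eligible, key=lambda g: g[success_metric])

# Hypothetical example: maximize conversions while keeping CPA at or below $50.
control = {"name": "control", "conversions": 420, "cpa": 48.0}
test_group = {"name": "test", "conversions": 465, "cpa": 53.5}

winner = pick_winner([control, test_group], "conversions", "cpa", 50.0)
print(winner["name"] if winner else "no group met the constraint")  # → control
```

Note that the test group converts more but busts the CPA constraint, so the control wins; deciding this rule up front is exactly what keeps subjective feelings out of the call.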
A/B testing requires that you test only one variable at a time. This means that everything else, and I mean everything, is held constant. For example, if you're testing Manual Bidding vs. Google's Smart Bidding Target CPA, the only variable is the bid strategy, so every other setting must be identical in both the test and control groups: location targeting, budget, ads, keywords, and so on. Next, consider how much volume you are testing and whether it provides enough data. Too little and the results may not be reliable; too much is unnecessary, and letting the test run too long wastes budget and time. A good rule of thumb is to have at least 100 daily conversions across the entire test before the A/B split.
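One way to sanity-check how long a test needs to run is a standard two-proportion sample-size formula. This is a rough sketch, not a recommendation from the article; the 10% detectable lift, ~95% confidence, and ~80% power defaults, and the example traffic numbers, are my own assumptions.

```python
import math

def required_days(daily_conversions, daily_visitors, mde=0.10,
                  z_alpha=1.96, z_beta=0.84):
    """Days needed so each arm reaches the sample size for detecting a
    relative lift of `mde` at ~95% confidence / ~80% power (50/50 split)."""
    p1 = daily_conversions / daily_visitors   # baseline conversion rate
    p2 = p1 * (1 + mde)                       # rate we hope to detect
    n_per_arm = ((z_alpha + z_beta) ** 2 *
                 (p1 * (1 - p1) + p2 * (1 - p2))) / (p2 - p1) ** 2
    visitors_per_arm_per_day = daily_visitors / 2
    return math.ceil(n_per_arm / visitors_per_arm_per_day)

# Hypothetical example: 100 conversions/day on 5,000 visitors (2% baseline rate)
print(required_days(100, 5000))  # → 33
```

Notice how the 100-daily-conversions rule of thumb interacts with test length: with this volume, detecting a 10% relative lift still takes roughly a month, which is consistent with the multi-week timelines discussed later in the article.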
The next rule is to know your "True North," especially if you are testing a bid strategy. If you are, confirm that both groups are bidding toward the same conversion source. Google's Smart Bidding traditionally uses its own pixel to track conversions; however, it's possible to import your own conversion data into Google Ads for bidding. Before launching any bid strategy test, ask: "Are both bidding solutions optimizing to our True North and the same metric?"
The last rule, and the one most often overlooked, is having a baseline period during which no other changes are made to the control and test groups. This gives you something to compare against when the test is complete, as you measure the lift of the test group versus the control group. It also gives you a chance to gut check the actuals post-split to confirm traffic is, in fact, being divided correctly between the groups.
During the baseline, it's important to make sure that both groups are managed and optimized in the same way, including new ad copy, new keywords, budgets, and so on. In my experience, a two- to three-week baseline is sufficient to establish a comparison before you launch the test.
A/B split approaches
Now that we have covered the rules to consider before launching a test, let’s review the different split methodologies that are available to you as a marketer.
The three most common split methods are:
- Geographic Split (“Geo-Split”)
- Google Drafts & Experiments (D&E)
- Campaign Split
Geo splits are the most sophisticated method because they can measure the incremental effect of the test, they're transparent, and they make cross-publisher testing possible.
Google's Drafts & Experiments is another common split method. Within this tool, there are two D&E settings: Cookie-Based and Search-Based splits. A Cookie-Based split shows a user only one version of your campaign, even if they search the same keyword multiple times. A Search-Based split randomly assigns users to a group every time they search for those keywords. This can help you see results faster, but it opens you up to the risk of an invalid test: because the split randomly assigns a user to a group on each search, the same user could see both the test and control campaigns if they search more than once.
Ultimately, a Cookie-Based split is the preferred D&E method, as it helps ensure a user isn't exposed to both the test and control groups, which could skew the results.
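The difference between the two settings can be sketched in a few lines. This is a simplified model of the assignment logic, not Google's actual implementation: cookie-based assignment is a deterministic function of the user's ID, while search-based assignment is an independent coin flip per search.

```python
import hashlib
import random

def cookie_based_group(user_id: str) -> str:
    """Same user always lands in the same group: assignment is a
    deterministic function of the user's cookie/ID."""
    h = int(hashlib.sha256(user_id.encode()).hexdigest(), 16)
    return "test" if h % 2 == 0 else "control"

def search_based_group() -> str:
    """Every search is assigned independently, so a repeat searcher
    can end up seeing both the test and the control campaign."""
    return random.choice(["test", "control"])

# A returning user keeps the same experience under a cookie-based split:
assert cookie_based_group("user-123") == cookie_based_group("user-123")
```

The deterministic hash is what prevents cross-contamination: no matter how many times a user searches, their group never changes, which is exactly why the cookie-based setting produces cleaner results.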
How long to run an A/B test?
Now that we have covered the testing guidelines, split methods, and settings, we can move on to a testing timeline. The truth is that A/B testing takes time. Depending on your test volume and your bandwidth, a test can take anywhere from 10 to 15 weeks. Below is an example testing timeline for reference.
In this example, the test is broken into three parts: the hypothesis, the experiment, and the evaluation period.
Defining a hypothesis is a step often forgotten by marketers. Usually, they just wait until the end and then look to see if the test group beat the control group. However, stating what you think will happen is key to data-driven decision making. Once the test is complete, comparing the actual results to your hypothesis will help you understand how accurate your predictions are. Over time, you can learn whether you tend to overestimate or underestimate how bid changes might help your campaigns.
The experiment period
Within the experiment period, there are three different parts:
- The Baseline
- The Ramp-Up and Test
- The Cooldown
The Baseline is for comparison purposes, and no changes should be made to either group during this time. The Ramp-Up and Test is when you actually kick off the experiment. The Cooldown is not part of the evaluation period; this time is designated for disabling the optimization tool you are testing on one or both of the A/B groups.
Once the test has been successfully disabled, you can move to the evaluation stage. Was the result statistically significant? What are your risk tolerance and confidence level? Marketers have moved to a new level of sophistication and should be confident that a new bid strategy shows a statistically significant improvement before adopting it widely.
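A minimal significance check on conversion rates can be done with a two-proportion z-test. This is a sketch with hypothetical numbers, using only the Python standard library; for production decisions you would validate assumptions (independence, sample size) with a statistician or a testing calculator.

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Return (z, two-sided p-value) for the difference in conversion
    rates between the control (a) and test (b) groups."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)       # pooled rate under H0
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))     # two-sided normal tail
    return z, p_value

# Hypothetical example: control 400/20,000 converted; test 470/20,000.
z, p = two_proportion_z(400, 20000, 470, 20000)
print(f"z={z:.2f}, p={p:.4f}")  # significant at 95% confidence if p < 0.05
```

Tying this back to the risk-tolerance question: if your confidence requirement is 95%, you adopt the test strategy only when the p-value falls below 0.05; a stricter tolerance (say 99%) raises that bar to 0.01.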
How are your A/B tests going?
With over a decade in performance-based advertising, Kenshoo provides SaaS solutions for measuring your true marketing incrementality. If you are interested in learning more about how Skai can assist in identifying the true impact of your digital strategy, please contact us.