When creating ads, there are usually a couple of ideas on the table.
In this situation, people tend to pick out their favorites. However, it often turns out that the audience does not feel the same way.
And just like that – POOF – your advertising budget has gone down the drain.
In order to avoid such scenarios, all marketers should be A/B testing ads.
What exactly does that mean? How does A/B testing work in mobile marketing? What are the best practices to conduct A/B tests?
Buckle up – here come the answers to these and other burning questions.
A/B Testing and Mobile Marketing
A/B testing (also known as split testing) is a method of comparing two different options and analyzing their results.
This method can be used for all sorts of things, from web and app design to advertising.
Essentially, A/B testing is a shift from guesswork to a scientific approach.
As I’m sure you know, there is no room for guesswork in the data-driven environment of mobile marketing.
In mobile marketing, advertisers use this method to test different elements of their ad campaigns, such as creatives, audiences, placements, and campaign objectives.
The ability to A/B test ads is one of the biggest benefits of mobile marketing. In traditional marketing, the results depend on the advertisers’ initial choices. Those choices determine whether the campaign fails or succeeds, and there is no coming back from them.
In mobile marketing, on the other hand, everything can be tested. It’s not just something marketers can do – it’s something they need to do.
Testing is the only way to find out whether something will work. It is the only scientific method for finding the best-performing ads.
A/B Testing Ad Creatives
In this article, I’m going to focus on one particular form of A/B testing ads – testing ad creatives.
Creatives are at the heart of every ad and play a critical role in ad performance.
Every mobile marketing ad creative consists of different elements: visuals, copy, and messaging.
Even if just one element doesn’t work, it can affect the performance of the whole ad. For this reason, it is necessary to test different elements within the creative.
However, not all at once.
One of the main rules of A/B testing is changing only one thing at a time. I’ll explain this in more detail in the following section, so make sure to keep reading.
The Process behind A/B Testing Ads
A/B testing ads is a process.
And just like every process, it consists of different steps and practices. Advertisers must know what they are and which ones are the most important.
Let me guide you through it.
Set Objectives and KPIs
Before anything else, you need to answer the question – “What am I trying to learn by A/B testing ads?”
For example, if you’re an app marketer, you’re probably trying to understand how to acquire more users at a lower cost. Or you’ve just launched a new app, and you’re eager to get as many new installs as possible.
Based on this, you should determine the objectives you want to achieve.
Just like all things in mobile marketing, these objectives should be measurable. For this reason, you should pick out your goal metrics – KPIs. These are the metrics you should be paying the most attention to when analyzing A/B test results.
Once you have an idea of what you want to achieve with A/B testing, it’s time to make some assumptions.
Or in the scientific jargon – hypotheses.
Essentially, hypotheses are ideas and predictions that can be tested. They need to be clear and measurable so that an A/B test can confirm or reject them.
In this phase, advertisers must assume what they will achieve by testing different approaches.
For example, a hypothesis could be that a pink-colored ad background will work better than a previously used purple one.
The more hypotheses you come up with, the better. That means you have a worked-out plan ready to be tested, documented, and analyzed.
Pick Out the Elements to Test
Once the assumptions are on the table, it’s time to get down to the nitty-gritty.
As I’ve mentioned before, ad creatives contain different elements. Each of them has a varying effect on ad performance.
Only by A/B testing can you figure out which elements have the greatest impact on the performance of your ad creatives.
What can you test? Brace yourself – the list is quite extensive:
- Ad dimensions
- Ad copy
- Main feature
- Characters (e.g., by gender)
- Colors and brightness level
- Music and soundtracks (video)
- Speed (video)
- Buttons and boxes
- Video length (e.g., 15 seconds vs. 25 seconds)
- Aspect ratios
All of these elements, big or small, allow you to create variations of an ad creative. The crazy thing is, this is not even the full list. If I were to get into the specifics for different types of ads (image, video), there are even more elements to test.
As you can see, a lot of the elements I’ve listed refer to video ads. This is because video advertising is huge on mobile, and most advertisers are aware of this. If you’re not one of them yet, I strongly advise you to give it a try.
Create Two Ads with a Single Variation
I can’t stress this enough, so I’ll repeat myself – in A/B testing, it’s only allowed to change one variable at a time.
Running an A/B test for an ad means releasing two versions of it. These two versions should be exactly the same, except for that one element you’re testing.
Say you’ve decided you want to test two video ads with a different soundtrack, a pop and a rock one.
You’re free to do that, but make sure not to change anything else in the video or the ad (e.g., headline, copy).
Many advertisers make the same mistake and ignore this rule. They make multiple changes at once and end up comparing apples and oranges.
This is a big no-no.
Here’s the thing… If two ads have multiple differences, it is impossible to determine which of them caused the better or worse performance.
Select and Split Your Audience
After you’ve decided what you want to test, it’s time to pick out who will participate in the test.
Typically, advertisers start A/B tests with their most valuable audience.
For example, if you know most of your revenues come from 20-something-year-olds, this is who you want to deliver your A/B test ads to.
For the most accurate results, it’s essential that these groups are big enough. If your audience is too small, your A/B tests will be prone to under-delivery, and the results won’t be statistically reliable.
For this reason, most ad networks have their own audience size recommendations for A/B testing.
Once you’ve identified the target audience and made sure it’s big enough, it’s time to split it into two groups.
While 50% of the selected audience will see version A of your ad, the other 50% will see version B.
Let’s get back to the soundtrack example. With this method, half of the audience will get the video with a pop soundtrack, while the other one will hear some rock music.
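In practice, the 50/50 split has to be stable: a given user should always see the same variant. Here is a minimal sketch of one common way to do that, hashing user IDs to a variant. The user IDs, test name, and hashing scheme are illustrative assumptions, not any specific ad platform’s method:

```python
import hashlib

def assign_variant(user_id: str, test_name: str) -> str:
    """Deterministically assign a user to variant A or B.

    Hashing the user ID together with the test name keeps the
    split stable across sessions and independent between tests.
    """
    digest = hashlib.md5(f"{test_name}:{user_id}".encode()).hexdigest()
    return "A" if int(digest, 16) % 2 == 0 else "B"

# Roughly half the audience gets the pop soundtrack (A),
# the other half the rock soundtrack (B).
users = [f"user_{i}" for i in range(10_000)]
groups = [assign_variant(u, "soundtrack_test") for u in users]
share_a = groups.count("A") / len(groups)
```

Because the assignment depends only on the user ID and the test name, re-running it never shuffles users between groups mid-test.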
Run the Test and Give It Some Time
When you launch your A/B test, both ad versions should be released simultaneously.
This is very important, and here’s why.
Only if the two versions run simultaneously are their results directly comparable.
If version A ran for one month and version B the month after, not all variables would be the same. The advertisers wouldn’t know whether the performance difference came from the different time periods or from the element being tested.
It’s not only timing that matters here. The duration of the test is equally important.
A/B tests have a lot of great characteristics, but being quick is not one of them. You need to give A/B tests enough time to provide you with statistically significant results. The last thing you want is to declare a winner too soon and make wrong decisions.
Now, you’re probably wondering how much time is enough time.
This depends on a couple of factors. The most important one is – how much traffic do your ads typically drive?
Depending on the answer, A/B tests should run anywhere from a couple of days to a couple of weeks. According to Facebook’s own guidelines, they should last for at least four days.
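To get a feel for why tests need time, you can estimate the sample size required to detect a given lift with a standard two-proportion power calculation. The baseline CTR and the lift below are made-up numbers for illustration:

```python
import math

def sample_size_per_variant(p1: float, p2: float,
                            z_alpha: float = 1.96,  # 95% confidence (two-sided)
                            z_beta: float = 0.84) -> int:  # 80% power
    """Rough sample size needed per variant to reliably detect a
    change in conversion rate from p1 to p2 (two-proportion test)."""
    p_bar = (p1 + p2) / 2
    numerator = (z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return math.ceil(numerator / (p1 - p2) ** 2)

# Detecting a CTR lift from 2.0% to 2.5% takes over ten thousand
# impressions per variant -- hence "give it some time".
needed = sample_size_per_variant(0.020, 0.025)
```

The smaller the difference you want to detect, the more traffic each variant needs, which is exactly why low-traffic campaigns need longer tests.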
Declare a Winner
When your A/B test is complete, you should examine the results to figure out which version performed better.
Sometimes, everything will be clear as day because one version will beat the other across all metrics.
However, in some situations, both ad versions will be successful in their own way.
For example, one may have a much better CTR, while the other achieves a slightly better CPA.
In such cases, you need to think about the objective you’ve set at the beginning of the process. For example – is your main goal increasing ad engagement or lowering acquisition costs? Which KPIs (metrics) have you determined as primary?
Here’s another possible outcome that may leave you confused.
Your A/B test may yield results with little or no difference across key metrics. If this happens, you shouldn’t rush to conclusions. Such results are within the margin of error and are not something to rely on. Instead, run the test again and see what happens.
You may be thinking, “But that’s both expensive and time-consuming!”
In a way, that is correct. However, if you want your A/B tests to deliver clear results and answer all of your questions, this is the only way to do it.
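One way to check whether a difference in results is real or just noise within the margin of error is a two-proportion z-test. This is a generic statistical sketch with made-up install counts, not any platform’s built-in methodology:

```python
import math

def two_proportion_z_test(conv_a: int, n_a: int,
                          conv_b: int, n_b: int):
    """Return (z, two-sided p-value) comparing the conversion
    rates of two ad variants."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical numbers: 210 installs out of 10,000 impressions (A)
# versus 260 out of 10,000 (B).
z, p = two_proportion_z_test(210, 10_000, 260, 10_000)
significant = p < 0.05  # conventional 95% confidence level
```

If the p-value comes out above your threshold, the difference is within the margin of error, and re-running the test (or running it longer) is the honest move.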
Implement Changes and Continue A/B Testing Ads
Based on the results of your A/B tests, you will be able to decide which changes to implement and which to discard.
Obviously, you want to implement elements that have improved your ad performance.
For example, if an “Install now” CTA has proven to work better than an “Install for free” CTA, this is the CTA to use in your future ad creatives.
With this information in mind, it’s time to proceed with the tests.
If you want to optimize your ads to the fullest, A/B testing should be an ongoing process. It should look something like this:
1. pick out the next element from the list you want to test;
2. run the test;
3. review the results;
4. implement changes;
5. return to step 1.
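The ongoing cycle above can be sketched as a simple loop. Everything here is a hypothetical placeholder (the element backlog, `run_test`, `implement_change`), not a real ad-platform API:

```python
# Hypothetical stand-in: a real run_test would launch two ad
# variants on an ad platform and wait for significant results.
def run_test(element: str) -> dict:
    return {"element": element, "winner": "B"}  # placeholder outcome

def implement_change(element: str, winner: str) -> None:
    print(f"Rolling out variant {winner} for {element}")

backlog = ["ad copy", "background color", "CTA button", "video length"]
history = []

for element in backlog:                           # 1. pick the next element
    outcome = run_test(element)                   # 2. run the test
    history.append(outcome)                       # 3. review the results
    implement_change(element, outcome["winner"])  # 4. implement changes
# ...then refill the backlog and return to step 1.
```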
Bonus Tips for A/B Testing Ads
Congrats, you now know what the process of A/B testing ads looks like!
With all of this in mind, let me give you some extra tips that can help you get even better results from your A/B tests.
Start with the Most Important Elements
Wondering which ad elements you should pick out for your first A/B test? Don’t choose randomly, but wisely.
Pick out the most important elements of your ads to determine if they serve their purpose. For example, if you’re advertising a mobile game, a major element would be the character featured in the ad.
Think about it… What is more important to find out: whether you’re using the right character or the right button color?
Lay off the button and similar details for now.
Instead, choose the variables that significantly alter your ads and address them first.
No, I’m not underestimating the impact of small changes. They can also carry great value and drive significant improvements. However, they are not the first thing you should be focusing on.
Pick KPIs Down the Funnel
As I’ve mentioned before, one of the first things to do before an A/B test is to set objectives and determine KPIs.
A KPI can be any kind of metric you choose.
While making a choice, you should consider how important different metrics are for your business.
Some ad-related metrics are simple and obvious. You can see the results directly on the ad platform where you’re running the test.
For example, the CTR of a Facebook ad for a mobile app might seem like a reasonable choice.
However, it’s actually not so relevant.
The thing is, CTR is a metric at the beginning of the funnel. It is followed by different, more significant metrics for the app – e.g., installs, registrations, purchases.
My point? Clicks shouldn’t be your main focus. When choosing KPIs, focus on what happens after the click.
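A quick numeric sketch makes the point concrete: a variant can win on CTR yet lose everywhere further down the funnel. All the figures below are invented for illustration:

```python
# Hypothetical funnel numbers for two variants with equal spend.
variants = {
    "A": {"impressions": 100_000, "clicks": 2_500,
          "installs": 200, "purchases": 10, "spend": 500.0},
    "B": {"impressions": 100_000, "clicks": 1_800,
          "installs": 250, "purchases": 25, "spend": 500.0},
}

def funnel_kpis(v: dict) -> dict:
    return {
        "CTR": v["clicks"] / v["impressions"],
        "CPI": v["spend"] / v["installs"],                  # cost per install
        "cost_per_purchase": v["spend"] / v["purchases"],
    }

kpis = {name: funnel_kpis(v) for name, v in variants.items()}
# A wins on CTR (2.5% vs. 1.8%), but B wins on cost per install
# ($2.00 vs. $2.50) and on cost per purchase ($20 vs. $50).
```

Judged by clicks alone, A looks like the winner; judged by what the clicks actually turn into, B is clearly the better ad.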
Try Testing More than Just Creatives
As you can see, ad creatives are at the heart of A/B testing.
However, if you want more in-depth results, you can test some other things along with ad creatives.
For instance, you can run the same creative A/B test as before but with a different campaign objective. Instead of the Conversions objective you chose the first time, you can now pick out the Traffic objective.
Then, analyze if the same A/B test yields different results.
A/B Testing vs. Multivariate Testing
If you are eager to change more things at the same time, you might want to consider multivariate testing.
Essentially, multivariate testing is an upgraded version of A/B testing. Today, almost all major ad platforms offer some kind of multivariate testing.
One of the most popular examples is Facebook Dynamic Creative Testing.
With this feature, advertisers can upload multiple variations of their creative assets. For instance, images, videos, headlines, ad copy, descriptions, CTA buttons, etc.
Once added, all of these variations can be tested at once. Dynamic creative testing takes care of the whole process and tries to identify the most effective ad versions.
As you can see, multivariate testing is a sophisticated process that brings deeper answers faster than A/B tests.
However, this comes at a cost.
To run such a test and get statistically significant results, you’ll need a sufficient budget. In most cases, the ad platforms provide their own budget recommendations for it.
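To see why the budget requirements grow, just count the combinations a multivariate test has to evaluate. The asset names below are hypothetical examples of what might be uploaded to a dynamic-creative-style test:

```python
from itertools import product

# Hypothetical asset pools for a multivariate test.
images = ["hero_a.png", "hero_b.png", "hero_c.png"]
headlines = ["Play free today", "Join 1M players"]
ctas = ["Install now", "Install for free"]

combinations = list(product(images, headlines, ctas))
# 3 images x 2 headlines x 2 CTAs = 12 ad versions, each of which
# needs enough traffic for its results to be statistically meaningful.
```

Adding just one more asset pool multiplies the combination count again, which is why platforms attach budget recommendations to these tests.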
A/B Testing Ads Wrap Up
Hopefully, this article helped you understand the importance of A/B testing in mobile marketing.
As you can see, it is a never-ending process that requires both time and effort. It may seem like a daunting task, but once the results come in, you’ll want to keep testing.