Split testing is where you present your audience with two or more variants of something and collect data on user activity, segmented by which variant each person saw. Once enough data has been collected, and with a KPI in mind, you can determine with a degree of statistical certainty which variant is most effective at achieving your goal (in eCommerce marketing, that would usually be sales).
I tried to make the above definition as concise as possible, and while I feel I did rather well, I recognise that it is probably just as confusing to most people as it is concise, so let’s look at a simple example…
Pretend you have a new set of left-handed screwdrivers you want to sell on your store, which has absolutely cornered the left-handed tractor mechanic niche. You get the new set in and book a photoshoot with a professional to expertly capture the product in a variety of outdoor settings.
The photos come back, color graded and looking excellent, but you and your husband/business partner are arguing about which picture is best to advertise this beast of a product offering. He says Picture A will definitely ship more units, but you insist that Picture B would bring in more orders.
In the days of print advertising, you would have to flip a coin, gather a small focus group’s opinion, or have someone pull rank for a decision to be made. Fortunately, it’s 2021 and data is more readily available than drinking water. What does this mean for eCommerce marketers?
Opinions don’t matter, only data matters.
I have spouted this expression, or a variant of it (unintentionally, not just in the context of split testing), so many times in my life that I consider it quite possibly the single most important thing to remember in marketing, and it should be used as a mantra by everyone in the industry.
OK, maybe “always split test everything” is a bit much. But seriously, there isn’t much you can’t split test. In fact, split testing is often the only way you can optimize something. So let’s have a look at some things you can split test…
Let’s return to our earlier, unfinished example of the ad image dispute. For a simple test to put an end to the bickering, you head over to Facebook Ads and set up two ads that are identical in every way except one: the picture. After running these ads for long enough, you can work out which picture garners the better results and how statistically certain you can be that the apparently better picture really is better (95% statistical certainty is generally considered enough to declare a winner).
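That 95% figure usually comes from a significance test. As a rough sketch of what your ads platform or analyst is doing under the hood (the sales numbers here are invented for illustration), here is a two-proportion z-test in Python:

```python
import math

def z_test_two_proportions(conv_a, n_a, conv_b, n_b):
    """Is variant B's conversion rate genuinely different from
    variant A's, or is the gap just noise?"""
    # Pooled conversion rate under the null hypothesis (no real difference)
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (conv_b / n_b - conv_a / n_a) / se
    # Two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical: Picture A made 150 sales from 5,000 viewers,
# Picture B made 190 sales from 5,000 viewers.
z, p = z_test_two_proportions(150, 5000, 190, 5000)
if p < 0.05:  # the 95% certainty threshold mentioned above
    print(f"Winner declared (p = {p:.3f})")
else:
    print(f"Keep testing (p = {p:.3f})")
```

With these made-up numbers the gap is big enough to call a winner; shrink Picture B’s sales to 152 and the test would (rightly) tell you to keep collecting data.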
Creative alone has many components you can split test, from the basic image example to descriptions, headlines, calls to action, video, link descriptions and more. You can and should always be testing ad creative: firstly, because you can always be doing better, and secondly, because of something called “ad fatigue”, which can cause an ad to go from effective to ineffective as it saturates your audience and people become bored of, or blind to, it.
You can also split test your targeting. For example, instead of making one ad set on Facebook that targets an array of interests, make a separate ad set for each interest so you can identify which ones perform well and which perform badly, allowing you to scale your winners and cut your losers. This way you are optimizing your targeting by zeroing in on sales, as opposed to just rapid firing in their general direction.
On-page conversion optimization is an awesome topic. While things like speeding up your website fall under this umbrella, that can be achieved and measured simply by tweaking things while tracking how each change affects loading time with something like Google PageSpeed Insights. No need for split testing there.
But maybe you think that the current, boring “Buy Now” button color blends into the background too much, and you want to give it a bright orange makeover so it seemingly screams out to visitors. Easy (in concept): just have your website show half your visitors the boring old button and the other half your obnoxiously orange call to action, and see how each variant fares using a metric like conversion rate. Once you have enough data to know with sufficient certainty that one drives more sales than the other, you can make the commitment.
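One practical detail when splitting visitors: each person should see the same variant on every visit, or your data gets muddied. A common trick is to bucket visitors by hashing a stable visitor ID. A minimal sketch, with made-up variant names and IDs:

```python
import hashlib

VARIANTS = ("boring_buy_now", "obnoxious_orange")

def assign_variant(visitor_id: str) -> str:
    """Deterministically split visitors roughly 50/50.

    Hashing the ID means the same visitor always lands in the same
    bucket, even across sessions, with no state to store."""
    digest = hashlib.sha256(visitor_id.encode()).hexdigest()
    return VARIANTS[int(digest, 16) % len(VARIANTS)]

print(assign_variant("visitor-1024"))  # same answer every time for this ID
```

Split-testing tools like Google Optimize handle this assignment for you, but it’s worth knowing what’s happening so you don’t accidentally re-randomize people mid-test.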
Thanks to some tricky tracking techniques, it is now possible to know (for the most part) when an email has been opened and if a link within it has been clicked. This gives you the ability to track open rates and click through rates of emails… perfect data for split testing.
When it comes to newsletters, most email marketing platforms like MailChimp and AWeber come with testing features built in, so you can split test subject line variants to optimize open rates, and email body content to optimize for link clicks. You may want to look for this feature in any other scenario where you send emails to drive action, like abandoned-cart follow-up emails that draw almost-shoppers back to checkout.
The truth is, when you start to get that conversion optimization itch in your brain, you begin to see split testing opportunities everywhere. This doesn’t mean you should dive in head first and start testing everything. There’s a lot of methodology, considerations to be made and many ways you can trip up. It is actually quite an advanced topic that falls under data science and statistical analysis, so don’t think I am giving you everything you need to go out and split test like a pro. This article was just to get people up to speed on what split testing is (if they weren’t already aware), why it’s awesome and where they can and should apply it, so they can go and learn more about the topic and make data-driven decisions!
A common problem is that people go fishing when they should be hunting whales. What I mean is that instead of randomly testing everything, you should identify where the biggest disparities are most likely to be found and start there.
Split testing the font used in your site’s navigation menu will probably make little difference, as it is largely insignificant, while an image or video split test on an ad could show huge differences between variants and, as such, greater optimization opportunities.
Never forget, data is the essential ingredient in all of this. To achieve statistical significance and validate your results, you need enough data per variant, and you have to take this into consideration when testing. If your store is only getting 20 visitors per week, you are probably better off focusing your efforts on advertising and testing there, instead of measuring which banner design on the home screen leads to more purchases, because 20 people a week isn’t worth the setup time.
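To put “enough data” in perspective, there is a standard rule-of-thumb formula for the minimum sample size per variant. A sketch, using an invented baseline of a 3% conversion rate and a hoped-for lift to 4%, at 95% confidence and 80% power (the z-values 1.96 and 0.84 correspond to those two settings):

```python
import math

def sample_size_per_variant(p1, p2, z_alpha=1.96, z_beta=0.84):
    """Rough minimum visitors per variant needed to detect a
    conversion-rate change from p1 to p2 at 95% confidence / 80% power."""
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return math.ceil((z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2)

n = sample_size_per_variant(0.03, 0.04)
print(n)  # thousands of visitors per variant, not dozens
```

At 20 visitors a week, a test like this would take years to conclude, which is exactly why a low-traffic store should test its ads (where the impressions are) rather than its homepage.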
How much data you receive also dictates how many variants you can test. If you are going to be showing your ads to 30,000 people per day, then why not try six different variants? This could take the form of a multivariate test, combining variations of more than one asset: if you had three images and two descriptions, you could make every possible combination and have six variants to test. Multivariate testing can give you a much fuller picture, but once you start down that route you will need a lot more data.
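That 3 × 2 = 6 count is just a Cartesian product of your assets, which is easy to enumerate programmatically. A small sketch (the file and copy names are placeholders):

```python
from itertools import product

images = ["img_a.jpg", "img_b.jpg", "img_c.jpg"]
descriptions = ["short_copy", "long_copy"]

# Every image paired with every description: 3 * 2 = 6 variants
variants = [
    {"image": img, "description": desc}
    for img, desc in product(images, descriptions)
]
print(len(variants))  # 6
```

Note how quickly this grows: add two headlines and you’re at 12 variants, each needing its own share of that daily audience to reach significance.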
A good way to think of it: smaller tests mean quicker results, which you can iterate through faster, while larger tests on the same data source mean greater accuracy in optimization, at the cost of more time or money required to gather the data.
Just remember: it doesn’t matter how convinced you are that your ad copy is 10x better than competing copy written by a 10-year-old who is high on cough syrup with a fever. If the data says otherwise, then you are objectively wrong, because…
Opinions don’t matter, only data matters!
So stop thinking that you know best, or umming and ahhing over which choice to make. Put tests in place and make data-driven decisions.
If you are interested in contributing to our blog, wanna ask us something or just have something to say, then send us a message.