A/B Testing – How and Why to Use It
A/B testing (or split testing) is one of the simplest testing methods available. It can be used to test different designs, offers, subject lines, landing pages, send times and more. Almost every form of marketing can be A/B tested in one way or another, helping you define the best way forward for your campaigns.
Variables – How and What to Change
First you need to identify what change you want to make: which variable you want to test. Some of the variables you might want to test include:
- Would a new data segment / target audience achieve better conversions?
- Would a different offer produce better results?
- Would sending on a different day or time make a difference to opens?
- Would incorporating an image make the piece more powerful?
- Would a different message work better?
You will likely have your own ideas about what to change and test, such as those above, but it is also worth gathering client feedback on what they would like to see in your communications with them.
However, don’t change more than one thing at a time or you will end up with unreliable data; you will be unsure which change made that important difference to your results.
Data - Who to Send the Versions to
Make sure you send to a good-sized list – comparable to your usual campaigns where appropriate. Once you have determined your list you will then need to split this to form the two (A/B) segments.
To ensure your testing of the impact of the variable is effective, the best method is to split the list in half in a completely random manner. However, it is worth also considering:
- If you are looking to analyse the effect of your A/B test within different demographics (such as age or gender) you will need to first segment these demographics – then create an A list and a B list for each segment – and remember: don’t change more than one thing per test.
- You may want to hold back a third ‘control group’ of around 10% of your list, so you can report the difference in sales rates between Option A, Option B, and those who received no marketing piece at all – the control group.
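The splitting approach described above can be sketched in code. This is a minimal illustration, not part of the original article: the 10% control holdout and the 50/50 A/B split are the example figures mentioned above, and the function name and contact identifiers are hypothetical.

```python
import random

def split_for_ab_test(contacts, control_fraction=0.10, seed=42):
    """Randomly split a contact list into A, B, and control groups.

    The shuffle is seeded so the split is reproducible; in a live
    campaign you would typically let it vary.
    """
    shuffled = list(contacts)
    random.Random(seed).shuffle(shuffled)  # completely random ordering

    # Hold back the control group first (e.g. 10% of the list)
    control_size = int(len(shuffled) * control_fraction)
    control = shuffled[:control_size]
    remainder = shuffled[control_size:]

    # Split the remainder in half for versions A and B
    midpoint = len(remainder) // 2
    group_a = remainder[:midpoint]
    group_b = remainder[midpoint:]
    return group_a, group_b, control

# Illustrative list of 1,000 contacts -> 100 control, 450 in A, 450 in B
contacts = [f"contact_{i}" for i in range(1000)]
group_a, group_b, control = split_for_ab_test(contacts)
```

If you are segmenting by demographics first, you would simply run this split once per demographic segment, keeping to one changed variable per test.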
Analysis – Why we Test!
After your predetermined period has passed, it’s time to review the results. That period of time will need to be identified in accordance with your business, products and sales cycle. For example, you may want to report a week after the campaign send to get initial response, web visit and enquiry results; but then report again on conversions and sales revenue a month later – you will know how best to modify these timings for your specific business.
Once this information has been gathered for all recipients, you will be able to identify which version, A or B, was most successful in achieving your goals and objectives – whether those were more web visits, incoming calls, conversions, new social followers, or sales.
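The article does not prescribe a statistical method, but when comparing the two versions' conversion rates it is worth a quick check that the difference is larger than random noise would explain. One common approach, sketched below with illustrative numbers, is a two-proportion z-test:

```python
import math

def two_proportion_z(conversions_a, size_a, conversions_b, size_b):
    """z-statistic for the difference between two conversion rates.

    A large |z| (above ~1.96 for roughly 95% confidence) suggests the
    difference between A and B is unlikely to be chance alone.
    """
    p_a = conversions_a / size_a
    p_b = conversions_b / size_b
    # Pooled conversion rate under the "no real difference" assumption
    p_pool = (conversions_a + conversions_b) / (size_a + size_b)
    std_err = math.sqrt(p_pool * (1 - p_pool) * (1 / size_a + 1 / size_b))
    return (p_a - p_b) / std_err

# Illustrative figures: version A converted 60 of 1,000 recipients,
# version B converted 90 of 1,000
z = two_proportion_z(60, 1000, 90, 1000)
b_is_better = z < -1.96  # negative z means B's rate is higher here
```

With small lists or small differences, |z| will fall below the threshold, which is a signal to keep the test running or repeat it rather than declare a winner.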
Ultimately, the benefit of A/B testing is that it improves our future campaigns while testing theories that might otherwise never have left the meeting room.
Once you have tested and are confident that one approach delivers better results than another, you will likely opt to use this method with the majority of your contacts moving forward. However, because we can always improve our communications, you may then want to A/B test other variables too.
Testing should always be a regular and ongoing process. Many things change, including consumer and business opinions, and tests can be affected by seasonality and other factors outside of your control.
We all need to ensure that we stay up to date to be confident that we are producing the most effective campaigns and therefore the best possible ROI.
- Do you do a lot of A/B testing?
- What’s the most surprising result you’ve seen from an A/B test?
- What challenges do you encounter with the testing?