Lily McIlwain, content manager at Triptease
In February 2018, A/B testing platform Optimizely called time on their free A/B test tool in a move heralded by many as the death knell of indiscriminate split-traffic testing.
For many years, businesses and individuals have turned to A/B testing as the catch-all solution for conversion optimisation. The ‘scientific’ nature of the test has led many to believe that they were gathering ironclad evidence for the profitability of whatever change they were trying to introduce.
In reality, the majority of websites do not have the traffic or resources required to run a statistically significant A/B test. In fact, it would take the average independent hotel two years to run a statistically significant A/B test.
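To see why the timescale runs into years, it helps to sketch the arithmetic. The snippet below is an illustrative back-of-the-envelope calculation, not Triptease's methodology: it uses Lehr's rule of thumb for a two-sided test at 5% significance and 80% power, and the baseline conversion rate, target uplift, and daily traffic figures are hypothetical assumptions chosen to be typical of an independent hotel site.

```python
import math

def visitors_needed(baseline_rate: float, relative_uplift: float) -> int:
    """Approximate visitors needed *per variant* to detect the given
    relative uplift, using Lehr's rule of thumb for a two-sided test
    at 5% significance and 80% power: n ~= 16 * p * (1 - p) / delta^2,
    where delta is the absolute difference in conversion rates."""
    delta = baseline_rate * relative_uplift
    return math.ceil(16 * baseline_rate * (1 - baseline_rate) / delta ** 2)

# Hypothetical hotel site: 2% booking conversion, hoping to detect
# a 10% relative uplift (i.e. 2.0% -> 2.2%).
per_variant = visitors_needed(0.02, 0.10)  # ~78,400 visitors per variant
total = 2 * per_variant                    # ~156,800 visitors in total
days = total / 200                         # at ~200 visitors/day: over two years
```

With these (assumed) numbers, the test needs roughly 157,000 visitors in total, which at 200 visitors a day works out to more than two years of continuous testing.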
One of the most damaging misconceptions about A/B testing is that anybody with a website can perform one.
The truth is, running an A/B test on too little traffic is a waste of a hotelier’s valuable time – and worse, could lead them to make important business decisions based on seriously dodgy data.
Triptease’s latest white paper Spotlight on… A/B testing sets the record straight on why independent hotels are unlikely to be able to run statistically significant tests, why hotels and OTAs are on a very different playing field when it comes to testing, and why a 10% uplift in an A/B test doesn’t translate to a 10% uplift in your bottom line.
The report is designed to address the questions and concerns many hoteliers hold about A/B testing and takes a look at some of the most common myths surrounding the topic.
A/B testing: Mythbusting
Whilst a lot of people have heard of or come across A/B testing before, many have expectations of the process that differ significantly from what can actually be achieved in reality.
Given the limited time available to most hoteliers, it’s all the more important that the time they do have to spend on their website is spent efficiently and economically.
The level of misunderstanding surrounding A/B testing is hindering the ability of many hoteliers to make meaningful changes to their websites.
Myth #1: I should A/B test every cosmetic change I make to my website
The example often given of an A/B testing hypothesis is something like “my conversion rate will increase if I change the colour of this button from red to green”.
While changes to the superficial appearance of certain parts of your website do indeed have an impact on the experience of users, they are unlikely to be revenue drivers in the same way as making fundamental changes to the structure, layout or composition of your site.
Analytics company Qubit published a meta-analysis of thousands of their experiments in 2017.
Their results showed that changes grounded in behavioural psychology – social proof, abandonment recovery, etc – had the biggest average impact across all of their tests.
Qubit described these changes as those which “alter the users’ perception of the product’s value.”
So, a feature like Triptease’s Price Check falls into this category (as it alters the user’s perception of where they can obtain the best price), whereas a button colour change does not.
Qubit’s meta-analysis also demonstrated that cosmetic changes “do not constitute an effective strategy for increasing revenue”.
Though often the subject of A/B tests due to the relative ease with which businesses can implement them on their website (cosmetic changes often don’t require the input of developers), Qubit suggests that “the probability that these simple UI changes have meaningful impact on revenue is very low”.
Rather than looking to colour or wording as change agents for your business, Qubit recommends just “choosing a design and sticking with it based on preference or through a qualitative process.”
Myth #2: My test has shown a 10% uplift, so I’ll see a 10% uplift in revenue this year
Unfortunately, this is not the case. The results of an A/B test only tell the tester whether they should reject their null hypothesis – making this change will have no impact on my conversion rate.
The results do not tell you the exact amount by which your change will increase your chosen metric, only whether it is likely to be increased at all.
If your test is positive and you accept your alternative hypothesis – making this change will have a positive impact on my conversion rate – the actual uplift could well be higher or lower than the observed uplift.
It is also possible that the actual uplift will vary with time.
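A confidence interval makes this concrete: an observed uplift is a point estimate sitting inside a (often surprisingly wide) range of plausible true effects. The sketch below uses the standard normal approximation for the difference between two proportions; the visitor and conversion counts are hypothetical assumptions, not figures from the report.

```python
import math

def uplift_interval(conv_a: int, n_a: int, conv_b: int, n_b: int, z: float = 1.96):
    """95% confidence interval (normal approximation) for the absolute
    difference in conversion rate between variant B and control A."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    se = math.sqrt(p_a * (1 - p_a) / n_a + p_b * (1 - p_b) / n_b)
    diff = p_b - p_a
    return diff - z * se, diff + z * se

# Hypothetical test: 5,000 visitors per arm; control converts 100 (2.0%),
# variant converts 110 (2.2%) -- an observed relative uplift of 10%.
low, high = uplift_interval(100, 5000, 110, 5000)
# The interval spans roughly -0.36 to +0.76 percentage points: the true
# effect behind that "10% uplift" could plausibly even be negative.
```

In other words, a headline 10% observed uplift on modest traffic is compatible with a true effect anywhere from a small loss to a much larger gain.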
Myth #3: I read about a hotel who ran an A/B test on their ‘Book Now’ button and saw a 15% uplift in conversion. I’m going to make the change they made to achieve the same uplift
It’s never wise to assume that one specific change made by another hotel will work in the exact same way for you.
While we wholeheartedly recommend speaking to your peers, asking for (and giving) recommendations, and sharing advice, this should be used to inform your own thinking and strategy rather than as a blueprint for one-size-fits-all success.
There are any number of reasons why a successful change on one website will not be successful on another: the website structure, the expectations of the audience, the type of hotel, even the timing of the change.
If you have the necessary scale and capability to conduct a test on the suggested change then this should always be your first port of call.
If you don’t, there is nothing to stop you trying it out (with a knowledge of the potential risks) if you really think it is the right decision for your audience.
Just don’t be surprised if your website doesn’t experience exactly the same results as the example you read about.
Triptease will be running a free webinar on 25th April to explore the questions raised by the report – registration is open now.