At Luminary we advocate for your website being an asset that you continually nurture and grow. After discovery and build come learning, optimisation and growth, hence our agency framework: Explore, Build and Grow. The Growth phase of our process is where we work with existing clients, analysing their digital performance and optimising their sites through research, testing and upgrades.
In our recent blog article ‘What’s next after Google Optimize’s Sunsetting’ we outlined some of the options for businesses to consider when assessing the transition options for their future A/B testing. While choosing the right tool to support your testing requirements is important, there are other relevant factors that will ensure you get the most out of your testing. To ensure you’re across the key considerations, we’ve caught up with some of Luminary’s digital experts to discuss the different perspectives you need to consider when conducting A/B testing in 2023.
Analytics perspective: Sarah Crooke, Data Analyst
As a digital analytics expert, I can’t help but love the richness that experimentation data can bring to a website. The data itself tells us that organisations conducting upwards of 120 tests a year outperform their competitive set, and it makes sense that verifying your customer journeys and tweaking your site is a great way to achieve an edge. With the sunsetting of Google Optimize, it’s a good opportunity not only to assess the other tools that could work for your organisation but also to check back in on what makes experimentation successful and focused.
There are many reasons to experiment on your website. These can include:
- to improve the user flow of the website and help visitors achieve their goal quickly
- to bring attention to features that customers might not know about
- to increase conversion value by up-selling and cross-selling.
These can be accomplished by testing out new ideas on smaller audiences to understand their value before rolling out to your broader target customers. Not only will this allow you to attribute an actual value to changes you make to the site, you will also get the chance to capture any issues with a new feature before it’s rolled out fully.
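Rolling a change out to a smaller audience first is usually done by deterministically bucketing users. As a minimal Python sketch (the function name and the 10 percent split are illustrative, not any particular testing tool’s API):

```python
import hashlib

def assign_bucket(user_id: str, experiment: str, treatment_share: float = 0.1) -> str:
    """Deterministically assign a user to 'treatment' or 'control'.

    Hashing the user ID together with the experiment name gives every
    user a stable position in [0, 1); users below `treatment_share`
    see the new feature, everyone else sees the original.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    position = int(digest[:8], 16) / 0xFFFFFFFF
    return "treatment" if position < treatment_share else "control"
```

Because the bucket is derived from the user ID and the experiment name, each visitor keeps the same experience across visits, while different experiments get independent splits.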
You can also test a simplified version of a new idea before investing heavily in development. For example, you may wonder if having reviews on your products is worth investing in. To test this, you could add some static reviews on popular products and assess if that increases conversion value. If the test doesn’t win (i.e. the original performed better) you have actually saved money by not investing in this feature.
Regardless of the experiment, it is important to have a clearly defined objective. An easy way to do this is to use a template for creating your tests: ‘By changing X, we expect Y, measured by Z’, where X is what you will change, Y is the behaviour you are hoping customers will display, and Z is a tangible measurement, not just ‘engagement’ or warm fuzzy feelings.
So, for the reviews example, it would be:
By changing the product page to include reviews
We expect that customers will have a greater trust in the product they are buying
Measured by ‘add to carts’ and an increase in purchases of products with reviews.
Using this template helps avoid unnecessary tests and sets a clear goal for all your tests to measure success by.
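If you run many tests, it can help to keep each hypothesis in this X/Y/Z shape as structured data in your testing backlog. A small illustrative Python sketch (the class and field names are our own, not from any testing tool):

```python
from dataclasses import dataclass

@dataclass
class Hypothesis:
    change: str       # X: what you will change
    expectation: str  # Y: the behaviour you hope customers will display
    measure: str      # Z: a tangible measurement

    def statement(self) -> str:
        """Render the hypothesis in the 'By changing X...' template."""
        return (f"By changing {self.change}, we expect {self.expectation}, "
                f"measured by {self.measure}.")
```

For the reviews example, `Hypothesis("the product page to include reviews", "greater trust in the product", "add-to-carts and purchases of reviewed products")` renders the full hypothesis statement on demand.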
With this approach in mind, the sunsetting of Google Optimize leaves many brands on the hunt for new tools to support their testing. While there are several options available (as outlined in our recent blog article) there are a few must-haves from an analytics perspective:
- Being able to feed experiment data into other analytics programs, such as Google Analytics
- A tool that can give you in-depth details about the results, to ensure confidence in decision making
- An option to complete other types of tests, such as personalisation and multivariate testing, so it can grow with your needs.
With these considerations and the right tool at your disposal, A/B testing can turn your insights into growth for your business.
SEO perspective: Shayna Burns, Senior SEO Specialist
Undoubtedly A/B testing can be an effective process for fine-tuning your content. Who would not want to gain some extra percentage points of conversion through conveying your messaging in the most optimal form? However, there are some key considerations to take into account from an SEO perspective to ensure your testing doesn’t result in a drop in your SEO performance.
To expand on this, it’s critical to understand where A/B testing can come undone. Essentially when we are doing an A/B test, or multivariate test, we are duplicating a page and then comparing user results from the two versions. However, there is a risk of confusing search engines if not executed well, which can mean a hit to your rankings and, in some cases, keyword cannibalisation.
There are two key reasons Google can mistakenly penalise you when you are conducting your A/B testing.
- Cloaking - Cloaking occurs when the content Google sees (your primary page) differs from what users see (your test variants). This goes against Google’s Webmaster Guidelines, and this can have a negative impact on your ranking, with Google potentially demoting or completely removing your content from its index.
- Page Duplication - On the flip side of cloaking, if the pages are too similar, then you can run into duplication issues for which you can be penalised. Google can get confused as to which page should be indexed and ranked, resulting in none of your versions performing well.
So how can you ensure these potential issues don’t become a big downside to your testing? Well, essentially search engines are looking for stability. There are a few key ways you can help Google keep track of your key pages and keep your rankings intact:
- Ensure Google is served the primary content page you want to rank. You can do this by including this URL (and not its variants) in your XML sitemap and internal linking structure.
- Ensure that the variance between your pages isn’t too stark, so you can avoid running into cloaking issues.
- When working with test pages on different URLs, add another layer of protection by using canonical tags that reference the primary URL. This is a safer approach than blocking test pages, which could itself lead Google down a cloaking path.
- If your testing involves redirects, using a 302 redirect can further inform Google that the redirect to the test page is temporary and that it should not remove your primary page from the index.
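To make the last two points concrete, here is a minimal, framework-free Python sketch of the canonical tag and temporary redirect involved (the URLs and function names are illustrative):

```python
PRIMARY_URL = "https://example.com/products"    # illustrative primary page
VARIANT_URL = "https://example.com/products-b"  # illustrative test variant

def variant_head() -> str:
    """The test variant's <head> should point search engines back to the
    primary page via a canonical tag."""
    return f'<link rel="canonical" href="{PRIMARY_URL}">'

def route(in_test: bool) -> tuple[int, str]:
    """When routing users into the test, use a temporary (302) redirect,
    not a permanent (301) one, so the primary page stays indexed."""
    if in_test:
        return 302, VARIANT_URL
    return 200, PRIMARY_URL
```

However your stack actually serves the pages, the two signals to preserve are the same: a 302 status on the way into the variant, and a canonical reference from the variant back to the primary URL.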
If you take these measures, along with ensuring you aren’t running your test at the same time as any major site changes, then A/B testing can be a fantastic tool to further your content optimisation.
Content perspective: Victoria Whatmore, Content Strategist
Another facet to testing that is often overlooked is copy testing: testing the comprehension of your content. Design does a marvellous job of helping steer, highlight and prompt engagement but research into how well your content is tracking is rarely undertaken.
Alongside usability testing there are three main ways you can test copy: cloze, highlight and recall-based testing.
Cloze test
Taking a short sample of your text, remove specific words and ask participants from your target audience to fill in the gaps.
To set up the test you remove every Nth word. According to the Nielsen Norman Group (NN/g), a typical cloze test uses N=6, but to make the test easier you can always use a higher N value, which might be advisable for a more complex piece of copy.
Working on their own, participants are then asked to fill in the blanks using their prior knowledge of your product or service and the subject matter.
To score the test, divide the number of correct answers by the number of omitted words to get a percentage. For example, 12 correct answers out of 30 blanks would give you a score of 40 percent. Ideally, you want a score of 60 percent or above to show that your copy makes sense to your readers.
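The setup and scoring described above are simple enough to script. A small Python sketch (the function names are our own):

```python
def make_cloze(text: str, n: int = 6) -> tuple[str, list[str]]:
    """Blank out every nth word; return the gapped text and the answers."""
    words = text.split()
    answers = []
    for i in range(n - 1, len(words), n):
        answers.append(words[i])
        words[i] = "_____"
    return " ".join(words), answers

def cloze_score(correct: int, blanks: int) -> float:
    """Percentage of blanks filled correctly; aim for 60 percent or above."""
    return 100 * correct / blanks
```

So `cloze_score(12, 30)` returns the 40 percent from the example above, below the 60 percent threshold you’d want to see.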
Highlight test
This is a fairly straightforward test where you ask participants to highlight the text that is clear to them and the text that isn’t.
Participants are asked to read the text through once, then to reread the passage and mark it up using two different colour highlighter pens: yellow for confusing or unnecessary content and green for useful or helpful information.
It is useful to moderate this test as you can ask questions of your participants as you go along to gain qualitative insights to add to your score.
To score the test, tally the sections most frequently marked yellow and green across participants.
This test lends itself well to longer-form texts.
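Scoring a highlight test amounts to counting how often each section was marked in each colour. An illustrative Python sketch, assuming each participant’s markup is recorded as sentence numbers per colour:

```python
from collections import Counter

def highlight_tally(markups: list[dict]) -> dict:
    """Aggregate highlight-test results across participants.

    Each participant's markup maps a colour ('green' or 'yellow') to the
    set of sentence numbers they highlighted. The tally counts how many
    participants marked each sentence in each colour, so the most
    frequently marked sections stand out.
    """
    tally = {"green": Counter(), "yellow": Counter()}
    for markup in markups:
        for colour, sentences in markup.items():
            tally[colour].update(sentences)
    return tally
```

Sentences that most participants marked yellow are your rewrite candidates; consistently green sentences show you what is already working.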
Recall-based test
As the name suggests, this test assesses how memorable your content is and how well it was understood. We know that readers often scan website copy, so if you truly want to see how well they have comprehended your content, a recall-based test is perfect. Just remember to reassure your participants that you are not testing them; you are assessing how well your copy is working (did you get your word choice right?).
To set up the test, choose a piece of copy that your readers might be finding confusing, for example instructional copy on how to engage with a tool or an example of how an application works. Determine the questions you want to ask about that text, for example: after downloading the software, what was the next step?
Ask your participants to read through the text, giving them as much time as they need. Then, removing the text, ask them your pre-determined questions. If your participants are having difficulty recalling the information it may be because your copy is not concise, which indicates an edit or even a complete rewrite may be needed.
You can follow up this activity by giving them the text again, going through it with them and asking them questions as to why they found a passage confusing or how they would make the information clearer.
As with all user testing, determine your goals upfront. This will not only help you decide which test is most suitable, but also clarify what you are focusing on; in the copy tests detailed here, that focus is comprehension of your content.
UX perspective: Josh Smith, UX Director
Experimentation plays an important role in the UX toolkit. Only once a product is built can we see quantifiable volumes of traffic, giving us the ability to truly evaluate and optimise performance. This step in the product life cycle is unfortunately too often overlooked, but, with diligence, it can yield a striking uplift not just in performance but also in the user experience.
From a UX perspective, we’re always interested in human behaviour and how behavioural norms can be used to influence experimentation. Borrowed from behavioural economics, this craft of influence through design has been aptly named Behavioural Design. Its purpose is to identify blockers to desired outcomes and insert benefits to remove or overcome them, a process whose methods and results align closely with those of A/B testing. Most often these blockers and benefits are biases, and we all carry over 180 of them: biases that stop us from taking an action (e.g. compassion fade and effort bias) and biases that make an action feel effortless or worthwhile (e.g. defaults and social proof). Whether we decide to experiment with them or not, these biases are already at play, affecting A/B testing results.
To get the best results from experimentation, we recommend auditing the experience we’re trying to optimise, prioritising the blockers and benefits we see having the greatest impact, and one by one, experimenting with singular changes. To learn more about this process in action, read another of our posts on behavioural design.
Strategy perspective: Anna Potter, Digital Strategist
The term ‘growth mindset’ has been bandied around the corporate world so much in the last decade that it can be easy to become cynical about its value. But never has the term been more apt than when applying it to the evolution of your digital channels. The fluid nature of digital means that if you’re not learning, tweaking and optimising, you’re being left behind. One method of constantly testing the assumptions of your site is A/B testing. As mentioned by my colleagues, A/B testing can be a great string in your bow of continuous optimisation strategies to ensure you are validating (or disproving) what you ‘thought’ you knew about your users and their needs.
To approach A/B testing strategically, rigorous prioritisation should be applied; otherwise you risk either being paralysed by indecision (where on earth do I start!?) or running head first after low-value or overly difficult ideas. After you’ve brainstormed some potential opportunities from what you have learnt about your users, there are a multitude of prioritisation methods you can employ to help you rank your ideas, often assigning a numerical score based on the impact, importance and ease of each change.
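One simple version of such a scoring method, multiplying impact, importance and ease, can be sketched in Python (the field names and the 1–10 scale are illustrative, not a specific framework):

```python
def prioritise(ideas: list[dict]) -> list[dict]:
    """Rank test ideas by a simple impact x importance x ease score.

    Each idea rates its impact, importance and ease from 1 (low) to
    10 (high); higher products float to the top of the backlog.
    """
    for idea in ideas:
        idea["score"] = idea["impact"] * idea["importance"] * idea["ease"]
    return sorted(ideas, key=lambda i: i["score"], reverse=True)
```

Whichever scheme you choose, the point is the same: a shared, numeric ranking takes the guesswork out of deciding which test to run next.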
Focusing on areas of your site that are most strategically important to your business is also essential to ensure value and impact from your testing. For example if a key tenet of your digital strategy is to increase subscribers and enquiries, then focus your A/B testing efforts on your path to subscription (testing out wording, timing of prompts, imagery and CTAs) and the layout and requirements of your enquiry form. By being purposeful and strategic in your approach to online testing, you can ensure that your efforts will never be wasted and the performance of your site is clearly aligned with your business goals.
By leveraging the insights, tips and guidance provided by Luminary’s Analytics, SEO, Content, UX and Strategy teams, you have the potential to supercharge your site's continuous improvement plans and avoid some unnecessary headaches in the process. We hope it’s abundantly clear that any established site can benefit from testing and tweaking its content, design and navigation – so what’s going to be first on your testing priority list?
Want to know more?
We have a whole team of experts who would love to talk to you. Get in touch.