What Is Split Testing? A Guide To Becoming A Split Testing Expert

Marketers, salespeople, and industry experts tend to agree: one of the most straightforward ways to increase your post-click landing page conversion rate also happens to be one of the most effective.

Due to its effectiveness and ease of use, split testing has become an increasingly popular optimization method for companies all over the world.


What is split testing?

Split testing, also known as A/B testing, lets marketers compare the performance of two versions of a web page, a control (the original) and a variant, with the aim of increasing conversions.

If there's just one difference between the two pages, the tester can identify the cause of any performance change. For example:

Split testing's razor-sharp precision

Consider testing whether a new headline on your post-click landing page would result in a higher conversion rate. The alternative title says, “Learn The One Secret Method The Experts Use To Generate More Leads,” while the original reads, “How To Generate More Leads For Your Business.”

You discover that the variation generates more conversions than the original after driving traffic to both. You can be sure that the headline was what caused the increase because there is only one difference between the two pages.

In a perfect world, every split test would be carried out this way: one change at a time. Unfortunately, the world we live in isn't perfect.

The practical split test method

The main issue with the ideal split test strategy of changing just one element per test is that each test frequently requires tens of thousands (or even hundreds of thousands) of visits before it can be called (more on why later).

So there's a more practical way to go about it. Consider this…

You make the following adjustments to your post-click landing page to increase conversions:

  • The headline changed from "How To Generate More Leads For Your Business" to "Learn The One Secret Method The Experts Use To Generate More Leads."
  • The call-to-action button changed from "Submit" to "Show Me the Secret."
  • The form shrank from four fields (name, email address, phone number, and company name) to two fields (name and email address).

You run traffic to both and discover that your new variation, with all of the above changes, produces 8% more conversions than the original. Hooray! Success!

But… hold on a sec… you don't know what caused that conversion lift. If you had run a multivariate test, or tested each element separately, you could explain it with confidence. Drats.

If you think about it, do you really care?

Many companies don't. They don't care why or how it happens; all that matters to them is increasing conversions. And "change only one element per test" isn't practical advice when you're short on time, traffic, or staff, or when you're performing a significant redesign.

But keep in mind that conducting a split test isn't as simple as "bringing equal traffic to each page and making a change." Whichever approach you choose, practical or pinpoint-accurate, there's a whole lot more to it.
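For starters, "equal traffic" means visitors should be assigned to each version at random, and consistently, so a returning visitor always sees the same page. Testing platforms handle this for you, but here's a minimal sketch of the idea in Python (the visitor ID and experiment name are hypothetical):

```python
import hashlib

def assign_variant(visitor_id: str, experiment: str = "headline-test") -> str:
    """Deterministically bucket a visitor into 'control' or 'variant'.

    Hashing the visitor ID keeps the assignment stable across visits,
    so the same person never bounces between versions.
    """
    digest = hashlib.sha256(f"{experiment}:{visitor_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100                    # a number from 0 to 99
    return "control" if bucket < 50 else "variant"    # 50/50 split

print(assign_variant("visitor-12345"))
```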

What to watch out for before beginning split testing

Every marketer wants to increase conversions, so when they learn that split testing can achieve that for web pages, they jump right in. However, because of their haste, they commit the following mistakes:


They test without a data-driven reason

If you Google "split testing case study," you'll turn up numerous blog posts claiming that a particular button color generates the most conversions, or that your form should have a specific number of fields.

It makes sense that you would try to apply these improvements to your post-click landing page since, if they were successful for them, they might be successful for you as well.

However, there's an issue: their company, post-click landing page, product, and target market are all different from yours. Do yourself a favor and stop copying tactics just because they worked for someone else. Stop right now.

Instead, develop your own test ideas from your own data. Customer interviews, analytics software, and heat mapping tools are all excellent for figuring out where your page is falling short.

From those, you can form hypotheses about changes that might increase conversions.

They follow best practices mindlessly

For every article declaring the best button color or the ideal number of form fields, there are two more disputing it.

One marketer will exclaim, "Wrong! Orange is powerless against red!"

Another will counter, "Silly! Green clearly beats both!"

And the funny thing is that everyone is so preoccupied with debating who is incorrect that they fail to notice that, well, they are all correct (that is, barring some methodological mistake in testing).

Marketer 1 is correct that red is the best color for her post-click landing page if she tested a red button against an orange one on the page and discovered the red one produced more conversions.

Marketer 2 should use a green button if he tested it against a red one on his post-click landing page and discovered that green outperformed red.

Do the findings of Marketer 2's experiment prove that green is a superior button color to red?

Not at all. Marketer 1 might well test red against green and find that red still generates more conversions on her post-click landing page.

Factors that differ from business to business, like your audience and the color scheme of the rest of the page, have a significant impact on conversions.

So you might not see the same results someone else did. That's why all of your tests should be grounded in your own data.

They don't follow best practices at all

While you shouldn't blindly follow best practices, ignoring them entirely can be just as detrimental. There are a few basic principles every post-click landing page should adhere to.

Because we already know that navigation decreases conversion rate by providing prospects with several exits off your page, it would be a waste of time to test variants of your post-click landing page with and without navigation.

Similarly, you wouldn't test a call-to-action button with a blue background on a page with a blue background, because it wouldn't stand out the way a contrasting hue would.

These are widely accepted design best practices that, 99 times out of 100, aren't worth your time and effort to test. Which brings us to the next mistake:

They test things that are unlikely to produce lift

Google once tested 41 different shades of blue to see which one had the greatest effect on sales. Could you do the same?

Absolutely. But should you?

Ideally, no. While companies like Google have entire departments devoted to tests like this, and the funding to support them, most companies don't.

In fact, according to CXL's 2016 State of the Industry Report, just 53% of companies that practice conversion rate optimization have a dedicated budget for it. And the majority of conversion optimizers work at companies with yearly revenues under $100,000.

Unless you have Google-scale resources, frivolous tests that hunt for the ideal color tone are a waste of your company's time and money. Focus instead on big changes with the potential to meaningfully move your conversion rate, which brings us to the next major mistake.

They believe split testing will result in the greatest increase in campaign conversions.

Here at flashreviewz, we're big fans of split testing because it consistently helps us increase conversion rates, but we're even bigger fans of whichever optimization technique has the most positive effect on your bottom line.

Split testing is merely one component of your conversion equation, and sometimes, another optimization strategy will result in a greater increase in conversions. Derek Halpern does a fantastic job of clarifying what we mean in detail:

“If I get 100 people to my site, and I have a 20% conversion rate, that means I get 20 people to convert… I can try to get that conversion rate to 35% and get 35 people to convert, or, I could just figure out how to get 1,000 new visitors, maintain that 20% conversion, and you’ll see that 20% of 1,000 (200), is much higher than 35% of 100 (35).”
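Halpern's arithmetic is easy to sanity-check. A quick sketch using his numbers:

```python
def conversions(visitors: int, rate: float) -> float:
    """Expected conversions from a given amount of traffic."""
    return visitors * rate

# Lever 1: raise the conversion rate from 20% to 35% on the same traffic.
print(conversions(100, 0.35))   # 35.0
# Lever 2: keep the 20% rate but grow traffic tenfold.
print(conversions(1000, 0.20))  # 200.0
```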

Changing your post-click landing page won't always produce the biggest boost in conversions. Sometimes increasing your traffic will. Sometimes improving your advertising campaigns will.

What we’re trying to say is, before you start split testing, make sure your campaign doesn’t have any other flaws that need to be fixed.

Now that you’re aware of some frequent mistakes to avoid, it’s time to assess your readiness for split testing. Is the rest of your campaign running smoothly?

Okay, let’s discuss how to begin running a split test.

How to run post-click landing page split tests

Here are the steps to follow when running a split test, from beginning to end.

1. Identify the reason for the test.

Your reason for split testing should be data-driven, as discussed earlier. Did your Google Analytics data reveal that visitors stay on your page for an average of only 5 seconds before leaving?

Maybe your headline and featured image could do a better job of grabbing their attention. Or maybe visitors feel they've been misled, and you need tighter message match between your advertisement and your post-click landing page.

2. Formulate a hypothesis.

Form a hypothesis based on that reason. What exactly are you trying to improve?

For the example above, your hypothesis might read: "After observing that the average post-click landing page session is only 5 seconds, we believe that a more compelling headline will get visitors to read the body copy and spend more time on the page, ultimately leading to more conversions."

Testing lets you accept or reject that hypothesis.

3. Determine the sample size.

You must reach "statistical significance" before you can call your test done. In practice, this dictates the number of visits each of your test pages (control and variation) needs before you can be confident in your findings.

In most disciplines, including conversion optimization, 95% is the generally accepted significance level. Roughly speaking, it means there's only a 5% chance your results are due to random chance.

With a 95 percent degree of significance, you can be 95 percent certain that the modifications you made to your post-click landing page caused the change in your conversion rate.

You can calculate the required sample size by hand, but it takes some involved arithmetic. Fortunately, for those of us without the statistical expertise or the time, there are tools that do it for us.

Optimizely's calculator works well for this. To get an accurate sample size, you'll need to enter the following:

  • Baseline conversion rate: the conversion rate of your original (control) page. The higher it is, the fewer visits you'll need before the test is done.
  • Minimum detectable effect: the smallest relative change in conversion rate you want to be able to detect. With a minimum detectable effect of 20%, you can only be confident at the end of the test that a rise or fall in conversion rate larger than 20% was caused by your changes. The lower you set it, the more visits your test will require.

Optimizely's calculator also lets you raise or lower the statistical significance level, though going below 95% isn't advised.

If you're going to base business decisions on your results, you can't afford to rely on shaky data. The higher your significance level, the more visits you'll need before you can call your test.
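If you're curious what such a calculator does under the hood, here's a minimal sketch of the standard two-proportion sample size formula in Python. The 5% baseline, 20% minimum detectable effect, and 80% statistical power are example values, not recommendations:

```python
from math import ceil, sqrt
from statistics import NormalDist

def sample_size_per_variant(baseline: float, mde: float,
                            alpha: float = 0.05, power: float = 0.80) -> int:
    """Visits needed per page to detect a relative lift of `mde`.

    baseline: control page conversion rate (0.05 means 5%)
    mde:      minimum detectable effect, relative (0.20 means a 20% lift)
    alpha:    1 - significance level (0.05 corresponds to 95%)
    """
    p1 = baseline
    p2 = baseline * (1 + mde)                  # the rate you want to detect
    z_a = NormalDist().inv_cdf(1 - alpha / 2)  # 1.96 for 95% significance
    z_b = NormalDist().inv_cdf(power)          # 0.84 for 80% power
    pooled = (p1 + p2) / 2
    n = ((z_a * sqrt(2 * pooled * (1 - pooled))
          + z_b * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2) / (p2 - p1) ** 2
    return ceil(n)

# 5% baseline, 20% minimum detectable effect: roughly 8,200 visits per page
print(sample_size_per_variant(0.05, 0.20))  # 8159
```

Note how quickly the number grows: halving the minimum detectable effect to 10% roughly quadruples the required traffic, which is why the "one change at a time" ideal is so expensive.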

Once you have your sample size…

4. Make your modifications.

Update the headline if you’re modifying it. Change the featured image if that is your hypothesis. Without the aid of IT, platforms like Instapage make it simple to quickly change the elements of your page for split testing.

Make sure your original post-click landing page doesn't change; if it does, your testing baseline won't be accurate.

5. Eliminate confounding factors

Unfortunately, your test isn't conducted in a vacuum. There's always a chance that some outside factor will have a significant impact on your test and produce a misleading result.

Make sure factors like traffic sources and referring advertisements are the same for both pages, and do your best to account for anything else that could influence your test.


Remember that while it's ideal to deal with these up front, you'll need to watch for them throughout the test. Unforeseen factors can compromise your results.

6. Verify that everything is functional.

Before launching your test, look over everything. Does your post-click landing page display consistently across all browsers? Is the CTA button functional? Do all the links in your advertisements work?

It's crucial to QA every component of your campaign before launch to make sure nothing compromises the validity of your results.

7. Drive traffic to your pages

Now it's time to drive traffic. As noted earlier, make sure the traffic comes from the same sources for both pages, unless traffic sources or ads are the thing you're split testing.

Also, pay attention to where that traffic comes from. The way a traffic source can bias your test results is known as the "selection effect." Peep Laja of CXL explains:

“Example: you send promotional traffic from your email list to a page that you’re running a test on. People who subscribe to your list like you way more than your average visitor. So now you optimize the page (e.g. post-click landing page, product page, etc.) to work with your loyal traffic, thinking they represent the total traffic. But that’s rarely the case!”

Once your traffic sources are set, run your test until both pages (control and variation) reach the sample size you calculated during pre-testing. If you hit that number in less than a week, keep the test running anyway.

Why?

Because conversions are heavily influenced by the day of the week. Your audience will be more responsive to your marketing messages on some days than on others, so your test should cover at least one full weekly cycle.
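To see how that interacts with your sample size, divide the visits you need per page by your daily traffic per page, then round up to whole weeks. A small sketch, using a hypothetical traffic figure:

```python
from math import ceil

def test_duration_days(sample_per_page: int, daily_visitors_per_page: int) -> int:
    """Days needed to reach the sample size, rounded up to full weeks."""
    days = ceil(sample_per_page / daily_visitors_per_page)
    return max(7, ceil(days / 7) * 7)  # always run at least one full week

# 8,159 visits per page (from the earlier calculation) at 600 visits/day
print(test_duration_days(8159, 600))  # 14
```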

It’s time to review the results once you’ve reached your sample size, conducted the test for at least a full week, and taken into consideration any confounding factors that can taint your data.

8. Examine and improve

How did your proposed change perform? Did it produce a significant lift? A small one?

Keep in mind that if the lift is smaller than your minimum detectable effect (20% in our example), you can't be certain your changes caused it.
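If you'd like to verify significance yourself rather than take a dashboard's word for it, the standard approach is a two-proportion z-test. Here's a minimal sketch; the visitor and conversion counts are invented for illustration:

```python
from math import sqrt
from statistics import NormalDist

def two_proportion_p_value(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """Two-sided p-value for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))

# Hypothetical results: control converts at 5.0%, variation at 6.1%
p = two_proportion_p_value(conv_a=420, n_a=8400, conv_b=512, n_b=8400)
print(f"p = {p:.4f}")  # below 0.05, so significant at the 95% level
```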

If you produced a lift bigger than that, congratulations! You're done optimizing now…

Not.

Your post-click landing page may be better than it was before, but that doesn't mean it's the best it can be. There's always something to test. Every campaign has its weaknesses.

And don't worry if your variation produced no lift, or even performed worse. You didn't mess up; you've just learned about an element that doesn't affect your page's conversion rate the way you thought. Move on and keep testing.

Do you currently use split testing?

We may have just taken something that seemed straightforward and made it look far more difficult than it actually is.

Fortunately, split testing solutions let you rapidly generate versions of your post-click landing pages and evaluate them all in one place on a sophisticated analytics dashboard.
