Six Deadly Mistakes Of Web Page Testing & Tuning
The importance and impact of online testing on your conversion rates cannot be overstated. Yet as powerful as testing can be, it is a double-edged sword that can actually set you back if you are not careful. Poorly designed tests can take years to complete, and even worse, they might not provide concrete insights into which elements will convert more visitors into customers. Here are six of the most common mistakes I see clients make while conducting online testing.
Using the wrong type of test (A/B vs. multivariate test)
A/B tests allow you to test two versions (or more) of an entire page against each other to see which one works best. If you are testing three different versions of a page, then you are conducting an A/B/C test and so on. A/B tests are especially useful because they allow you to test major design decisions by putting two or three completely different designs against each other to find out which one of these works best with your visitors.
Multivariate tests (MVT) allow you to test multiple elements of a single page at the same time. So you are able to test different headlines, different images, or different colors on a single page. Although it may seem like testing different elements on one page is less complex than testing one or more pages against each other, the opposite is true. You can think of A/B tests as simplified versions of multivariate tests.
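To make the combinatorics concrete, a full-factorial multivariate test must cover the product of every element's variant count, which is why MVTs grow complex so quickly. The element names and variants below are purely illustrative, not tied to any particular testing tool:

```python
from itertools import product

# Hypothetical page elements and their variants (illustrative only)
elements = {
    "headline": ["Save Time", "Save Money", "Free Trial"],
    "image":    ["product_shot", "customer_photo"],
    "button":   ["green", "orange"],
}

# A full-factorial MVT must test every combination of variants
combinations = list(product(*elements.values()))
print(len(combinations))  # 3 * 2 * 2 = 12 scenarios
```

Adding just one more element with three variants would triple the count to 36, which is how page tests balloon past what your traffic can support.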
One of the first questions you must ask yourself is whether you should start with an A/B or multivariate test for a particular page. And while multivariate tests are indeed powerful, they require a considerably larger number of visitors to the test page for you to reach a decisive conclusion.
So, should you go with A/B or multivariate testing for your particular case? There is no universal answer. If you are starting out with a brand-new landing page or have a limited number of visitors, we usually recommend conducting an A/B experiment first. However, if you already have an existing page, we might conduct a small multivariate test (fewer than 12 or so different scenarios). The goal of this type of test is to learn which elements have the highest impact on conversion. Based on the results of that MVT, the team decides whether to conduct further MVT or A/B tests.
A successful test will take into account the number of visitors and number of conversions on a page (not the site). Although your site may have 30,000 visitors a month, your test page may have as few as 500 visitors and one conversion per month.
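How many visitors a decisive test needs can be sketched with a standard two-proportion sample-size formula (95% confidence, 80% power). The baseline and target conversion rates below are hypothetical, chosen to show why a page with 500 visitors a month cannot support much testing:

```python
from math import ceil

def visitors_per_variant(p_base, p_target, z_alpha=1.96, z_beta=0.84):
    """Rough sample size per variant for detecting a lift from p_base
    to p_target with ~95% confidence and ~80% power."""
    variance = p_base * (1 - p_base) + p_target * (1 - p_target)
    return ceil((z_alpha + z_beta) ** 2 * variance / (p_base - p_target) ** 2)

# Hypothetical: 1% baseline conversion, hoping to detect a lift to 1.5%
print(visitors_per_variant(0.01, 0.015))  # 7739 visitors per variant
```

At 500 visitors a month, even a two-variant A/B test on those numbers would run for over a year, which is exactly why the test page's own traffic, not the site's, must drive the decision.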
A/B tests are good for alternate designs of an entire page, whereas multivariate tests are helpful in determining the most successful elements at different locations on a page.
Testing too many elements
In the case against multivariate testing, I discussed the perils of letting multivariate testing software do the thinking instead of using an online marketer’s judgment, intuition and persuasion to guide the testing process.
There are plenty of software solutions out there which allow you to test tens of thousands of combinations on a single page—there is software that actually allows you to test millions of combinations. The problem with testing so many variations is the amount of time and resources it will take to conduct such tests.
And yes, some websites have sufficient traffic that they feel testing tens of thousands of combinations is just fine.
I disagree. There is no art in testing millions of scenarios and hoping that one of them will convert better than your current page. I’ve found that testing fewer than 100 scenarios can increase conversion rates from 4% to 15% in a matter of a few months. It’s not about testing alone; it’s the common-sense, holistic approach to the testing process that leads to improved results.
Running tests that take too long
There are two kinds of factors that can have an impact on your conversion rate. There are internal factors that you can control, such as your design, messaging, copy, etc. But there are also external factors over which you have little control. If your competitor is running a 50% discount sale, you should expect your conversion rates to suffer. If the economy gets worse, you can expect your conversion rates to suffer as well.
One of the dangers of conducting testing is not accounting for the impact of these external factors. There is no real way of controlling them; however, there are a few things you can do to minimize their impact:
- Keep track of external factors and understand that they might have an impact on your results. Being aware and not being sidetracked by a plummeting economy is probably a good idea.
- Minimize the time it takes to run your experiments. We do not advise conducting tests that take months to complete. I usually like to see tests completed in four weeks at the most. If that is not possible, then go back to the drawing board and rethink your test scenarios.
- Stay on top of what your competitors are doing. There are several tools that allow you to “spy” on competitor activity, including whether or not they are employing testing or some type of conversion optimization. I’ll write about those tools in a later article.
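Before launching, it is worth a quick sanity check on whether the experiment can even finish inside that four-week window. All the figures below are hypothetical; the per-variant sample size would come from an estimate like the one discussed earlier:

```python
# Hypothetical sanity check: will this test finish within four weeks?
daily_visitors = 400          # visitors reaching the test page per day
variants = 4                  # scenarios in the experiment
needed_per_variant = 7739     # hypothetical per-variant sample size

days = variants * needed_per_variant / daily_visitors
print(round(days))  # 77 days -- too long; cut variants or rethink the test
```

If the estimate lands well past a month, that is the signal to go back to the drawing board before the external factors have time to pollute your results.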
Failing to monitor your test as it takes place
Conventional wisdom states that you can set up an experiment, start the test and just sit around and wait for the results. That works well in many cases. But there are times when you might want to change an experiment midstream. Closely monitor experiments to decide if you should eliminate some elements when there is enough evidence that they are not producing the results you are looking for.
This point is especially applicable when you’re using Google Website Optimizer as your testing tool. Website Optimizer is a great testing tool, but running experiments on it may take too long. You must be careful, though: seeing that one scenario is not producing any results after only 20 or so visitors does not mean anything. You also have to know when it is the right time to intervene.
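One way to judge when it is safe to intervene is a generic two-proportion z-test (a standard statistical check, not a Website Optimizer feature). The visitor and conversion counts below are made up to contrast an early, meaningless gap with a genuinely significant one:

```python
from math import sqrt, erf

def z_test(conv_a, n_a, conv_b, n_b):
    """Two-sided p-value for the difference between two conversion rates,
    via a pooled two-proportion z-test."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # normal CDF

# 20 visitors with zero conversions is not evidence of anything:
print(z_test(5, 500, 0, 20) > 0.05)      # True -- too early to drop it
# The same kind of gap over thousands of visitors is a different story:
print(z_test(50, 5000, 10, 5000) < 0.05)  # True -- safe to retire the loser
```

Waiting until the p-value clears a threshold like 0.05 keeps you from killing a scenario on noise while still letting you prune real losers early.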
Failing to conduct follow-up experiments
Let’s say that you conducted a test, and as a result you were able to double your conversion rates. But then you stopped testing. That is a typical story.
We had a client who started with a conversion rate of around 3%. After we did the initial redesign and testing, we were able to increase their conversion rate to 9.8%. Our client was so pleased with the results that they decided to suspend the campaign they had already paid for.
It took about another month to convince them that they should continue testing and that there was potential to increase conversion rates even further. Designing follow-up experiments, and learning which elements worked and which did not, is at the heart of conversion optimization. To make a long story short, the client was able to increase their conversion rate to 14.9% with just two more months of testing.
Thinking that testing is a silver bullet
Testing is a great tool to increase your conversion rates. It is also an important step in any conversion rate optimization process you conduct. However, it is only one step in the process, and a critical one that should happen at the end of a conversion optimization project. If you are looking for double-digit conversion rates, then you must start by understanding your target market through persona creation, site analysis, analytics assessments, and design and copy creation. Trying to test random scenarios without doing the initial homework is like throwing darts in the dark. To recap:
- Select the right type of test to conduct based on the number of visitors and other data collected from your analytics.
- Limit the number of scenarios through a holistic approach to testing by considering market segment, persona development, trust factors, fear factors, etc.
- Don’t be side-tracked by external factors that impact your conversion rate. Stay on top of how the economy is performing and what your competitors are doing so you can test and optimize accordingly.
- Stay involved in the tests you are conducting. Don’t minimize the importance of following up and tweaking test elements as results become statistically significant.
- Don’t stop once you see improvements. Get beyond single-digit improvement by continuing to test.
- Approach any project from a number of different angles, because testing alone will not maximize your results.
Some opinions expressed in this article may be those of a guest author and not necessarily Search Engine Land.