Core Principles For Life & Conversion Rate Optimization
I’d like to humbly offer three steps to becoming a conversion rate optimization guru and a better person into the bargain.
At first I thought I might write this post as a semi-comic (if I got it right) Glengarry Glen Ross style rant: “A-B-C… A-always, B-be, C-conversion rate optimizing.” But I decided that might give the wrong impression on my first post. Much as I love the film, I am no fan of high-pressure sales, which is one of the reasons I enjoy CRO so much: its scientific nature rewards a calm, methodical approach. So instead of an invective-strewn homage, I am going to set out the virtues and traits that I believe underpin good testing and, ultimately, good people.
Patience

You want results, and you want them now. So when do you call a test over? What counts as a clear winner?
One of the biggest mistakes you can make in a test is to jump ahead of yourself. You’ve got to make sure that you factor everything into your tests to ensure a valid result. If your conversion metric is something that could be affected by time, then make sure that you aren’t stopping your test early.
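One way to build in that discipline is to estimate, before launching, roughly how many visitors each variation needs before the result can be trusted. This is a minimal sketch of the standard two-proportion sample-size approximation; the function name and the 5% baseline / 20% lift figures are hypothetical examples, not numbers from any particular test or tool:

```python
import math

def sample_size_per_arm(base_rate, min_lift, z_alpha=1.96, z_power=0.84):
    """Rough visitors needed per variation before a two-proportion test
    can reliably detect a relative lift of `min_lift` over `base_rate`
    (95% significance, 80% power, normal approximation)."""
    p1 = base_rate
    p2 = base_rate * (1 + min_lift)
    p_bar = (p1 + p2) / 2          # pooled rate under the null hypothesis
    delta = abs(p2 - p1)           # smallest absolute difference to detect
    n = ((z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
          + z_power * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) / delta) ** 2
    return math.ceil(n)

# e.g. a 5% baseline conversion rate, hoping to detect a 20% relative lift
visitors_needed = sample_size_per_arm(0.05, 0.20)
```

Notice how quickly the required traffic grows as the lift you want to detect shrinks; deciding this number up front is what stops you calling the test the moment the dashboard looks good.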
We recently ran a test on a newsletter and subscription sign up process, where the sign up required an email confirmation to complete and receive both products. The original page explained that the user had successfully signed up for the newsletter but then went on in the body of the text to say that they needed to respond to a confirmation email to complete the subscription. The variation made it clear in the headline that this was only step one, and made the email confirmation call to action part of the header. The test was set running and, by highlighting that the email needed to be confirmed, the variation came romping home as a winner with 99.1% chance to beat the original and more than 63% observed improvement.
Test after 12 hours
However, the important point to note is that the variation prompted people to respond to an email immediately. As you can imagine, those who read the original page may not have checked their email immediately without that prompt, but would have checked later on, creating a lag between the conversion times of the two pages. Below is the screenshot after 36 hours and, as you can see, the result is no longer so clear.
Test after 36 hours
Although the variation is still the better page, with a 26% observed improvement, the difference between the two has been massively reduced. In this instance the variation is still almost certain to be the better page over a longer time, but you can see how in closer tests this lag in conversion times could easily produce a false result.
Patience is a vital ingredient in successful testing and, although it is tempting to jump ahead as soon as a winner is declared, it is important to be aware of anything that could skew the figures and to give the test every opportunity to return a valid result. Even at 95% confidence there is still a one-in-20 chance that the variation would not have won after all, and you can’t take that risk when you may be restructuring your website on the basis of that test.
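For the statistically minded, the “chance to beat original” figure that testing tools report can be approximated with a two-proportion z-test. The sketch below is that shortcut, not any tool’s actual calculation, and the visitor counts are hypothetical (the real test’s traffic isn’t given), chosen to mirror the pattern above: an early snapshot that looks decisive, and a later one where the stragglers have evened things out:

```python
import math

def chance_to_beat(conv_a, n_a, conv_b, n_b):
    """Approximate P(variation beats original) via a normal
    approximation to the difference of two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    se = math.sqrt(p_a * (1 - p_a) / n_a + p_b * (1 - p_b) / n_b)
    z = (p_b - p_a) / se
    # standard normal CDF, computed from the error function
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

# Hypothetical snapshots: 8% baseline vs a variation that looks ~65% better
# early on, but only ~26% better once the slower converters catch up.
early = chance_to_beat(40, 500, 66, 500)     # decisive-looking early result
later = chance_to_beat(80, 1000, 101, 1000)  # the gap narrows with time
```

Despite doubling the sample, the later snapshot drops back below the usual 95% threshold, which is exactly why a result declared after 12 hours deserves suspicion.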
Humility

You know your site: you know how it works and why it was built that way. You work (and, as near as you possibly can, live) on the internet. Your opinion about what needs testing is therefore not the best perspective. In fact, in some cases, it might be the worst.
It is important to remember that not everyone is like you; in fact, most internet users are very unlike you indeed. Not only do they not know your site, they don’t trust it or necessarily understand what to do or where to go. They are innocents, gambolling through your website with intent but lacking the trust, knowledge, patience or a hundred other things, any one of which can stop them in their tracks. They are the best people to tell you what is wrong with the website, so it’s important to get their feedback. Even if 50% say that your site is too light and 50% say it is too dark, you will find other invaluable trends in the feedback that will inform your testing in ways that you, simply sitting down with a bunch of web-savvy colleagues, could only stumble on.
There are plenty of ways to get this information: add feedback and survey forms to your website (via services such as Kampyle.com, Survey Monkey, 4Q or Uservoice), ask for reviews on sites like Feedback Army or Fivesecondtest, or talk to your customer service department about the most common queries and complaints concerning the website. If something comes up repeatedly, there is probably an opportunity for optimization.
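Once the feedback is in, spotting what “comes up repeatedly” is just frequency counting. A minimal sketch, assuming you (or your support team) have tagged each piece of raw feedback with a short theme; the tags here are invented examples, not real survey data:

```python
from collections import Counter

# Hypothetical theme tags applied to incoming feedback
feedback_tags = [
    "checkout confusing", "cant find pricing", "checkout confusing",
    "site too slow", "checkout confusing", "cant find pricing",
]

# Surface the most-repeated complaints as candidate test ideas
top_issues = Counter(feedback_tags).most_common(3)
for issue, count in top_issues:
    print(f"{count}x {issue}")
```

The most frequent tag at the top of that list is your first test hypothesis; the long tail of one-offs (like the light/dark split above) can usually be ignored.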
Remember: They know the answers, you don’t—if you did, you would have built the site that way in the first place. So be humble and bow to the wisdom of the crowd. They may not know Sergey Brin from Sergei Bubka but they will guide you to success, if you let them.
Bravery

If you only test small changes then, more often than not, you will see only small positive or negative movements in conversion rate. Instead, be prepared to tear up what you already have and try a completely fresh approach. Your feedback should have given you a good idea of what you are trying to test, so don’t be afraid to try some pretty exciting (even crazy) variations to resolve the issues your customers are having. By committing to big tests wholeheartedly you stand a much better chance of making real and substantial progress.
So what if all your customers bounce at the first sight of your test variation? That’s going to hurt in a short space of time, but fortunately you can press a button and return to your original instantly. And best of all you haven’t failed when you produce a negative result, as you have found something that doesn’t work, thereby increasing your chances of getting it right next time. Why did that variation offend your customers so much that they bolted for the virtual door? Answer that question and you can tailor your future tests to address that issue.
The potential upside of the big change is that you could find a way of boosting your conversion rate substantially and changing the performance of your website forever. The potential downside is that you kill the conversion rate for a day or two and have to switch the test off.
Hold on, if there’s no long-term negative then that means you don’t need to be brave as there is no real downside. Hmmm… I’ve just convinced myself that you don’t need to be brave, but you do have to be bold. If you have concerns about the “just do it” ethos, remember that the only long term consequences are good ones!
Opinions expressed in this article are those of the guest author and not necessarily Search Engine Land. Staff authors are listed here.