Google Website Optimizer (GWO) is a great tool for getting started with conversion rate testing. We use it extensively and love the way that it abstracts away a lot of the really complicated stuff, avoiding the need to know and understand some pretty tricky statistics. One downside of such a simple platform is that it can make some slightly more advanced (but very desirable) things hard to do.
The example I want to discuss today is segmenting your tests to see how they perform for different kinds of users of your site. I’m going to show you a few ways of gathering GWO data in Google Analytics (GA) to bring the full power of the GA platform to your GWO testing.
A classic example is testing the impact of changes to the call-to-action for new vs. returning users. These two segments often have very different requirements, and it is easy to see why they might react differently to different enticements. Someone who knows your brand and is familiar with your offer might find a discount supremely effective, while a new visitor might respond better to long copy explaining the benefits (this is pure speculation, designed only to serve as an example). For sites that get high numbers of returning visits (e.g. eBay), returning visitors are a self-selected set of people who have learnt how to use the existing page (the control), having struggled through its learning curve. They may resist change and prefer the control even when there is room for improvement for new visitors who haven’t been through that learning curve.
Other useful segmentations during conversion rate tests include:
- Traffic source (e.g. organic search vs. paid search vs. referral traffic)
- Whether a user has visited a specific page on the site or in the conversion funnel
- And a whole load of other things
There are three basic steps to getting the data you need:
- Ensure that you are able to track GWO variables in GA
- Set up appropriate filters and avoid contaminating your existing profiles and data
- Analyse the data
#1 Track GWO Variables
Google Analytics has a (relatively little-known) function called utmx that returns basic information about a GWO test to Analytics. By tracking pageviews that include information in the URL about the test and variant being run, you can set up a profile within GA that understands your conversion rate tests. There is more information on how to do that in this quite technical guide. It may also be useful to read this alternative approach from ROI Revolution that uses a custom ga.js file.
Tracking a pageview such as /my/page.html?gwo_exp=1234567890&gwo_var=0 gives all the information needed to create a profile that stores the variant of the test in a custom variable.
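To make that format concrete, here is a minimal sketch of how such a Request URI could be unpacked into its experiment and variant parts. The gwo_exp and gwo_var parameter names come from the example URI above; the function name is my own invention, not part of GWO or GA:

```javascript
// Parse the experiment id and variant number out of a tracked Request URI
// such as /my/page.html?gwo_exp=1234567890&gwo_var=0.
// Returns null for pageviews that carry no test information.
function parseGwoPageview(requestUri) {
  const exp = requestUri.match(/[?&]gwo_exp=([^&]*)/);
  const variant = requestUri.match(/[?&]gwo_var=([^&]*)/);
  if (!exp || !variant) return null;
  return { experiment: exp[1], variant: Number(variant[1]) };
}

// Example:
// parseGwoPageview('/my/page.html?gwo_exp=1234567890&gwo_var=0')
//   → { experiment: '1234567890', variant: 0 }
```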
#2 Avoid contaminating your main profile
I suggest implementing the tracking of GWO variables in a new profile. In your core profile, you will probably want to strip out this information so that you can see all visitors to a single page together (i.e. not split out by which variant of the test they saw).
In this example, you would want a filter such as (from the guide referenced above):
- In “Field A -> Extract A”, select Request URI and (^.*)(.gwo_exp.*)
- In “Output To -> Constructor”, select Request URI and $A1
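If you want to sanity-check the filter before deploying it, you can apply the same regular expression offline. This sketch simulates what the filter does to a Request URI (GA actually applies the regex on its side; the function name here is hypothetical):

```javascript
// Simulate the advanced filter above:
// Field A = Request URI matched against (^.*)(.gwo_exp.*),
// Output  = Request URI rewritten to $A1,
// i.e. everything before the gwo_exp query parameter.
function stripGwoParams(requestUri) {
  const match = requestUri.match(/(^.*)(.gwo_exp.*)/);
  // A URI with no test information falls through unchanged.
  return match ? match[1] : requestUri;
}

// Example:
// stripGwoParams('/my/page.html?gwo_exp=1234567890&gwo_var=0')
//   → '/my/page.html'
```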
#3 Analyse the output
One of the nicest things about GWO is its ability to abstract away the maths behind determining winners. While that functionality is obviously still available for the aggregate test (on the whole set of users) there is an additional step required to work out whether you are seeing statistically-significant results on segments of your users.
Although learning the statistics behind these calculations is fun (warning: your experience may differ), we are primarily looking to get to the answer here. For this I recommend using an online sample size calculator. In order to analyse, for example, whether your test has reached statistical significance on new users only, you would segment the data to show only new users and find the number of users in this segment who had seen each variant and the number of each of those who had converted (for whatever conversion goal you are seeking).
With this data in hand, you can fill in the fields in your chosen calculator and see the relative conversion rates. For most tests, you would seek a 95% confidence that a variant was better than the control. Consider the following data:
These results suggest that you would be happy declaring variant B better than the control (confidence > 97%) but even though variant A has a higher conversion rate than the control at present, you would not be confident declaring it better yet.
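For those curious what such a calculator is doing under the hood, the standard approach is a pooled two-proportion z-test. This is a minimal sketch of that test, not any particular calculator’s exact method, and the function names are my own:

```javascript
// One-sided confidence that a variant's true conversion rate exceeds the
// control's, using a pooled two-proportion z-test.
function confidenceVariantBeatsControl(controlVisitors, controlConversions,
                                       variantVisitors, variantConversions) {
  const p1 = controlConversions / controlVisitors;
  const p2 = variantConversions / variantVisitors;
  // Pool the two samples to estimate the standard error under the null
  // hypothesis that both rates are equal.
  const pPool = (controlConversions + variantConversions) /
                (controlVisitors + variantVisitors);
  const se = Math.sqrt(pPool * (1 - pPool) *
                       (1 / controlVisitors + 1 / variantVisitors));
  const z = (p2 - p1) / se;
  return normalCdf(z);
}

// Abramowitz & Stegun polynomial approximation to the standard normal CDF.
function normalCdf(z) {
  const t = 1 / (1 + 0.2316419 * Math.abs(z));
  const d = 0.3989423 * Math.exp(-z * z / 2);
  const p = d * t * (0.3193815 + t * (-0.3565638 + t * (1.781478 +
            t * (-1.821256 + t * 1.330274))));
  return z > 0 ? 1 - p : p;
}
```

With hypothetical counts of 1,000 visitors converting at 10% for the control and 1,000 converting at 13% for a variant, this returns a confidence of roughly 98%, which is why a result like variant B above clears the 95% bar while a smaller lift may not.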
What I’d like to have shown is real-world examples of the ways that tests can mislead when you look only at the aggregate data, while digging into the segmentation reveals insights about different classes of user. Unfortunately, I don’t have any public examples. If anyone has real-world examples they can share in the comments, I’m sure the SEL community would love to discuss them.
Opinions expressed in the article are those of the guest author and not necessarily Search Engine Land.