Several situations have been brought to me in the last few weeks where website performance has disappointed its owners. The cause of the disappointment itself has been easy to identify: poor rankings, poor traffic or poor conversion. Someone has to be responsible. Often, the agency is blamed, and the solution is to shout at the agency or to demand an investigation by the agency.
In my experience, the blame should be roughly apportioned one third to the agency, one third to the client and one third to external forces that no one can control (e.g. Google’s algorithm!).
In fact, the blame is frequently apportioned 60% to the agency and 10% to the client. The other 30%? Well, that’s recognised as being external forces – but then the agency is supposed to duck and dive and avoid those anyway, right?
The main issue seems to be that website owners don’t have particularly robust methods of carrying out forensic analysis on the causes of their “issue”. Time, I felt, to give folks an unbiased guide on where to start the search – noting that, in a great many cases, the investigator will not speak the language under investigation.
Market & Time Comparisons
Oh how I wish I had an easier answer to the question of how to benchmark a site! It’s not easy to use effective benchmarks in one language, but when multiple languages are involved, there really are only two effective methods. The first, and simplest, is to carry out checks comparing one time period to another within a single market or geographical area.
However, for an international site, you almost always want to be able to compare the performance of one market against another. Sadly, this is where many forensic tests begin to crumble. The difficulty is that there are so many subtle reasons why one market can be very different from another.
For the sake of argument, let’s take the example of a comparison between the UK and Germany. Our UK site has 100,000 visitors each month and the German site sees only 20,000.
Now consider that the population of Germany is of the order of 90 million compared to the UK’s 60 million. On paper, that would mean that the German visitor figure would need to reach 150,000 to achieve an equivalent performance to the UK.
But there are many other factors to take account of. The UK has greater Internet connectivity, greater frequency of use per consumer and a much greater propensity to buy online – all of which can mean that German figures reaching 20% of the UK’s would represent a truly fair comparison. The reality is that most clients don’t attempt a high level of sophistication in their market comparisons, because doing it in a meaningful way is genuinely difficult.
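To make the arithmetic concrete, here is a worked version of the population adjustment above, using the illustrative figures from the text (the 20% factor is the article’s own rough estimate, not a measured constant):

```python
# Worked version of the population adjustment described in the text.
# All figures are the article's illustrative numbers, not real data.
uk_visitors = 100_000
uk_population = 60_000_000
de_population = 90_000_000

# Naive population-based equivalent for the German site:
naive_equivalent = uk_visitors * de_population // uk_population
print(naive_equivalent)  # 150000

# Connectivity, usage frequency and propensity to buy online differ,
# so (per the text) ~20% of the UK figure may be the truly fair bar:
fair_benchmark = int(uk_visitors * 0.20)
print(fair_benchmark)  # 20000
```

The gulf between 150,000 and 20,000 is exactly why naive population scaling misleads.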
The solution in my book is to look at country to country comparison ratios over time. Take your domestic market, the one you know best, and measure all of your “other” markets as ratios of it at the outset as a baseline. Then compare the trends of those markets against your domestic baseline market through time.
Since you generally understand the performance of your main market in much more detail, you will more naturally compensate for its ups and downs when you compare other markets to it over time via ratios rather than directly. This method also absorbs the full range of subtleties from country to country.
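The ratio-baseline method above can be sketched in a few lines. This is a minimal illustration assuming monthly visitor counts per market; the figures and period labels are invented for the example:

```python
# A minimal sketch of the ratio-baseline method, assuming monthly
# visitor counts per market. All figures here are illustrative.
domestic = {"Jan": 100_000, "Feb": 104_000, "Mar": 98_000}   # e.g. UK
germany  = {"Jan": 20_000,  "Feb": 21_500,  "Mar": 18_000}

def market_ratios(market, baseline):
    """Express each period's figure as a ratio of the domestic baseline."""
    return {period: market[period] / baseline[period] for period in baseline}

ratios = market_ratios(germany, domestic)
# Watch the trend of the ratio, not the raw numbers: a falling ratio
# flags a market-specific problem even if absolute traffic is growing.
```

The point is to monitor the trend of each market’s ratio against the baseline, so that domestic seasonality and algorithm turbulence largely cancel out.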
So once you’ve formed a reliable method of measurement, you move on to the forensic analysis itself. On the search side of the search-and-social coin, the most important factor to check — other than the basis for analytical comparisons we’ve already described — is that the correct keywords are in use. Only then can you judge if they have been correctly deployed.
The best way to do this is through a Gap Analysis. A Gap Analysis exists to ensure that the correct set of keywords is in use. It is keyword research, but it is keyword research which takes account of other factors.
There are at least two types of keywords (and often many other classifications) in a gap analysis. The first of these is obviously “gap keywords”. These are a really important find in a gap analysis because they are the holes in your program: the very keywords of value which your site is either currently not ranking for or quite probably not even targeting.
The second class of keywords in a gap analysis are the “protected keywords”. These are the keywords which, for various reasons, you do not wish your latest round of keyword research to drop, because they offer some kind of significant value.
They might be “brand keywords” which don’t have significant traffic volumes but they absolutely have to be there. They might be “campaign keywords” which have zero traffic — a fact which you’re about to change by investing thousands or millions of dollars in a marketing campaign.
They might simply be keywords with low traffic volumes but which convert extremely well on your site and so you want to keep them for pure economic reasons and to minimise the risk of moving to other keywords.
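The classification above can be sketched as a small sorting exercise. Everything here — the keywords, volumes and set names — is a hypothetical illustration of the idea, not a real gap-analysis tool:

```python
# A hypothetical sketch of sorting researched keywords into the two
# classes described above. Keywords, volumes and names are assumptions.
researched = {              # keyword -> monthly search volume
    "winter coats": 9_000,
    "buy parka online": 4_000,
    "acme outerwear": 150,  # low-volume brand term
}
currently_targeted = {"acme outerwear", "coats"}
protected = {"acme outerwear"}  # brand/campaign/high-converting terms

# Gap keywords: valuable terms the site is not currently targeting.
gap_keywords = {kw: vol for kw, vol in researched.items()
                if kw not in currently_targeted}

# The final keyword set keeps every protected term, whatever its volume.
final_set = protected | set(gap_keywords)
```

Note that the protected brand term survives on economic or strategic grounds even though raw volume alone would have dropped it.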
The vast majority of international SEO forensics I have seen start by looking at the results after the keywords have been selected – even if those keywords were wrongly chosen and targeted.
Unknown Unknowns Disguised As Known Knowns
Thank you, Donald Rumsfeld, for helping us humble SEOs with this one. A large number of site performance issues are caused by pitfalls the site owner is completely unaware of and doesn’t understand. This is nothing to be ashamed of; in the international arena we always have to trust others to help us out.
The problem is that site owners will frequently jump to blame something they do understand in the face of an “unknown unknown” — sometimes bending reality to fit. If you feel you’re bending reality, you probably are — and by the way, asking a colleague from that country is really dangerous.
Whilst they usually don’t have the necessary direct professional expertise, they are given huge “local” credibility overriding all other possibilities and creating “known knowns” disguising the true causes of our “unknown unknown”. In truth, we should probably name these “unknown known unknowns”.
The Research Trap
Web analytics having failed to produce a useful insight, the website owner resorts to asking the users via focus groups or, worse, an online survey. Online surveys work best in the home market. They are so full of risk when deployed outside your home market that I would caution anyone against using them in markets they don’t fully understand (which is often the very reason they are deployed, of course).
People ask the wrong questions, to the wrong people (those already visiting), expressed in a way which doesn’t properly take account of cultural differences. They then deliver the surveys in the wrong language by relying too heavily on IP filtering to select the right users. Finally, they compare one country against another and repeat the very mistake we described in the first point.
The Quality Mirage
The quality of the site or the language used is often blamed for its failure. Many website owners fail to realise that the definitions of grammar or correctness may differ significantly from language to language. In English, grammar is generally regarded as a set of rules we all have in common, though we all bend them to suit ourselves in our own language, and this causes no problems.
Often, the most popular keywords in a language are not spelled in a consistent way. Some regard one way as “correct” because that particular version is their personal preference.
How often have I seen website content rejected on the strength of one person’s personal opinion, when Google’s keyword volumes contradict that decision directly. You have to trust the experts and your users, not the subjective view of one colleague, friend or even relative.
Want to know more? We’ll be covering these issues at the International Search Summit workshop at SMX West this week!
Opinions expressed in the article are those of the guest author and not necessarily Search Engine Land.