When Keyword Research and Search Data Deceives
As search engine optimization (SEO) professionals, we obsess over search data from a wide variety of resources. Which one is best for our clients? Which keyword research tool reveals the most accurate search behaviors when rebuilding a site’s information architecture? Does our web analytics data validate our keyword research?
And, more importantly, did these tools provide your most desired information? Some answers might surprise you.
Keyword research data
I love keyword research tools. I use all of them because I can discover core keyword phrases, which are commonly used across all of the commercial web search engines. And I can also tailor ads and landing pages to searchers who typically use a single, targeted search engine (and it isn’t always Google, as one might imagine).
However, keyword research tools are not a substitute for a knowledgeable and intuitive search engine marketer. All too often, website owners and even experienced search engine optimization professionals launch into a site’s information architecture without gauging user response. As good SEO professionals, we should be able to recognize when keyword usage in a site’s information architecture overwhelms users, and when it needs to be more prominent.
This situation occurred recently when I was performing some usability tests on a client site’s revised information architecture. This particular client website is being delivered in multiple languages. We were testing American English, British English, and French. Therefore, the test participants were American, British, and French.
All of the keyword research tools showed the word “student” or “students” (in French, “étudiant” or “étudiants”) as a possible target. The appearance of this word in both keyword research data and in the site’s web analytics data led my client to believe that we should make this area a main category.
If we had relied on the data from keyword research tools, we would have been wrong. If we had relied on the data from web analytics software, we would have been wrong.
The face-to-face user interaction gave us the right answer.
The facial expressions were enough to convince me. Almost every single time the word “student” or “étudiant” appeared during the usability test, I saw confusion. When I asked test participants why they seemed confused, they said that the particular keyword phrase was not appropriate for that type of website. They then placed the student-related information groupings in one of two piles:
- Discard – Participants felt that the information label and/or grouping did not belong on the website at all.
- Do not know – Participants were unsure whether the information label and/or grouping belonged on the website.
The discard pile won: over 90% of participants across all three language groups placed the student-related groupings there.
Now, imagine if this company did NOT have one-on-one interaction with searchers during the redesign process and only relied on keyword research tools. How much time and money might have been wasted?
Keyword research data is not the only type of data that can be easily misinterpreted.
Web analytics search data
One search metric that clients and prospects inevitably mention is “stickiness.” In other words, one of their search marketing goals is to increase the number of page views per visitor via search engine traffic, especially if the site is a publisher, blog, or news site. Increasing the number of page views per visitor provides more advertising opportunities as well as a positive branding impact. The average time on site (if it is longer than two minutes) is also commonly viewed as a positive search metric.
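To make the two metrics concrete, here is a minimal sketch of how they are typically calculated. The session fields and data are invented for illustration and do not come from any particular analytics package:

```python
# Illustrative sketch: computing the "stickiness" metrics discussed above
# from hypothetical session records. Field names are assumptions, not a
# real analytics API.
from dataclasses import dataclass

@dataclass
class Session:
    visitor_id: str
    page_views: int
    duration_seconds: int  # time on site for this single visit

def pages_per_visitor(sessions):
    """Total page views divided by the number of unique visitors."""
    visitors = {s.visitor_id for s in sessions}
    total_views = sum(s.page_views for s in sessions)
    return total_views / len(visitors)

def average_time_on_site(sessions):
    """Mean session duration in seconds across all sessions."""
    return sum(s.duration_seconds for s in sessions) / len(sessions)

# Hypothetical data: visitor "a" made two visits, visitor "b" made one.
sessions = [
    Session("a", 3, 150),
    Session("a", 5, 300),
    Session("b", 2, 90),
]
print(pages_per_visitor(sessions))     # 10 views / 2 visitors = 5.0
print(average_time_on_site(sessions))  # 540 seconds / 3 sessions = 180.0
```

Note that the numbers alone say nothing about *why* the visitor viewed those pages, which is exactly the interpretation problem discussed below.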
Or so it might seem. Here is an example.
Many SEO professionals, including me, provide blog optimization for a wide variety of companies (ecommerce, news, software, etc.). Not only do we provide keyword research for blogs, we must also monitor the effectiveness of keyword-driven traffic via web analytics data.
Upon initial viewing, the blog’s analytics data might indicate increased stickiness. Searchers are reading more blog entries. Searchers are engaged. Therefore, the blog content is great…that is a common conclusion.
For an exploratory usability test, I asked test participants to tell me about a blog post that they found very helpful. I asked them why they liked the blog’s content, and I listened very closely for keyword phrases. Audio and/or video recording makes this job a little easier.
When I asked test participants to refind desired information on a blog on the lab’s computer, I did not hear, “This blog content is great!” Comments I frequently heard were:
- “I can’t find this [expletive] thing.”
- “Now where could it be? I saw it here before….”
- “I think this was posted in [month/day/year]….”
- “Where the [expletive] is it?”
As you might imagine, the use of expletives became more and more frequent with the increased number of page views.
Sure, searchers who discover great blog content might bookmark the URL, or they might link to it from a “Links and Resources” section of their website, or they might cite the URL in a follow-up post on another website. All of these actions and associated behaviors make it easier for searchers to refind important information.
However, when I review web analytics data, I often find that site visitors do not take these actions as frequently as people might think. Instead, with careful clickstream analysis combined with usability testing, I see that the average page view per visitor metric is heavily influenced by frustrated refinding behaviors.
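One simple way to sketch the kind of clickstream check described above: a session in which the visitor keeps returning to the same pages may signal frustrated refinding rather than engagement. The threshold and page paths here are invented assumptions, not a standard analytics rule:

```python
# Hypothetical heuristic: flag sessions where one URL is revisited
# repeatedly -- a possible sign of a visitor circling back while hunting
# for something they saw before. The threshold of 3 is an assumption.
from collections import Counter

def looks_like_refinding(page_sequence, repeat_threshold=3):
    """Return True if any single URL appears repeat_threshold or more
    times within one session's clickstream."""
    counts = Counter(page_sequence)
    return any(n >= repeat_threshold for n in counts.values())

# Invented example sessions.
engaged = ["/post/seo-tips", "/post/analytics", "/post/usability"]
frustrated = ["/archive", "/post/a", "/archive", "/post/b", "/archive"]

print(looks_like_refinding(engaged))     # False
print(looks_like_refinding(frustrated))  # True ("/archive" viewed 3 times)
```

Both sessions would look similar in a raw page-views-per-visitor report; only the sequence, paired with direct observation, suggests which visitor was actually frustrated.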
I have always believed that search engine optimization is part art, part science. Certainly, keyword research data and web analytics data are very much part of the “science” part of SEO.
Nevertheless, the “art” part of SEO comes into play when interpreting this data. By listening to users and observing their search behaviors, having that one-on-one interaction, I can hear keywords that are not used in query formulation. I study facial expressions and corresponding mouse movements that are associated with keywords. I see how keywords are formatted in search engine results pages (SERPs) and corresponding landing pages, and how searchers react to that formatting and placement.
I cannot imagine my job as an SEO professional without keyword research tools and web analytics software. In addition, I cannot imagine my job as an SEO professional without one-on-one searcher interaction. What do you think? Have any of you learned something that keyword research tools and/or web analytics data did not reveal?
Opinions expressed in this article are those of the guest author and not necessarily Search Engine Land. Staff authors are listed here.