Interview With Louis Rosenfeld, Author Of Search Analytics – Part 2
In Part 1 of my interview with renowned information architect Lou Rosenfeld, author of Search Analytics for Your Site: Conversations with Your Customers, he defined site search analytics (SSA), suggested keyword patterns to monitor, and outlined some differences between web searchers and site searchers.
In Part 2, we discuss relevancy scores, site search analytics that can improve navigation, and some insightful tests you can run on your own site. Enjoy!
Q. Relevancy is important for both Web search engine optimization (wSEO) and site search engine optimization (sSEO). How can site search analytics (SSA) help website owners identify content that could be made more relevant? Can you give an example?
Rosenfeld: Analyzing common queries that retrieve zero search results may help you figure out why your content isn’t being retrieved. But it can also tell you about content that you should make available (but don’t). And where you find content gaps, you find opportunities.
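The zero-results analysis Rosenfeld describes can be sketched in a few lines. This is a minimal illustration assuming a hypothetical log format of (query, result count) pairs; real analytics exports will use different field names.

```python
from collections import Counter

# Hypothetical search-log rows: (query, number of results returned).
# Adapt the fields to whatever your analytics export actually provides.
log = [
    ("mountain bike insurance", 0),
    ("helmet", 42),
    ("mountain bike insurance", 0),
    ("crash coverage", 0),
    ("tires", 17),
]

# Count how often each zero-result query occurs.
zero_hits = Counter(q for q, n in log if n == 0)

# The most frequent zero-result queries are your biggest content gaps.
for query, count in zero_hits.most_common(10):
    print(f"{count:3d}  {query}")
```

Ranking by frequency matters: a gap searched for twice a day is a bigger opportunity than one searched for twice a year.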
For example, if users are failing to find information on specialized insurance policies on your extreme mountain biking site — well, maybe they should? Even if you don’t start selling them insurance, you should redirect them to a partner who can (and who will gladly compensate you for the business you’ve generated for them).
The Financial Times puts a different twist on this opportunism. They analyze their most recent week’s queries for spikes in searches for companies and people. Then they compare these trends with their recent week of editorial coverage. If there’s a discrepancy, it might indicate a breaking story. FT’s editors can then decide whether or not to send a reporter to investigate. In effect, FT uses SSA as a predictive tool, rather than solely as a tool to diagnose and improve its content.
Q. Why do both SEOs and information architects need to understand relevance (or relevancy) testing and precision testing?
Rosenfeld: Relevancy testing is limited—it involves testing the subset of queries that actually have something close to a “right” answer. Precision testing is also limited in how it measures search success. But both are better than no testing at all. And it’s always useful to have even a few metrics to help you to tell a story—in this case, a story about a new search engine that wasn’t quite ready for prime time (see the Vanguard example below).
It’s also important to remember that relevance is in the eye of the beholder–in this case, the user. And it’s the responsibility of everyone involved with search–including and especially SEO experts and information architects–to make sure their search engines retrieve results that are relevant to their users. Relevance and precision testing are just two tools for keeping your search engine’s relevance ranking honest.
Q. Can you give us a brief example of a relevancy test and a precision test?
Rosenfeld: Relevance testing and precision testing are a pair of measurable approaches we cover in the book’s first chapter–using a case study from The Vanguard Group. Vanguard used relevance testing to make sure common queries that had “best matches” retrieved them at or near position #1 of the search engine results page (SERP). And they used precision testing to measure the relevance of their common queries’ top five search results.
These simple metrics helped the company identify some important problems with a newly-installed search engine and how to fix those problems (as well as avoid a disastrous launch).
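A precision test like Vanguard's can be scored with a simple function. This is a sketch, not Vanguard's actual methodology; it assumes raters have judged each of a query's top five results as relevant or not.

```python
def precision_at_k(judgments, k=5):
    """Fraction of the top-k results judged relevant.

    judgments: list of booleans, one per ranked result,
    True meaning a rater judged that result relevant.
    """
    top = judgments[:k]
    return sum(top) / len(top) if top else 0.0

# Hypothetical ratings for one common query's top five results.
ratings = [True, True, False, True, False]
print(precision_at_k(ratings))  # 0.6
```

Averaging this score across a sample of common queries, before and after a search-engine change, gives exactly the kind of simple before/after metric that flagged Vanguard's problems.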
Q. I agree. Back in the 1990s, I discovered that when site search results were accurate, the “best” pages were the ones selected to rank on the commercial Web search engines. That came from carefully defining what each page on a site was about, and connecting related pages to each other via formal and supplemental navigation.
That, in turn, helped us identify microconversion points and KPIs (key performance indicators). What are some search metrics website owners can track or adopt as KPIs?
Rosenfeld: The most obvious of these is the percentage of queries that retrieve zero results. It’s a great indicator of likely search failure, and can be incorporated into any KPI where findability is (or ought to be) considered. Other failure-related metrics include the percentage of queries where users fail to click on a search result, or the percentage of queries that lead to users immediately exiting the site (a.k.a. bounce rate).
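The three failure metrics Rosenfeld names can be computed together from one log. This is a minimal sketch assuming a hypothetical per-search record format; field names are illustrative, not from any particular analytics package.

```python
# Hypothetical per-search records; adapt field names to your log export.
searches = [
    {"results": 0,  "clicked": False, "exited": True},
    {"results": 12, "clicked": True,  "exited": False},
    {"results": 5,  "clicked": False, "exited": True},
    {"results": 8,  "clicked": True,  "exited": False},
]

total = len(searches)
# % of queries retrieving zero results.
pct_zero_results = 100 * sum(s["results"] == 0 for s in searches) / total
# % of queries where the user clicked no result.
pct_no_click = 100 * sum(not s["clicked"] for s in searches) / total
# % of queries leading to an immediate site exit (search "bounce rate").
pct_search_exit = 100 * sum(s["exited"] for s in searches) / total

print(f"zero results: {pct_zero_results:.0f}%, "
      f"no click: {pct_no_click:.0f}%, "
      f"search exit: {pct_search_exit:.0f}%")
```

Tracked over time, each percentage becomes a findability KPI: a sudden rise in any of them is a signal worth investigating.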
Of course, we’re not just interested in failure–ultimately, we care about how successfully we’re engaging our users. The average time spent on the site after searching, and average number of pages viewed after searching may indicate engagement, though in some cases they’re counter-indicative–users may be happiest with you when your site engages them as little as possible.
There are many, many more search-related metrics covered in chapter 7; kudos to my friend Marko Hurst for coming up with them.
Q. For SEO professionals and conversion specialists, your book lists 19 search metrics, along with the purpose of and notes on each. Some of those metrics include the percentage (%) of sessions that use search (which compares usage of the site’s search system vs. browsing), and the average number (#) of queries per session (which tracks how frequently users search during a single session).
So what is a search session? Why should SEO professionals and information architects be concerned about search sessions?
Rosenfeld: We study search sessions to learn how users’ information needs change as they interact with a site’s search system. This is especially useful in the Age of Google, where users are understandably unwilling to invest much effort in crafting an initial query.
In session analysis, we go beyond that first query (e.g., “patio furniture”) and see how the user, after examining search results, follows up with new queries (e.g., “metal patio furniture” or “patio furniture under $500”). If we see patterns in how sessions go, we can start determining how to improve how we support search refinement. For example, perhaps our site should support filtering by material (“metal”) and sorting search results by product price.
The problem with sessions is that it’s often tricky to tell what exactly a session is. Usually we combine IP address, query content, and time/date stamps to draw reasonable lines around sessions—for example, semantically related queries that take place within a few minutes might constitute a session. But sometimes users radically change the nature of their query within a few seconds.
When they switch from “metal patio furniture” to “flagstone,” have they begun a new session, or are they worried about whether or not metal patio furniture will scratch up their flagstone patio? Similarly, a user might search for “flagstone” on Monday night and search the same thing on Wednesday afternoon. Is that one or two sessions?
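The time-gap heuristic Rosenfeld describes is easy to sketch. This is one simple, assumption-laden way to sessionize a log (same visitor, pauses under 30 minutes); it deliberately cannot resolve the "flagstone" ambiguity he raises, which requires human interpretation.

```python
from datetime import datetime, timedelta

# Hypothetical log: (visitor IP, query, timestamp), sorted by time.
events = [
    ("1.2.3.4", "patio furniture",       datetime(2011, 5, 2, 20, 0)),
    ("1.2.3.4", "metal patio furniture", datetime(2011, 5, 2, 20, 2)),
    ("1.2.3.4", "flagstone",             datetime(2011, 5, 4, 14, 0)),
]

def sessionize(events, gap=timedelta(minutes=30)):
    """Group queries into sessions: a new session starts whenever the
    visitor changes or the pause since the last query exceeds `gap`."""
    sessions, current = [], []
    last_ip, last_time = None, None
    for ip, query, ts in events:
        if current and (ip != last_ip or ts - last_time > gap):
            sessions.append(current)
            current = []
        current.append(query)
        last_ip, last_time = ip, ts
    if current:
        sessions.append(current)
    return sessions

print(sessionize(events))
# [['patio furniture', 'metal patio furniture'], ['flagstone']]
```

Note that the Monday-night/Wednesday-afternoon "flagstone" pair lands in separate sessions under this rule, even though it may be one continuing information need.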
As with all analytics data, SSA leads to interpretations that are, at best, educated guesses at what is going on. In my book’s last chapter, I beg readers to combine SSA with other, more qualitative research methods to figure out why things are happening.
Q. I like the example of how quantitative data can help you identify where a problem is on your website, but qualitative data (such as field research and task analysis) tell you why there is a problem. And then you can fix your interface. Are there other ways that SSA can improve the search interface?
Rosenfeld: My favorite example is simply using query length as a guide for ensuring a wide enough text-entry box. Common queries can also be really helpful for populating the “type-ahead” suggestion lists that are increasingly common in search interfaces.
Example: a search box whose text-entry field is not wide enough, and whose magnifying-glass icon (a symbol for “search” or “find”) many searchers might not recognize.
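Rosenfeld's query-length heuristic can be made concrete. This is a sketch with made-up queries: measure the character length of logged queries and size the box so that a high-percentile query fits without scrolling.

```python
# Hypothetical queries pulled from your site-search log.
queries = [
    "patio furniture",
    "metal patio furniture under $500",
    "flagstone",
    "outdoor dining set with umbrella hole",
]

lengths = sorted(len(q) for q in queries)

# Size the box to hold, say, the 95th-percentile query without scrolling.
idx = min(len(lengths) - 1, int(0.95 * len(lengths)))
print("suggested width (characters):", lengths[idx])
```

Using a percentile rather than the maximum keeps one freak 300-character paste from dictating your layout.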
Q. How can site search analytics improve site navigation?
Rosenfeld: So many sites do a terrible job at contextual navigation—the “horizontal” routes that connect your deep content. This is increasingly important now that so many users bypass your site’s main page and upper layers, reaching that content via Google, Twitter and other social media, and ad campaigns instead.
SSA can help you establish data-driven “desire paths” for helping users move through your deep content. For example, you can study the queries that begin on your site’s important content types, such as product description pages. If you sense patterns in those queries—say, users seem to be frequently searching for product specs from those pages—you now know to make sure each of your product overview pages links to its corresponding product specs page.
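Grouping queries by the page type where the search began, as Rosenfeld suggests, is a small aggregation job. This sketch assumes a hypothetical log of (originating page type, query) pairs.

```python
from collections import Counter, defaultdict

# Hypothetical log: (page type where the search began, query).
log = [
    ("product_overview", "specs"),
    ("product_overview", "dimensions"),
    ("product_overview", "specs"),
    ("home", "patio furniture"),
]

# Tally queries per originating page type.
by_page = defaultdict(Counter)
for page, query in log:
    by_page[page][query] += 1

# Frequent queries from product pages suggest links worth adding there.
print(by_page["product_overview"].most_common(2))
```

If "specs" dominates the searches launched from product overview pages, that is your data-driven case for linking each overview page to its specs page.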
I would like to thank Lou for a great interview. And SEO professionals? Take his words to heart. “Relevance is in the eye of the beholder,” and “where you find content gaps, you find opportunities” are my particular favorites. Which ones are yours?
*use Discount Code “SEL” to receive 15% off the book now
Editor’s Note: We would like to thank Rosenfeld Media for the 15% discount for our readers. Neither Search Engine Land nor Shari Thurow has any financial interest in sales of the book, although Ms. Thurow has endorsed the book as a top read for SEOs.
Opinions expressed in this article are those of the guest author and not necessarily Search Engine Land. Staff authors are listed here.