Where Is Search Going? Surf Canyon’s Mark Cramer



Some time ago, a gentleman by the name of Mark Cramer emailed me wanting to talk about the future of search. It was probably precipitated by the Search:2010 whitepaper we produced in 2007, where I talked to several search and UX notables, including Danny Sullivan, Chris Sherman, Marissa Mayer, Jakob Nielsen and others. Alas, Mark and I never connected back then. So, it’s somewhat ironic that we finally connected in 2010, the hypothetical horizon I set back then for the future of search. And that irony kicked off our interview:

Cramer: You might be aware that I pinged you the first time almost three years ago to talk to you about the future of search.

Hotchkiss: Now that we’re actually talking about it, it’s probably no longer the future. It’s probably the present of search.

Cramer: We’re working on it, let’s put it that way. Certainly if we’d have talked three years ago, the conversation would have been more around theory—”Here’s a great idea, here’s something that we’re looking into doing”—whereas today we’ll be able to talk about the things that have actually happened.

So, first a little bit about Mark Cramer and his company, Surf Canyon. Specifically, I asked Mark what gap in the search experience Surf Canyon set out to fill.

Cramer: The company was actually founded a little over four years ago, and the genesis came from some frustration that we were feeling with search. What we were feeling at the time was that when you were running a search on one of the major search engines, a lot of the effort was on the part of the user. The onus of crafting the appropriate queries, of digging through the search results to find what you were looking for, was on the user. Certainly some people know how to do that better than others, but even if you are an expert at it, when you run a query and get back 20 million results, there's really only so much you can do with that.

We felt at the time that it would be beneficial to the user if the search results were dynamic rather than static—if at the time that the user’s running the query and selecting results, the information imparted by the user’s selection was exploited immediately to re-rank the result set. This would enable the good results to percolate to the top immediately in the current search session, while the irrelevant results would get suppressed. That was the theory.

We started by building a prototype; essentially we built a search engine that did what I just described. Eventually we decided to release the technology in the form of a browser add-on so that people could take it with them and receive the benefits when they’re searching on one of the major search engines.

So Surf Canyon basically re-sorts the Google results, using the link you select as an implicit signal of intent. The theory is this: if that's a link you're interested in, let's use that as a filter and push down the results that are obviously not relevant. So, if I search for "apple" and choose apple.com, Surf Canyon assumes that I have no interest in apple pie, Fiona Apple or Gwyneth Paltrow's daughter. All results not related to Apple, the company, are pushed down, allowing the more relevant results to rise to the top. The same would be true, incidentally, in reverse, which is probably a bigger win for the user, because you have to go six or seven pages deep in the Google results for that query to find anything not related to Apple, the company. Surf Canyon allows you to quickly filter and sort, simply by choosing a link. It does all the rest in the background.
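
Surf Canyon hasn't published its algorithm, and I'm not privy to it, but the basic mechanic Cramer describes is easy to sketch. Everything below is my own illustration: a crude term-overlap score against the clicked result stands in for whatever scoring they actually do. The point is simply that a click re-orders, rather than removes, results.

```python
# A minimal sketch of click-driven re-ranking; NOT Surf Canyon's actual code.
# Assumption: each result is a dict with a "title" and "snippet"; the clicked
# result acts as an implicit relevance signal, and results sharing more terms
# with it are promoted while the rest are pushed down (never removed).

def term_set(result):
    """Lowercased bag of terms from a result's title and snippet."""
    return set((result["title"] + " " + result["snippet"]).lower().split())

def rerank(results, clicked):
    """Reorder results by term overlap with the clicked result (stable sort)."""
    clicked_terms = term_set(clicked)
    overlap = lambda r: len(term_set(r) & clicked_terms)
    return sorted(results, key=overlap, reverse=True)

results = [
    {"title": "Apple Inc.", "snippet": "Apple designs iPhone, iPad and Mac."},
    {"title": "Apple pie recipe", "snippet": "A classic dessert with cinnamon."},
    {"title": "Apple support", "snippet": "Get help with your iPhone or Mac."},
]
clicked = results[0]  # the user chose the apple.com-style result
for r in rerank(results, clicked):
    print(r["title"])  # company-related results surface, the pie recipe sinks
```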

This sounds great in theory, but does it actually work for the user? According to Cramer, the answer is yes:

Cramer: In one of the experiments we ran, for a certain number of users, randomly selected and without them knowing, we turned off the algorithm so that when they navigated to page 2 they did not see a re-ranked page 2. What they saw was the original results 11 through 20. Now the metric was actually pretty simple to measure. All we needed to look at, in the control group versus the test group, was the likelihood of at least one result being selected. What we found was that in cases where the user had made selections on page 1, when they moved to page 2, there was a 40% greater likelihood of at least one result being selected in the test case compared to the control. On page 2 the results are apparently at least 40% more attractive in eliciting a selection from the user.
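
To be clear about what that 40% means, it's a relative lift in the share of page-2 views that drew at least one click, test versus control. The numbers below are made up purely to illustrate the arithmetic:

```python
# Hypothetical figures only, to show how the metric Cramer describes is computed:
# the share of page-2 views with at least one click, test group vs. control group.

control_page2_views = 10_000
control_with_click = 1_500   # views where at least one result was selected
test_page2_views = 10_000
test_with_click = 2_100

control_rate = control_with_click / control_page2_views
test_rate = test_with_click / test_page2_views
lift = (test_rate - control_rate) / control_rate

print(f"control: {control_rate:.1%}, test: {test_rate:.1%}, lift: {lift:.0%}")
# -> control: 15.0%, test: 21.0%, lift: 40%
```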

A 40% uptick in engagement by any measure is pretty impressive. But it seemed to me that this would only be true in some searches, where going deep in the results is likely. If a search engine gives me a mixed bag of relevance that's going to require a lot of digging on my part, I can see where Surf Canyon could be valuable. But what if Surf Canyon does its thing and accidentally filters out the results I'm looking for? I threw a scenario at Cramer:

Hotchkiss: Say I’m looking for a digital camera and I want to compare the top brands. And the first one I happen to click on is a Canon, but that doesn’t mean I only want to look at Canons; I might want to look at Olympus, I may want to look at other brands. My fear as a user would be the minute I click on that I start going down a dead end. You’re being too restrictive in the results I see. So how does Surf Canyon accommodate those searches? Do you just turn it off?

Cramer: One thing that you mentioned was filtering. One of the advantages of our application is that we are in fact not filtering per se, because filtering would mean eliminating irrelevant documents from the result set. With re-ranking, all the documents are still there; we simply change the order, which means that we can bring them back.

In your example, the user runs a search for "digital camera" and they're interested in looking at multiple brands. If there are multiple clicks on the brand Canon, then you are correct, we will assume that this user is interested in that particular brand. However, in the case where multiple brands are being selected, we've developed our algorithm to be flexible enough to realize that, for that particular subcontext, which might be brand, there's ambiguity. The algorithm will essentially look at the fact that Canon is being selected, or Olympus and Sony and other sorts of things. Really the big challenge for us, and the reason why it's taken such a long time to actually put it together, is to develop an algorithm which is flexible and subtle enough to actually realize which subcontext the user's actually looking for.

So, in your example, if the user does a search for “digital camera” and clicks a Canon result and then a Sony result and then an Olympus result, then we won’t necessarily be re-ranking based on brand. There might be other features with respect to those results that are consistent across them that the algorithm’s able to identify. Maybe they’re reviews, right? Maybe we’re looking for reviews for different brands. So in the review subcontext, that will become the most important subcontext for this particular information need, and those review-related results will get re-ranked higher and the brand will have less of an impact.
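
Again, I'm only guessing at the machinery here, but the "subcontext" idea Cramer describes can be pictured as asking which feature the user's clicks actually agree on. The features and clicks below are invented for illustration, not drawn from Surf Canyon:

```python
# A toy illustration of subcontext detection, not the production algorithm.
# Assumption: each clicked result carries coarse features such as brand and
# page type. If clicks disagree on brand but agree on page type ("review"),
# the agreed-upon feature is the one worth re-ranking on.

from collections import Counter

clicks = [
    {"brand": "canon",   "page_type": "review"},
    {"brand": "sony",    "page_type": "review"},
    {"brand": "olympus", "page_type": "review"},
]

def dominant_feature(clicks, feature):
    """Return (value, share) for the most common value of a feature across clicks."""
    counts = Counter(c[feature] for c in clicks)
    value, count = counts.most_common(1)[0]
    return value, count / len(clicks)

for feature in ("brand", "page_type"):
    value, share = dominant_feature(clicks, feature)
    print(f"{feature}: top value '{value}' covers {share:.0%} of clicks")
# brand is split three ways, page_type is unanimous,
# so "review" is the subcontext to boost.
```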

That was impressive. An algorithm that recognizes the subtlety of subcontext has some decent IP cred behind it. I couldn’t help asking what the signals might be that Surf Canyon pays attention to in sorting through the ambiguity to determine intent:

Cramer: The first thing that we look at is titles and snippets because when the user actually makes a selection off the search page, this is the information that is available to the user. When we look at why the user selected result number five and not results one through four, that decision was based primarily on the titles and snippets that were on the page. We look at dwell times. That’s also an indication of the extent to which particular documents are relevant to the user—after they’re selected, of course. And then we do look at the full content of pages on certain occasions depending on other factors. We would first need to identify that the selected document is indeed relevant to the user, but once we make that determination, then we can use that information as well.
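
Pulling those three signals together, here is one hedged way the blend could look. The weights, the dwell-time threshold and the conditional use of page content are all my own assumptions, not Surf Canyon's numbers:

```python
# A speculative sketch of combining the three signals Cramer mentions:
# title/snippet overlap (seen before the click), dwell time (after the click),
# and, conditionally, overlap with the full page content.

def relevance_score(snippet_overlap, dwell_seconds, content_overlap=None):
    """Blend click-time and post-click evidence into a single score in [0, 1]."""
    score = 0.5 * min(snippet_overlap, 1.0)  # what the user saw before clicking
    score += 0.3 * (1.0 if dwell_seconds >= 30 else dwell_seconds / 30)  # dwell
    if content_overlap is not None:  # only once the page is judged relevant
        score += 0.2 * min(content_overlap, 1.0)
    return score

print(relevance_score(snippet_overlap=0.8, dwell_seconds=45, content_overlap=0.6))
# -> 0.82
```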

For today, I’ll wrap up by saying that, as a user, I installed Surf Canyon and was fairly impressed. It was more helpful than I expected (though there may be few more skeptical critics of search user experiences than me). In fact, the only drawback I found with the Surf Canyon plug-in was that I’m still a search marketer, and so I often need to see the “vanilla” version of a SERP. I kept having to toggle off Surf Canyon, which became a little frustrating, but to be fair, it’s really quite easy to do. If you’re not a search marketer, I’d definitely recommend taking it for a spin.

In my next Just Behave, I’ll continue my conversation with Mark, where we speculate on where search might go in the future.




