• http://www.justanother24hours.com contactdjy

    Interesting article – thanks. I also feel that search results are getting worse – for what it’s worth. Totally subjective but a consistent gut feel. I mainly use Google search.

  • http://www.mattcutts.com/blog/ Matt Cutts

    Danny, your Google search results look a little strange. Have you done anything weird to your search settings or location? Maybe used someone else’s cookie to join a Google UI experiment or something? I ask because I’m getting different results when I do the search. I asked my wife to do the query [search term research] as a sanity check and she sees the same results that I do. Here’s a screenshot of what I see:
    http://www.mattcutts.com/images/search-term-research.png

    Those results are actually quite good, in my opinion. I might switch out 1-2 results for a different one, but overall I think the results I see are quite good.

  • http://www.joblr.net Mikkel deMib Svendsen

    > Sadly, we still have no commonly accepted measurements of relevancy across search engines,

It’s interesting – I remember we’ve had this chat for years now, and still we have no widely accepted method for measuring search quality.

    Years ago I remember you came up with some pretty good ideas for how to conduct quality comparison (can’t find the link, though …) and I also know all the engines do something internally.

    So why haven’t the engines accepted a public standard? Do you know if any of the major engines are currently for it or against it?

  • http://ninebyblue.com/ Vanessa Fox

    Matt, I’m seeing the same results Danny is. I’ve seen a bunch of other weird stuff lately (blatant spam ranking really well). Can forward a few examples if you’d like.

  • http://www.mattcutts.com/blog/ Matt Cutts

    Vanessa–definitely feel free to forward them on. I’m in Google’s TGIF right now, but I’ll check into why Danny/you are seeing different rankings than I see on my computer, my wife’s computer, and my phone. :|

  • markjackson

    I see what Matt sees.

  • http://www.dazzlindonna.com dazzlindonna

    I see what Matt sees, but as my first page of results – and they seem to match what Danny describes as his first page of results. Which makes me wonder if maybe Matt was comparing his first page to Danny’s screenshot of the second page?

  • http://www.mediawhizsearch.com Marjory

    I’m confused. Aren’t the results supposed to be personalized based on search behavior? Why should anyone see the same results?

  • http://ItsTheROI.com Jonahstein

I see what Matt sees… except he has one more listing at the bottom and I get the related search terms box.

  • http://www.mattcutts.com/blog/ Matt Cutts

    Okay, a fellow Googler pointed out that I misread the post. Danny’s Google screenshot is page 2 of Google’s results and the rest of the screenshots are from the first page of other search engines. dazzlindonna, you’re right–I was comparing my first page to Danny’s screenshot of the second page.

  • http://searchengineland.com/ Danny Sullivan

    Matt, that’s it exactly. The first page of results ARE fairly good, as I noted. It’s the second page that’s largely a mess. I’ve bolded now that I’m talking about the second page (though I did go on at length to say this). The results there are just funky. And while I know this is the second page, and few people go to them (as I said in the story), still, these are results 11-20 that Google is saying are the best out of 125 million results — and many of them just don’t measure up for that.

  • http://www.alanmitchell.com.au alanmitchell

    What I can’t understand is why the general standard of paid search ads is also so poor and irrelevant to users’ search queries, despite thousands of advertisers having an almost open brief to write engaging ads to compete for the user’s click. Have a look at some examples I found when searching for hotels in Sydney:

    http://www.alanmitchell.com.au/techniques/relevancy-the-holy-grail-of-ppc/

    It’s a myth that paid search is ‘saturated’ and ‘so two years ago’ – there are still countless opportunities everywhere for advertisers wanting to focus on relevancy.

  • ronald

Let’s do a test. Since Google is claiming to organize the world’s information, let’s use that.
    Search:
    what is information [Res: OK]
    information is what [Res: OK]
    information what is [Res: 7: ACA - What is Chiropractic?]

From the results we can see that they value/order the most commonly used phrases. But their mistake is that they then treat all data equally, which can easily be exploited by SEO.
Bing doesn’t make that mistake.
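ronald’s word-order test can be reproduced programmatically. Here’s a minimal sketch (the query terms and the Google URL pattern are just the ones from his example) that builds a search URL for every ordering of the query terms, so you can compare the result sets side by side:

```python
from itertools import permutations
from urllib.parse import urlencode

def permutation_queries(terms):
    """Return one search URL per ordering of the query terms."""
    urls = []
    for ordering in permutations(terms):
        query = " ".join(ordering)
        urls.append("http://www.google.com/search?" + urlencode({"q": query}))
    return urls

for url in permutation_queries(["what", "is", "information"]):
    print(url)
```

For a three-word query this yields six URLs; if an engine really treated word order as a weak signal, the six result sets should largely agree, which is exactly what ronald is probing.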

  • Andrew Goodman

    Danny, don’t you think you’ve chosen an example with a high degree of difficulty?

(1) It’s three common words that have many synonyms: search, term, and research. That’s bound to confound many engines. It’s also low volume, so there wouldn’t be a wealth of data on it. And you don’t enclose it in quotes. Isn’t the more common usage “keyword research”? I see a pretty solid Google SERP on that term.

(2) It’s a sort of incestuous one: a search by someone looking to discuss search marketing keywords, with search marketers. So that field is bound to *attract* more spam from the 100,000 SEOs who use these products and want to experiment with placement techniques.

    I’m sure you’ll move onto more practical consumer examples like

    [chiropractor toronto] (should be very good, common business type, large city)

    or

    [chiropractor schenectady] (should have a bit more trouble, longer tail, smaller city)

    I’ll be looking forward to more insights.

Certainly for the sites I work with, I see a lot of SPAM in the results — Sites Positioned Above Me!! If only Google knew what some of these companies were doing, I’m sure they wouldn’t rank so well. The question is, after weeding out all that stuff, what’s left? Without human curation I think it is pretty hard for search engines to make judgment calls about intent as it intersects with quality/relevancy, across so many different types of queries, using only “signals” of same (esp. on long tail stuff).

    A final point here would be — I’d love to compile a list of places users should be searching to get better results *if not Google*. On the above, for example, my instinct would tell me that Yelp would be a much better resource.

    That turns out not to be the case for schenectady chiropractors (no reviews), but somewhat so for Toronto (some reviews, larger market).

    As for reviews and community helping the consumer find the quality vendors, on sites like Yelp…. Google has an inconsistent approach in featuring these, and in some sense competes with them. So they do well for awhile in Google SERP’s, and then get buried. An exception is TripAdvisor. But that may be for good reason (more volume, longer track record to establish intent signals).

    But on a query for “St Lucia hotels,” where TripAdvisor comes in #1 as they often do… you’d still be worried if junky sites showed up, say, in positions 11-20. On this query, I didn’t find that to be the case. Maybe Google has a better time of it reading signals for queries like this both in terms of their popularity, and in terms of consistent consumer intent.

  • http://www.cucumbermarketing.com Helen

    Thanks so much Danny for bringing this up! I’ve been seeing a LOT of irrelevant search results lately. More and more of them are coming from facebook, some directories, various portals, but not the actual sites I am looking for….

  • Stupidscript

Marjory (October 23rd, 2009 at 10:23 pm ET) makes a good point. Where is the personalization based on search history that we’ve been hearing so much about? Or does it only occur with a sustained search exercise … based on the current session, and not including previous sessions?

  • http://www.SurfCanyon.com Mark Cramer

    The problem you’re facing, which is true for virtually all queries, is that “search term research” can mean different things to different people at different points in time. The result set, if generated correctly, should represent those documents with the greatest likelihood of satisfying the user’s information need, but there will always be ambiguity. The “Utah History Research Center” is perhaps not a great example, but if (strangely) 5% of searchers find that one useful, then perhaps it makes sense to put it on page 2.

    There’s also the interesting trade-off between the total relevance and the “risk” of the result set. http://web4.cs.ucl.ac.uk/staff/jun.wang/papers/2009-sigir09-portfoliotheory.pdf is an interesting paper that you might enjoy which essentially says that the quality of the result set can be improved by introducing results that do not correlate strongly with others on the page. In other words, diversity in the result set can improve the search experience by reducing overall risk.
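The diversification idea in that paper is in the same spirit as greedy re-ranking schemes like maximal marginal relevance. Here’s a toy sketch (the relevance scores, similarity matrix, and trade-off weight are all made up for illustration) showing how a slightly less relevant but different result can displace a near-duplicate of an already-shown one:

```python
def diversify(relevance, similarity, lam=0.7, k=3):
    """Greedy re-ranking: each pick maximizes
    lam * relevance - (1 - lam) * (max similarity to already-picked results)."""
    candidates = list(range(len(relevance)))
    picked = []
    while candidates and len(picked) < k:
        def marginal(i):
            redundancy = max((similarity[i][j] for j in picked), default=0.0)
            return lam * relevance[i] - (1 - lam) * redundancy
        best = max(candidates, key=marginal)
        picked.append(best)
        candidates.remove(best)
    return picked

# Three near-duplicate strong results (0, 1, 2) and one weaker but different result (3).
relevance = [0.9, 0.88, 0.86, 0.6]
similarity = [
    [1.0, 0.9, 0.9, 0.1],
    [0.9, 1.0, 0.9, 0.1],
    [0.9, 0.9, 1.0, 0.1],
    [0.1, 0.1, 0.1, 1.0],
]
print(diversify(relevance, similarity))  # -> [0, 3, 1]
```

Result 3 jumps ahead of results 1 and 2 despite its lower relevance score, which is exactly the “reduce risk by adding uncorrelated results” behavior Mark describes.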

    Therefore, while I’m not going to defend all of the results above (some are pretty bad), it makes sense that some of the results might be outside of what you might consider “relevant” because someone else just might think otherwise.

The fundamental issue is one of ambiguity combined with an overabundance of content. With 126 million matches it’s virtually impossible to make every result at the top relevant to the particular user, at that point in time, while completely eliminating all that are irrelevant.

  • http://www.liveambitions.com liveambitions

    Here’s my 2 cents:

1. Those websites on page 2 are probably strong, and thus get top rankings. There may be other pages with better relevancy, but they may sit on weaker websites whose strength isn’t great enough to overpower the stronger ones.

    2. “Search term research” may have 125 million results, but it doesn’t necessarily mean that this phrase is popular. Unpopular phrases don’t get much play in article titles and content. So, Google has to do its best to come up with whatever is available for indexing.

    3. Let’s not forget that Google’s algorithm is still driven by computers. This is a technology that is still being worked on. We’re kind of expecting a computer to do a human’s job. Only a human can look at a listing and say that it’s truly relevant or not. I think Google’s done a great job thus far.

    For example, if you run a search for “apple”, how in the world is a computer supposed to know what kind of apple you’re talking about? Unless the computer can read the searcher’s mind, it’s only going to be able to spit out what it’s programmed to do. Search results are sometimes only going to be as good as the search queries.

    Steve
    http://www.liveambitions.com

  • http://searchengineland.com/ Danny Sullivan

    Thanks, everyone, for all the comments. Some responses.

Andrew, yep, those are three common words. But search engines have long used proximity as a ranking signal, even when you don’t deliberately seek a phrase using quotation marks. Google moved that way long ago, in part because people DON’T use quotation marks or other search commands. So in my book, that doesn’t excuse things. In fact, do this:

    http://www.google.com/search?hl=en&safe=off&q=%22search+term+research%22&start=10&sa=N

Now I’m quoting those terms. I still see the same crummy results, including the China Wholesaler site in #20. It has nothing — nada — to do with those words other than having those three words in that exact order on the page. If all it takes to get into position 20 on Google these days is using the exact phrase once, our jobs as search marketers just got a lot easier!
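The proximity signal mentioned above rewards pages where the query terms appear close together (an exact phrase being the tightest case) over pages where they are merely scattered. A toy sketch, with invented example pages, using the smallest token window containing all query terms:

```python
def min_window(tokens, terms):
    """Smallest span of tokens containing every query term, or None if absent."""
    terms = set(terms)
    positions = [i for i, t in enumerate(tokens) if t in terms]
    best = None
    for start in positions:
        seen = set()
        for i in range(start, len(tokens)):
            if tokens[i] in terms:
                seen.add(tokens[i])
            if seen == terms:
                span = i - start + 1
                if best is None or span < best:
                    best = span
                break
    return best

phrase_page = "search term research tools and tips".split()
scattered_page = "enter a search term on the autism research page".split()
query = ["search", "term", "research"]
print(min_window(phrase_page, query))     # -> 3 (exact phrase, tightest window)
print(min_window(scattered_page, query))  # -> 6 (terms scattered, weaker match)
```

A ranker using a signal like this would score the exact-phrase page far above the scattered one, which is why a page ranking solely on one incidental exact-phrase occurrence is surprising.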

    True, “keyword research” is a far more popular query on this subject than “search term research.” I still don’t think that excuses such bad results.

    I disagree on the incestuous issue. Google’s listing things like slides and agendas. No one went out with an agenda to deliberately rank for these words. And on Yahoo, half the top results weren’t even related to search term research. I mean the Autism page ranks because it’s about Autism Research and has the words “enter a search term” also on the page. That’s the type of false positive I’d expect from AltaVista circa 1999, not Yahoo in 2009.

    On [chiropractor toronto], sure, that’s better. Though not sure how much I trust one of the pages in Google’s top results that has a big “Toronto’s Chiropractors Directory rated on the TOP on YAHOO and GOOGLE!” heading at the top of the page. Or another with all the “useful links” at the bottom leading from a page about Toronto chiropractor services to “plano dentist” or “LA oral surgeon.” That screams out link exchange. Guess that stuff is working again in Google.

I guess my main point is that for many searches, I can find outliers that make you just go “huh?” And that’s the myth I’m talking about, the idea some people have that when you get a top ten list, these really are the top ten pages on the web for what you searched for. In reality, they’re the top ten best guesses, and sometimes those are terribly wrong. And in general, I feel like things are getting a little more wrong these days than right. I just don’t have stats to back that up.

Stupidscript, personalized search is making a huge difference in search results, I’ve been finding. Logged in, I can see pages that were buried back in the 4th page of search results leapfrogging to page 1. That’s great, one of the pluses to personalized search. But the core non-personalized results should still be pretty strong, too. And to see the personalized ones, you’ve got to be logged in and have web history enabled. See this for more:

    http://searchengineland.com/google-search-history-expands-becomes-web-history-11016

Mark, agreed, it’s a huge challenge for a search engine to know what exactly someone seeks for any query that’s entered. But I don’t think “search term research” is that ambiguous. At best, the key issue is whether you want tools, reviews of tools, or articles about the practice. And we do see that variety. But the quality can be iffy. And I really doubt 5% of those searching on this topic find “Utah History Research Center” to be relevant. That just doesn’t make sense to me.

    I also agree that diversity can be useful. But that might be part of the problem. The diversity dial might be cranked up too much.

liveambitions, Google’s search technology is over a decade old now. That’s like 100 search dog years. The oddities on page 2, and the weirdness on Bing and Yahoo, are the kinds of things you’d see a few years ago that were then corrected. They feel like a step backwards from where we’ve been, not a sign that the technology simply hasn’t reached the right mark yet.