89% Find Search Engines Do Good Job Finding Information, But “Noise” Is Issue
Has Google’s relevancy gotten worse? A recent opinion poll suggests not, while at the same time confirming a concern that’s been rising in anecdotal accounts — there’s too much “noise” surrounding the “signal.”
Rasmussen Reports surveyed 740 adult Americans on January 4-5 about a variety of search engine related issues. The key question that caught my eye?
“In terms of finding what information you need, how do you rate today’s Internet search engines like Google, Yahoo and Bing …excellent, good, fair or poor?”
Most Rate Search Engines Well
In total, 89% found that search engines do a good or excellent job in finding information. Here’s the full breakdown:
- 47% – Excellent
- 42% – Good
- 10% – Fair
- 0% – Poor (technically between 0% and 1%, but specific figure not given)
- 1% – Not sure
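As a quick sanity check on the arithmetic, the 89% headline figure is simply the two positive responses combined (the category labels below just mirror the poll's answer choices):

```python
# Rasmussen breakdown; "Poor" was reported only as under 1%, so treated as 0 here.
breakdown = {
    "Excellent": 47,
    "Good": 42,
    "Fair": 10,
    "Poor": 0,
    "Not sure": 1,
}

positive = breakdown["Excellent"] + breakdown["Good"]
print(positive)  # 89, the headline figure
```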
Does that mean Google itself is earning such high marks? Maybe these are all Bing users? Unlikely. Unfortunately, the survey didn’t ask which search engine people used. It did ask whether people stick with the same search engine all the time. Most do, which means few are spreading their searches across multiple engines:
“Do you generally use the same Internet search engine all the time?”
- 78% – Yes
- 19% – No
- 3% – Not sure
Since Google is by far the most popular search engine in the US, it’s reasonable to assume that the overall satisfaction numbers largely reflect satisfaction with Google.
But Noise Is An Issue
If search engines are doing such a great job in general, and Google in particular, why have we seen a spate of posts recently suggesting that Google’s gotten worse? I think the answer is in another question from the poll:
“Which is a bigger problem when you use an Internet search engine – that you can’t find what you need or that your query generates too much irrelevant data?”
- 70% – That your query generates too much irrelevant data
- 13% – That you can’t find what you need
- 18% – Not sure
Only 13% say they can’t find what they’re looking for. The answers are there, the “signal” that people want to tune into. They’re just surrounded by a lot of noise.
Another Poll With Seemingly Conflicting Findings
I think you see a similar frustration in a poll that Lifehacker just ran. This gathered nearly 10,000 responses to the question:
“Have Google’s Search Results Become Less Useful To You?”
- 43.8% – Kind of/sort of, but it’s still the best way to get at the good stuff
- 33.8% – Absolutely. The spammers have gained a significant foothold
- 11.2% – I haven’t really noticed a change
- 7.1% – I’d say no, or not to the point where it matters, at least
- 3.6% – No, and actually, my results have been better and more convenient lately
- 0.6% – Other
The headline on Lifehacker’s poll results story was “Over 77 Percent of Lifehacker Readers Say Google’s Search Results are Less Useful Lately,” which combined the two most popular responses, one that is totally negative and one that can be read either way (results are less useful, but Google’s still the best way to find things).
However, “less useful” doesn’t mean “useless,” or that people aren’t continuing to find things with Google on a regular basis. From the same poll, you could just as easily write a headline saying that two-thirds agree that Google’s results are still useful. Consider the side-by-side charts below:
The first chart combines the negative and mixed responses into the “Yes” slice. The second chart combines the positive, neutral and mixed responses into the “Yes” slice. Same poll, seemingly different conclusions — unless you understand that growing noise (and frustration with it) doesn’t mean that Google has suddenly become useless.
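To make the slicing explicit, here’s a small sketch (the short category labels are mine) showing how the same six numbers support both headlines:

```python
# Lifehacker poll responses, as a percent of roughly 10,000 votes.
responses = {
    "mixed":         43.8,  # "Kind of/sort of, but it's still the best way..."
    "negative":      33.8,  # "Absolutely. The spammers have gained a foothold"
    "neutral":       11.2,  # "I haven't really noticed a change"
    "positive_mild":  7.1,  # "I'd say no, or not to the point where it matters"
    "positive":       3.6,  # "No... my results have been better lately"
    "other":          0.6,
}

# Lifehacker's headline grouping: negative plus mixed.
less_useful = responses["negative"] + responses["mixed"]

# The alternative grouping: everything that still counts Google as useful.
still_useful = (responses["mixed"] + responses["neutral"]
                + responses["positive_mild"] + responses["positive"])

print(round(less_useful, 1))   # 77.6 -> "over 77 percent say less useful"
print(round(still_useful, 1))  # 65.7 -> roughly two-thirds say still useful
```

The trick is that the 43.8% “mixed” slice is big enough to swing either headline, because it lands in both groupings.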
Good Remains Good Enough
Last year, I did a long look at Google’s results in my story, How The “Focus On First” Helps Hide Google’s Relevancy Problems. The point of that was to illustrate what the poll numbers above are showing: good is good enough for Google to win. As I wrote:
In a few minutes, give me a query, and I can usually find at least one result that doesn’t match the quality you’d expect to be in the first page of results on Google. If it’s an area where I’m an expert, I can do it even faster — and find more outliers. And if you go to the second page of results, it can sometimes be laughable. Google survives this because for the most part, a few good answers are good enough.
If Google really were as bad as some anecdotal accounts suggest, you’d see a loss of users, in my opinion. That’s based on watching the search engine space for 15 years now, observing what’s helped players rise and fall. Instead, Google continues to remain well above its closest competitor. One measuring company, comScore, just reported Google having record usage in the US.
But Google Comes Under More Scrutiny Now
By the way, Bing and Blekko also get by on “good is good enough.” As with Google, I can easily find irrelevant results on them without breaking a sweat. But they aren’t coming under the fire that Google’s starting to take because, in my view, they’re enjoying the underdog benefits that previously Google enjoyed.
As I wrote in my Blekko Launches Spam Clock To Keep Pressure On Google story last week:
There was a press love affair when Google first came out. There continues to be a consumer love affair: in my mind, the Google brand on search results can make them seem better. Several past studies have found that just putting the Google logo on someone else’s results will make consumers think the results are superior.
I think we’re finally seeing this slip for Google. Just as its achievements were inflated into super-greatness, now its results are blown up into huge failures.
You can see this in Paul Kedrosky’s piece from this week, when he writes:
We could get better algorithms, which is happening to some degree, with search engines like Blekko and others
I like Paul, a lot. I like Blekko and the folks over there, a lot. But make no mistake. Blekko absolutely does not have a better search algorithm than Google. It has a different search algorithm that is used against a completely different collection of documents than Google — and one that is probably only a couple billion pages in total (if that) versus Google dealing with tens of billions of pages.
No one knows who has the better search algorithm. Blekko, for all the attention it has gained, still has tons of learning to do — and the folks at Blekko have said as much, even as they ride the latest “Google sucks” wave.
Perception Isn’t Reality
Personally, I’ve felt that Google’s search quality hasn’t been as good as in the past. But my gut feel might be wrong. I remember the negative experiences far more than I recall the many times that Google works extremely well for me. I also use Google far more than I use Bing or Blekko — which also means I’m less likely to notice things that go wrong at those search engines. I also search for things I never would have tried in the past. My expectations have grown, over time.
I don’t think I’m unusual. For those who’ve written recently of Google’s “decline” in search quality, I think they’re all regular Google users and mainly writing from their guts. None of the posts I’ve seen appear to have done any robust testing of queries on Google and compared those to Bing, much less tried to measure changes in relevancy at either search engine over time.
Testing Needed, But Testing Is Tough
I’ve written several times recently — as I did years ago — that what we really need are batteries of tests run against the major search engines, in a way that both would agree is fair. Our column earlier this week, Google vs. Bing: The Fallacy Of The Superior Search Engine from Conrad Saam, isn’t the type of testing I’m talking about, either.
That column looked at only 20 different queries and found Bing slightly ahead. Pick a different 20, and Google might have “won.” I think you need to run many more queries than that.
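One way to see why 20 queries is too few: with a sample that small, the margin of error on any “win rate” swamps a slight lead. A rough sketch using the normal approximation (the win rate and query counts here are illustrative, not from the column):

```python
import math

def margin_of_error(p, n, z=1.96):
    """95% margin of error for a proportion p observed over n trials,
    using the normal approximation."""
    return z * math.sqrt(p * (1 - p) / n)

# Suppose Bing "wins" 55% of head-to-head query comparisons.
p = 0.55
for n in (20, 200, 2000):
    print(f"n={n:5d}: 55% +/- {margin_of_error(p, n):.1%}")
```

At n=20 the interval is roughly plus or minus 22 points, so a 55/45 split is statistically indistinguishable from a tie; it takes hundreds or thousands of queries before a “slight” lead means anything.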
In addition, do you discount results when Google or Bing changes them based on your search history or your location? Discounting them might not make sense, since such systems are designed precisely to provide better results for individuals.
Do you discount “smart” answers or vertical search results like news headlines that may appear, focusing only on the “natural” results, those classic “10 blue links?” That’s not how a typical searcher interacts with results.
No One Really Knows
Relevancy testing was complicated enough 10 years ago, when results were simpler. Today, devising a fair testing system is even more complex. Our column wasn’t trying to designate a “winner” in the search sweepstakes but rather to point out the fallacy of anyone declaring that one search engine is “superior” to another. The honest answer is that we really don’t know. We don’t have the metrics to assess that.
I’ll be revisiting the relevancy metrics challenge in the future, plus talking with the major search engines about whether there’s a way to go forward with the idea, so that we’re not relying on someone’s ego search or anecdotal accounts to decide which search engine has the best results. There’s got to be a better way than that.
For more on this subject, see some of these past stories from us:
- Reviewing Some Bad Google Search Results With Sergey Brin
- The Google Sewage Factory, In Action: The Chocomize Story
- How The “Focus On First” Helps Hide Google’s Relevancy Problems
- Blekko Launches Spam Clock To Keep Pressure On Google