Return To The Search Engine Shoot-Out


My colleague Chris Sherman briefly looked at the PC World ‘Search Engine Shoot-out’ article the other day, and I also mentioned it in my own weblog. I read through the article briefly, but something about it kept nagging me, so I wanted to go back and look at it in rather more detail. The more I look at it, the more concerns I have over the piece.

The first concern, of course, is the very idea that there is a ‘best search engine’. There’s very little difference between saying that and saying that there is a ‘best reference book’ or ‘best President’ or ‘best television programme’. It simply doesn’t exist, yet there does appear to be this Holy Grail notion in the industry that we should all be trying to find some ultimate resource that will be the answer to all of our needs. Nothing could be further from the truth – there IS no ‘best search engine’, in the same way that there’s no best reference book. What is best in one situation is not best in another, and in order to use search engines successfully we need to employ a blended approach, moving from one engine to another as needed. In fact, on my own website I have a listing called ‘Which search engine when?’ that lists a variety of things you may want to do or search for on the Internet, and which engines may be best at helping with each task. I’ve listed over 100 different resources, all of which are good at some elements of search, and not so good at others.

It is not difficult to move from one option or alternative to another, and I see no reason why we shouldn’t expect people to do this when it comes to internet searching, when they’re perfectly capable of moving between different resources in the rest of their lives. The idea promulgated in this article is that Google is in fact the best search engine, and quite simply this is not the case. It may be the best search engine in some situations for some users, but that’s rather different from saying that it is ‘the best’, which is quite simply a silly, simplistic statement.


A second concern that I have relates to the competitors Jeff Bertolucci, the author, decided to pit Google against. Some of these were excellent choices and he hit the nail on the head – Yahoo, Live Search, and Ask do need to be considered in any comparison between search engines. However, what puzzled me were some of the other choices that were made – Wikipedia, for example, is not a search engine, and it simply cannot be considered in the same way, or be expected to perform searches that are not appropriate to its interface. Similarly, the Open Directory Project is a completely different type of resource – an index or directory, which gathers, stores, indexes and displays its data in a completely different way. It was obvious from the very outset that both resources would perform poorly, as indeed they did. What about the omissions from the list as well? I was very surprised not to see Exalead on the listing, and when I ran some of the sample queries Bertolucci used, it performed very well indeed, and would certainly have been very highly placed in the final table. In fact, I could name a number of other engines that could easily have slipped into the list, replacing those that simply should not have been there in the first place.

The approach used to work out the results also needs to be looked at in some detail. At this point I’ve got a lot of sympathy with Bertolucci, because it’s very hard to test and compare search engines unless you do hundreds if not thousands of tests. Anything less is going to open you up to criticism of anecdotal testing (looking for favourite names or football teams), but if you only have a limited time or budget available, it becomes inevitable that this approach has to be taken. The type of searches used – keyword and phrase queries – is of course going to work well with certain engines and not others. If you are looking for an explicit result giving a particular fact (as all the test queries did), some engines will of course perform better, because that’s what they were designed to do. However, if a different approach were taken – asking an engine to provide a more holistic response, giving the searcher lots of information about a particular subject, an overview, or ways of expanding or narrowing the search – then quite clearly Google is not going to excel, since that’s not what it does best.

Moreover, a search engine is not simply the sum of its results – there is much more involved than that. The level and amount of functionality is of vital importance, and comparisons need to take into account the ability to search on regions of the world, file formats, periods of time, different languages and so on. It’s also necessary to look at the way the information is displayed on the searcher’s screen; there’s no point in having the correct result available if it’s difficult to read.

Now of course it’s very easy to say this, and much more difficult to put into practice. Indeed, much of this is going to come down to individual preference, and while one person may like thumbnail shots of a page on the results screen, someone else will hate them. All of which adds weight to my point that there’s no best search engine, so why pretend that there is? Given that the tests were run on a small number of (sometimes ill-chosen) search engines, the fact that the overall result showed that Google was “indeed the best search engine” is quite frankly completely meaningless.

Let’s move along and look at some of the other categories included in the shoot-out, and in particular the second section of the article, entitled ‘Undisputed Champ’. The very first sentence highlights another problem area for me. The author states: “If you use Google and are happy with it, you have no reason to switch engines.” There is a fundamental flaw in this opinion. I may be very happy driving a 20-year-old Cadillac, without realizing that newer models are faster, with lower fuel consumption and a whole host of interesting gadgets, but does that mean I should stick with what I’ve got? Of course not. In fact, the people who most need to look at different engines are exactly those who are in an unthinking comfort zone and not looking at alternatives. This brings us very close to the territory of ‘there are things that we don’t know we don’t know’, and the only way you can find out what you don’t know is to do the exact opposite of what Bertolucci is suggesting: explore, move out of the comfort zone, and see if you can be more effective with other tools.

This section of the article also illustrates another failing with the overall methodology used. An image search was run on the term ‘windform’ with the searcher having a specific idea of exactly what they wanted – a horn that measures 20 feet in length. A search engine gained points if it returned an appropriate image on the first page of results. However, whether or not a search engine did this is largely irrelevant. As long as it found images that matched the search term, it did a good job. I’d be concerned if an engine returned pictures of cats for a query like that – unless they were pictures of a cat called ‘Windform’ – but when an engine returns relevant images that simply aren’t the one the searcher had in mind, it is not the search engine that has failed; it is the searcher. If a searcher cannot clearly indicate what they want, then a search engine has little to work with, and a single-word search is quite simply a poor search. I’d be willing to bet that if the search were expanded to something like ‘windform horn’, all the image engines would have done a good job. It is unfair to criticize a search engine when it fails to live up to your unarticulated request for a specific piece of information; we need to move a lot further down the road towards personalization for this to become a valid criticism, and we’re certainly not there yet.

The section of the article ‘What’s new in search’ was also a puzzle, given that the author of the piece was actually spending his time looking at existing search engines, rather than the test beds that all the major search engines use. Once again there is a fundamental flaw in the assumptions made in this section. Search engines are criticized because they don’t, for example, display a photograph of a daffodil when that search term is input. If that’s not their job, it’s not a valid criticism. To be fair, however, there is an inconsistency here – Ask, for example, did not show images of daffodils, but it did show an image of the Eiffel Tower when that term was searched for. So the functionality is willing, even if the search term is weak. However, in order to do the search engines justice at this point, surely the author should have been looking at the test beds? Ask X displays images of daffodils along with a whole host of other information. Searchmash (from Google) does not.

The section on television guides is one that I’m going to pass by, simply because it did exactly what it said on the tin. It’s one of the few sections that didn’t surprise me, that chose good contenders and provided an excellent summary.

It is a shame that I can’t say the same about ‘Smart Interface Tricks’, which primarily limited itself to looking at the Ask smart answers function and comparing it to the other contenders. It was a shame that the ability to suggest different search terms wasn’t mentioned, or the opportunity of saving searches as RSS feeds, or the fact that some search engines (Google Suggest and Ask X, for example) can suggest terms while the searcher is typing. Or the ability to do a phonetic search, offered by Exalead, or the various ways that Accoona has of targeting search results. I could go on, but I think the point is made, though I’m slightly concerned by the term ‘tricks’. None of these are tricks; just good, old-fashioned programming based on an understanding of what will make life better for the end user.

The picture search engine section also puzzled me. It’s true that the major contenders were included, but again, limiting the success or otherwise of the engines to their ability to return specific images was a waste of possibilities. Returning photographs that match the search terms is of course paramount, but the ability to restrict results by colour, black and white, size, royalty-free status and so on is also important to the searcher. Seeing lovely, accurate colour images of daffodils is of no use to me if I want black and white images. Once again, the parameters used in this section have to be called into question. Flickr, for example, is a huge repository of photographs and a key search tool when it comes to images – partly because of the images themselves, of course, but also because the opportunity to browse through specific groups of images as defined by users, to hold discussions around particular subjects, or to comment on images is hugely important.

There is a section on news resources, but again there were strange omissions. The BBC site was ignored, for example, yet its search engine is perfectly functional, and I would have found it helpful to see how it matched up against the other engines out there. Once again, the emphasis is placed solely on results, with little reference to functionality – results really are not the be-all and end-all of a search engine’s worth, and other considerations do need to be taken into account. Though unfortunately, apparently, not in this survey.

I’m going to ignore the Mobile/local section of the report entirely; very few search engines are paying attention to local needs if those needs are not based in the US. Since I did not have access to the particular type of mobile/cell phone being used, I can’t provide sensible comments anyway, other than to say that simply because a search engine looks good on one device, it doesn’t necessarily mean it would look good on another, and using just one device does, in my view, render the testing fairly pointless.

The shoot-out has a section specifically devoted to the future of search. Ask X is mentioned (though the author of the piece continually refers to it throughout the entire article as Act X, which does make me wonder slightly), as is Searchmash. Snap also gets a brief mention, but that’s about it. No mention of Yahoo Mindset, no mention of Google Labs and the things they are working on, no reference to new and developing Web 2.0 search functions. Perhaps by this point the author was beginning to realize that he’d bitten off more than he could chew, or alternatively the piece was badly edited, but in either case the opportunity to look at this in depth was missed. These errors continued in the section on creating customized search engines – where was the discussion of the Eurekster swicki? Mention of the Yahoo search builder? Or even, in a piece which seems to take every opportunity to point out how wonderful Google is, why was there no mention of the Google custom search function?

At which point the article draws to an end, with an explanation of how the testing was done. The author claims to have covered the ‘real world use’ of search engines, but where was the mention of multi- or meta-search engines? Search engines designed for children? Social bookmarking search engines? I could continue (and still not stray into the area of search engines not designed for the average member of the public), but there seems little point. As a shoot-out, the article is sadly lacking in almost every area, and as a consequence the results are of little value. For a resource such as PC World, which has an excellent reputation and has previously produced some excellent resources, this is less of a shoot-out and more of a damp squib.


Opinions expressed in this article are those of the guest author and not necessarily Search Engine Land.


About the author

Phil Bradley
Contributor
