Danny Sullivan Tackles Search 3.0 And 4.0 In SMX West Keynote
I thought I’d gracefully retired from the Danny Sullivan Keynote Review business. Comparing Danny to Edward R. Murrow, assessing how the attendance stretched the room capacity… ahh, they were good times. Then I awoke to find myself in a large hall at the Santa Clara Conference Center. It wasn’t a dream! Turns out I was only a bit sleepy because that O’Hare blizzard delay caused me to land in San Jose at 3:00 a.m. on this day, February 26th, 2008. It was now 9:01 a.m., and a fresh Sullivan keynote was beginning at SMX West. There I was, sitting over to the right-hand side of the room (Danny’s left) near where Matt Cutts was hiding. Laptop open, battery charged. Danny speaks. As if this were liveblogging, which it isn’t, I now switch to the present tense.
The topic: Search 3.0, Search 4.0, and Beyond. To cover this — surely not, 4.0?? we’re still hearing complaints about Web 2.0! — Danny will have to cover 1.0 and 2.0. This is straightforward stuff for experienced search marketers, but brand new to many. Danny is talking about how Search 1.0 was primarily about on-page factors like keyword density and keyword matching. This was so unsophisticated, it more or less launched the practice of gaming the search engines. In those days, pre-1998, search engine optimization as Danny taught it through his Webmaster’s Guide to the Search Engines was relatively rare. In our hazy memories, we seem to remember, on one hand, ordinary marketers and webmasters making good use of Danny’s detailed, common-sense advice about how various search engines ranked content; and on the other, those who took that advice and ran with it in all sorts of evil ways. All will agree that Search 1.0 might have been a golden age for index spammers, but from the standpoint of the searcher, it sucked.
Moving on to Search 2.0, Danny is addressing the search engines' move to exploit the link structure of the web, treating a link like a vote. While the most notable pioneer of this trend was Google with its PageRank method, Danny is also mentioning Direct Hit, which looked at click paths to assess the apparent popularity of different search results.
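For readers new to the "link as a vote" idea, it can be sketched in a few lines of Python. This is a toy illustration only; the damping factor, iteration count, and the three-page graph are my own illustrative assumptions, not anything from the keynote or from Google's actual implementation:

```python
def pagerank(links, iterations=20, damping=0.85):
    """links maps each page to the list of pages it links to."""
    pages = set(links) | {p for targets in links.values() for p in targets}
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        # every page starts with a small baseline share of rank
        new_rank = {p: (1.0 - damping) / len(pages) for p in pages}
        for page, targets in links.items():
            if targets:
                # each outbound link passes along an equal share
                # of the linking page's current rank -- the "vote"
                share = damping * rank[page] / len(targets)
                for target in targets:
                    new_rank[target] += share
        rank = new_rank
    return rank

# hypothetical three-page web: "c" collects votes from both "a" and "b",
# so it ends up with the highest rank
ranks = pagerank({"a": ["b", "c"], "b": ["c"], "c": ["a"]})
```

The point of the sketch is simply that rank flows along links, so a page's standing depends on who links to it, which is exactly what the link farms discussed below set out to exploit.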
Already I am beginning to sense the difficulty of slotting particular search innovations into generations of x.0's, given that Direct Hit's methodology started to encroach on the types of behavioral analysis and quasi-personalization that would later be more fully developed under Search 3.0 (or 4.0, or 5.0, depending on who you ask). I won't belabor the point, but there seem to be too many .0's here. The numbering implies a temporal or generational shift, when in some cases these are simply different aspects or types of search, different metaphors or philosophies, that needn't be relegated to the scrapheap just because they coexist with other (or newer) approaches.
Anyway, back to Search 2.0. Danny is pointing out that it didn't take long for the optimizers to figure out how to game this supposedly bulletproof generation of search. You had the playful approach, Googlebombing, which would merely produce an embarrassing result on an obscure query. But more fatally, you had the development of link farms and the link economy, developments which Google denied for years.
Danny is now turning to a discussion of Search 3.0: Universal and Blended Search. This is the phenomenon whereby search engines increasingly mix and match other types of search results along with their standard Ten Blue Links search engine results page. For example, Google might place a YouTube video result in position 5 on the page. Because the still image requires more space than a standard result, it takes up quite a bit of screen real estate and pushes the remaining organic results that much farther down the page, below the fold. A Google News result in position 10 occupies one more slot that formerly belonged to the search marketer's organic playing field.
Spelling out the inadequacies of 1.0 and 2.0 actually offers a pretty interesting insight into why there is so much focus on 3.0 now. The official reason might be that users respond well to being offered a variety of different types of information, a variety that can be refined over time by paying attention to click patterns. An unofficial reason, though, might be that Ten Blue Links often don't withstand careful scrutiny on their own. The organic index is spam-ridden, user queries are often hard to disambiguate, and search algorithms often fail to produce definitive results. Finally, gen-2.0 search algorithms may over-reward "SERP staples" like Wikipedia, and fail to alert users to in-house content that the search engines might have spent much time developing.
Now Danny is illustrating the concept of Google Universal with a screenshot that shows a large number of local search results, totally dominating the area above the fold and pushing everything down the page. He's saying "and look, they're the top results, so you're very likely to click on them." I am now thinking to myself, "geez, they always do this! Grehan does it too!" This example clearly shows that the local results are the second-highest block on the page, not the first! Three attractively-positioned premium sponsored listings are taking up quite a bit of real estate above that, and are further highlighted with a goldenrod background. Moreover, about 4.5 sponsored listings are visible above the fold in the right margin, and they look pretty attractive too. I always have to remind these organic search gurus of this stuff!
Danny is also tipping his cap to blended search efforts at other search engines, such as Ask 3D and Morph. He gives a useful Yahoo example, showing how they blend in an event result from Upcoming, to give the search results page a fresh feel.
We’re barely into Search 3.0 and we’re now looking at making the so-called “social graph” and other elements of personalization a big part of search. This is Search 4.0, if you will. Danny is having mixed feelings about some of it.
Personalization can draw on data the engines collect from portal pages such as iGoogle (funny, Danny doesn't mention Yahoo's personalized home pages, which date back much farther); on users bookmarking things with the search engines' own bookmarking features or delicious (you'd also want to wonder about browser integration, web-based bookmarking 1.0, and the so-called P2P Search era, and why those failed to take hold while the search engines are now seen as clever for integrating similar features); on search histories showing what you've clicked on in the past; and on web history or clickstreams. Danny says that Eurekster and Yahoo 360 haven't succeeded yet. So, what are we to make of Search 4.0?
Danny is hitting a slightly skeptical note when it comes to talk of Facebook and the social graph. I think it is useful to play devil’s advocate here, and it doesn’t conflict with a recognition that the information could help advertisers and users. Danny’s light critique of this realm is the question of how seriously you can take a community where people have 5,000 “friends.” But those gathering data don’t need to take it all seriously — they could measure activity. The biggest problem with what Danny is calling “monitoring clicks in a trusted environment,” of course, is privacy. Bang on. If the social graph is tantamount to spyware, then for every significant action or advance, there is likely to be a reaction or retreat.
For the benefit of new attendees, Danny is fleshing out his talk to cover other key ideas that everyone should know about, but I’m using the time wisely to write the wrap-up paragraphs to this column.
I very much hope Danny reprises this talk next year, if not sooner. Many of those who come to the table with a little search knowledge that is now six years out of date can be hazardous to work with. Except for a couple of nanoseconds as he discussed Search 1.0, Danny proffered nary a mention of meta tags, page titles, heading tags, or keyword density. Some of those basics are important, but they constitute nothing more than a starting point towards good rankings. Nor did he bore us with nonsense about reciprocal link pages or other warmed-over gimmicks.
Rest assured, the scam cold callers will keep the least relevant search optimization techniques on the agenda for those impressionable enough to pick up the phone and listen. It's a marketer's job to stay current, though.
Search engines are getting smarter every year. Search marketers need to keep pace, and Sullivan's keynote is a great way to introduce the beginner-to-intermediate audience to the cutting-edge stuff, from an authoritative source. I won't be surprised if, next time around, I see a few attendees all but dragged in by the ears. Listen to what Danny's saying! You can't go wrong.
Some opinions expressed in this article may be those of a guest author and not necessarily Search Engine Land.