• http://www.seroundtable.com rustybrick

    I hope to go through the quiz tomorrow on the plane… Should be fun to pick up from where you left off.

  • http://www.blizzardinternet.com Carrie Hill

    WOW – I got a 76% and was seriously doubting what I knew. I thought I knew what I was doing – but after reading all the buzz about “wrong answers” and such I don’t feel so dumb!

    Thanks for taking an in depth look at this – I thought for a bit that I was the only one thinking it was a bit odd.

  • http://www.seomoz.org randfish

    Danny – while I disagree with a great many of your critiques (and think you’re just dead wrong about a couple), I’ve gone over the quiz with a fine-tooth comb, paying careful attention to this post in particular. I doubt you’ll be fully satisfied (I simply couldn’t justify changing a few of the questions you called out), but I do think you’ll like the new #6 :)

  • http://searchengineland.com Danny Sullivan

    Which ones, which ones? I gotta know, Rand — where am I dead wrong?

    See, that’s one place where the test is indeed useful — it has people talking about different ideas on SEO. But I want to know where I’m dead wrong, so I can either understand why or argue my case :)

  • http://www.rootinfosol.com root123

    Kool – I must say that everyone associated with SEO should go through it.

  • http://www.seomoz.org randfish

    Dead wrong – Google does not show a “representative sample” of links in any sense of the word. The absolute correct answer there is Yahoo! If you had quibbled with me because Google’s blog search link command provides good data, I’d say OK, but it’s only for blogs, so Yahoo! is still better. In addition, with Yahoo! you can see links that come from only certain sources using “site:” and choose only links from pages containing certain keywords, etc. It’s a flexible, powerful tool. Google’s web link: command is as close to useless as you’ll get.

    Also, preferred domain and canonical URL are not the same thing. One refers to a domain and the other to a given page location. Canonical is used to describe the version of a given page that is the original “source,” or the version that the website owner would want to be that source. Canonicalization isn’t a process you do in Webmaster Central; it’s something you’d need to do in your site architecture (good examples of the problem would be paginated versions of content on blogs, print-version pages on media sites, content that’s been licensed out, etc.)

    #9 – you’re just being ridiculous there. Media search engines? It’s like you just want to find something to quibble with – :(

    Hopefully, you’ll be a little happier with the edited version.

  • http://www.tekwebsolutions.com Mike Tekula

    I think I agree with the spirit of Danny’s post here in that I didn’t come away from the SEOmoz quiz feeling like my 80% rating of “SEO Professional” really made a difference one way or the other. A lot of the questions would have left me a bit disgruntled if that score actually meant something.

    That said, I thought the critique of #9, among a few others, was a bit of a stretch since they weren’t very difficult or problematic questions for me. Sure, you can nitpick and find something wrong with every one of the questions – some ambiguity in the question language, the fact that some of it was simply anecdotal and not of high relevance for a working knowledge of SEO, etc. I think the test had more to do with determining how much you read on SEO than how skilled you are. What would the fact that PageRank is named after Larry Page have to do with your skills as an SEO? Bit of a stretch to suggest that matters. . .

    Also, it looked like the scores stated over at the SEOmoz blog were all over the place. Seasoned SEOs were coming in at 70% while SEO bean sprouts (not unlike myself) were sometimes up near 95%. That obviously tells you something. . .

    Bottom line: I had fun taking the test, but I don’t think I’m going to be posting my “badge” anywhere.

  • http://www.altogetherdigital.com Ciarán

    “Bottom line: I had fun taking the test, but I don’t think I’m going to be posting my “badge” anywhere.”

    Which, one assumes, was the main point of it?


  • http://searchengineland.com Danny Sullivan

    I didn’t say Google showed a representative sample. I said a sample of links, period. And that’s exactly what it shows. And I further asked what is more important to you: a sample of links that Google decides is worth showing, or a chunk of all links. The answer is, it depends.

    You might feel the sample Google shows has no value, that they are mixing up low quality and high quality links to mess with our heads. Might be true, but then again the links they decide to show might be important some way. Overall, I agree with you that if you want a big comprehensive list of links, Yahoo’s the way to go.

    I should add that I originally gave each question a “grayness” factor but pulled that as being too confusing. But this was a question I didn’t feel was that gray. I generally agree with you, but you can indeed quibble (and some people aren’t even going to consider it quibbling).

    As for canonical, sorry, Rand. How you define the word does not mean everyone defines it that way — and I’ve heard people use it to mean various things. I have most heard it in the industry as being exactly what I wrote — how search engines, Google in particular, decide which URL to use for a page if they have multiple choices. In particular, see Matt’s post here, where he says:

    “Sorry that it’s a strange word; that’s what we call it around Google. Canonicalization is the process of picking the best url when there are several choices, and it usually refers to home pages….When Google “canonicalizes” a url, we try to pick the url that seems like the best representative from that set.”

    So see, that to me is Google doing canonicalization. What you describe is an attempt to influence the canonicalization process. And you’re influencing it because you have a preferred URL or domain you’d like to see show up — something that may NOT be the canonical domain that Google chose.

    So when you say in the test, “the primary URL you want associated with the content is known as the ‘canonical version’,” I’m like no it’s not — the canonical version is what Google picked, and what you want has no defined name. We don’t have some common industry jargon for that. If I had to pick, I’d call it the preferred URL or the preferred domain. And when it comes to domains, that’s exactly what Google calls it — preferred domain, a way to influence from Google Webmaster Central how Google may do canonicalization on a domain level. On a URL level, we don’t have those tools there.

    As for media search engine, no, I’m not being silly at all. Blinkx? You know, it reads video files and transcribes the audio to text, to make it easier to search for content within the file? Google Video used to do this; Everyzing also does it.

    Rather than quibble, it’s a seriously bad question. You asked about crawling, but then you included content that isn’t crawled but rather indexed. I mean Flash and Java, those are on pages that get crawled, not typically files that are going to stand on their own for spidering, right? So it’s not that they don’t get crawled — it’s that the content isn’t indexed when visited.

    Now perhaps it was meant as a trick question — but if you are going to be tricky, then you’d better make sure that the trick can’t be spun back on you. ANY URL to ANY content, regardless of content type, is easily crawled. But is the content itself easily indexed?
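
    The canonicalization process being debated here, a search engine picking one representative URL out of several duplicate variants, can be sketched in miniature. This is an illustration only: the normalization rules below (lowercasing, folding www, collapsing index.html) are common examples of the duplicate-URL problem, not any engine’s actual algorithm, and example.com is a placeholder.

```python
# Minimal sketch of URL canonicalization: collapse duplicate URL
# variants down to one representative "canonical" URL.
# The rules here are illustrative assumptions, not a real engine's logic.
from urllib.parse import urlparse, urlunparse

def canonicalize(url):
    parts = urlparse(url.lower())
    host = parts.netloc
    if host.startswith("www."):          # fold www/non-www variants together
        host = host[4:]
    path = parts.path or "/"
    if path.endswith("/index.html"):     # fold /index.html into /
        path = path[: -len("index.html")]
    return urlunparse(("http", host, path, "", "", ""))

variants = [
    "http://example.com/",
    "http://www.example.com/",
    "http://www.example.com/index.html",
]
# All three variants collapse to the same canonical URL.
assert len({canonicalize(u) for u in variants}) == 1
```

    The point of the sketch is just the shape of the problem: many URLs, one page, and someone (the engine, or the site owner via architecture) has to decide which single URL represents the set.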

  • http://www.tekwebsolutions.com Mike Tekula

    Well, Ciarán, I don’t think I was suggesting that the “main point” of the SEO Quiz was anything in particular. If anything I’d say that the purpose was to get people involved in new discussions at SEOmoz.org – which worked beautifully. It also will certainly spur more learning among mozzers which is a very good thing.

    I was simply agreeing somewhat with Danny in that a lot of the questions were a bit fuzzy and not highly relevant to an SEO’s skillset. I wouldn’t expect to see a direct correlation between quiz scores and the results a particular SEO could achieve, in other words.

    Clearly the fact that I scored higher than Danny Sullivan, for example, doesn’t relate to our actual comparative abilities or knowledge. . .maybe in my dreams. . .

  • http://www.markbarrera.com Mark Barrera

    This quiz seems to have stirred up quite a bit of controversy, which I think is needed to remind people of the dynamic nature of our industry. Great post!

  • tagvine

    Danny, I’m not sure what your problem is with the quiz. I was a bit disappointed in reading through your post, as it was extremely nit-picky and, as Rand mentioned, at points, ridiculous.

    I’m a fan of your reporting and insight into the world of SEM and SEO, however, I think this was not the greatest example of that. Sure, anyone can over-analyze any SEO recommendation and say it’s stupid, but it’s not necessary.

    “It’s wise to use the keywords you’re attempting to rank for on that page as the anchor text of the external-pointing links.” The clear answer is no. Your rebuttal and attempts to make yes look like a viable answer are juvenile. Sure, the logic is there and technically you could make minor disputes, but come on. That’s over-analysis, and juvenile. It takes me back to my school days, when you had that hypercritical kid who always pointed out that technically the teacher wasn’t 100% right. Common sense tells you that if you want to rank for a word you shouldn’t help someone else rank for that same word… simple as that. And as for “But this stuff about getting your own pages to rank by the outbound anchor text?”, Rand doesn’t mention that in the question; you simply assumed it.

    I’m not going to go through them all, but nearly all of your complaints were of this same, overly critical nature. After reading this it makes me think you have a personal grudge against Rand, or you’re upset with the score that you received (not that it has any bearing on your ability) and wanted to publicly dispute and ridicule the quiz. Those were my first two thoughts after reading your article.

    I must say I was a bit disappointed.

    I mean come on, you dispute the wording of “why is it important for…” and say that it should be “how might it help to…”. Just an FYI, that is essentially the same phrase. To even mention, and furthermore, dispute that is just silly.

    Making a point that Google’s link: command might serve you better than Yahoo’s Site Explorer? That’s a joke. You’re essentially saying less information (a sample) is better than the whole thing. [sarcasm]I know, let’s do A/B multivariate testing on 10 people and make changes to our site based on that data instead of showing it to 100,000 people, that way it will be less information and more valuable.[/sarcasm] Let’s be logical here, more info is better. Plus, as Rand mentioned, Site Explorer actually has tools and operators that give you even better results.

    The more I reread your post the more frustrated I get.

    Honestly, I think only 1-3 of the points you made in here are really valid to the point of discussing. Rand, I’m with you on this.

  • http://searchengineland.com Danny Sullivan

    @tagvine: To make myself perfectly clear, it was a bad quiz. That’s my problem with it. It had questions which in many cases did not have clear answers. It sometimes had answers that would not be clearly agreed upon by different people. I think there’s ample evidence of confusion and disagreement at this point. I think Rand himself has already said there were problems with it. If it was a good test, no one would be poking at it as they are now. They’d just be collecting their badges, laughing about how badly or well they did and showering Rand with links. Well, he’s still getting the links.

    My points are not all nit-picky, especially when many of the questions themselves are posed to be “tricky” as a test of knowledge. If I had the energy, I would have gone through all 75 questions and provided many, many other examples where I rolled my eyes and thought, “c’mon.” But after the first 10 or so, I’d documented plenty.

    I’m sorry you felt I was being juvenile in my response. I probably spent about two hours going through the test, not just with the questions I posed, but also reviewing other things. It was a deliberate and considered review, not simply a knee-jerk schoolyard prank. And it was something I did, ironically, out of respect for SEOmoz. If they’re going to be hauled up on the test, I wanted to go through it myself first hand.

    Yes, I had to assume in question three what Rand or SEOmoz might be thinking. That’s because a lot of the test came down to whatever they might think works as fact, rather than it being the exact case for Google, much less all the major search engines. If you want to win this test, that’s how you have to think — like SEOmoz. Now if you’re on SEOmoz all the time, love everything written there and agree with it all, I’m sure this worked for you. Me, I think people should question everything and determine their own truths.

    As for being nitpicking on wording, here’s the deal. I’ve been doing this a long time, right? You know, writing about this stuff for 11 years. Wording is crucial, absolutely. If you’re talking about indexing and say ranking, you’ve dramatically changed an issue. One question as I explained used crawling as a synonym for indexing. In that question, it wasn’t the same meaning.

    I constantly qualify everything I write — and if I’m not, hold me up to shame. That’s because most everything about SEO is in the “might,” “maybe,” “could” or “is believed” category. I can find things on that test where the exact answers might not work for a particular site. This is why you qualify. There is rarely a “best” way, though there is often a way that “many believe” is the best. Small turns like that make a big difference, especially when you are dealing with people new to SEO. Especially when they do all the “right” things and then don’t understand why the magic formula to success didn’t help.

    Indeed, I’m a veteran of having to deal with readers back in the late 1990s when we had WebPosition roll out with “perfect page” analysis tools, where it would take a page in and then spit out how you should change it to best rank for Infoseek, AltaVista and so on. Except you know, those changes weren’t that different. And you know, you could easily find pages that were designed for Infoseek (they’d all say things like IS in the URL) ranking on Excite.

    If you felt those 75 questions were all perfectly fine and I’m just nit picky, more power to you. That’s the point — they’re working for you, I’m not going to tell you that you are wrong. But other people definitely do not agree with them, because SEO is not and has never been a precise science, and it gets very hard the more you try to pin it down as such through exacting things like a test.

    As for the sample, absolutely — less information *could* be better. Now if I dump 100,000 links from Yahoo on you, is that better than if you had only 1,000 links from Google? Why? Because more is better? You have to sort them in some way, so more alone isn’t better. Knowing that your competitor has 90,000 links from some guestbooks that might not show in a Google backlink lookup is helpful to you. I can — nitpick if you want — argue it is not. As I said also, however, I agreed with Rand that Yahoo was the best choice.

    In the end, I can assure you that if I’d kept going through and documenting all of that test, it would have been more than 1-3 points you agreed with. But as for those 1-3 — hmm, I covered issues I had with 15 different questions. Give me the benefit of the doubt and say you agree that 3 of those 15 had real flaws. So that’s 20 percent of the test questions examined that are bad. If you took a driving test, would you feel comfortable knowing that 20 percent of the questions made no sense?

    I understand that this was partially in fun. I also appreciate the educational value that was involved. But I guess I’m done with the quizzes. Next time, I’d rather just see the answers trotted out for examination and debate.

  • http://searchengineland.com Danny Sullivan

    Just to add one more thing, as I said to Rand, I originally had a “grayness” score for each of these questions. I dropped that as perhaps being too confusing. But I should have kept it. It would have made it clearer where there were some questions where I know the dispute factor is small versus other ones where it was much larger. For example, question one had a grayness of like 1: I felt few would dispute it. Question 4 was more like 5 out of 10: maybe it will help; maybe it won’t; what type of site are we talking about?

  • http://www.jehochman.com JEHochman

    “the primary URL you want associated with the content is known as the ‘canonical version’,”

    When I do SEO, it is. :-)

    Danny, I think your 20% no sense estimate is high. Dan Thies and I both got 86-87%, which means that probably only 13 – 14% of Rand’s questions were flawed. :-D

  • http://www.seolid.com/ seolid.com

    I got a 77 and even got a right answer for a question (about Danny Sullivan) which I didn’t answer, since I never encountered that question while taking the quiz – a glitch in the process? Maybe.

  • http://www.linux-girl.com Asia

    This was definitely fun but long. Many of the questions were difficult to understand without reading through them carefully – but I got through it. Not as great as I hoped to be, but I did get all the important questions answered correctly! So I’m happy :)

  • http://www.altogetherdigital.com Ciarán

    Mike Tekula – I didn’t mean to put words in your mouth; I just assumed that the main reason for the test was to generate links, and that one of the ways that the mozzers have done this in the past was by giving out badges for people to put on their site (see the Web 2.0 awards).

    This isn’t a dig – I LOVE the moz; it just made me chuckle that you had said what I was thinking…(even if your reasons for doing so were different to mine).

  • http://www.seobythesea.com Bill Slawski

    For #20, I picked Yahoo as patenting “TrustRank” rather than Google. Dang it — I knew it was Google! But it was late, and I questioned myself.

    You were right, Danny. It was Yahoo. The trustrank patent application (from Yahoo): Link-based spam detection

    TrustRank is a link analysis technique related to PageRank. TrustRank is a method for separating reputable, good pages on the Web from web spam. TrustRank is based on the presumption that good documents on the Web seldom link to spam. TrustRank involves two steps, one of seed selection and another of score propagation. The TrustRank of a document is a measure of the likelihood that the document is a reputable (i.e., a nonspam) document.

    Yahoo has four more patent applications which take TrustRank into the realm of social networking, calling it “dual trustrank” and incorporating user annotations, tagging, and social networks into the linking analysis.

    The Google patent that Rand points to has nothing to do with TrustRank.
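
    The two steps quoted above, seed selection and score propagation, amount to a PageRank biased toward a hand-picked set of trusted pages. Here is a minimal sketch, assuming a toy link graph represented as a dict; it illustrates the general technique only, not the patented implementation.

```python
# Minimal TrustRank-style sketch (illustrative assumption, not the
# patented algorithm). Trust starts at a hand-picked seed set of
# reputable pages and propagates along outlinks, damped at each hop.

def trustrank(graph, seeds, damping=0.85, iterations=20):
    """graph: dict mapping page -> list of pages it links to."""
    pages = set(graph) | {p for links in graph.values() for p in links}
    # Step 1: seed selection -- only trusted seeds get an initial score.
    base = {p: (1 / len(seeds) if p in seeds else 0.0) for p in pages}
    trust = dict(base)
    # Step 2: score propagation, a PageRank biased toward the seeds.
    for _ in range(iterations):
        nxt = {p: (1 - damping) * base[p] for p in pages}
        for page, links in graph.items():
            if links:
                share = damping * trust[page] / len(links)
                for target in links:
                    nxt[target] += share
        trust = nxt
    return trust
```

    In a graph where spam pages are never linked from the trusted seeds, their trust stays at zero, which is the separating effect described in the quoted passage: good documents seldom link to spam, so trust rarely flows there.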

  • http://searchengineland.com Danny Sullivan

    Thanks, Bill, for helping me think I wasn’t losing my mind. See, I saw the TrustRank and thought Yahoo — because it sounds like something Google should have, but you’d done all that writing on it, so Yahoo was sticking in my head. So I went Yahoo. Then I started doubting myself after getting it wrong — it was late. Next time, I’ll just do the test open book.

  • http://www.venere.com Susan

    Excellent article Danny! I too was a bit put off by some of the supposedly wrong answers and haven’t changed my mind since.

    Thanks to you too Bill. I answered Yahoo at #22 and was a bit baffled when it turned out to be Google. I positively remembered reading about a patent application by Yahoo with the term TrustRank, it must have been on your blog :)

  • http://www.tekwebsolutions.com Mike Tekula

    Ciarán – sorry, I think I misunderstood the intent of your comment. My sincere apologies if I came off a bit prickly in my reply.