Dissecting An SEO Quiz — Are There Right Answers?

SEOmoz released its latest SEO quiz today, and it's now getting some buzz as folks poke at the accuracy of the questions. Below, my own spin through the test, which had me constantly disagreeing with many of the "answers," plus some links to more of the chatter about it.

Some of the disagreement is in the comments on a post about the quiz at SEOmoz itself. Over at our Sphinn forum site, we have multiple topics arguing about the quiz (1, 2 and 3). Former Googler Vanessa Fox took a swing in her Why the SEOmoz SEO Quiz Is Completely Wrong post, giving me a big chuckle at the part where she found herself getting an answer wrong, only to have the proof of her wrongness be a quote from herself!

When it came out this morning, Barry and I talked about going through it, and Barry said he might do that tomorrow. He still might, but I ultimately decided I wanted to go through it myself. I intended to go through each and every question, but I lost energy after the first 10. Still, I think it will be a useful look at why it can be so hard to come up with a quiz with exactly the "right" SEO answer.

Let’s dive in at question 1:

1) Which of the following is the least important area in which to include your keyword(s)?

You could choose from Meta Description, Body Text, Meta Keywords, Title, Internal Anchor Text. Me, I went for the meta keywords tag, since as I explained just recently (Meta Keywords Tag 101: How To "Legally" Hide Words On Your Pages For Search Engines), only two of the four major search engines even use content there for retrieval purposes.

2) Which of the following would be the best choice of URL structure (for both search engines and humans)?

OK, so you get five choices. I know the answer the quiz is after. It wants you to go for one of the three choices that use keywords in the URLs. In particular, the "right" choice will be wildlifeonline.com/animals/crocodile because to a human, that suggests an order (here are all our animals, and within the animal area, our crocodiles).

The other two "keywords in the URL" choices aren’t as neat. Then you’ve got two URLs with no keywords. One is using parameters, you know, like wildlifeonline.com/blahblah?id=1234. The other is a nice short URL with a number like this, wildlife.com/563.

Well, avoiding parameters in URLs is good advice, though search engines have gotten much, much better at handling them over time. Keywords in URLs have commonly been considered to provide, at best, a very slight ranking boost, so I suppose one of those three choices makes sense. BUT! You also want to make sure your URLs don’t change, if you can help it. That saves you from getting 404 errors, losing link love because you never set up redirects and so on. Now say you use blogging software like Movable Type. Keywords in the URLs are all well and good until you discover that changing a post’s title can change the URL. In that case, maybe you want what I call rebuild-safe URLs. That can mean numbers, and so numbers could arguably be better for both search engines and humans (since humans don’t like broken URLs, either).
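
To make that concrete, here's a minimal sketch of what I mean by rebuild-safe URLs (my own illustration, nothing from the quiz): a stable numeric ID does the lookup, the keyword slug is just decoration, and a stale slug gets a 301 to the current one, so old links keep working after a title change. Flask, the route layout and the tiny POSTS dict are all assumptions for the example.

    from flask import Flask, abort, redirect, url_for

    app = Flask(__name__)

    # id -> current slug and body; retitling a post only changes the slug
    POSTS = {563: {"slug": "crocodile", "body": "All about crocodiles..."}}


    @app.route("/animals/<int:post_id>/", defaults={"slug": None})
    @app.route("/animals/<int:post_id>/<slug>")
    def show_post(post_id, slug):
        post = POSTS.get(post_id)
        if post is None:
            abort(404)  # unknown ID: a genuine 404
        if slug != post["slug"]:
            # Stale or missing keyword slug: permanent redirect instead of a
            # 404, so inbound links keep their value after a title change.
            return redirect(url_for("show_post", post_id=post_id,
                                    slug=post["slug"]), code=301)
        return post["body"]

A request for /animals/563/ or for an old slug like /animals/563/crocodiles would both end up at /animals/563/crocodile, which is the point: the number is what has to stay stable, not the keywords.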

3) When linking to external websites, it’s wise to use the keywords you’re attempting to rank for on that page as the anchor text of the external-pointing links.

It’s a true or false. The answer is supposed to be that if you want to help your own page, link to the other page with the words you want your own page to be found for. I think. It’s a bogus question, though. When linking to external sites, you should use anchor text that explains to those clicking on the link what to expect. Alternatively, if you want the external site to rank for certain terms, then make sure those terms are in the anchor text. But this stuff about getting your own pages to rank by the outbound anchor text? Makes my head hurt.

4) Which of the following is the best way to maximize the frequency with which your site/page is crawled by the search engines?

The answer is put your head between your legs and kiss…. I mean, you don’t have a lot of control over this. One answer is to "frequently add new content," which might help search engines realize they should keep coming back. But then again, if you’re some new site with no authority or reputation, they ain’t going to come a callin’ just because you add a lot of stuff. That would mean they’d constantly let their crawl time get sucked down by sites that just post anything in hopes of getting listed. It’s probably the best of the choices, but it’s not guaranteed. Rather than "best way," it would be better qualified as "a way that might help."

5) Which of the following is a legitimate technique to improve rankings & traffic from search engines?

"Submitting your domain to the top 2000 search engines on the web" isn’t a choice I’d go for — but neither is it an illegitimate technique. I mean, using one of those tools that submits to 2,000 search engines isn’t going to bring you traffic when only four or five of those search engines has significant traffic. Nor does submission generally do much to ensure ranking. But it’s not spamming to do this. And adding "keyword-dense meta tags" isn’t wrong if the density means the tags contain a variety of different but relevant terms. Again, probably won’t help you — but it’s not spamming.

The right answer is clearly supposed to be "re-write title tags on your pages to reflect frequently searched, relevant keywords." Though that’s not right either — it’s really: rewrite title tags to include the key terms you want each particular page to be found for, with the assumption that you also use those terms in the body copy of the page.
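
As a rough illustration of that kind of rewrite (my own sketch with hypothetical page data, not anything from the quiz), a title built from each page's own target terms, with the brand kept out of the way, might look like this:

    # Build a title tag value from a page's target terms; the brand is added
    # only when it still fits. The 65-character cap is an assumption, roughly
    # what results pages displayed without truncating at the time.
    def page_title(target_terms: str, site_name: str, max_length: int = 65) -> str:
        with_brand = f"{target_terms} - {site_name}"
        return with_brand if len(with_brand) <= max_length else target_terms


    print(page_title("Crocodile Facts, Habitat & Photos", "Wildlife Online"))
    # -> "Crocodile Facts, Habitat & Photos - Wildlife Online"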

6) "Don’t Make Me Think" is a must-read book about:

I admit it — I didn’t know. And I didn’t know I should know the book as part of my SEO knowledge. Nor do I think everyone would agree you should. FYI, I guessed it was about site design and usability.

7) Which of the following is the WORST criteria for estimating the value of a link to your page/site?

Sigh. Worrying about whether the link is useful at all, perhaps. I mean, it’s a link. If someone’s going to give you a link, take it. If you’re asking whether you should request a link — or buy a link, then sure, you might want to evaluate whether it is worth the time or cost involved. Of the choices, we’re supposed to say the Alexa score is the worst criterion, since Alexa ratings are commonly seen as unreliable. OK, I can roll with that. The better answer is that the other criteria (is the page ranking? is the page included in an index?) all relate back directly to the search engines themselves.

8) Why is it important for most pages to have good Meta Descriptions?

Interesting — I totally agree with the right answer, that "they serve as the copy that will entice searchers to click on your listing." But sometimes they aren’t used, so that’s not entirely right. And you know, we don’t use them here (that’s actually going to change in the near future). So rather than saying it’s important, the question might be better phrased, "How might meta description tags help with SEO?"

9) Which of the following content types is most easily crawled by the Search Engines?

Gosh — Windows Media Player files (Windows Media Player actually plays a variety of different files), executables, Flash, XHTML and Java applets. I’m going with XHTML, which is basically (to my understanding) a cleaner, more standards-compliant form of HTML. But you want to get technical? First, which search engine are we talking about? I mean, there are media search engines that will crawl media files. Second, crawling isn’t necessarily the issue – it’s INDEXING. They can crawl a link to a media file, but they might not be able to see inside it to index it, to store the content for retrieval.

10) Which of the following sources is considered to be the best for acquiring competitive link data?

You’re supposed to say Yahoo, since Yahoo Site Explorer provides really good backlink data for any site. Google only gives you a sample. Ah — but what’s more important: a sample of links that’s probably skewed toward the ones that count more, or a big chunk of everything?

Beyond The First 10

After the first 10 questions, I decided I’d only comment on specific questions I had problems with. Continuing on, that led me to:

16) Which of the following is NOT a "best practice" for creating high quality title tags?

So I’m supposed to say "include an exhaustive list of keywords," but I disagree that "include the site/brand name at the beginning or end of the title" makes sense. I’ve seen too many pages where the site name in the title actually detracts from me wanting to click through. Sometimes it helps, but not always.

17) HYPOTHETICAL (for this question, assume that "link value" refers only to the Google search engine): If placing a link from page 1 to page 2 provides page 2 with X amount of link value, what happens if two links are placed on page 1 pointing to page 2?

Seriously, hypothetical — and yet there’s a correct answer? According to what, the PageRank formula of 1998? I guessed that the link value might remain the same.

And Enough! Focus On The Wrong Answers

At this point, I ran out of energy. Some questions were clear cut (PageRank was named after Larry Page), while other questions had me sighing because there simply was no correct answer, in my opinion — only opinion (such as the choices for "Spammy sites or blogs begin linking to your site. What effect is this likely to have on your search engine rankings?").

There was actually a lot of sighing as I continued on the quiz. Eventually, I made it: 184/257 points, 72 percent. I suck at SEO. Or do I? Let’s look at some of my wrong answers:

For #3, I was told "Linking out to other pages with your page’s targeted keywords in the anchor text can cause keyword cannibalization and reduce your page’s chances to rank well at the search engines. It also creates additional competition for your page in the search results, as you give relevance through anchor text and link juice to a competing page."

Well, I could maybe agree a bit with the second part of that, but the first? Sorry, doesn’t win me over.

For #17, I was told "Page 2 receives an amount of link value greater than X, but less than 2X.  In testing from multiple parties (SEOmoz and a few of our friends), we’ve found that Page 2 will receive more value from two links on page 1 than from only a single link, but nowhere close to "double" the link value. This appears to be consistent across all of the engines."

Um, OK. You asked a hypothetical question that apparently really did have an answer. I guess it should have been phrased like, "We’ve done some testing in these cases; what do you think we found?" And I haven’t tested this particular situation, but if I did and came up with something different than SEOmoz, would I still be wrong?

For #20, I picked Yahoo as patenting "TrustRank" rather than Google. Dang it – I knew it was Google! But it was late, and I questioned myself.

For #22, the idea here was what solution was needed to avoid duplicate content issues in a blogging environment, where a story moves down and then off the page. I said there aren’t issues. Instead, I’m supposed to add "noindex" to the paginated pages. At this point, I’m like — "what the hell are you talking about?" I mean, what’s a "paginated" page? At Search Engine Land, each story has precisely one URL. It gets listed on the home page, then moves down and off. It’s not going to get duplicated in its entirety by any other pages. So looking at that environment, damn right — I didn’t see any issue.
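
For what it's worth, here's a minimal sketch of what "noindex the paginated pages" usually amounts to in practice (my own illustration of the general technique, not SEOmoz's wording, and the /page/2/ style archive is an assumption): archive pages 2, 3 and so on repeat excerpts that already live at each post's permanent URL, so the template marks those copies as noindex while leaving their links followable.

    # Archive template helper: paginated archive pages get a robots meta tag,
    # the first page and the permalinks do not.
    def robots_meta(page_number: int) -> str:
        """Return the robots meta tag (if any) for an archive page."""
        if page_number > 1:
            # Let crawlers follow the links through to the posts, but keep
            # these duplicated excerpt pages themselves out of the index.
            return '<meta name="robots" content="noindex,follow">'
        return ""  # page 1 (the home page) stays indexable as usual


    print(robots_meta(1))  # prints nothing extra
    print(robots_meta(3))  # <meta name="robots" content="noindex,follow">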

For #24, I said if you were worried about "keyword cannibalization" (which I had to look up — it means competing with yourself for the same term, and it’s not a phrase I’ve typically heard in the industry in my years, and at only 1,300 matching pages, I don’t think I’m alone in this), you should prevent your less important pages from being indexed. Here, I was trying to think like someone who is being really ruthless about SEO: kill off the weak. The "correct" answer is apparently to link from those "weak" pages back to the stronger ones. I suppose….

For #25, the "de-facto version of a page located on the primary URL you want associated with the content is known as the ‘canonical version’," says the quiz. Bull, sez I. See, canonicalization is special to me because of my inability to pronounce the dang word, for one. I’m much better at it now. But I hated it because I had to talk about it and how it means that the search engines will pick which domain version to use for your site (say you have www.mysite.com and mysite.com; they’ll go with one of those). But see, that’s the search engine making the choice — not you. The version you want is not the same thing as the canonical version. I don’t know what it’s called across the board, but Google calls it the "preferred domain." And that’s why I answered "preferential version" as the closest of the bad choices I had. And got it wrong.
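
On the domain side, stating that preference yourself mostly comes down to redirects rather than anything a search engine decides for you. Here's a minimal sketch (my own illustration; the host name and the plain-WSGI setup are assumptions, and in practice this usually lives in the web server config) of answering every request from one preferred host:

    # 301 any request on a non-preferred host over to the preferred one, so
    # only one URL per page is ever served. Standard library only.
    from urllib.parse import urlunsplit
    from wsgiref.simple_server import make_server

    PREFERRED_HOST = "www.mysite.com"  # the hypothetical "preferred domain"


    def application(environ, start_response):
        host = environ.get("HTTP_HOST", "").split(":")[0]
        if host and host != PREFERRED_HOST:
            location = urlunsplit(("http", PREFERRED_HOST,
                                   environ.get("PATH_INFO", "/"),
                                   environ.get("QUERY_STRING", ""), ""))
            start_response("301 Moved Permanently", [("Location", location)])
            return [b""]
        start_response("200 OK", [("Content-Type", "text/plain")])
        return [b"served from the preferred host\n"]


    if __name__ == "__main__":
        make_server("", 8000, application).serve_forever()

So a request for http://mysite.com/some-page would get a 301 to http://www.mysite.com/some-page; whichever host you pick, the idea is that you pick it, rather than waiting to see which version the engines settle on.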

I got more wrong, some of which I could argue successfully against, and a few where I’m like "Oops! You got me."

Overall, I appreciate the intent of the quiz, which was to further educate folks. And I’ve generally found these types of quizzes fun. But I guess I’m all quizzed out now.




About The Author: Danny Sullivan is a Founding Editor of Search Engine Land. He’s a widely cited authority on search engines and search marketing issues who has covered the space since 1996. Danny also serves as Chief Content Officer for Third Door Media, which publishes Search Engine Land and produces the SMX: Search Marketing Expo conference series. He has a personal blog called Daggle (and keeps his disclosures page there). He can be found on Facebook, Google+ and microblogs on Twitter as @dannysullivan.

  • http://www.seroundtable.com rustybrick

    I hope to go through the quiz tomorrow on the plane… Should be fun to pick up from where you left off.

  • http://www.blizzardinternet.com Carrie Hill

    WOW – I got a 76% and was seriously doubting what I knew. I thought I knew what I was doing – but after reading all the buzz about “wrong answers” and such I don’t feel so dumb!

    Thanks for taking an in-depth look at this – I thought for a bit that I was the only one thinking it was a bit odd.

  • http://www.seomoz.org randfish

    Danny – while I disagree with a great many of your critiques (and think you’re just dead wrong about a couple), I’ve gone over the quiz with a fine tooth comb, and careful attention to this post in particular. I doubt you’ll be fully satisfied (I simply couldn’t justify changing a few of the questions you called out), but I do think you’ll like the new #6 :)

  • http://searchengineland.com Danny Sullivan

    Which ones, which ones? I gotta know, Rand — where am I dead wrong?

    See, that’s one place where the test is indeed useful — it has people talking about different ideas on SEO. But I want to know where I’m dead wrong, so I can either understand why or argue my case :)

  • http://www.rootinfosol.com root123

    Kool- I must say that everyone associated with SEO should go through it.

  • http://www.seomoz.org randfish

    Dead wrong – Google does not show a “representative sample” of links in any sense of the word. The absolute correct answer there is Yahoo! If you had quibbled with me because Google’s blog search link command provides good data, I’d say OK, but it’s only for blogs, so Yahoo! is still better. In addition, with Yahoo! you can see links that come from only certain sources using “site:” and choose only links from pages containing certain keywords, etc. It’s a flexible, powerful tool. Google’s web link: command is as close to useless as you’ll get.

    Also preferred domain and canonical URL are not the same thing. One refers to a domain and the other to a given page location. Canonical is used to describe the version of a given page that is the original “source” or the version that the website owner would want to be that source. Canonicalization isn’t a process you do in Webmaster Central, it’s something you’d need to do in your site architecture (good examples of the problem would be paginated versions of content on blogs, print-version pages on media sites, content that’s been licensed out, etc.)

    #9 – you’re just being ridiculous there. Media search engines? It’s like you just want to find something to quibble with – :(

    Hopefully, you’ll be a little happier with the edited version.

  • http://www.tekwebsolutions.com Mike Tekula

    I think I agree with the spirit of Danny’s post here in that I didn’t come away from the SEOmoz quiz feeling like my 80% rating of “SEO Professional” really made a difference one way or the other. A lot of the questions would have left me a bit disgruntled if that score actually meant something.

    That said, I thought the critique of #9, among a few others, was a bit of a stretch, since they weren’t very difficult or problematic questions for me. Sure, you can nitpick and find something wrong with every one of the questions – some ambiguity in the question language, the fact that some of it was simply anecdotal and not of high relevance for a working knowledge of SEO, etc. I think the test had more to do with determining how much you read on SEO than how skilled you are. What would the fact that PageRank is named after Larry Page have to do with your skills as an SEO? Bit of a stretch to suggest that matters. . .

    Also, it looked like the scores stated over at the SEOmoz blog were all over the place. Seasoned SEOs were coming in at 70% while SEO bean sprouts (not unlike myself) were sometimes up near 95%. That obviously tells you something. . .

    Bottom line: I had fun taking the test, but I don’t think I’m going to be posting my “badge” anywhere.

  • http://www.altogetherdigital.com Ciarán

    “Bottom line: I had fun taking the test, but I don’t think I’m going to be posting my “badge” anywhere.”

    Which, one assumes, was the main point of it?

    ;)

  • http://searchengineland.com Danny Sullivan

    I didn’t say Google showed a representative sample. I said a sample of links, period. And that’s exactly what it shows. And I further asked which is more important to you: a sample of links that Google decides is worth showing, or a chunk of all links. The answer is, it depends.

    You might feel the sample Google shows has no value, that they are mixing up low quality and high quality links to mess with our heads. Might be true, but then again the links they decide to show might be important some way. Overall, I agree with you that if you want a big comprehensive list of links, Yahoo’s the way to go.

    I should add that I originally gave each question a “grayness” factor but pulled that as being too confusing. But this was a question I didn’t feel was that gray. I generally agree with you, but you can indeed quibble (and some people aren’t even going to consider it quibbling).

    As for canonical, sorry, Rand. How you define the word does not mean everyone defines it that way — and I’ve heard people use it to mean various things. I have most often heard it used in the industry to mean exactly what I wrote — how search engines, Google in particular, decide which URL to use for a page if they have multiple choices. In particular, see Matt’s post here, where he says:

    “Sorry that it’s a strange word; that’s what we call it around Google. Canonicalization is the process of picking the best url when there are several choices, and it usually refers to home pages….When Google “canonicalizes” a url, we try to pick the url that seems like the best representative from that set.”

    So see, that to me is Google doing canonicalization. What you describe is an attempt to influence the canonicalization process. And you’re influencing it because you have a preferred URL or domain you’d like to see show up — something that may NOT be the canonical domain that Google chose.

    So when you say in the test, “the primary URL you want associated with the content is known as the ‘canonical version’,” I’m like no it’s not — the canonical version is what Google picked, and what you want has no defined name. We don’t have some common industry jargon for that. If I had to pick, I’d call it the preferred URL or the preferred domain. And when it comes to domains, that’s exactly what Google calls it — preferred domain, a way to influence from Google Webmaster Central how Google may do canonicalization on a domain level. On a URL level, we don’t have those tools there.

    As for media search engines, no, I’m not being silly at all. Blinkx? You know, it reads video files, transcribes the audio part to text, to make it easier to search for content within the file? Google Video used to do this; Everyzing also does it.

    Rather than a quibble, it’s a seriously bad question. You asked about crawling, but then you included content that isn’t crawled but rather indexed. I mean Flash and Java, those are on pages that get crawled, not typically as files that are going to stand alone on their own for spidering, right? So it’s not that they don’t get crawled — it’s that the content isn’t indexed when visited.

    Now perhaps it was meant as a trick question — but if you are going to be tricky, then you’d better make sure that the trick can’t be spun back on you. ANY URL to ANY content, regardless of content type, is easily crawled. But is the content itself easily indexed?

  • http://www.tekwebsolutions.com Mike Tekula

    Well, Ciarán, I don’t think I was suggesting that the “main point” of the SEO Quiz was anything in particular. If anything I’d say that the purpose was to get people involved in new discussions at SEOmoz.org – which worked beautifully. It also will certainly spur more learning among mozzers which is a very good thing.

    I was simply agreeing somewhat with Danny in that a lot of the questions were a bit fuzzy and not highly relevant to an SEO’s skillset. I wouldn’t expect to see a direct correlation between quiz scores and the results a particular SEO could achieve, in other words.

    Clearly the fact that I scored higher than Danny Sullivan, for example, doesn’t relate to our actual comparative abilities or knowledge. . .maybe in my dreams. . .

  • http://www.markbarrera.com Mark Barrera

    This quiz seems to have stirred up quite a bit of controversy, which I think is needed to remind people of the dynamic nature of our industry. Great post!

  • tagvine

    Danny, I’m not sure what your problem is with the quiz. I was a bit disappointed in reading through your post, as it was extremely nit-picking and, as Rand mentioned, at points ridiculous.

    I’m a fan of your reporting and insight into the world of SEM and SEO, however, I think this was not the greatest example of that. Sure, anyone can over-analyze any SEO recommendation and say it’s stupid, but it’s not necessary.

    “It’s wise to use the keywords you’re attempting to rank for on that page as the anchor text of the external-pointing links.” The clear answer is no. Your rebuttal and attempts to make yes look like a viable answer are juvenile. Sure, the logic is there and technically you could make minor disputes, but come on. That’s over-analysis and juvenile. It takes me back to my school days when you have that hypercritical kid that always points out that technically the teacher isn’t 100% right. Common sense tells you that if you want to rank for a word you shouldn’t help someone else rank for that same word… simple as that. And as for “But this stuff about getting your own pages to rank by the outbound anchor text?”, Rand doesn’t mention that in the question; you simply assumed it.

    I’m not going to go through them all, but nearly all of your complaints were of this same, overly-critical nature. After reading this it makes me think you have a personal grudge with Rand, or you’re upset with the score that you received (not that it has any indication to your ability) and wanted to publicly dispute and ridicule the quiz. Those were my first two thoughts after reading your article.

    I must say I was a bit disappointed.

    I mean come on, you dispute the wording of “why is it important for…” and say that it should be “how might it help to…”. Just an FYI, that is essentially the same phrase. To even mention, and furthermore dispute, that is just silly.

    Making the point that Google’s link: command might serve you better than Yahoo’s Site Explorer? That’s a joke. You’re essentially saying less information (a sample) is better than the whole thing. [sarcasm]I know, let’s do A/B multi-variant testing on 10 people and make changes to our site based on that data instead of showing it to 100,000 people, that way it will be less information and more valuable.[/sarcasm] Let’s be logical here, more info is better. Plus, as Rand mentioned, Site Explorer actually has tools and operators that give you even better results.

    The more I reread your post the more frustrated I get.

    Honestly, I think only 1-3 of the points you made in here are really valid to the point of discussing. Rand, I’m with you on this.

  • http://searchengineland.com Danny Sullivan

    @tagvine: To make myself perfectly clear, it was a bad quiz. That’s my problem with it. It had questions which in many cases did not have clear answers. It sometimes had answers that would not be clearly agreed upon by different people. I think there’s ample evidence of confusion and disagreement at this point. I think Rand himself has already said there were problems with it. If it was a good test, no one would be poking at it as they are now. They’d just be collecting their badges, laughing about how badly or well they did and showering Rand with links. Well, he’s still getting the links.

    My points are not all nit-picky, especially when many of the questions themselves are posed to try and be “tricky” as a test of knowledge. If I had the energy, I would have gone through all 75 questions and provided many, many other examples where I rolled my eyes and thought, “c’mon.” But after the first 10 or so, I’d documented plenty.

    I’m sorry you felt I was being juvenile in my response. I probably spent about two hours going through the test, not just with the questions I posed, but also reviewing other things. It was a deliberate and considered review, not simply a knee-jerk schoolyard prank. And it was something I did, ironically, out of respect for SEOmoz. If they’re going to be hauled up on the test, I wanted to go through it myself first hand.

    Yes, I had to assume in question three what Rand or SEOmoz might be thinking. That’s because a lot of the test came down to whatever they might think works as fact, rather than it being the exact case for Google, much less all the major search engines. If you want to win this test, that’s how you have to think — like SEOmoz. Now if you’re on SEOmoz all the time, love everything written there and agree with it all, I’m sure this worked for you. Me, I think people should question everything and determine their own truths.

    As for being nitpicking on wording, here’s the deal. I’ve been doing this a long time, right? You know, writing about this stuff for 11 years. Wording is crucial, absolutely. If you’re talking about indexing and say ranking, you’ve dramatically changed an issue. One question as I explained used crawling as a synonym for indexing. In that question, it wasn’t the same meaning.

    I constantly qualify everything I write — and if I’m not, hold me up to shame. That’s because most everything about SEO is in the “might,” “maybe,” “could” or “is believed” category. I can find things on that test where the exact answers might not work for a particular site. This is why you qualify. There is rarely a “best” way, though there is often a way that “many believe” is the best. Small turns like that make a big difference, especially when you are dealing with people new to SEO. Especially when they do all the “right” things and then don’t understand why the magic formula to success didn’t help.

    Indeed, I’m a veteran of having to deal with readers back in the late 1990s, when we had WebPosition roll out with “perfect page” analysis tools that would take a page in and then spit out how you should change it to best rank for Infoseek, AltaVista and so on. Except, you know, those changes weren’t that different. And you know, you could easily find pages that were designed for Infoseek (they’d all say things like IS in the URL) ranking on Excite.

    If you felt those 75 questions were all perfectly fine and I’m just nitpicky, more power to you. That’s the point — they’re working for you, so I’m not going to tell you that you are wrong. But other people definitely do not agree with them, because SEO is not and has never been a precise science, and it gets very hard the more you try to pin it down as such through exacting things like a test.

    As for the sample, absolutely — less information *could* be better. Now if I dump 100,000 links from Yahoo on you, is that better than if you had only 1,000 links from Google? Why? Because more is better? You have to sort them in some way, so more alone isn’t better. Knowing that your competitor has 90,000 links from some guestbooks that might not show in a Google backlink lookup is helpful to you. I can — nitpick if you want — argue it is not. As I said also, however, I agreed with Rand that Yahoo was the best choice.

    In the end, I can assure you that if I’d kept going through and documenting all of that test, it would have been more than 1-3 points you agreed with. But as for those 1-3 — hmm, I covered issues I had with 15 different questions. Give me the benefit of the doubt and say you agree that 3 of those 15 had real flaws. So that’s 20 percent of the test questions examined that are bad. If you took a driving test, would you feel comfortable knowing that 20 percent of the questions made no sense?

    I understand that this was partially in fun. I also appreciate the education value that was involved. But I guess I’m done with the quizzes. Next time, I’d rather just see the answers trotted out for examination and debate.

  • http://searchengineland.com Danny Sullivan

    Just to add one more thing, as I said to Rand, I originally had a “grayness” score for each of these questions. I dropped that as perhaps being too confusing. But I should have kept it. It would have made it clearer that there were some questions where I know the dispute factor is small versus other ones where it was much larger. For example, question one had a grayness of like 1: I felt few would dispute it. Question 4 was more like 5 out of 10: maybe it will help; maybe it won’t, what type of site are we talking about?

  • http://www.jehochman.com JEHochman

    “the primary URL you want associated with the content is known as the ‘canonical version’,”

    When I do SEO, it is. :-)

    Danny, I think your 20% no sense estimate is high. Dan Thies and I both got 86-87%, which means that probably only 13 – 14% of Rand’s questions were flawed. :-D

  • http://www.seolid.com/ seolid.com

    I got a 77 and even got a right answer for a question (about Danny Sullivan) which I didn’t answer, since I never encountered that question while taking the quiz – a glitch in the process? Maybe.

  • http://www.linux-girl.com Asia

    This was definitely fun but long. Many of the questions were difficult to understand without reading through them carefully – but I got through it – not as great as I hoped to be, but I did get all the important questions answered correctly! So I’m happy :)

  • http://www.altogetherdigital.com Ciarán

    Mike Tekula – I didn’t mean to put words in your mouth; I just assumed that the main reason for the test was to generate links, and that one of the ways that the mozzers have done this in the past was by giving out badges for people to put on their site (see the Web 2.0 awards).

    This isn’t a dig – I LOVE the moz; it just made me chuckle that you had said what I was thinking…(even if your reasons for doing so were different to mine).

  • http://www.seobythesea.com Bill Slawski

    For #20, I picked Yahoo as patenting “TrustRank” rather than Google. Dang it — I knew it was Google! But it was late, and I questioned myself.

    You were right, Danny. It was Yahoo. The trustrank patent application (from Yahoo): Link-based spam detection

    TrustRank is a link analysis technique related to PageRank. TrustRank is a method for separating reputable, good pages on the Web from web spam. TrustRank is based on the presumption that good documents on the Web seldom link to spam. TrustRank involves two steps, one of seed selection and another of score propagation. The TrustRank of a document is a measure of the likelihood that the document is a reputable (i.e., a nonspam) document.

    Yahoo has four more patent applications which take TrustRank into the realm of social networking, by calling it “dual trustrank” and incorporating user annotations, tagging and social networks into the link analysis.

    The Google patent that Rand points to has nothing to do with TrustRank.

  • http://searchengineland.com Danny Sullivan

    Thanks, Bill, for helping me think I wasn’t losing my mind. See, I saw the TrustRank and thought Yahoo — because it sounds like something Google should have, but you’d done all that writing on it, so Yahoo was sticking in my head. So I went Yahoo. Then I started doubting myself after getting it wrong — it was late. Next time, I just do the test open book.

  • http://www.venere.com Susan

    Excellent article Danny! I too was a bit put off by some of the supposedly wrong answers and haven’t changed my mind since.

    Thanks to you too Bill. I answered Yahoo at #22 and was a bit baffled when it turned out to be Google. I positively remembered reading about a patent application by Yahoo with the term TrustRank, it must have been on your blog :)

  • http://www.tekwebsolutions.com Mike Tekula

    Ciarán – sorry, I think I misunderstood the intent of your comment. My sincere apologies if I came off a bit prickly in my reply.
