• http://www.wolf-howl.com graywolf

    Wonders how this is going to affect directories.

  • http://searchenginetigers.com Simon Heseltine

    …and IYP sites…

  • http://www.ericward.com eric_ward

    It’s a great first step. The big vendor sites that have enjoyed the free organic listings to product results pages can now open their wallets and move a couple inches to the right to PPC results. It’ll be tougher for the little guy as PPC gets more costly. That’s where the sleeping giant Froogle comes in handy. Product feed submission into Froogle is free. At least right now anyway :) With the huge cash pile Google has, all it will take is a month of prime-time TV ads for Froogle to become known in the mainstream world. To me Froogle has been like a massive dormant volcano. You know it’s going to erupt, and if/when it does it will change the landscape forever. -ew

  • http://blog.outer-court.com Philipp Lenssen

    > Practically no one links to our search
    > results. But now thanks to the new
    > Google guidelines, out of the blue, I
    > have to go block off the /fastsearch
    > area or potentially be seen as spamming
    > Google. What a pain.

    Danny, it makes sense to do this for other reasons than just Google though. A couple of years ago I created a little CSS search engine (you could link your own stylesheet to an XHTML-conformant page, thus creating new search engine designs on the fly). I forgot to add the correct robots.txt line, and months later suddenly found that some spammer had added thousands of links to my search results. (Those were the days before nofollow, so the spammer managed to get backlinks from my domain this way, carefully selecting search queries that would hit back on his page.) While that problem is not applicable to all search engines, it does cover a portion of them.
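
    For anyone facing the same issue, the fix Philipp describes is a one-line robots.txt rule. A minimal sketch, assuming the search results live under a /fastsearch path as in the quote above (adjust the path to wherever your results actually live):

    User-agent: *
    Disallow: /fastsearch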

  • https://www.linkworth.com/solutions/adv_signup.php?sid=sls13 Eddie Walter

    Hi Danny,

    I believe I have an answer for one of your questions…

    “such as can you do a robots.txt file that matches only the first few characters of a page name (with Shopping.com, you’d need to wipe out any “xpp-” prefixed pages).”

    A robots.txt Disallow rule matches URL paths from the beginning, not anywhere in the URL, so the exact pattern matters. For example,

    User-agent: *
    Disallow: /xpp-

    would exclude any page whose path starts with “xpp-”, such as “www.example.com/xpp-dvd-players.html”. It would not exclude a page like “www.example.com/whatever-xpp-all.html”, where the string only appears later in the URL; to catch “xpp-” anywhere in the path you need a wildcard rule, as sketched below.
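
    For the “anywhere in the URL” case, Googlebot supports a wildcard extension to robots.txt. A minimal sketch (the “*” wildcard is a Google extension rather than part of the original robots.txt standard, so other crawlers may ignore it):

    User-agent: Googlebot
    Disallow: /*xpp-

    This would block any URL whose path contains “xpp-”, including “www.example.com/whatever-xpp-all.html”.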

    Beyond that, I find this move by Google a bit concerning. As a shopper, I don’t mind having results from folks like BizRate or Shopping.com or Amazon.com in my SERPs. They are a good way for me to compare different products and providers.

    The thing that bothers me is the MFA sites where it’s nothing but AdSense once you click through to the site. I think this move could be the first step toward putting an end to that, but my love of Google tells me that won’t ever happen, in either organic or PPC listings. They make too much money off of MFA sites.

    Anyway, good post!

  • http://sebastianx.blogspot.com/2007/03/why-ecommerce-systems-suck.html Sebastian

    Considering that so many e-commerce sites have used their search facilities to produce highly relevant spider fodder and to send link love to thin product pages, this will soon be known as Google’s e-commerce penalty ;)

  • http://www.smart-keywords.com/blog.html AussieWebmaster

    This is going to be a thin line to walk… what happens to people with Google Checkout? Will they be less likely to have their product pages dropped?

    Using server-side code to change things is nothing new…

  • http://www.demib.dk Mikkel deMib Svendsen

    This is going to hurt a lot of sites big time! I think we all know who they are… :)

    We will be covering this in more detail on Strikepoint (WebmasterRadio.FM) this evening.

  • http://www.naffziger.net/blog Dave N

    Graywolf nailed the issue. There are tons of sites that create value by organizing the information on the web. Some make that information available through directories, others through search, and others through resource guides (Wikipedia).

    Theoretically, other websites are determining which pages are the most relevant resources by linking to them, and this is already considered in Google’s algorithm.

  • http://searchengineland.com/070312-104201.php littleman

    The thing is, G’s algo favors the overwhelming link pop of the big feed sites like BizRate and Shopping.com over the mid-size merchants who submit their feeds to them with the exact same product descriptions. So the merchants are screwing over their own natural results by submitting to them.

    At the same time, all the comparison shopping engines have worked super hard to get as much of their content into the indexes as possible, so yeah, in effect it’s like arbitrage on a corporate scale, with a shitload of link pop ensuring that they float above the merchants.

    I recently had a client who was getting killed in natural search because of this; I tried to convince them that the feeds were hurting their ranking potential, but they just couldn’t accept it.

  • http://www.doolally.net doolally

    Could the same apply to tag pages like
    http://searchengineland.com/guides/link_building.php
    as they are quite similar to search results pages?

  • http://sextoysinsider.com rlonghurst

    My company is a retailer that submits feeds to Shopping.com, Bizrate et al. It can be very frustrating to see these sites’ search results rank very highly in Google when the keywords that Google has ranked them for are the keywords in our product names and short descriptions.

    By dint of their sheer size – and, cynics would say, AdWords spending power – the sites are able to rank higher than the *original* source of their content.

    Yes, these sites do go some way toward making sense of the Web for shoppers, but what of the searcher who wants to go straight to the merchant?

    Shouldn’t Google show merchant results first and then a pageful of identikit Shopping.com, Pricerunner, Bizrate and Froogle results?

    Is showing a screenful of near identical price comparison sites the best way of serving the user?

  • http://www.michael-martinez.com/ Michael Martinez

    As a consumer I WANT to see those product search pages show up in the query results. Screw the PPC ads — they never lead you directly to the exact products you need.

    What I don’t want to see in search results are search results that are irrelevant to my particular need. I don’t want to see MFA search scraper pages. I don’t want to see MFA pseudo search pages on parked domains.

    But if I want to find a particular DVD model and the only way it’s been crawled/indexed for a large shopping site is through an internal search result, then give me the internal search result.

    Otherwise, instead of investing their efforts in forcing people to police the Web for them (by threatening either online or behind the scenes at conferences to delist pages that don’t use REL=NOFOLLOW), Google can quickly and easily resolve this particular issue by supporting an actually useful meta tag, one that says, “Don’t return this page in results, but pick one of the sibling pages this page links to as appropriate for the query”.

    Then Google can pick the most appropriate page. That is, the algorithm says that internal search page X is relevant to the query, and X points to X-1, X-2, and X-3 — let Google pick one of those three.
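
    A purely hypothetical sketch of what such markup might look like (no such directive exists today; the closest real mechanism is the standard robots meta tag with “noindex, follow”, which keeps the search page itself out of results while still letting its links to sibling pages be followed):

    <!-- hypothetical “pick a sibling page instead” directive, along the lines proposed above -->
    <meta name="robots" content="defer-to-sibling">

    <!-- closest existing option: drop this page from results but follow its links -->
    <meta name="robots" content="noindex, follow">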

    The solution is not to require Webmasters to do Google’s bidding under threat of penalty. The solution is for Google to remember that without the Web there is no Google.

  • dougsimms

    Yes, I’m sure it is in their best interest to eliminate all the query-based pages on the web:

    http://www.google.com/search?hl=en&safe=off&q=+site:finance.google.com+google.com+inurl:q%3D

    This is a weak and shortsighted idea; I’m surprised that it is coming from GOOG…

    I guess when you can just plug in the equivalent of yesteryear’s “Inside Yahoo! Matches” to make money on your own SERPs, everyone else gets the hose.

  • http://www.luckylester.com Lucky Lester

    Good read and a very good point regarding paid listings.

  • RayPays

    I am wondering when Google will stop displaying results that are just URLs with Google AdWords in several different configurations on sites that have no other content. Given that they make money on anything clicked on these types of pages, my bet is: not any time soon.

  • Trogdor

    We’re a small business, and sometimes our internal-product-search URLs get spidered, and even get ranked. We’ve encouraged this, as it’s always seemed like another chance to get relevant pages ranked in the engines.

    Now, we’re supposed to stop?! Our product-search-result pages are indeed relevant for many of the queries we target.

    Over and over, Matt & Vanessa vaguely said, “result pages that offer little value to the visitor” … which gives them a nice out, because it’s all based on whether or not the pages have value. I’d like to think that our internal SERPs do have value, but as Google is the big decider, it seems as though a useful SEO tool has been taken away from me.

    I see that in Matt Cutts’s post about it, there’s been no response about internal SERPs that are valuable / relevant, so we’re in this wonderful grey area.