• T.A.

    The timing on this is great. An article came out of Distilled’s SearchLove conference today which posits that GoogleBot is actually based on Chrome.

    If you’re interested, you can check it out here: http://t.co/Jy6QPf3b

  • http://www.ScottBartell.com Scott Bartell

    Does this mean google can now index Facebook comments?

  • http://www.timacheson.com/ timacheson

    It’s about time.

    It’s a search engine’s job to index content on the web!

    Why has the corporation taken this long to deal with Facebook comments?

    I welcomed this wholeheartedly when I saw the first report.

    However, it’s manifestly lamentable that Google has taken this long to react to Ajax, a web technology introduced by Microsoft a decade ago and showcased by IE6.

    This is symptomatic of a web search monopoly that has been resting on its laurels. Google has neglected to innovate and has been left behind.

    Not long ago, Google was proposing a whole new framework to allow the indexing of Ajax-type content, which the corporation was expecting all affected websites to implement in order to make Google’s job easier! As I pointed out at the time on my blog and in comments on Google’s announcement, this would have been a bad solution. Here’s an extract from my own comment, posted at the time:

    “The role of Google’s search engine is to facilitate search. That means working with the web as it is, not trying to change it. Yes, it’s a challenge to index Ajax; but it’s not impossible. Yes, it will require work by Google. Guys, that’s your job. The next-generation of indexing wizardry needs to be able to do this without every website implementing something that Google has come up with. I wish Google would stop trying to tell us how to create the web.”
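    For reference, the scheme Google proposed worked roughly like this: a “pretty” URL containing #! is rewritten by the crawler into a plain URL carrying an _escaped_fragment_ query parameter, and the site is expected to serve an HTML snapshot at that address. A sketch of the crawler-side rewrite (the example URL is made up):

```javascript
// Sketch of the URL rewrite in Google's proposed Ajax-crawling scheme:
// "http://example.com/ajax.html#!key=value" becomes
// "http://example.com/ajax.html?_escaped_fragment_=key%3Dvalue".
function escapedFragmentUrl(prettyUrl) {
  var parts = prettyUrl.split('#!');
  if (parts.length < 2) return prettyUrl; // no hash-bang: leave unchanged
  var sep = parts[0].indexOf('?') === -1 ? '?' : '&';
  return parts[0] + sep + '_escaped_fragment_=' + encodeURIComponent(parts[1]);
}
```

    The objection stands: every site serving Ajax content would have had to implement snapshot pages at these rewritten URLs just to become indexable.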


  • Ashley Sheridan

    @timacheson it’s not as simple as you make it out to be. Ajax is just the method used to grab the page data, but often it’s triggered by something only a user can do. You can’t seriously expect a search engine crawler to simulate a click on every button or object on a page just to see what Ajax calls get made.

    As an example, consider a typical Facebook page. Often comments are hidden behind a ‘read more’ link that makes an Ajax request for the rest of the comments. Without context (and a search bot shouldn’t be context-specific, it should be generic enough to look at content across any web page, not just Facebook) the bot has to traverse everything that can be interacted with. Not always easy, especially as more and more people move event handlers out of the HTML and into external JavaScript files using frameworks.
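    To make that concrete, here’s a sketch (the selector, URL, and IDs are made up) of how such a ‘read more’ handler might be wired up in an external script. The HTML contains no onclick attribute and no URL; the request only exists once the script runs and the click happens:

```javascript
// Hypothetical helper: the comments URL is assembled at runtime, so
// scanning the raw HTML tells a crawler nothing about it.
function buildCommentsUrl(postId, offset) {
  return '/ajax/comments?post=' + postId + '&offset=' + offset;
}

// The binding lives in an external script, not in the markup, so a
// crawler would have to execute the JavaScript and simulate the click
// to discover which request gets made. (Guarded so the sketch also
// runs outside a browser.)
if (typeof document !== 'undefined') {
  document.querySelector('.read-more').addEventListener('click', function () {
    fetch(buildCommentsUrl(42, 10)); // response gets inserted into the page
  });
}
```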

    What about other events, such as scripts listening for a specific keyboard key being pressed? Without some hefty code behind the search bot, it would never ‘know’ to look at Ajax calls made from a trigger like this, and any such Ajax call would most likely use the specific key as an argument in the call to the server, so you can’t just search the JavaScript code for any Ajax-y looking URL and assume it’s the full one used in the call.
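    For instance, a hypothetical type-ahead handler might build its request URL from whichever key was pressed, so no complete URL ever appears anywhere in the source:

```javascript
// Hypothetical key handler: the URL depends on runtime input (the key
// the user pressed), so grepping the script for a finished Ajax URL
// finds only a fragment, never the request actually sent.
function suggestUrl(textSoFar, key) {
  return '/ajax/suggest?q=' + encodeURIComponent(textSoFar + key);
}
```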

    It’s often the things that seem the most simple that are incredibly complicated.