Why the evergreen Googlebot is such a big deal [Video]

Google’s Martin Splitt discusses the effort behind making Googlebot ‘evergreen’ in this clip from Live with Search Engine Land.


The evergreen Googlebot was a huge leap forward in Google’s ability to crawl and render content. Prior to this update, Googlebot was based on Chrome 41 (released in 2015) so that the search engine could index pages that would still work for users on older versions of Chrome. The drawback, however, was that sites built with modern features might not have been supported. This discrepancy created more work for site owners who wanted to take advantage of modern frameworks while still maintaining compatibility with Google’s web crawler.

Always up to date. “Now, whenever there is an update, it pretty much automatically updates to the latest stable version, rather than us having to work years on actually making one version jump,” said Martin Splitt, search developer advocate at Google, during our crawling and indexing session of Live with Search Engine Land. Splitt was part of the team that made Googlebot “evergreen,” meaning the crawler always stays up to date with the latest version of Chromium; he unveiled the change at the company’s I/O developer conference in 2019.

Twice the work. Before the advent of the evergreen Googlebot, one common workaround was to use modern frameworks to build a site for users, but serve alternate code to Googlebot. This was done by detecting Googlebot’s user agent, which included “41” to indicate the version of Chrome it was running.

This compromise meant that site owners had to create and maintain an alternate version of their content specifically for Googlebot, which was laborious and time-consuming.
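As an illustration, here is a minimal sketch of what that user-agent sniffing might have looked like, written in TypeScript. The user agent string, function name and return values are assumptions for the example, not code from Google or from any particular site.

```typescript
// Illustrative sketch of the pre-evergreen workaround: inspect the
// User-Agent header and route the Chrome 41-era Googlebot to a
// pre-rendered page, while regular visitors get the modern JavaScript build.
// The user agent below is an example, not an exact string.
const exampleGooglebotUA =
  "Mozilla/5.0 (Linux; Android 6.0.1; Nexus 5X Build/MMB29P) " +
  "AppleWebKit/537.36 (KHTML, like Gecko) Chrome/41.0.2272.96 Mobile " +
  "Safari/537.36 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)";

// Decide which build to serve based on the User-Agent header.
function chooseBuild(userAgent: string): "prerendered" | "modern" {
  const isGooglebot = userAgent.includes("Googlebot");
  const advertisesChrome41 = userAgent.includes("Chrome/41");
  return isGooglebot && advertisesChrome41 ? "prerendered" : "modern";
}

console.log(chooseBuild(exampleGooglebotUA)); // "prerendered"
console.log(chooseBuild("Mozilla/5.0 (Windows NT 10.0) Chrome/120.0 Safari/537.36")); // "modern"
```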

Googlebot’s user agent, revisited. Part of the challenge of updating Googlebot’s user agent to reflect the latest version of Chromium was that some sites were using the above-mentioned technique to identify the web crawler. With an updated user agent, a site owner who wasn’t aware of the change might not have served any code to Googlebot at all, which could have left their site uncrawled and, as a result, unindexed and unranked.
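To make that risk concrete, here is a short follow-up sketch, again with an illustrative user agent string and a made-up version number, showing how a check pinned to “Chrome/41” silently stops matching once the user agent advertises a current Chrome version, while a check keyed on the “Googlebot” token keeps working.

```typescript
// Illustrative: once Googlebot's user agent advertises the current Chrome
// version, a version-pinned check no longer matches. The string and version
// number below are examples, not exact values.
const evergreenGooglebotUA =
  "Mozilla/5.0 (Linux; Android 6.0.1; Nexus 5X Build/MMB29P) " +
  "AppleWebKit/537.36 (KHTML, like Gecko) Chrome/120.0.0.0 Mobile " +
  "Safari/537.36 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)";

// The brittle, version-pinned check some sites relied on:
const oldCheckMatches = evergreenGooglebotUA.includes("Chrome/41"); // false

// A more robust check keys on the "Googlebot" token instead:
const robustCheckMatches = evergreenGooglebotUA.includes("Googlebot"); // true

console.log({ oldCheckMatches, robustCheckMatches });
```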

To prevent disruption, Google communicated the user agent change in advance and worked with technology providers to ensure that sites would still get crawled as usual. “When we actually flipped . . . pretty much no fires broke out,” Splitt said.

Why we care. The evergreen Googlebot can access more of your content without the need for workarounds. That also means fewer indexing issues for sites running modern JavaScript. This enables site owners and SEOs to spend more of their time creating content instead of splitting their attention between supporting users and an outdated version of Chrome.



About the author

George Nguyen
Contributor
George Nguyen is the Director of SEO Editorial at Wix, where he manages the Wix SEO Learning Hub. His career is focused on disseminating best practices and reducing misinformation in search. George formerly served as an editor for Search Engine Land, covering organic and paid search.
