5 tips and trends from Google Webmaster Conference

The Google Webmaster Conference was filled with fun information. Here are the most important topics covered.


Google held its Webmaster Conference event for the first time at its Mountain View, California headquarters on Monday.

Here are five tips and trends we took away from the event.

Structured data. Over the years, Google has continued to roll out support for additional structured data markup and keeps expanding the rich results it shows in search. What was apparent from the numerous talks at the conference was that Google will expand support for structured data to inform new experiences in the search results as well as in the Assistant.

This includes adding new rich result types, beyond the numerous options already officially supported. It also includes Google improving and updating how it displays these rich results in search. So stay on top of these changes and try implementing appropriate structured data for your site.
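To make this concrete, here's a minimal sketch, in Python, of what a JSON-LD structured data block looks like. The schema.org type, properties and values below are illustrative placeholders; the right markup depends on which rich result you're targeting, so consult Google's structured data documentation.

```python
import json

# Illustrative only: a minimal schema.org "Article" object expressed as
# JSON-LD. The property values here are placeholders, not a template
# guaranteed to qualify for any particular rich result.
article = {
    "@context": "https://schema.org",
    "@type": "Article",
    "headline": "5 tips and trends from Google Webmaster Conference",
    "author": {"@type": "Person", "name": "Barry Schwartz"},
    "datePublished": "2019-11-05",  # placeholder date
}

# Print the <script> tag you would embed in the page's HTML.
print('<script type="application/ld+json">')
print(json.dumps(article, indent=2))
print("</script>")
```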

Emoji search. Did you know it took Google over a year to add support for emojis in search? This includes Google’s ability to crawl, index and rank emojis.

Also, did you know that Google sees over one million searches per day with emojis in the search phrase?

Deduplication tips. Google also shared numerous tips about how the search engine handles duplicate content and how webmasters can help it figure out the canonical version. These tips include:

  • Use redirects
  • Use meaningful HTTP result codes
  • Check your rel=canonical links (see the sketch after this list)
  • Use hreflang for localization
  • Report hijacking cases in the forums
  • Secure dependencies for secure pages
  • Keep canonicalization signals unambiguous
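As a rough illustration of the "check your rel=canonical links" tip, here's a small Python sketch that fetches a page and lists its rel=canonical and hreflang link tags. The URL is a placeholder, and a real audit would also follow redirects and compare these signals against your sitemaps and internal links.

```python
import urllib.request
from html.parser import HTMLParser

# Collect <link rel="canonical"> and <link hreflang="..."> tags from a page.
class LinkTagParser(HTMLParser):
    def __init__(self):
        super().__init__()
        self.canonicals = []
        self.hreflangs = []

    def handle_starttag(self, tag, attrs):
        if tag != "link":
            return
        a = dict(attrs)
        if a.get("rel") == "canonical":
            self.canonicals.append(a.get("href"))
        if "hreflang" in a:
            self.hreflangs.append((a.get("hreflang"), a.get("href")))

url = "https://example.com/"  # hypothetical page to audit
with urllib.request.urlopen(url) as resp:
    parser = LinkTagParser()
    parser.feed(resp.read().decode("utf-8", errors="replace"))

print("canonical:", parser.canonicals or "none found")
print("hreflang :", parser.hreflangs or "none found")

# More than one canonical, or a canonical that disagrees with your
# redirects, is exactly the kind of ambiguous signal to clean up.
```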

Crawling and rendering tips. Google also offered numerous tips around crawling and indexing. These included:

  • Do not rely on caching rules; Google doesn't obey them
  • Google minimizes its fetches, so Googlebot might not fetch everything
  • Google checks your robots.txt before crawling
  • 69% of the time Google gets a 200 response code when trying to access your robots.txt, 5% of the time a 5xx response code, and 20% of the time the robots.txt is unreachable
  • If Google cannot reach your robots.txt due to a 5xx error, Google won't crawl the site (see the sketch after this list)
  • Google renders pages much as you would see them in the Chrome browser
  • Google makes 50-60 resource fetches per rendered page; with a 60-70% cache hit rate, that works out to about 20 actual fetches per page
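Here's a small Python sketch of the robots.txt behavior described above. It's a simplified approximation of the rules Google outlined (a 200 means parse the file, a 5xx means don't crawl the site), not Googlebot's actual implementation, and the 4xx handling shown is an assumption based on Google's published guidance that a missing robots.txt allows crawling.

```python
import urllib.error
import urllib.request

def robots_txt_verdict(site: str) -> str:
    """Approximate how a crawler might react to a site's robots.txt status."""
    url = site.rstrip("/") + "/robots.txt"
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return f"{resp.status}: parse rules from {url} before crawling"
    except urllib.error.HTTPError as e:
        if 500 <= e.code < 600:
            # Per the talk: a 5xx on robots.txt means the site isn't crawled.
            return f"{e.code}: server error, do not crawl the site"
        # Assumption: other errors (e.g., 404) are treated as "no robots.txt".
        return f"{e.code}: no usable robots.txt, crawl without restrictions"
    except urllib.error.URLError as e:
        # Simplification: treat a network failure as uncrawlable too.
        return f"unreachable ({e.reason}): treat the site as uncrawlable"

print(robots_txt_verdict("https://example.com"))  # hypothetical site
```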

Synonyms. Search lead Paul Haahr gave an interesting presentation on synonyms and how Google understands some queries. He took us through examples of when Google got it wrong, why it got it wrong and, more importantly, how Google fixed those mistakes going forward.

Here are a couple of tweets with those details. In the first, Haahr shows how Google is able to parse the distinct meanings of “gm” in three different queries to refer to General Motors, genetically modified and general manager.

In another example, Google explained how it avoids returning listings for “York hotels” when a user searches for “new york hotels.”
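To make the idea concrete, here is a toy sketch of context-dependent disambiguation. The cue words and sample queries are invented for illustration; Google's actual synonym system is learned from data at a very different scale.

```python
# Toy illustration: pick the sense of "gm" whose hypothetical cue words
# overlap the rest of the query the most. Not Google's actual system.
SENSES = {
    "General Motors": {"trucks", "stock", "recall", "cars"},
    "genetically modified": {"corn", "food", "crops", "organism"},
    "general manager": {"warriors", "hire", "team", "hotel"},
}

def interpret_gm(query: str) -> str:
    tokens = set(query.lower().split())
    best = max(SENSES, key=lambda sense: len(SENSES[sense] & tokens))
    # With no overlapping cue words, the abbreviation stays ambiguous.
    return best if SENSES[best] & tokens else "ambiguous"

for q in ["gm trucks", "gm corn", "warriors gm"]:
    print(f"{q!r} -> {interpret_gm(q)}")
```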

Why we should care. There wasn't much news out of yesterday's event, apart from the release of the Speed report in Google Search Console, but the engineers presenting added details and clarity around how Google's systems work that can help SEOs in their daily work and in thinking about their strategies in the year ahead. To learn more about what was said at this event, you can read Jackie Chu's blog post, see my collection of tweets with the coverage or scan the #gwcps hashtag on Twitter.


About the author

Barry Schwartz
Staff
Barry Schwartz is a Contributing Editor to Search Engine Land and a member of the programming team for SMX events. He owns RustyBrick, a NY-based web consulting firm. He also runs Search Engine Roundtable, a popular search blog on very advanced SEM topics. Barry can be followed on Twitter.
