Despite Assurances, Google Place Pages Now Showing In Search Results
Google introduced its new Maps profile page format, Place Pages, last Thursday. It’s a pretty dramatic change that replaces a tabbed version of the “info bubble” on Maps with a full page that features a wide variety of rich content. Each of these pages has its own URL.
When I discussed the change with Google, I specifically asked whether they were going to crawl and index these pages, which are being created for all local businesses, points of interest, cities, landmarks and neighborhoods. They told me no; these pages would only be accessible from Maps and not directly from Google.com search results.
However, this morning Mike Blumenthal wrote a post that discusses the potential indexing of Place Pages. And on my personal blog, Screenwerk, there’s a discussion in the comments about whether this is inevitable and the potential implications for SEO. There’s also speculation about whether these pages are intended to be landing pages for local merchants as a prelude to simplifying search marketing for small businesses.
The debut of Place Pages would appear to be a major development in local that has already stirred the imagination and creative thinking of many in the SEO community. It’s very timely and we’ll be talking about this on at least a couple of panels at SMX East on Day 1:
- Ranking Tactics For Local Search
- Maps, Maps, Maps!
Will these pages become part of a local SEO strategy? Can they be used as landing pages? How will they tie into local extensions, if at all?
These will be questions on the minds of people in the audience, and we’ll get our local expert panelists (including Google) to discuss Place Pages as well as other critical local SEO/SEM questions and issues.
Postscript From Danny Sullivan: Part of the concern is that despite saying these pages wouldn’t be indexed, they’re showing up in Google search results, as Mike Blumenthal’s post demonstrates. For example, a search like this brings up this page in the top results.
The issue is the difference between a robots.txt block and a meta robots block.
Blocking with robots.txt doesn’t allow Google to spider a page, but it may produce what it calls a “partially indexed” listing — the title of a page and a link to it, information which is gathered solely from how OTHER pages link to the listed page, not the page itself.
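The robots.txt behavior described above can be sketched with Python’s standard-library `urllib.robotparser`. The host name and `Disallow` path below are hypothetical, used only to illustrate the mechanism; the article doesn’t show the actual directives in Google’s robots.txt.

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt rules blocking a /places/ path.
rules = """\
User-agent: *
Disallow: /places/
"""

parser = RobotFileParser()
parser.parse(rules.splitlines())

# A compliant crawler may not fetch the blocked page itself...
blocked = parser.can_fetch("Googlebot", "https://maps.example.com/places/some-business")
print(blocked)  # False

# ...but other paths on the site remain fetchable, and nothing in
# robots.txt stops the blocked URL from being listed in results based
# on how OTHER pages link to it — hence a "partially indexed" listing.
allowed = parser.can_fetch("Googlebot", "https://maps.example.com/help")
print(allowed)  # True
```

The key point: robots.txt controls *crawling*, not *listing*, which is why a blocked URL can still surface in search results.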
To fully block a page from the index, a meta robots tag must be used. My Meta Robots Tag 101: Blocking Spiders, Cached Pages & More article goes into depth about this.
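For illustration, here’s a minimal sketch of the meta robots mechanism: a page carrying a `noindex` directive in its head, and a small standard-library parser that detects it. The page markup and class name are hypothetical; only the `<meta name="robots" content="noindex">` tag itself is the mechanism the article describes.

```python
from html.parser import HTMLParser

# Hypothetical page head carrying the meta robots tag; "noindex" tells
# compliant crawlers to keep the page out of search results entirely.
PAGE = (
    '<html><head>'
    '<meta name="robots" content="noindex, nofollow">'
    '</head><body>Place Page content</body></html>'
)

class MetaRobotsFinder(HTMLParser):
    """Collects the directives from any <meta name="robots"> tag."""
    def __init__(self):
        super().__init__()
        self.directives = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "meta" and attrs.get("name", "").lower() == "robots":
            self.directives.extend(
                d.strip() for d in attrs.get("content", "").split(","))

finder = MetaRobotsFinder()
finder.feed(PAGE)
print("noindex" in finder.directives)  # True
```

Unlike a robots.txt block, this directive travels with the page itself, so a crawler that fetches the page knows to drop it from the index, including any listing built from other sites’ links.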
We’ll check with Google, but my bet is that the team in charge of the new Place Pages simply did not realize they needed to use the meta robots tag to fully stay out of Google. Yes, even Google can be stupid when it comes to SEO. That’s one reason why they created an SEO starter guide (PDF) for their internal teams. That guide focuses on using the robots.txt file and does not cover the unique blocking powers of the meta robots tag (unless someone follows a link from the guide to extra reading material).
If Google wants to live up to blocking these pages as it said it would, I’d expect to see meta robots blocking added in short order.
Postscript 2, From Danny Sullivan: I’ve spoken further with Google, which assures that it does NOT intend for these pages to show up in organic results outside of its own OneBox displays (i.e., not taking up a listing that would otherwise go to an “external” web site). They’ll fix this oversight (they did say it was an oversight) either by using the meta robots tag or internal filtering.