Despite Assurances, Google Place Pages Now Showing In Search Results

Google introduced its new Maps profile page format, Place Pages, last week on Thursday. It’s a pretty dramatic change that replaces a tabbed version of the “info bubble” on Maps with a full page that features a wide variety of rich content. Each of these pages has its own URL.

When I discussed the change with Google, I specifically asked whether they were going to crawl and index these pages, which are being created for all local businesses, points of interest, cities, landmarks and neighborhoods. They told me no; these pages would be accessible only from Maps and not directly from search results.

However, this morning Mike Blumenthal wrote a post that discusses the potential indexing of Place Pages. And on my personal blog, Screenwerk, there’s a discussion in the comments about whether this is inevitable, and about the potential implications for SEO. There’s also speculation about whether these pages are intended to be landing pages for local merchants, as a prelude to simplifying search marketing for small businesses.

The debut of Place Pages appears to be a major development in local search, one that has already stirred the imagination and creative thinking of many in the SEO community. It’s very timely, and we’ll be talking about it on at least a couple of panels on Day 1 of SMX East:

  • Ranking Tactics For Local Search
  • Maps, Maps, Maps!

Will these pages become part of a local SEO strategy? Can they be used as landing pages? How will they tie into local extensions, if at all?

These will be questions on the minds of people in the audience, and we’ll get our local expert panelists (including Google) to discuss Place Pages as well as other critical local SEO/SEM questions and issues.

Postscript From Danny Sullivan: Part of the concern is that, despite Google saying these pages wouldn’t be indexed, they’re showing up in Google search results, as Mike Blumenthal’s post demonstrates. For example, a search like this brings up this page in the top results.

The issue is the difference between a robots.txt block and a meta robots block.

Blocking with robots.txt doesn’t allow Google to spider a page, but it may produce what it calls a “partially indexed” listing — the title of a page and a link to it, information which is gathered solely from how OTHER pages link to the listed page, not the page itself.

In order to fully block a page from the index, a meta robots tag must be used. My article Meta Robots Tag 101: Blocking Spiders, Cached Pages & More goes into depth on this.
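The distinction can be sketched with Python’s standard-library robots.txt parser. The rules and URLs below are hypothetical illustrations, not Google’s actual configuration:

```python
from urllib import robotparser

# Hypothetical robots.txt in the spirit of what a Maps-style site might serve;
# the /maps/place path is an assumption for illustration only.
robots_txt = """User-agent: *
Disallow: /maps/place
"""

rp = robotparser.RobotFileParser()
rp.parse(robots_txt.splitlines())

# The crawler may not fetch the page body, so nothing ON the page is read:
print(rp.can_fetch("Googlebot", "http://example.com/maps/place/some-business"))  # False
print(rp.can_fetch("Googlebot", "http://example.com/about"))  # True

# ...but the blocked URL can still appear as a "partially indexed" listing,
# built solely from how OTHER pages link to it. Keeping it out of the index
# entirely requires the page itself to serve a meta robots tag:
#
#   <meta name="robots" content="noindex">
#
# which, paradoxically, a crawler can only see if it is allowed to fetch the page.
```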

We’ll check with Google, but my bet is that the team in charge of the new Place Pages simply did not realize they needed to use the meta robots tag to fully stay out of Google. Yes, even Google can be stupid when it comes to SEO. That’s one reason why they created an SEO starter guide (PDF) for their internal teams. That guide focuses on using the robots.txt file and does not cover the unique blocking powers of the meta robots tag (unless someone follows a link from the guide to extra reading material).

If Google wants to live up to blocking these pages as it said it would, I’d expect to see meta robots blocking added in short order.

Postscript 2, From Danny Sullivan: I’ve spoken further with Google, which assures me that it does NOT intend for these pages to show up in organic results outside of its own OneBox displays (i.e., not taking up a listing that would otherwise go to an “external” web site). They’ll fix this oversight (they did say it was an oversight) either by using the meta robots tag or by internal filtering.



About The Author: Greg Sterling is a Contributing Editor at Search Engine Land. He writes a personal blog, Screenwerk, about SoLoMo issues and connecting the dots between online and offline. He also posts at Internet2Go, which is focused on the mobile Internet. Follow him @gsterling.



  • Will Scott

It’s an exciting development, Greg, that’s for sure.

There are great opportunities across the board to leverage advanced image and video tagging, to push Barnacle SEO to a new level and to leverage the heck out of these things in general.

    There’s some testing ongoing which will hopefully bear fruit before next week so we can report on it.

    Good timing on Google’s part for the release, no?


  • chiropractic

    My gut says this is a prelude to landing pages for local merchants and small businesses. Seems like a natural progression based on what’s taken place so far. I think businesses can learn from studying the elements appearing on those pages and apply them to their own sites/pages.

  • Chris Silver Smith

    Danny, I think Google should be very unhappy with the inefficiency that’s exposed if they only have the noindex meta tag to use for keeping those pages out of the index.

    In order for that to work, they’ll potentially end up crawling millions of their own pages – only to *not* index them! Not a very green solution, I must say.

So, it’s a bit of a catch-22. Robots.txt won’t keep the pages from being indexed, as they represented they would ensure, but if they have to resort to noindex meta tags, they risk ultimately impacting their own performance as they try to crawl all the pages that could get linked to, and they’ll expend tons of needless CPU cycles in the process. Not elegant.

    Hmmm… sounds like we really need a solution for telling them which pages to NOT index as well as NOT crawl.

  • stroseo

    Perfect explanation: “Blocking with robots.txt doesn’t allow Google to spider a page, but it may produce what it calls a “partially indexed” listing”

Everyone’s been going back and forth about why a few of these pages are appearing in the index. Right now, only a few of these pages, out of potentially hundreds of millions or more, are showing up in search results, so I’m not too concerned about the traditional natural search results.

However, what are Google’s long-term goals? Matt Cutts and Lior Ron (Google’s Global UGC Lead) have both made appearances in the comment sections of a couple of article posts, but both have been less than transparent about Google’s long-term intentions. I talk about more of my concerns here:

But the bottom line here is that many of the companies Google is aggregating data from rely heavily on the traffic they receive from Google search visitors. If the visitor flow to these content originators is cut off, their business models may fail, which would cut off the data feeds currently populating Google’s Place Pages. Very parasitic in nature, and very unlike Google.

  • Michael Martin

Seems like we’ve found the hot topic for SMX East, just as NoFollow & Paid Links dominated SMX Advanced.

    I am sure Michael Gray will be all over this ;)

    See you all at SMX East in NYC next week!

– Michael Martin

  • stroseo

OK, so here’s a more specific answer to the “partially indexed URLs” phenomenon.

Over at TechCrunch, Matt Cutts followed up with another reply regarding the URLs showing up in search results even though robots.txt files prevented the crawling of those pages.

    He referred to a previous post of his regarding this issue here:

    Matt said:

    “You might wonder why Google will sometimes return an uncrawled url reference, even if Googlebot was forbidden from crawling that url by a robots.txt file.”

    “There’s a pretty good reason for that: back when I started at Google in 2000, several useful websites (eBay, the New York Times, the California DMV) had robots.txt files that forbade any page fetches whatsoever. Now I ask you, what are we supposed to return as a search result when someone does the query [california dmv]? We’d look pretty sad if we didn’t return as the first result. But remember: we weren’t allowed to fetch pages from at that point. The solution was to show the uncrawled link when we had a high level of confidence that it was the correct link. Sometimes we could even pull a description from the Open Directory Project, so that we could give a lot of info to users even without fetching the page.”

  • Danny Sullivan

Yes, I know the arguments Google makes for showing partially indexed or “link only” URLs. They’re good ones. And yet, they still don’t respect a site owner’s wish to say: no, I really don’t want you to list me.

    Improving the robots.txt protocol might be a way forward, allowing people to say don’t crawl and do/don’t show a link.
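For illustration, such an extension might look something like the sketch below. The `Noindex` directive shown is hypothetical, not part of the robots.txt standard, which today only governs crawling, not listing:

```
User-agent: *
# Standard directive: don't crawl these pages
Disallow: /maps/place
# Hypothetical directive: don't even show an uncrawled, link-only listing
Noindex: /maps/place
```

A directive like this would let a site opt out of both fetching and listing in one file, without requiring the crawler to fetch each page just to discover a noindex meta tag.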

  • stroseo

Hey Danny, I totally agree with you. Technically, right now there seems to be no 100% sure way to prevent a URL from showing up in the index. This only strengthens your comment: “Yes, even Google can be stupid when it comes to SEO.”
