• victorpan

    Thanks, this post was very helpful in understanding the big picture of indexing. Fortunately I haven’t had to use the URL removal tool, but it’s nice to put things into perspective and scope.

  • http://www.richamorindonesia.com/ Rich Amor Indonesia

    Thank you very much. It helped me understand what I should do with my website after a slight change in my traffic.

  • Eve

    I have a question: what about internal search result pages that are already indexed? I have used robots.txt with Disallow, but as you’ve stated, “To be clear, adding a page or a folder to the “Disallow” list in your Robots.txt file does not remove it from the index”, so should I use GWT for removal?
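
    For reference, a minimal robots.txt rule of the kind Eve describes might look like this (the /search/ path is hypothetical); as the article notes, Disallow blocks crawling but does not remove URLs that are already indexed:

        User-agent: *
        Disallow: /search/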

  • Pano Kondoyiannis

    How can I remove articles that I don’t have access to?

  • http://www.brickmarketing.com/ Nick Stamoulis

    “The first and most important step is to be paranoid when using the URL Removal Tools”

    Be triple extra careful before you start trying to de-index various pages on your site. I like to map out every single page, whether I am doing something to it or not, so I know exactly what my plan of attack is. That way, if something does get messed up, I can go back and see what I meant to do.

  • http://in.linkedin.com/pub/ashvini-vyas/37/474/451/ Ash Vyas

    Using the GWT disavow links feature.
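
    For anyone unfamiliar with it, the disavow file is a plain text file uploaded through GWT, with one URL or domain: entry per line and # for comment lines; the domains below are hypothetical examples:

        # low-quality directory links (example entries)
        domain:spammy-directory.example.com
        http://bad-links.example.org/page1.html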

  • http://in.linkedin.com/pub/ashvini-vyas/37/474/451/ Ash Vyas

    Has your website been crawled since you put the page in robots.txt? If not, wait until it has. If it has, then add a noindex directive to that page.
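
    A noindex directive goes in the <head> of the page; one point worth adding is that Googlebot has to be able to crawl the page (i.e., it must not be blocked in robots.txt) in order to see the tag. A minimal example:

        <meta name="robots" content="noindex">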

  • http://www.archology.com/ Jenny Halasz

    Nice article Eric! One other thought for your readers… DON’T use rel=canonical on pages that you want Google to ignore. For example, if you have bad inbound links pointing at a page and you rel=canonical it, you could really mess yourself up, because you’re taking those bad links and passing them to another location on the site. In that case, a robots noindex and a 301 redirect are probably your best bet.
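
    For completeness, one common way to set a 301 redirect on an Apache server is via .htaccess; the paths below are hypothetical:

        Redirect 301 /old-page/ http://www.example.com/new-page/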