3 ways to improve link equity distribution and capture missed opportunities

You've worked hard to accumulate as much link equity as possible from external sources, but is your internal linking structure diluting that equity? Columnist Chris Long details how to reclaim your lost link value.

There’s a lot of talk about link building in the SEO community, and the process can be time-consuming and tedious. As the web demands higher and higher standards for the quality of content, link building is more difficult than ever.

However, few SEOs are discussing how to better utilize the links they already have. There seems to be an obsession with constantly building more links without first understanding how the resulting equity is flowing through the website. Yes, more links may help your website rank better, but if you're only recouping a small portion of that equity, much of the work you put into link building is wasted.

For many websites, there is a big opportunity to improve upon the link equity that has already been established. The best part is that these issues can be addressed internally, as opposed to link building, which typically requires third-party involvement. Here are some of my favorite ways to reclaim lost link value.

1. Redirect old URL paths

On client websites, I often see discontinued product pages that haven't been redirected, or entire iterations of old websites where almost all of the URLs return 404 errors. Leaving these pages broken means too much unused link equity stays on the table.

Finding old URL paths and 301 redirecting them can lead to huge wins in search engine visibility. In one fell swoop, you can reactivate the value of hundreds or even thousands of links that are pointing toward your domain.

So the question becomes, how can you surface these old URLs?

There are a few different methods I use, depending on the resources at hand. Occasionally, I've had clients who have just gone through a migration that moved their old website to a staging site. If this is the case, you should be able to configure Screaming Frog to crawl the staging environment (you may need to ignore robots.txt and crawl nofollow links). After the crawl is complete, simply export the data to a spreadsheet and use Find/Replace to swap out the staging domain for the root domain, and you should have a comprehensive list of old URL paths.
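
If you'd rather script that last Find/Replace step, here's a minimal sketch in Python. It assumes the Screaming Frog export is a CSV with the crawled URL in an "Address" column (adjust to match your export), and the staging and live domains are hypothetical placeholders:

```python
# Swap the staging domain for the live domain in a Screaming Frog export.
import csv

STAGING = "https://staging.example.com"  # hypothetical staging domain
LIVE = "https://www.example.com"         # hypothetical live domain

with open("staging_crawl.csv", newline="", encoding="utf-8") as src, \
        open("old_url_paths.csv", "w", newline="", encoding="utf-8") as dst:
    writer = csv.writer(dst)
    writer.writerow(["URL"])
    for row in csv.DictReader(src):
        writer.writerow([row["Address"].replace(STAGING, LIVE)])
```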

However, what if you don’t have access to any resources that list old URLs? For these situations, I use a combination of Ahrefs, Google Analytics and Google Search Console (credit to Dan Shure’s article on redirect chains, which helped me refine this process).

First, using Ahrefs, I’ll enter my domain, and then click the “Best Pages By Links” report.

[Screenshot: unclaimed backlinks in Ahrefs' "Best Pages By Links" report]

From there, I export the entire report into an Excel file. It’s important that you export all of the URLs Ahrefs gives you, not just the ones it identifies as 404 errors. Ahrefs will only provide the initial status code the URL returns, which can be misleading. Often, I’ll see situations where Ahrefs identifies the status code as a 301, but the URL actually redirects to a 404.
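
Screaming Frog will handle this check at scale in the next step, but if you want a quick scripted spot check of final status codes first, here's a minimal sketch. It relies on the third-party requests package and a hypothetical urls.txt file containing one URL per line:

```python
# Follow each URL's full redirect chain and flag any that end in a 404;
# the first status code in the chain (e.g., a 301) can be misleading.
import requests

with open("urls.txt", encoding="utf-8") as f:
    urls = [line.strip() for line in f if line.strip()]

for url in urls:
    try:
        # allow_redirects=True follows the chain; resp.history holds each hop.
        # Some servers mishandle HEAD; swap in requests.get if results look off.
        resp = requests.head(url, allow_redirects=True, timeout=10)
        if resp.status_code == 404:
            print(f"404 after {len(resp.history)} redirect(s): {url}")
    except requests.RequestException as exc:
        print(f"Error fetching {url}: {exc}")
```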

Once I have my Excel file, I run the URLs through Screaming Frog using “List Mode” and export the 404 errors it finds into a master Excel document.

Next, I go to Google Analytics and navigate to the "Landing Pages" report. I'll typically set the date range as far back as the account tracks, but this varies for each situation. I'll export all of the data it gives me to a spreadsheet and then add the domain name in front of each relative URL path using Excel's CONCATENATE function.
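
If the export is long, the same concatenation is easy to script. A minimal sketch, assuming the relative paths sit in a column named "Landing Page" (rename to match your export) and a hypothetical root domain:

```python
# Prepend the root domain to each relative landing-page path.
import csv

DOMAIN = "https://www.example.com"  # hypothetical root domain

with open("ga_landing_pages.csv", newline="", encoding="utf-8") as src, \
        open("ga_full_urls.csv", "w", newline="", encoding="utf-8") as dst:
    writer = csv.writer(dst)
    writer.writerow(["URL"])
    for row in csv.DictReader(src):
        writer.writerow([DOMAIN + row["Landing Page"]])
```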

I once again run this list through Screaming Frog and add the 404 errors it finds to the master document.

[Screenshot: exporting the top landing pages report from Google Analytics]

Finally, I log in to Google Search Console, open up the “Crawl Errors” report, and navigate to the “Not Found” tab. I export these URLs and confirm that they do, in fact, return 404 status codes by using Screaming Frog. I add these 404 pages to the master document.

[Screenshot: Google Search Console's "Crawl Errors" report, "Not Found" tab]

Now there's one master spreadsheet that contains all of the potential broken URLs in one place. De-dupe this list, run it through Screaming Frog in "List Mode" one final time, and export the URLs that return 404 status codes.
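
If the three exports live in separate files, a sketch like this can handle the merge and de-dupe; the file names are hypothetical, with one URL per line in each:

```python
# Merge the Ahrefs, Analytics and Search Console lists into one
# de-duplicated master list, ready for Screaming Frog's "List Mode".
sources = ["ahrefs_404s.txt", "analytics_404s.txt", "search_console_404s.txt"]

urls = set()
for path in sources:
    with open(path, encoding="utf-8") as f:
        # Exact-match de-dupe; normalize case or trailing slashes with care,
        # since URL paths can be case-sensitive.
        urls.update(line.strip() for line in f if line.strip())

with open("master_404_list.txt", "w", encoding="utf-8") as out:
    out.write("\n".join(sorted(urls)))
```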

To help prioritize which URLs to redirect first, I connect Screaming Frog to the Ahrefs API, which allows the crawler to gather the link metrics associated with each page. I sort that list by the number of linking root domains and assign priority to the redirects that way.

After I have the final list of 404 errors, it’s simply a matter of identifying the destination pages on the client website each URL should redirect to. To scale this effort, I often use a combination of MergeWords and the OpenList Chrome extension.
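
Once each old URL is mapped to a destination, writing out the redirects can also be scripted. Here's a minimal sketch, assuming an Apache server, simple one-to-one redirects and a hypothetical two-column CSV (old URL, new URL) named redirect_map.csv:

```python
# Turn an old-to-new URL mapping into Apache "Redirect 301" directives
# suitable for an .htaccess file. Redirect matches on the URL path, so
# the domain is stripped from each old URL.
import csv
from urllib.parse import urlparse

with open("redirect_map.csv", newline="", encoding="utf-8") as f:
    for old_url, new_url in csv.reader(f):
        print(f"Redirect 301 {urlparse(old_url).path} {new_url}")
```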

2. Analyze the .htaccess file

When evaluating how your website distributes link equity, it's important to understand how your global redirects are working as well. This is where the .htaccess file comes into play. In this file, you can see the syntax that instructs your server how to handle redirect rules.

If I'm seeing common redirect patterns in a tool like Ahrefs, that's a good sign these rules are defined in the .htaccess file.

Often, I'll find that the .htaccess file is issuing 302 redirects that should be 301s, adding unnecessary hops (causing redirect chains) or missing redirect rules that should be there. For instance, a common mistake I see is a file that 302 redirects HTTP URLs to HTTPS instead of using a 301.

Each situation is entirely different, but here are some of the .htaccess rules I commonly look for:

  • “HTTP” to “HTTPS” rules
  • Non-WWW to WWW rules
  • URL capitalization rules
  • Trailing slash rules

There are many opportunities to better control the directives of the .htaccess file. If you’re noticing similar patterns of improperly configured redirects, it may be worth pulling this file and talking to your developers about how these issues can be fixed.
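
Before pulling the file, you can spot-check how these patterns currently resolve from the outside. Here's a minimal sketch using the third-party requests package, with example.com and the sample paths standing in as hypothetical placeholders; ideally, each check resolves in a single 301 hop:

```python
# Request one URL per pattern, follow the redirect chain, and report each
# hop's status code so 302s and chained redirects stand out.
import requests

CHECKS = [
    ("HTTP to HTTPS", "http://www.example.com/"),
    ("non-WWW to WWW", "https://example.com/"),
    ("URL capitalization", "https://www.example.com/Sample-Page"),
    ("trailing slash", "https://www.example.com/sample-page"),
]

for label, url in CHECKS:
    resp = requests.head(url, allow_redirects=True, timeout=10)
    hops = [f"{hop.status_code} -> {hop.headers.get('Location')}"
            for hop in resp.history]
    print(f"{label}: {url}")
    print(f"  chain: {hops or 'no redirect'} | final: {resp.status_code} {resp.url}")
```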

3. Fix internal 301 redirects

Now that you've accumulated as much link equity as possible from external sources, it's time to ensure that your website passes it along efficiently. If your website has a lot of internal 301 redirects, there's a chance that your deeper pages aren't receiving as much link equity as they could. While Google claims there is no link equity lost in 3xx redirects, why leave this up to chance? I would rather be 100 percent sure that internal links are passing their full value throughout the website.

To identify these, I run Screaming Frog in "Spider Mode" on the domain being analyzed. Screaming Frog will crawl the website and gather instances of 301 redirects in the "Redirection (3xx)" report. If you want to determine the order of importance, sort this report by "Inlinks." You'll now see the redirecting URLs that receive the most internal links.

[Screenshot: Screaming Frog's "Redirection (3xx)" report sorted by inlinks]
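
If you'd rather triage from the exported report instead, here's a small sketch assuming the export is a CSV with "Address" and "Inlinks" columns:

```python
# Sort the exported "Redirection (3xx)" report by inlink count so the most
# heavily linked internal redirects surface first.
import csv

with open("redirection_3xx.csv", newline="", encoding="utf-8") as f:
    rows = list(csv.DictReader(f))

rows.sort(key=lambda r: int(r["Inlinks"] or 0), reverse=True)
for row in rows[:20]:  # top 20 offenders
    print(row["Inlinks"], row["Address"])
```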

Often, these are instances of internal redirects in key areas such as the primary or secondary navigation, footer or sidebar links. This is great news because, with one template-level change, you can eliminate a large quantity of these internal 301 redirects at once. While you'll eventually want to fix as many as possible, I recommend starting there.

Final thoughts

One thing I've learned during my time as an SEO is that webmasters are fantastic at diluting equity. Changes such as website migrations and poorly handled URL redirects can all have a large impact on link equity.

While in an ideal world link equity would be kept in mind during these implementations, that is often not the case. The above steps should serve as a good starting point for getting some of yours back.


About the author

Chris Long
Contributor
Chris Long is the VP of marketing at Go Fish Digital. Chris works with unique problems and advanced search situations to help his clients improve organic traffic through a deep understanding of Google’s algorithm and Web technology. Chris is a contributor for Moz, Search Engine Land, and The Next Web. He is also a speaker at industry conferences such as SMX East and the State Of Search. You can connect with him on Twitter and LinkedIn.
