How to measure success with JavaScript-dependent websites

SEO strategist Robin Rozhon answers questions from SMX Advanced about monitoring rendering success, using external A/B testing tools and more.


I enjoyed sharing the stage with Hamlet Batista during our session about the new renaissance of JavaScript at SMX Advanced in June. I spoke about some of the JavaScript-dependent websites I’ve worked with, their unique challenges and the importance of having an automated testing and monitoring solution in place.

Here are some of the questions submitted by the session attendees and my answers.

What did you use to test the rendering success rate [in your session example]?

We set up an automated monitoring script that checks a considerable number of pages on the site every day at 8 a.m. The script checks multiple elements on each page. One of the elements we’re checking is the presence of the language selector because we found that the language selector is missing when prerendering fails. Once we know how many pages the script checked (the number is the same every day) and how many times the prerendering failed (i.e., the language selector was not found), we can calculate the rendering success rate.
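The core of such a check can be sketched in a few lines of JavaScript. This is only an illustration, not the actual script from the session: the `language-selector` class, the URLs and the stubbed HTML responses are all hypothetical, and a real monitor would fetch the prerendered HTML of each page daily instead of using stubs.

```javascript
// Returns true if the prerendered HTML contains the element we expect —
// here, a language selector identified by a hypothetical CSS class.
function hasLanguageSelector(html) {
  return html.includes('class="language-selector"');
}

// Given a map of URL -> prerendered HTML, compute the rendering success rate.
function renderingSuccessRate(pages) {
  const total = Object.keys(pages).length;
  const succeeded = Object.values(pages).filter(hasLanguageSelector).length;
  return { total, succeeded, rate: succeeded / total };
}

// Example with stubbed responses: one page where prerendering failed
// and Google would see only the empty app shell.
const result = renderingSuccessRate({
  '/en/home': '<nav class="language-selector">…</nav>',
  '/en/products': '<nav class="language-selector">…</nav>',
  '/en/contact': '<div id="root"></div>', // empty app shell: prerender failed
});
// result.rate is 2/3 — two of three pages rendered successfully
```

Running a check like this on a fixed set of pages at the same time every day gives you a consistent baseline, so any drop in the rate points to a prerendering problem rather than a change in the sample.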

If you don’t have a monitoring solution, you can use Screaming Frog to achieve a similar result.

  • Set Rendering to “Text Only” and switch the user agent to Google Smartphone.
  • Use Custom Search or Custom Extraction to target the element that’s not present when the prerendering process fails.
  • Crawl the site (or a significant sample of pages).
  • Repeat the crawl multiple times over the next week.
  • Count the number of times when the monitored element is present and calculate the rendering success rate.
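The last step of the Screaming Frog workflow above is simple arithmetic: pool the repeated crawls and divide the number of checks where the element was found by the total number of checks. A quick sketch, with purely made-up crawl numbers:

```javascript
// Aggregate repeated crawls into one success rate.
// Each entry: pages checked in that crawl and how many contained the element.
// The figures below are hypothetical.
const crawls = [
  { checked: 500, elementFound: 492 },
  { checked: 500, elementFound: 500 },
  { checked: 500, elementFound: 471 },
];

const totals = crawls.reduce(
  (acc, c) => ({
    checked: acc.checked + c.checked,
    found: acc.found + c.elementFound,
  }),
  { checked: 0, found: 0 }
);

const successRate = totals.found / totals.checked; // 1463 / 1500 ≈ 97.5%
```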

Do you have any tips for dealing with dynamic rendering when your site uses external A/B testing tools that are inherently client-side rendering?

I’d want Google to see only one version of a page. This means I’d serve the old version to search engines until the new tested design becomes permanent. Since you’re already doing user agent detection for dynamic rendering, you can skip adding the A/B testing code to a page when the request comes from a search engine bot and include it only when the page is served to a user.
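A minimal sketch of that gating logic, assuming a Node-based renderer: the bot pattern below is illustrative and far from exhaustive, and the `/ab-testing.js` path stands in for whatever snippet your external A/B testing tool provides.

```javascript
// Rough list of search engine bot tokens (illustrative, not exhaustive —
// a real dynamic rendering setup already maintains its own list).
const BOT_PATTERN = /googlebot|bingbot|yandex|baiduspider|duckduckbot/i;

function isSearchBot(userAgent) {
  return BOT_PATTERN.test(userAgent || '');
}

// Decide which markup to serve: bots always get the stable page without
// the client-side A/B testing script, so Google sees only one version.
function pageHtml(userAgent) {
  const abScript = isSearchBot(userAgent)
    ? '' // search engines: no experiment code
    : '<script src="/ab-testing.js"></script>'; // hypothetical script path
  return `<html><head>${abScript}</head><body>…</body></html>`;
}
```

Because this reuses the same user agent check the dynamic rendering layer already performs, it doesn’t introduce a new cloaking risk beyond what dynamic rendering itself entails; the content stays the same and only the experiment code is withheld from bots.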

My design/dev team often asks if we could use JavaScript to hide content that’s visible on click or hover. For desktop, what are alternatives we could implement instead? Or are there any?

I don’t know the details but, generally, CSS can do hover-related actions. For on-click events, you want to make sure that the desired content is in the initial HTML response. You don’t want to load that content dynamically via JavaScript after the user clicks.

If the content in question is visible by default and you want to hide it after an interaction, that’s fine. Google doesn’t click on or hover over elements.

I have exactly the same new implementation as company White – with opacity. This has been bothering me, as the pages that migrated to this new implementation are not performing as well as they did previously. Can you confirm you didn’t see any issues with opacity and that there’s no need to try to address/change it?

Every website is different, so I can speak only to the one I have encountered. We didn’t see any noticeable improvement after removing the initial opacity:0, but it was a site with massive branded traffic. Generally, if your website doesn’t receive much branded traffic and relies heavily on non-branded traffic, I would want to remove opacity:0 sooner rather than later. If the vast majority of your organic traffic comes from branded queries, I’d assign a lower priority to this but still want to get it done at some point.

How can you work closely with devs on these checks if they are remote or in India with a big time difference?

I often work with people in a different city or continent and one thing that’s always worked for me is Skype/Slack calls. I wake up early or stay late for a call rather than exchange long emails. The calls help me to understand their workflow and challenges better while I get a chance to explain the reasons why automated testing should be in place and to address their immediate questions.

Once both sides are clear on why we’re doing it, I still consider it essential to create a ticket with concise but thorough requirements and acceptance criteria to avoid any miscommunication.


Opinions expressed in this article are those of the guest author and not necessarily Search Engine Land.


About the author

Robin Rozhon
Contributor
Robin Rozhon is an SEO strategist who helps enterprise websites with their technical SEO, overall SEO strategy and web analytics to increase their performance in organic search. He enjoys talking about all the technical aspects such as crawling, indexing, JavaScript rendering and log files as well as talking about the benefits of building a strong brand. Robin has experience working in-house for well-known brands (Electronic Arts and MEC) but has also spent several years at agencies in Canada and Europe.
