The benefits of dynamic rendering for SEO
Client-side vs. server-side rendering
Different rendering methods are suitable for different purposes. Elimelech advocated for dynamic rendering as a way to satisfy search engine bots and users alike, but first, it’s necessary to understand how client-side and server-side rendering work.
When a user clicks on a link, their browser sends a request to the server the site is hosted on. With client-side rendering, that server responds with raw building blocks rather than a finished page.
“It’s very much like assembling your own furniture because basically the server tells the browser, ‘Hey, these are all the pieces, these are the instructions, construct the page. I trust you.’ And that means that all of the hard lifting is moved to the browser instead of the server,” Elimelech said.
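To make the analogy concrete, here is a minimal sketch of what a client-side rendered response can look like, assuming a Node/Express server; the route, bundle path and markup are illustrative, not any particular framework’s output:

```typescript
import express from "express";

const app = express();

// With client-side rendering, the server ships only a skeleton document plus
// the JavaScript "instructions"; the browser does the work of building the page.
app.get("*", (_req, res) => {
  res.send(`<!DOCTYPE html>
<html>
  <head><title>Loading…</title></head>
  <body>
    <!-- Empty shell: no content until the script below runs in the browser -->
    <div id="root"></div>
    <script src="/bundle.js"></script>
  </body>
</html>`);
});

app.listen(3000);
```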
Dynamic rendering represents “the best of both worlds,” Elimelech said. Dynamic rendering means “switching between client-side rendered and pre-rendered content for specific user agents,” according to Google.
Below is a simplified diagram explaining how dynamic rendering works for different user agents (users and bots).
“So there’s a request to a URL, but this time we check: Do we know this user agent? Is this a known bot? Is it Google? Is it Bing? Is it Semrush? Is it something we know of? If it’s not, we assume it’s a user and then we do client-side rendering,” Elimelech said.
On the other hand, if the client is a bot, then server-side rendering is used to serve the fully rendered HTML. “So, it sees everything that needs to be seen,” Elimelech said.
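A minimal sketch of that user-agent check, again assuming an Express server: the bot list here is illustrative, and `getPrerenderedHtml` is a hypothetical stand-in for whatever prerendering service or cache a site actually uses.

```typescript
import express, { Request, Response } from "express";

const app = express();

// Illustrative substrings for known crawler user agents; a real list
// needs ongoing maintenance as new bots appear.
const KNOWN_BOTS = ["googlebot", "bingbot", "semrushbot", "duckduckbot"];

function isKnownBot(userAgent: string | undefined): boolean {
  if (!userAgent) return false;
  const ua = userAgent.toLowerCase();
  return KNOWN_BOTS.some((bot) => ua.includes(bot));
}

// Hypothetical stand-in for a prerendering service or cache that returns
// fully rendered HTML for the requested URL.
async function getPrerenderedHtml(url: string): Promise<string> {
  return `<html><body><!-- fully rendered markup for ${url} --></body></html>`;
}

app.get("*", async (req: Request, res: Response) => {
  if (isKnownBot(req.get("user-agent"))) {
    // Known bot: serve the fully (pre-)rendered HTML.
    res.send(await getPrerenderedHtml(req.originalUrl));
  } else {
    // Unknown user agent: assume a human user and serve the client-side shell.
    res.sendFile("index.html", { root: "public" });
  }
});

app.listen(3000);
```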
But dynamic rendering isn’t perfect
There are, however, complications associated with dynamic rendering. “We have two flows to maintain, two sets of logics, caching, other complex systems; so it’s more complex when you have two systems instead of one,” Elimelech said, noting that site owners must also maintain a list of user agents to identify bots.
Some might worry that serving search engine bots something different from what you’re showing users could be considered cloaking.
“Dynamic rendering is actually a preferred and recommended solution by Google because what Google cares about is if the important stuff is the same [between the two versions],” Elimelech said, adding that, “The ‘important stuff’ is things we care about as SEOs: the content, the headings, the meta tags, internal links, navigational links, the robots, the title, the canonical, structured data markup, content, images — everything that has to do with how a bot would react to the page . . . it’s important to keep identical and when you keep those identical, especially the content and especially the meta tags, Google has no issue with that.”
Since you have to maintain parity between what you serve bots and what you serve users, you also have to audit for issues that might break that parity.
To audit for potential problems, Elimelech recommends Screaming Frog or a similar tool that allows you to compare two crawls. “So, what we like to do is crawl a website as Googlebot (or another search engine user agent) and crawl it as a user and make sure there aren’t any differences,” he said. Comparing the appropriate elements between the two crawls can help you identify potential issues.
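As a lighter-weight complement to a full crawl comparison, a script along these lines can spot-check parity for a single URL by fetching it with two user agents and diffing a few SEO-critical elements. The URL, user-agent strings and regex extraction are illustrative, and this only works when the site switches rendering on the user-agent string alone:

```typescript
// Spot-check rendering parity for one URL: fetch it as a bot and as a user,
// then diff a few SEO-critical elements. Assumes Node 18+ (built-in fetch).

const URL_TO_CHECK = "https://example.com/some-page"; // placeholder

const USER_AGENTS = {
  bot: "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)",
  user: "Mozilla/5.0 (Windows NT 10.0; Win64; x64)",
};

// Naive regex extraction for a quick check; a real audit should parse the DOM.
function extract(html: string) {
  return {
    title: html.match(/<title[^>]*>([\s\S]*?)<\/title>/i)?.[1]?.trim(),
    canonical: html.match(/<link[^>]+rel=["']canonical["'][^>]*>/i)?.[0],
    robotsMeta: html.match(/<meta[^>]+name=["']robots["'][^>]*>/i)?.[0],
  };
}

async function main() {
  const [botHtml, userHtml] = await Promise.all(
    [USER_AGENTS.bot, USER_AGENTS.user].map((ua) =>
      fetch(URL_TO_CHECK, { headers: { "User-Agent": ua } }).then((r) => r.text()),
    ),
  );

  const bot = extract(botHtml);
  const user = extract(userHtml);

  for (const key of Object.keys(bot) as (keyof typeof bot)[]) {
    if (bot[key] !== user[key]) {
      console.log(`Parity mismatch in ${key}:\n  bot:  ${bot[key]}\n  user: ${user[key]}`);
    }
  }
}

main().catch(console.error);
```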
Elimelech also mentioned the following methods to screen for issues:
- Google Search Console can be used to see what HTML is returned to Google and how Google renders it.
- Testing tools, such as Google’s mobile-friendly test, the rich results test and Schema.org’s schema markup validator (which replaced Google’s structured data testing tool).