An Update On JavaScript Menus And SEO



SEOs have traditionally recommended using search engine friendly menu systems, such as those based on HTML styled with CSS, instead of menus written in JavaScript. HTML has long been the primary language understood by search engines, but several of them have now learned how to read JavaScript. I recently found an example that reveals how much progress has been made. My observations confirm what software engineer Janis Stipins said at SMX East in October 2008: Google is doing a much better job of spidering JavaScript.

About a month ago, Monitronics deployed a new site built with DotNetNuke and the Solpart version 1.7.2 menu system; you can see which technologies were used by reading the comments in the source code. DNN is an open source framework for building websites that run on ASP.NET, and its Solpart menu system relies on JavaScript. In the past I have recommended ripping out that menu system and using something else, but like most companies working with DNN, Monitronics just went with the default menus. When I visit their site with scripting disabled in my browser (Firefox with the NoScript add-on), I see nothing: no main menu items and no drop-downs. Not long ago, this would have been a bad omen for SEO.
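If you want to reproduce this kind of check without installing a browser add-on, a quick approximation of what a non-JavaScript crawler sees is to fetch the raw HTML and count the anchor links present before any script runs. The sketch below is illustrative only; the placeholder URL and the link-counting heuristic are my own assumptions, not part of DNN or Solpart.

```typescript
// Sketch: fetch the raw HTML (no script execution) and count <a href> links.
// A script-generated menu contributes nothing to this count, which is roughly
// what a crawler that cannot run JavaScript would see. Requires Node 18+.

const url = "https://www.example.com/"; // placeholder: substitute the site's homepage

async function countStaticLinks(pageUrl: string): Promise<number> {
  const response = await fetch(pageUrl);
  const html = await response.text();
  // Crude heuristic: count href attributes in the served markup.
  const links = html.match(/<a\b[^>]*\bhref\s*=\s*["'][^"']+["']/gi) ?? [];
  return links.length;
}

countStaticLinks(url).then((n) =>
  console.log(`Links present in the raw HTML (no JavaScript executed): ${n}`)
);
```

A page whose navigation depends entirely on script will report only whatever links happen to appear in the static markup, much like my NoScript test above.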

As promised, Google has spidered the new JavaScript menus. The Google cache shows all the menu items, lined up in a neat list with links fully operational. Google is also showing reasonable Sitelinks for a search on [Monitronics], a good indication that the site has been properly spidered. When I look at the Yahoo cache, I see the main menu text, though not the drop-downs: Yahoo has cached the text generated by JavaScript rather than the JavaScript code itself, but it has failed to spider the drop-down menus and extract the menu links. When I visit Microsoft’s cache, all the menu links are there, organized in the same neat hierarchy as in Google’s cache, and the links work. When a website is built with Microsoft technology, I’d expect the Microsoft search engine to be able to decipher the code, and Microsoft does seem to be doing a good job reading DNN JavaScript menus.

Does this mean we can forget about search engine friendly menus? I would not ignore the issue, but before investing resources in a new menu system, I would check the performance of the old one. If something is working, or partially working, it might be better to put those resources toward another priority. Getting good performance from Google and Microsoft, and having Yahoo index a site via its sitemap, is perhaps 90% as effective as having fully spiderable menus.

In November, DNN posted a note that its newest menu system, version 1.8.0, is “SEO compatible.” Perhaps Yahoo’s spidering capabilities will soon catch up with Google’s and Microsoft’s. Depending on the potential traffic that interior pages could generate, the site owner needs to decide whether redoing a menu system to gain that final 10% of performance is worth the investment, or whether procrastination might be an effective strategy. In 2009, I think Google and Microsoft will master JavaScript, Yahoo will follow, and SEOs will have fewer menu gripes.

PostScript, January 12, 2009: Prompted by reader comments, I investigated further, and it appears that DNN’s menu system includes a user-agent cloaking module. Some details are available in a thread at the DNN Community Blog. Apparently the module serves up search-friendly menus when the user agent matches a search engine spider. The Firefox User Agent Switcher provides a way to test this: install the add-on, then import a list of user agents, such as this one. Switch your user agent to Googlebot, Yahoo Slurp, or MSNbot to see the same pages that appear in all three search engine caches. Oddly, the cloaking module does not produce correct output for Yahoo.
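The same test can also be approximated from a script by requesting the page with different User-Agent headers and comparing what comes back. This is a rough sketch under my own assumptions: the URL is a placeholder, the user-agent strings are illustrative examples, and link counting is just one convenient way to compare responses; the exact strings the DNN module matches are not documented here.

```typescript
// Sketch: look for user-agent cloaking by comparing the static link count
// returned for a browser user agent versus spider user agents.
// URL and user-agent strings are illustrative assumptions. Requires Node 18+.

const pageUrl = "https://www.example.com/"; // placeholder: substitute the page to test

const userAgents: Record<string, string> = {
  browser: "Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US) Firefox/3.0",
  googlebot: "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)",
  slurp: "Mozilla/5.0 (compatible; Yahoo! Slurp; http://help.yahoo.com/help/us/ysearch/slurp)",
  msnbot: "msnbot/2.0b (+http://search.msn.com/msnbot.htm)",
};

async function linkCountFor(agent: string): Promise<number> {
  const response = await fetch(pageUrl, { headers: { "User-Agent": agent } });
  const html = await response.text();
  return (html.match(/<a\b[^>]*\bhref\s*=/gi) ?? []).length;
}

async function main(): Promise<void> {
  for (const [name, agent] of Object.entries(userAgents)) {
    const count = await linkCountFor(agent);
    console.log(`${name}: ${count} links in the served HTML`);
  }
  // A markedly higher count for the spider agents than for the browser agent
  // suggests the menu module is serving an alternate, crawlable version.
}

main();
```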

People debate whether this sort of cloaking is black hat or accessibility programming. The search-engine version of the page appears identical in content to the one served to users, so there seems to be no deception. In addition, I think it would be odd for the search engines to ban a large number of sites built on a common menu system whose details are probably not apparent to most of the webmasters managing those sites. It remains an open question whether the search engines could read these menus without the assistance of friendly cloaking.

Nevertheless, the essential advice of this article remains correct. Before replacing a menu system, check its actual performance. Don’t assume a menu won’t work with search engines just because it uses JavaScript. Search engines claim to be able to read JavaScript, and some menu systems serve up their own search-friendly code when they detect a search engine spider.




About the author

Jonathan Hochman
Contributor
Jonathan Hochman is the founder of Hochman Consultants and executive director of SEMNE. A Yale University graduate with two degrees in computer science, he has 25 years of experience as a business development, marketing, and technology consultant.
