• Henry Zeitler

    Good read, but the conclusion is to use microdata and not to get used by it.
    There’s nothing wrong with implementing schema.org and friends, but do it consciously. Label and provide only as much information as is needed to make the user curious enough to visit your site, not the whole recipe.
    Let your teaser content appear in whatever scrapes it, but make sure the reader has to visit your site to get all the information he needs.
    Not implementing microdata means missing the train and throwing away an opportunity to gain attention.
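
    A teaser-only recipe could look something like this (a rough, untested microdata sketch; the URL and all values are placeholders):

        <!-- Expose only teaser properties; keep the full recipe off-page -->
        <div itemscope itemtype="http://schema.org/Recipe">
          <h2 itemprop="name">Apple Pie</h2>
          <img itemprop="image" src="apple-pie.jpg" alt="Apple pie">
          <p itemprop="description">A classic pie – read the full recipe on our site.</p>
          <div itemprop="aggregateRating" itemscope itemtype="http://schema.org/AggregateRating">
            <span itemprop="ratingValue">4.8</span> stars from
            <span itemprop="ratingCount">212</span> reviews
          </div>
          <a href="http://example.com/apple-pie">Get the full recipe</a>
          <!-- deliberately omitted: itemprop="recipeIngredient", itemprop="recipeInstructions" -->
        </div>

    The scraper gets enough to build a rich listing, but the ingredients and instructions only exist on your page.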

  • Colin Guidi

    I did enjoy this article; you helped me raise some internal questions and thoughts. I do, however, think structured data markup has a ton of benefits, and even if we get to the point of every listing hosting some form of structured data markup, you’re going to want to be in that space and use that markup. It might not be an ‘advantage’ once everyone is doing it, but it’s for sure a disadvantage if you’re not in the inner circle.

  • Maurice Walshe

    Interesting, so people are finding that lightweight “internet”-style standards are sometimes not good enough :-) and are now trying to reinvent the wheel. X.400 and X.500 had much better definitions of how to represent a person decades ago when compared to schema.org.

    The trouble is that the ML tasks the author thinks should be easy are in fact hard to do both at scale and with arbitrary input, as opposed to the tame small data sets you can use at uni – so I have some sympathy for Google.

    You have to imagine my looking over the tops of my glasses as I type this.

  • http://twitter.com/sharithurow sharithurow

    Hi Ted-

    I actually agree with you. I understand your point of view.

    When I create wireframes, prototypes, and fully coded templates, I let the content dictate which wireframe, prototype, and template I will use. And I will modify anything that needs modifying based on user feedback, usability tests, analytics data, and other sources of information.

    Sometimes, search results data is a self-fulfilling prophecy. I don’t know the answer to that dilemma, but I do observe self-fulfilling prophecies in search results all the time.

  • http://twitter.com/MaryKayLofurno Mary Kay Lofurno

    Wow Ted, I knew from some of your comments at the time that you had strong feelings about this when we sat through those sessions at SMX. I am glad you have put them together here.

    I myself question the bloated pages and the redundancy of it, especially when speed is supposed to be an important factor. Thanks for a good read and a different perspective on schemas – they have been the darling of technical SEO for about a year and a half now, so this is refreshing. See you at the next Semne meeting. Mary Kay

  • https://plus.google.com/115194199565322841506/about John Britsios

    Aaron, I really appreciate that you took the time to express the facts so well, so that I did not need to go into this any further.

    I hope Ted will follow up. I am very curious to hear what he has to say to all this.

  • http://www.facebook.com/ashevillecontra M-j Taylor

    I’m not sure I grasp why this markup makes it easier to steal your content. Because it’s displayed in SERPs? And I don’t get your analogy to SGML; other than both being markup, I don’t see the relevance. And how is it like cloaking? I get that you think Google should be able to parse all the data without it, but it can’t yet. Why? Well, take your example of recipes. Not every website formats recipes the same way as every other. I think you’re suggesting that Google should figure out every possible way a site could format a recipe so that it can recognize and display them. I think it makes more sense to offer markup so that Google can make sense of a recipe in any format. Personally, I’ve found the use of rich snippets to be a valuable way to communicate with the search engines and to enhance display in the SERPs. I wish I had the time to implement them more fully.
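
    To make that concrete, here is the kind of thing I mean (a rough, untested microdata sketch; the recipe and values are made up): two pages can lay out the same recipe completely differently, yet identical itemprop attributes give a parser the same structure in both.

        <!-- Site A: a list-based layout -->
        <ul itemscope itemtype="http://schema.org/Recipe">
          <li itemprop="name">Chili con Carne</li>
          <li>Ready in <time itemprop="totalTime" datetime="PT45M">45 minutes</time></li>
        </ul>

        <!-- Site B: a table-based layout; same properties, same meaning to a parser -->
        <table itemscope itemtype="http://schema.org/Recipe">
          <tr><th>Recipe</th><td itemprop="name">Chili con Carne</td></tr>
          <tr><th>Time</th><td><time itemprop="totalTime" datetime="PT45M">45 minutes</time></td></tr>
        </table>

    Either way, the engine extracts the same name and totalTime without having to guess at the layout.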

  • https://plus.google.com/115194199565322841506/about John Britsios

    M-j Taylor, it is unreasonable to expect Google to create markup, which they already attempted with Bing and Yahoo – the so-called microdata. The W3C took action to reclaim its role and has already come up with RDFa Lite.

    The problem here, I think, is that many people prefer to stick to plain HTML. But they must bear in mind that there is no way of getting structured data from unstructured text except by using NLP. If Google and the other search engines had to stick to NLP, then we would be moving backwards. Maybe HTML 2.0? LOL
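
    For those who have not seen it, RDFa Lite boils down to a handful of attributes – something like this (a rough sketch; the person and values are invented):

        <!-- RDFa Lite: vocab/typeof/property instead of microdata's itemscope/itemtype/itemprop -->
        <div vocab="http://schema.org/" typeof="Person">
          <span property="name">Jane Doe</span> works as a
          <span property="jobTitle">semantic web consultant</span> at
          <a property="url" href="http://example.com/">example.com</a>.
        </div>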

  • https://plus.google.com/115194199565322841506/about John Britsios

    Mary, I honestly hope that those thoughts were not presented at the SMX sessions. I would be seriously disappointed if that were the case.

  • https://plus.google.com/115194199565322841506/about John Britsios

    Shari, I do not agree with Ted on any point, but you are making very valid points when it comes to designing for users. The answer to your question about search engines is: design using machine-readable markup and protocols if you want to comply with the requirements of semantic search engines.

  • http://www.facebook.com/ashevillecontra M-j Taylor

    So, if I am getting you correctly, John, you’re saying the same thing I said about recipes – they are unstructured text, and therefore not semantically clear to search engines without markup?

  • https://plus.google.com/115194199565322841506/about John Britsios

    That is exactly what I mean.

  • http://searchmarketingwisdom.com alanbleiweiss

    Far better minds than mine have already responded with quite valid counter-positions, so I’ll keep my rant as short as possible. The move into the age of schema.org has been needed for too long. I don’t believe it’s an admission of failure. Instead, it’s an admission that the web is truly more complex, with many more challenges than even Google and Bing can overcome in their end-goal effort. Which means they need our help. Why is that failure?

    The indexed web pre-schema was really more like an ugly, too-complex-to-process Excel spreadsheet; with proper schema.org implementation it will become, or get closer to being, a true relational database. That’s where the key value is, and why, when properly implemented, discoverability, identity, and ultimate value assignment will be closer to accurate than without such additional markup.

    Another way to see it: look at an auto parts store – sure, there are bins of parts. You may think you’re okay with having a section for the drive train and an individual bin for “drive shaft related parts”. Yet the belief that “this is good enough, and will deter thieves because they have to rummage through the bin to find a specific part” is myopic, and unduly discounts how inefficient it is for the store owner to actually find a specific part of that drive shaft sub-system within the “drive train” section of the store.

  • http://www.facebook.com/Scottatrue Scott True

    In the conclusion, Ted wrote, “…it looks like they are great for end-users…” Isn’t this exactly why it’s a good idea? From what I understand of the whole “content marketing” movement (of course, this is something that has been going on for a long time; we are just re-labeling it and talking about it more), satisfying end-users is what, in the long run, gets you more business. I know this is kind of vague, but for those who understand the content marketing method, it makes sense. I do appreciate this perspective, Ted. I don’t want us all to agree all the time – then we would have no way of knowing we’re right. By “arguing”, we see things differently and sometimes change our points of view for the better. For this particular topic, I am agreeing with Aaron Bradley for reasons that have already been stated well. No need to repeat them. I am excited about all the things we can do with structured data, and I’ve been thinking about many creative ways to take advantage of it. I like Henry’s idea of exposing your teaser content, but, as Aaron suggested, if the user doesn’t need any more than that, they really shouldn’t be going to the site to click around anyway.

  • http://twitter.com/daniel_l_mills Daniel Mills

    This is an excellent discussion of the many points and counterpoints on the value of semantics on the web.

    In general I don’t support the assertions that microtagging bloats pages, makes content easier to steal, results in lost clicks, and, most of all, sets us back 25 years. I share the same reasoning as Aaron, Alan, and others on why.

    I do have two additional thoughts:

    1) Could it be that one reason Google doesn’t try to figure everything out about an unstructured page and then display it in a contextual SERP is that this represents a less defensible act of unauthorized republishing than using structured content for the same purpose, because by adding the markup the creator is essentially inviting Google to do so?

    2) As far as the “dirty little secret” of standards committees goes, I couldn’t agree more with the author that corporate members of said committees all come with their hidden agendas. However, this is far from a secret, and you don’t have to have participated in a standards organization to know it. Anyone who’s worked on the web has seen it in action with HTML. Microsoft, Google, Apple and others handcuff standardization and adoption by putting the corporate agenda before the community interest.

  • http://twitter.com/MaryKayLofurno Mary Kay Lofurno

    Not sure what you mean, John. When I sat with Ted during the SMX East presentation on this, he had some strong reactions, so all I was doing was remarking on the fact that he was able to get them articulated in an article.

  • https://plus.google.com/115194199565322841506/about John Britsios

    Mary, I need to clarify here that I have been involved in the Semantic Web initiative since 2000 (5 years before I even got involved in SEO). To back that up: our company converted our web site markup to XHTML+RDFa in October 2008, before Google had even mentioned a word about rich snippets, etc., and I had been arguing in forums with different SEOs since 2006 about the future and the need for the implementation of these technologies. I would have added my own comments here, but I saw no reason to, since Aaron Bradley above explained what I would have said.

    Read his comments and we can take the discussion from there if necessary.

  • http://twitter.com/MaryKayLofurno Mary Kay Lofurno

    Again, John – I think you are missing the inflection of my remark. I read what Aaron wrote. I see this as something where we will have to wait and see. I can say that what is missing from this whole equation is a real understanding of what it takes to get these standards implemented.

    I have busted my tail for two-plus years battling for resources internally to get rich snippets up on one of our ecommerce sites. It’s an uphill battle. I wish these people would keep this in mind every time they release some new standard or way of doing things. IT departments are choking on work, and just because someone convenes it on a standards board does not mean that a CIO or VP of IT is going to be impressed enough to allot the extra resource time.