How Human Factors May Affect Information Indexing And Retrieval


If you were to ask a search engine marketer what the main goal of their work is, they may tell you, “We want our clients’ web sites to come up first in search results.”

To make their days more challenging, they may monitor popular keywords, create landing pages which are tightly focused on one topic, and study search patterns to see where they can make changes to their internet marketing strategies.

When you ask a user experience web designer what their prime objective might be, they may explain they want their web pages to convert inbound traffic (likely from search engines) to the completion of a goal task such as purchasing, signing up for something, or getting sales leads.

Ask a person who is not a marketer or web designer what their main wish for information seeking or online ordering is and they could tell you they want their minds read so that their search results offer instant, perfect choices.

We Like Choices, But Accurate Matches Are Even Better

When we’re information seeking via the Internet, we are seeking knowledge. There’s an unspoken agreement between a search engine and a searcher that says, “I want the best information on my topic that you can find.” Search engine algorithms try their best to deliver exactly that, but we’ve all experienced pages of results that miss the boat.

I ran a search in Google for a local book author I did some work for in the early 1990s. She was in her 70s back then. I wanted to see if she had published a book I knew she had written. Google served up page after page of her most popular book. Amazon, of course, was in the lead.

I didn’t want to know about that book. I wanted to know about her. I didn’t think that would be too hard, since she and her brother are famous for art and writing. I couldn’t locate anything useful. I ran a search with her name and “obituary”, just to see if she had passed on, but all I got were obituaries of other people she had worked or co-written with. There was no web site for her and no write-ups by any news source.

It dawned on me that there is more information online about me than there is about this local but well-known book author. With no web site, no online marketing efforts on her behalf and no interaction with the Internet, it’s no easy task to get “knowledge” or expert information.

I wondered: does this mean that search results are skewed by what a search engine can find, and that if there is nothing to find, there is no “knowledge” to share? It doesn’t mean the information doesn’t exist. It’s just not on the web.

When we approach a search engine for information, what we get back are hundreds of pages with content that may come close to what we want because somebody put it there to be found.

The more difficult your query is, the more likely the sites you get back are abstract, mismatched or completely unrelated.

All Queries Are Not Created Equal

We each have different theories on what knowledge is and what kind of information is important, and relevant, to us.

Consider how many of us read books. We highlight words, sentences or entire paragraphs, perhaps to revisit that page later, or because we’re doing research and have made notes next to the highlighted content.

We choose what we want to retain. We choose what’s important to us. When two people are given the same book and offered a highlighter, they will make different choices about what they relate to and what sparks their interest. They might even highlight different “keywords” based on personal preferences.

Imagine if you could manually index pages on the Internet rather than having a search engine program do it for you. Wouldn’t that be a more intimate way of searching than getting back results that were marketed to be ranked and found?

When you read and highlight text in a book, you’re not highlighting keywords because there are 20 of the same one on a page. But this is what search engines do behind the scenes. They count the number of times a word appears on a page. People “count” the overall content as important or not.
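That behind-the-scenes counting can be reduced to a few lines. A minimal sketch (the function name and sample sentence are my own illustration, not how any real engine is implemented):

```python
from collections import Counter
import re

def term_frequency(page_text, keyword):
    """Count how often a keyword appears on a page versus total words --
    the crude frequency signal described above, nothing more."""
    words = re.findall(r"[a-z']+", page_text.lower())
    counts = Counter(words)
    return counts[keyword.lower()], len(words)

occurrences, total = term_frequency(
    "Ivy care tips: water ivy weekly. English ivy likes shade.", "ivy")
# 3 occurrences out of 10 words
```

A person reading that sentence judges the page as a whole; the program only sees that “ivy” appears three times.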

Other choices are made for us as well. For example, there is tagging and tag clouds. When a writer posts new content to a blog, they can assign “tags”. Typically tags are commonly used words that tie into the post’s topic. Knowing there may be various ways to search for a topic, multiple tags can be used.

It would be interesting to ask readers if they would have assigned different tags. This feedback would help the writer to understand how their readers are looking for information. The same type of sorting is done with categories in blogs.
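The sorting that tags and categories perform is essentially an index from label to posts. A small sketch, using invented post slugs and tags purely for illustration:

```python
from collections import defaultdict

# Hypothetical blog posts with the tags their writers assigned
posts = {
    "why-usability-matters": ["usability", "design"],
    "long-tail-keywords":    ["seo", "keywords"],
    "rating-link-labels":    ["usability", "navigation"],
}

# Invert the writer's choices into a tag -> posts index,
# which is what a blog's tag page or tag cloud is built from
tag_index = defaultdict(list)
for post, tags in posts.items():
    for tag in tags:
        tag_index[tag].append(post)

tag_index["usability"]  # the two posts tagged "usability"
```

Readers who would have chosen different tags are, in effect, telling the writer which keys are missing from this index.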

Information architecture relies on taxonomy to help determine things like navigation labels. When offering directions to a specific section of a web site, it helps to be descriptive, and yet the same words are used over and over again, rendering them useless for conversions. My favorite one is “Solutions”. That word means so many things to people, not to mention it offers no clue what’s hiding behind the link.

Remember, we’re searchers and we’re expecting accurate, sensible, immediate knowledge. Our choices are limited by the practice of limiting web site navigation terms to one or two words. You rarely see global navigation with labels like “Would You Like to Contact Us Today”, “Browse or Purchase Our Products” or “Get to Know Our Company and Staff”.

Instead we’re left with uninspiring “About”, “Contact” and “Products”. Users are never asked to “rate our link labels”, but they are sometimes asked to rate a book or product.

Theories Of Knowledge And The Relationship To Indexing

Could human indexing, rather than automated indexing, offer better search results? Presently, we accept that brand names will dominate search results. The more popular the brand, the more likely it is to be on top.

Above them are likely purchased spots. The advertised results are an expensive way of putting a company name or domain in front of searchers, but it doesn’t automatically mean more clicks because it’s understood those pages are not there because of user popularity.

This is one reason why I support social conversation marketing and social media marketing. We like peer to peer recommendations and are more likely to trust their opinions.

Human indexing could be more creative. And more precise. Just take a search for “God”. Christians are more likely to want information about God than they would content about the Native American “Great Spirit”, even though they’re the same type of entity. To make it more interesting, Native American tradition doesn’t consider God to be a noun. To them, the Creator is a verb.

Obtaining certain types of knowledge, then, requires previous knowledge of tradition, culture and language. A search engine has no idea beforehand what a searcher’s beliefs and leanings are. This has given a chance to vertical search, and to web sites like Beliefnet that understand their users and how they want to find information.

Researchers have been exploring ways of indexing that go beyond what marketers and designers think about. One of their puzzles is finding ways to index specific information that fits with different types of user behavior and mental models.

Do social clues provide help in human or automated indexing? How we interpret information, and offer feedback on it, may play a part in indexing, knowledge organization and information retrieval. One study found that human indexing was dependent on how well the indexers knew their topic, from the vocabulary to the criteria used to decide what is relevant and important.

Automatic indexing, which is programmed, relies on which technologies the programmers have learned about and their knowledge of the characteristics of the domain and the relevance of documents to potential queries.
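One textbook example of such a programmed weighting is TF-IDF, which scores a term higher when it is frequent in a document but rare across the collection. A minimal sketch (the three sample documents are invented for illustration; real engines use far more signals):

```python
import math
from collections import Counter

# A toy document collection standing in for an index
docs = {
    "d1": "ivy care and ivy watering",
    "d2": "hedera helix is the latin name for english ivy",
    "d3": "stock market news and market analysis",
}

def tf_idf(term, doc_id):
    """Term frequency in one document, scaled by how rare the
    term is across the whole collection (classic TF-IDF weighting)."""
    words = docs[doc_id].lower().split()
    tf = Counter(words)[term] / len(words)
    df = sum(1 for text in docs.values() if term in text.lower().split())
    idf = math.log(len(docs) / df) if df else 0.0
    return tf * idf
```

The programmers’ choices are baked in: here, relevance is nothing more than frequency and rarity, with no notion of the searcher’s culture, beliefs or intent.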

Another interesting research point is the cognitive view of indexing. This suggests that people index and search in a specific way because they have a certain cognitive or mental structure. The criteria for similarity may be hardwired into our brain. In other words, we index information based on how we’re taught to interpret it.

Keywords That Convert

The success of “long tail” search results can be attributed to their being a more accurate and precise way to index information. Smart marketers look past lists of common or related keywords to uncover how and where people are looking for knowledge. One way people search is by asking questions, so content optimized to answer commonly asked questions performs well in search results pages.

A horticulturist may seek information on a certain plant by typing its Latin name, whereas a general consumer may call it “ivy”. Who a web site is designed for determines which content to optimize.

The massive failure of many social networking marketing strategies can be traced to the practice of throwing content out there without thought or planning. Trust is one of our criteria when looking for knowledge. Searchers learn quickly that their trust circle is tiny.

In 2011, rank and conversion success will be the direct result of targeting groups of people and topics researched well in advance. You will need to know how they search and who they trust when looking for accurate information. Tricking people into clicking through to a site brings a click. It doesn’t mean that click will convert or result in a referral. Conversions are not part of a search algorithm.

User generated feedback, critical discussions, peer to peer knowledge share, consumer and professional referrals, and social conversations that inform, excite, enlighten and create strong communities of like-minded persons will drive specific, targeted traffic.

We’ll see an enormous push in 2011 for human generated, purpose-driven content on a huge variety of devices. More than ever before, the blending of search engine optimization, usability and human factors and social networking will make the most impact in getting out information and producing robust results.

Further Research:

Hjørland, B. (2011), The importance of theories of knowledge: Indexing and information retrieval as an example. Journal of the American Society for Information Science and Technology, 62: 72–77. doi: 10.1002/asi.21451

Opinions expressed in this article are those of the guest author and not necessarily Search Engine Land. Staff authors are listed here.

About the author

Kim Krause Berg
Kim Krause Berg is the SEO/Usability Consultant for Cre8pc. Her work combines website and software application usability testing with a working knowledge of search engine optimization.
