Bing’s Stefan Weitz: Where Is Search Going?

In my last column, I had the chance to chat with Bing Director Stefan Weitz about how Microsoft is approaching search as it sits today. But the question that led to the interview in the first place was “Where does search go from here?” Microsoft’s Bing team certainly has its own ideas of where search might be going, and that’s what I’ll be covering in this continuation of my conversation with Stefan. Ultimately, I’m looking at how search can become more useful for users.

Right now, there are two huge challenges boxing in search as a tool that’s truly useful in our day-to-day lives. The first is an input challenge. Language is notoriously ambiguous. In my last post, Stefan and I talked a little bit about the challenge of semantics and search. When I say “Jaguar,” what do I mean? Is it an animal, an NFL team, a car, an operating system, or some other obscure meaning that has crept into usage somewhere?

The language challenge

When people talk to people, we have the advantage of context to help us understand meaning. If I use the term “jaguar” in a conversation about which vehicle I’m buying, you can quickly determine that I mean the car. However, if I use the term when I’m talking about computing, you can guess that I’m talking about an operating system. But what if I just type “jaguar” into a search box? How does Google or Bing know what I’m talking about?

Right now, all they can do is use relevancy as a proxy signal for my intent. So Bing guesses that, without the advantage of context, I am probably talking about either the Jacksonville Jaguars, the cat, or the car, and hedges its bets by presenting a mix of results around those three possible meanings. Google has a similar disambiguation problem. Given the input challenge, both result sets are too diverse to be truly useful. Both require some additional tweaking from me before they can understand what I’m looking for. So, the first hurdle for search is to try to understand language:

Weitz: Where we need to get to, and where we’re working to get to, is doing a better job of having the crawler and the parser really understand the language. When I say Crest White Strips, think of what today’s indexes are going to do. They’re going to find those words in a PageRank and return those results. The system has to know that Crest is a brand and white strips are a way of whitening teeth. Teeth whitening is done by a dentist. And dentists often don’t like using off-the-shelf products.

You have all the things that I know about Crest White Strips, just from a casual human understanding standpoint. The engines today don’t know that. So much of the intent calculation we have to do to deliver a good set of results is bound up in this challenge of imbuing the engines and the index and the parsers with a more human characteristic of understanding what they’re reading. That will get us to intent much faster than a lot of the mathematical tricks.
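To see how far even a crude version of this can go, here’s a toy sketch in Python (my illustration, not Bing’s actual pipeline). It picks a sense for “jaguar” when context terms are available, and falls back to the hedged, interleaved mix of results I described above when they aren’t:

SENSES = {
    "jaguar": {
        "car": {"vehicle", "buying", "drive", "sedan"},
        "animal": {"cat", "jungle", "predator", "zoo"},
        "nfl team": {"jacksonville", "football", "nfl"},
        "operating system": {"mac", "os", "apple", "computing"},
    }
}

def disambiguate(query, context_terms):
    """Pick the sense whose known concepts best overlap the context."""
    senses = SENSES.get(query, {})
    scored = {s: len(concepts & context_terms) for s, concepts in senses.items()}
    best = max(scored, key=scored.get, default=None)
    return best if best and scored[best] > 0 else None

def hedged_results(results_by_sense, k=6):
    """No context? Round-robin an interleaved mix across the likely senses."""
    pools = [list(r) for r in results_by_sense.values()]
    out = []
    while pools and len(out) < k:
        for pool in list(pools):
            if pool:
                out.append(pool.pop(0))
            else:
                pools.remove(pool)
    return out[:k]

print(disambiguate("jaguar", {"vehicle", "buying"}))  # -> "car"
print(hedged_results({"car": ["jaguar.com", "jaguarusa.com"],
                      "animal": ["wikipedia: jaguar"],
                      "nfl team": ["jaguars.com"]}, k=4))

With context (“vehicle,” “buying”) the sense snaps to the car; with none, the sketch does exactly what the engines do today and spreads its bets across meanings.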

The triangle of disambiguation signals

What Stefan is talking about is building machine intelligence that starts to put context around language. It’s one of those areas where fuzzy human logic dramatically outperforms sheer computing power. In fact, the Turing test for artificial intelligence asks a machine to conduct a conversation with a human without the human realizing they’re talking to a machine. This is a tremendously high bar to jump over. It requires machines to think in the same way we do.

In the case of search, it requires the machine to connect a semantic label with the known concepts that may surround that label, as in Stefan’s example of White Strips. We do this instantaneously and effortlessly (although miscommunication is no rare occurrence, even between humans), but thus far machines haven’t been able to duplicate the feat. So, what are the signals that Microsoft might use to pull this off? Knowing more about the person who’s doing the talking is one potential signal:

Weitz: Personalization in search has been talked about for years and years and years. You can actually get fairly accurate in your personalization with a shockingly small number of variables: what time of day is it, where are you? Those two alone give us a shockingly high ability to personalize results in a way that increases clickthrough. Then you get the more complex stuff: what have you done before? Even that, we know, is fraught with problems. My Amazon account that my wife uses has to be pruned every so often to reflect my interests a little more accurately.
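Those two variables really are cheap to act on. Here’s a minimal sketch (mine, not Bing’s algorithm) of how time of day and location alone can re-rank the same set of results:

from dataclasses import dataclass
from math import hypot

@dataclass
class Result:
    title: str
    lat: float
    lon: float
    open_hours: range  # hours of the day when the result is relevant

def personalize(results, user_lat, user_lon, hour):
    """Re-rank: open-now results and nearer results float to the top."""
    def score(r):
        distance = hypot(r.lat - user_lat, r.lon - user_lon)
        open_now = 1.0 if hour in r.open_hours else 0.0
        return open_now - distance
    return sorted(results, key=score, reverse=True)

results = [
    Result("24h diner", 47.62, -122.35, range(0, 24)),
    Result("Coffee shop", 47.61, -122.33, range(6, 15)),
]
# At 8 p.m., the diner wins even though the coffee shop is closer.
print([r.title for r in personalize(results, 47.61, -122.33, hour=20)])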

So, beyond personalization, we can also tap into the social graph, conveniently captured through online activity:

Weitz: The other intent model, and we’re not even to the point where we can say this with any authority, is your first order social network, your first circle of friends. What are their interests and how does that transitively accrue to your profile? The problem, of course, is that I have hundreds of friends on Facebook and I just don’t have that many friends. And even my friends that are close I don’t necessarily agree with nor do I consider them to be experts in a particular area like integrated search design that I have some interest in. I think people talk a lot about using your social graph to find your intent and I think that’s fraught with issues because the social graph is not a clean graph.

Finally, we can take our cues on a temporal basis, from what’s happening right now in the world. If Jaguar just released a new model that is setting the Twittersphere abuzz and there is a sudden spike in search traffic, we can connect those two dots without a huge leap of faith:

Weitz: What it does do, especially with Twitter or a lot of these real time services, is provide additional signals to an engine. If you think about Twitter, for example, and how fast things rise and fall, the traditional model of ranking simply doesn’t work. It’s not fast enough, and it’s not logical to assume you have inbound links to tweets. It just doesn’t make any sense. We can use all these signals of UGC as part of an algorithm that takes into account the user’s intent and the most logical response to that intent, and that response is determined by a number of signals: what’s happening in real time, what does your social circle think of these things, who is an authority on this particular topic, and what does that person (or entity) either read or write? There are a number of different factors that we look at.
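The core trick for real-time content is substituting freshness and social and authority signals where inbound links don’t exist yet. A toy blend (the weights are mine, not Bing’s) might look like this:

import math, time

def realtime_score(tweet, now=None, half_life_s=3600):
    """Score a tweet with a freshness half-life instead of link authority."""
    now = now if now is not None else time.time()
    age = now - tweet["posted_at"]
    freshness = math.exp(-age * math.log(2) / half_life_s)  # halves every hour
    return (0.5 * freshness
            + 0.3 * tweet["author_authority"]   # topical expertise, audience
            + 0.2 * tweet["social_proximity"])  # closeness to the searcher

tweet = {"posted_at": time.time() - 1800,  # half an hour old
         "author_authority": 0.8,
         "social_proximity": 0.1}
print(round(realtime_score(tweet), 3))  # ~0.614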

But, even with this triangulated set of signals, it’s still extraordinarily difficult to determine intent:

Weitz: Right now we’re at the very early stages of intent. We can look at location and time. We can look at collaborative filtering models&mash;if you do these three queries, we can do the regression to figure out that the 2 million other people that did 3 similar queries ultimately ended up at this particular result. That’s a very mathematical way of understanding intent because we’re just looking at similar queries and similar destinations.

Ironically, search is the demise of destination

So, we have the input problem to address. But even that assumes that our current paradigm of search remains fairly constant, which is a hopelessly outmoded assumption. The likelihood of us continuing to use the ubiquitous query box is already rapidly eroding. Even now, we’re searching in ways we never did before. This, too, has to be factored into where search might be going. Are the days of search as a destination over?

Weitz: I generally search from the URL bar, or I search from the browser search, or I search from a gadget, or I do things that I don’t even think about as search. I go to Yelp and I do a “search,” or I go on Facebook and I do a “search,” or I’m on my phone and click on the new Bing app and I get a map and I ask what’s nearby and I get a bunch of results. That’s a very implicit search. I didn’t ask it to do a search, but the client knew the best way to find information about what’s near my physical location was to conduct a meta-search, passing it a bunch of variables (latitude, longitude, category, etc.) on my behalf, without me having to actually do it, and returning me back a bunch of information.

This notion of “doing a search” is rapidly becoming outmoded. My daughter is five and, as you can imagine, my house has 12 machines running at any given time. There’s an iPod touch that’s on the web, and she uses it to query things. Her notion of “going to” an engine and searching is pretty comical. She just assumes that everything is always knowable, instantaneously, at her fingertips, through some mechanism.
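That implicit search is easy to picture as code. Here’s a hypothetical client-side sketch (the endpoint and parameter names are mine, not a real Bing API) of the meta-search the phone assembles on the user’s behalf:

from urllib.parse import urlencode

def build_nearby_request(latitude, longitude, category="restaurants", radius_km=2):
    """Assemble a 'what's nearby' request from device context alone."""
    params = {"lat": latitude, "lon": longitude,
              "category": category, "radius_km": radius_km}
    return "https://search.example.com/nearby?" + urlencode(params)

# The phone supplies every variable; the user never types a query.
print(build_nearby_request(47.6097, -122.3331))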

Stefan touches on a concept here that hints at what we have to start looking at when we think of search—the idea that search sits “under” our online activities, that it sits “under” an app or a platform. In my next Just Behave, I’ll be exploring this concept further with John Battelle. But for now, this brings us to the other huge challenge that faces search: how do you present results in a truly useful way? Information is seldom an “end state.” Information is a means to an end. We want to do something with the information. And from that perspective, search still has a long way to go.

Weitz: It’s not just about throwing a bunch of docs that may have a high PageRank back at you, but this is about the model the web is moving towards: it’s much more dynamic, much more social, much more “real time”, much less static. One of the things we do, where it makes sense, is to preprocess the billions of data points that we have from all different data sources and when it makes sense, send the query a response that makes sense. It doesn’t have to be just another link. For example, it could be Farecast, where we see the prices going up and down for different airlines. It could be opinion ranking, where we actually look at all the reviews of restaurants across the web and summarize those in a smart way. We can throw back more than just an opportunity for more exploration, to be nice about it, and we can actually contribute to the user some knowledge which would have taken them hours, or hundreds of hours or an infinite amount of time to tackle it on their own.

Mobile…finally?

Perhaps the acid test of “usefulness” in search can be found on our mobile devices. Here, when we search for information, it’s almost always because we’re ready to do something with it. So, appropriately, my last question for Stefan had to do with the future of mobile search usefulness.

Weitz: I’ve been in this industry for over a decade know and it’s always been that mobile will be “next year”. Next year will be the year for mobile..from 1999 on, next year will be the year for mobile. We are finally at the point where I can say with a straight face either we’re already there or it’s coming very quickly. We are seeing tremendous growth in the number of queries being issued from mobile devices.

It’s amazing what you can do with one of these devices from a search standpoint when you think of all the information we understand. We understand your latitude and longitude, so we know your location. We understand better your previous queries because unlike PC’s, the mobile device is fairly personal. My wife doesn’t generally get on my phone and start querying around. She has her own phone for that. My daughter does too. You have a better opportunity to tailor a better result because it’s less likely to be corrupted by another’s usage. The video, the photos, all the information that we can gather if the user gives it to us, this mass of data that we can use to better understand who that person is and what they’re actually looking for. We can push things to them without them even having to ask, which I think is the thing that excites me the most.

The things we’re seeing now with augmented reality, where we’re overlaying a video screen with dozens of implicit searches that are happening behind the scenes. It’s like you’re wearing the magic goggles from comic books, like you’re in the Matrix. When I hold up a phone and I just pan across a landscape, and over that landscape, as I’m panning, it’s pulling in houses that are for sale, it’s pulling in reviews from Yelp on local businesses, it’s pulling in information on the mountain range that I see off in the distance. These are the things of science fiction and they’re all search driven. That’s why this notion of a text box that you punch a query into on a phone is very “four years ago.”

As I mentioned, in the next column I’ll be sharing some of the things I had a chance to chat with John Battelle about when we explored the potential future of search.




About the author

Gord Hotchkiss
Contributor
