Google Scientist: We Want To Be Able To Respond To A Query Like “Book Me A Trip To Washington, DC”
Will the day come when Google can successfully respond when you tell it something like, “Book me a trip to Washington, DC” — walking you through all the queries and answers needed to complete such a complex request?
Will the day come when we’re using devices that only offer voice-based search?
Will the day come when visual search happens continuously via the camera on Google Glass?
Those are some of the search-based challenges that Google is working on, and they’re discussed in a recent interview that offers a peek at what Google thinks search will be like in the next decade. Or maybe “plans for search to be like” would be a more accurate phrase.
Google Research Fellow Jeff Dean discusses these topics and more in a recent interview with the Puget Sound Business Journal. He’s been with the company since 1999 and works in the Systems Infrastructure Group, where they do things like apply machine learning to search (and pretty much all of Google’s other products).
It’s pretty high-level stuff; you won’t find anything about keyword research or SEO or even the basics like “10 blue links” on a search results page. But you will find, for example, a peek at how Google is using machine learning to build out the Knowledge Graph.
We have the start of being able to do a kind of mixing of supervised and unsupervised learning, and if we can get that working well, that will be pretty important. In almost all cases you don’t have as much labeled data as you’d really like. And being able to take advantage of the unlabeled data would probably improve our performance by an order of magnitude on the metrics we care about. You’re always going to have 100x, 1000x as much unlabeled data as labeled data, so being able to use that is going to be really important.
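The idea Dean describes — using plentiful unlabeled data alongside a small labeled set — is the core of semi-supervised learning. One common form is self-training, where a model pseudo-labels the unlabeled examples it is most confident about and retrains on them. Below is a toy, pure-Python sketch of that loop (the nearest-centroid classifier, the margin-based confidence measure, and all the data are illustrative assumptions, not anything from Google's systems):

```python
# Toy self-training sketch: a 1-D nearest-centroid classifier
# pseudo-labels its most confident unlabeled points and retrains,
# showing how unlabeled data can refine a small labeled set.

def centroids(points, labels):
    # Mean of each class's points.
    sums, counts = {}, {}
    for x, y in zip(points, labels):
        sums[y] = sums.get(y, 0.0) + x
        counts[y] = counts.get(y, 0) + 1
    return {y: sums[y] / counts[y] for y in sums}

def self_train(labeled, unlabeled, margin=0.5, rounds=5):
    points = [x for x, _ in labeled]
    labels = [y for _, y in labeled]
    pool = list(unlabeled)
    for _ in range(rounds):
        cents = centroids(points, labels)
        confident, rest = [], []
        for x in pool:
            # "Confidence" = gap between the two nearest class centroids.
            d = sorted((abs(x - c), y) for y, c in cents.items())
            if len(d) > 1 and d[1][0] - d[0][0] >= margin:
                confident.append((x, d[0][1]))  # pseudo-label it
            else:
                rest.append(x)
        if not confident:
            break
        points += [x for x, _ in confident]
        labels += [y for _, y in confident]
        pool = rest
    return centroids(points, labels)

# Two labeled examples per class, plus unlabeled points that pull
# each centroid toward the true cluster center.
labeled = [(0.0, "a"), (1.0, "a"), (9.0, "b"), (10.0, "b")]
unlabeled = [0.5, 1.5, 8.5, 9.5]
model = self_train(labeled, unlabeled)
```

With labeled data alone the centroids would sit at 0.5 and 9.5; after the unlabeled points are absorbed they move to 0.75 and 9.25 — a small illustration of the kind of gain Dean suggests could be an order of magnitude at real scale.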
Dean goes on to say that his team is working on “big problems” like being able to use voice and predictive search to answer queries like “Please book me a trip to Washington, DC.”
That’s a very high-level set of instructions. And if you’re a human, you’d ask me a bunch of follow-up questions, “What hotel do you want to stay at?” “Do you mind a layover?” – that sort of thing. I don’t think we have a good idea of how to break it down into a set of follow-up questions to make a manageable process for a computer to solve that problem. The search team often talks about this as the “conversational search problem.”
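The follow-up-question flow Dean sketches is essentially what dialog-system literature calls slot-filling: the request defines a set of required slots, and the system asks about whichever ones the user hasn't supplied yet. Here is a minimal illustrative sketch (the slot names, questions, and parsing are all hypothetical — this is the shape of the problem, not anyone's actual implementation):

```python
# Hypothetical slot-filling sketch of the "conversational search problem":
# a high-level request like "book me a trip" is broken into the
# follow-up questions a human assistant would ask.

TRIP_SLOTS = ["destination", "dates", "hotel", "layover_ok"]

QUESTIONS = {
    "destination": "Where are you going?",
    "dates": "What dates are you traveling?",
    "hotel": "What hotel do you want to stay at?",
    "layover_ok": "Do you mind a layover?",
}

def next_question(filled):
    # Ask about the first required slot the user hasn't answered.
    for slot in TRIP_SLOTS:
        if slot not in filled:
            return QUESTIONS[slot]
    return None  # all slots filled: the request is actionable

# "Book me a trip to Washington, DC" fills only the destination slot.
filled = {"destination": "Washington, DC"}
q = next_question(filled)
```

The hard part Dean points to isn't this loop — it's deciding, for an arbitrary request, what the slots and questions should be in the first place.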
Google launched conversational search on its Chrome browser earlier this year, and the product is smart enough to follow a spoken sequence of searches like “how old is Barack Obama” followed by “how tall is he.” It recognizes that the “he” in the second spoken query refers to Obama.
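Mechanically, that kind of follow-up handling requires carrying context between queries and substituting it for pronouns. A deliberately naive sketch of the idea (the heuristics here — treating capitalized tokens as the subject, simple word substitution — are illustrative assumptions; Google's actual coreference resolution is far more sophisticated):

```python
# Toy sketch of cross-query pronoun resolution: the previous query's
# subject is remembered and substituted for a pronoun in the follow-up.

PRONOUNS = {"he", "she", "it", "they"}

def track(query, context):
    # Naive heuristic: remember capitalized tokens as the subject.
    names = [w for w in query.split() if w[0].isupper()]
    if names:
        context["subject"] = " ".join(names)

def resolve(query, context):
    # Replace pronouns with the remembered subject, if any.
    words = []
    for w in query.split():
        if w.lower() in PRONOUNS and context.get("subject"):
            words.append(context["subject"])
        else:
            words.append(w)
    return " ".join(words)

ctx = {}
track("how old is Barack Obama", ctx)
followup = resolve("how tall is he", ctx)  # "he" -> "Barack Obama"
```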
Dean also mentions applying conversational search alongside Google Now, its predictive search tool.
Like, if it’s trying to give me restaurant reviews, there’s probably 50 possible restaurants to choose. And they might all be pretty good suggestions because it knows what sorts of food I like, but it’s still a list of 50 restaurants. Again, this would be a place where a dialog would be useful. “Are you in the mood for Italian?” Something like that.
He also talks about devices a few years from now where voice is the only type of search offered, and about Google Glass offering augmented reality-style search that automatically displays information about the buildings and signs its camera sees.
If you want to spend some time thinking about where Google is taking search, it’s an interview that’s well worth reading.
Opinions expressed in this article are those of the guest author and not necessarily Search Engine Land.