Google Mobile Gets “What’s Nearby,” Voice Search Expands & “Goggles” Search By Snapping Pictures

The Google Search Evolution event today featured a fair amount of discussion of the impact of mobile on the future direction of search and the user experience. There was a flurry of announcements about mobile products and feature upgrades, summarized on the Google Mobile Blog. Danny live-blogged the event.

Vic Gundotra, Vice President of Engineering, discussed how mobile and the intrinsic features of increasingly powerful smartphones (processing power, location awareness, voice and the camera) are going to change search. In fact, Gundotra said Google is seeking to integrate location awareness more deeply into everything it’s doing, online and on the handset.

After the general remarks that “we may be at the beginning of the beginning” of a new era of computing and search, he turned to the product announcements:

Voice Search with More Languages

Gundotra explained that Google Voice Search has improved dramatically since launch, and that the company recently added support for Mandarin and, as of today, Japanese. There were several impressive demos of voice search in both languages. Gundotra added that Google would support more and more languages over time. Voice is a key part of Google’s strategy to make mobile search easier and keep search front and center in the mobile user experience.

Another impressive feature of Voice Search was the integration of translation: a user speaks a question or query in one language and it’s translated in real time into another. (This was an experimental feature.)
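
Google didn’t describe the plumbing behind the demo, but conceptually it chains two stages: speech recognition in the source language, then machine translation of the transcript. A minimal Python sketch of that shape, where recognize_speech and translate_text are hypothetical placeholders rather than real Google APIs:

    # Conceptual sketch only: recognize_speech() and translate_text() are
    # hypothetical placeholders; Google did not disclose the services
    # behind the demo.

    def recognize_speech(audio_bytes, language):
        """Transcribe spoken audio into text in the given language."""
        raise NotImplementedError  # stand-in for a speech recognition service

    def translate_text(text, source, target):
        """Translate text from one language to another."""
        raise NotImplementedError  # stand-in for a translation service

    def spoken_query_translation(audio_bytes, source="en", target="ja"):
        # Stage 1: speech-to-text in the speaker's language
        transcript = recognize_speech(audio_bytes, language=source)
        # Stage 2: machine translation of the transcript
        return translate_text(transcript, source=source, target=target)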

What’s Nearby? (and Local Inventory)

What’s Nearby, a relatively common feature in a range of mobile apps today, will appear both on the Google mobile homepage and in Google Maps on Android devices. A single touch-and-hold on a point on the map launches a menu through which users can find businesses and attractions immediately around that spot. This is a local “discovery” tool, as opposed to search strictly speaking.

Gundotra explains it succinctly in the blog post:

To use the feature just long press anywhere on the map, and we’ll return a list of the 10 closest places, including restaurants, shops and other points of interest. It’s a simple answer to a simple question, finally. (And if you visit google.com from your iPhone or Android device in a few weeks, clicking “Near me now” will deliver the same experience …
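
Google hasn’t said how the lookup works under the hood, but the core operation is a nearest-neighbor query: given the pressed coordinate, sort candidate places by distance and keep the ten closest. A minimal Python sketch using the haversine formula; the place list and its fields are illustrative, not Google’s actual data model:

    import math

    def haversine_km(lat1, lon1, lat2, lon2):
        """Great-circle distance between two points, in kilometers."""
        r = 6371.0  # mean Earth radius in km
        phi1, phi2 = math.radians(lat1), math.radians(lat2)
        dphi = math.radians(lat2 - lat1)
        dlam = math.radians(lon2 - lon1)
        a = (math.sin(dphi / 2) ** 2
             + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2)
        return 2 * r * math.asin(math.sqrt(a))

    def whats_nearby(lat, lon, places, n=10):
        """Return the n places closest to the pressed map coordinate."""
        return sorted(
            places,
            key=lambda p: haversine_km(lat, lon, p["lat"], p["lon"]),
        )[:n]

    # Toy usage with an illustrative place list
    places = [
        {"name": "Cafe", "lat": 37.7793, "lon": -122.4192},
        {"name": "Bookshop", "lat": 37.7810, "lon": -122.4110},
    ]
    print(whats_nearby(37.7794, -122.4184, places))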

Gundotra also threw in the zinger that in the new year Google would be adding real-time product inventory data to product search. This feature is available on mobile devices in limited form from selected providers today (Krillion, NearbyNow, Milo, TheFind) but not from the big search engines.

Google Visual Search (Goggles)

Google “Goggles” for Android devices was the third part of the mobile announcements portion of the program and definitely the most “sexy.” The capability uses the camera — as a variation on “augmented reality” — to search on objects/images. Take a picture and get information back. Others can do this (e.g., Amazon/SnapTell, Nokia) but not as “horizontally” as Google intends.

According to Gundotra it works on selected categories of things: landmarks, works of art, and products. The expectation is that this will expand to more and more things and items in the real world over time. In one sense this is like Voice Search, where the keyboard is avoided in favor of an easier input mechanism — in this case the camera.

Gundotra explained the process in his blog post:

  • We first send the user’s image to Google’s datacenters
  • We then create signatures of objects in the image using computer vision algorithms
  • We then compare signatures against all other known items in our image recognition databases
  • We then figure out how many matches exist
  • We then return one or more search results, based on available metadata and ranking signals
  • We do all of this in just a few seconds
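
As a rough illustration of the signature-and-match idea (not Google’s actual algorithm), a perceptual hash can stand in for an image signature: visually similar images yield similar hashes, so matching reduces to a Hamming-distance comparison. A sketch using the third-party Pillow and ImageHash Python libraries, with illustrative file names:

    from PIL import Image  # pip install Pillow
    import imagehash       # pip install ImageHash

    # Toy "image recognition database": signature -> label. A perceptual
    # hash is a crude stand-in for the computer-vision signatures
    # Gundotra describes; production systems use far richer features.
    DATABASE = {
        imagehash.phash(Image.open("eiffel_tower.jpg")): "Eiffel Tower",
        imagehash.phash(Image.open("starry_night.jpg")): "Starry Night",
    }

    def visual_search(query_path, max_distance=10):
        """Return labels whose stored signatures are close to the query's."""
        query_sig = imagehash.phash(Image.open(query_path))
        matches = []
        for sig, label in DATABASE.items():
            # Subtracting two ImageHash objects gives their Hamming distance
            if query_sig - sig <= max_distance:
                matches.append(label)
        return matches

    print(visual_search("snapshot_from_phone.jpg"))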

All of these developments start to take advantage of the handset’s “native” capabilities and move beyond the “flat” PC-search user experience based around text queries in a box.

In a related development today that wasn’t discussed at the Evolution event, Google launched local-mobile search in the real world, based on QR codes printed on window stickers for small businesses. It’s yet another form of “search,” and another sign that Google’s mobile strategy is about a range of experiences rather than a single, unified entry point.
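
For context, a QR code simply encodes a string, typically a URL, in a camera-scannable image; scanning a window sticker resolves to the business’s listing. A quick sketch with the third-party qrcode Python library (the URL is a hypothetical placeholder, not the actual sticker destination):

    import qrcode  # pip install qrcode[pil]

    # Encode a (hypothetical) business listing URL into a QR code image.
    url = "http://www.google.com/places/example-business"
    qrcode.make(url).save("window_sticker.png")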

At the risk of oversimplifying: if PC search is a “monologue,” then Google’s mobile strategy is an ensemble performance.


Opinions expressed in this article are those of the guest author and not necessarily Search Engine Land. Staff authors are listed here.


About the author

Greg Sterling
Contributor
Greg Sterling is a Contributing Editor to Search Engine Land, a member of the programming team for SMX events and the VP, Market Insights at Uberall.
