Google Paper: Better Image Search Through VisualRank / Image Rank




Visual Rank Example


A Google Prototype for a Precision Image Search from the New York Times covers a new research paper (PDF format) from Google that talks about a way of ranking images based on analyzing "visual links" between them.

Image search at the major search engines today largely relies on looking at words that are used around images — on the pages that host them, in image file names, and in ALT text associated with them. No real image recognition is done by any of the majors. Search for "apples," and they haven't actually scanned the images themselves to "see" whether they contain pictures of apples.

The method in Google’s paper changes that. In short, a group of images
retrieved for a query using traditional search methods is then further analyzed.
Image recognition software finds which images in the group seem most similar to
each other. It then estimates "visual hyperlinks" between them to produce a
final ranking.

The last part is important. No actual hyperlinks on the web are used to rank
the images, if I understand the paper correctly, other than in the first
traditional retrieval process. Instead, the algorithm guesses at how the images
would be linked together, with those being most similar having more virtual
links to each other. As a result, the most "linked to" images are calculated to
rank first.
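To make the idea concrete, here is a minimal sketch of that ranking step. It assumes we already have a similarity score between each pair of retrieved images (the paper derives these from image-feature matching, which this sketch does not implement — the similarity matrix below is invented for illustration). The similarities are treated as weighted "visual hyperlinks" and fed through a PageRank-style iteration:

```python
def visual_rank(similarity, damping=0.85, iterations=50):
    """Rank images by iterating over a visual-similarity graph.

    similarity[i][j] is a made-up score for how alike images i and j
    look (0..1). Higher totals of incoming similarity mean a higher
    final rank, mirroring how heavily linked pages rank in PageRank.
    """
    n = len(similarity)
    # Normalize each column so each image's outgoing similarity
    # behaves like link weight that sums to 1.
    col_sums = [sum(similarity[i][j] for i in range(n)) for j in range(n)]
    scores = [1.0 / n] * n
    for _ in range(iterations):
        scores = [
            (1 - damping) / n
            + damping * sum(
                similarity[i][j] * scores[j] / col_sums[j]
                for j in range(n)
                if col_sums[j] > 0
            )
            for i in range(n)
        ]
    return scores

# Toy example: image 0 resembles both 1 and 2, while 1 and 2 barely
# resemble each other — so image 0 should rank first.
sim = [
    [0.0, 0.9, 0.8],
    [0.9, 0.0, 0.1],
    [0.8, 0.1, 0.0],
]
scores = visual_rank(sim)
print(max(range(3), key=lambda i: scores[i]))  # image 0 ranks first
```

The point of the toy matrix is the intuition from the paper: the image most similar to the rest of the result set accumulates the most "virtual links" and floats to the top.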

The image above comes from the paper and shows examples of images found in a
search for [mona lisa]. The lines illustrate how they are all estimated to link
together, with the two in the middle (as shown in the close-up below) deemed the
most relevant based on linkage:


Visual Rank Example

The New York Times article says the researchers call the method "VisualRank,"
though that term is not used in the actual paper, which is entitled "PageRank for
Product Image Search," coming from how the method was applied to product search
results as a test. The paper itself talks of Image Rank at one point, so
VisualRank might be a new name the researchers are trying out.

Image recognition isn't new or unique to Google, though this twist of using virtual hyperlinks is. For background on what others are doing, see Teaching Google To See Images from Chris Sherman last year. It covers players such as Riya. My article on Polar Rose, Polar Rose Promising Face Recognition Image Search, also provides further background on image recognition, as well as on the Google Image Labeler, which relies on human judgment to identify images.

For further discussion, see Techmeme.




About the author

Danny Sullivan
Contributor
Danny Sullivan was a journalist and analyst who covered the digital and search marketing space from 1996 through 2017. He was also a cofounder of Third Door Media, which publishes Search Engine Land and MarTech, and produces the SMX: Search Marketing Expo and MarTech events. He retired from journalism and Third Door Media in June 2017. You can learn more about him on his personal site & blog. He can also be found on Facebook and Twitter.
