Tectonic Shifts Altering The Terrain At Google Maps



Google recently upgraded Google Maps with a new land parcel data layer, added a map error reporting function, promised to fix reported street errors in 30 days or less, and replaced Tele Atlas as its provider of roadway data. It was widely expected that Google might replace Tele Atlas, but I don't think anyone realized it would be this soon.

I wanted to better understand Google's plans for Google Maps in the United States and globally, and to put the changes into a larger competitive and social context. In an effort to clarify my thinking about the technology and its implications, I contacted Mike Dobson of TeleMapics, a mapping industry veteran and all-around brilliant guy. I asked and he answered.

Mike Blumenthal: Does the recent Tele Atlas boot from Google Maps US mean that Google has developed or acquired routing algorithms?

Mike Dobson: Google worked with deCarta (under its former name, Telcontar) when they were beginning the development of their mapping application. deCarta is known for its excellent drill-down server and highly efficient mapping applications. deCarta/Telcontar was the force behind Google's early mapping and routing efforts, as well as those of Yahoo. By the way, Telcontar was originally founded by a small group of smart guys who had left ETAK, a company that was an early entrant in creating autonav databases; ETAK was acquired by News Corp and later sold to Sony. Eventually ETAK merged with GDT (in a curious deal) and the combined entity was acquired by TeleAtlas.

Eventually Google progressed to the point where they no longer needed deCarta's skills and let the contract lapse. Since then, the team at Google has been providing its own, unique mapping and routing technology and applying it to Navteq and TeleAtlas data for quite some time, so this is not news. However, "legal" routing requires that you are certain about the direction of one-way streets, turn restrictions and the like, and that your database is populated with these "official" restrictions on the movements of cars. Although an online routing application is not as stringent as providing enhanced routing capabilities in an in-dash navigation unit (or for ADAS, advanced driver assistance systems), it does mean that Google has been actively building an attribute database of road characteristics that links to its database of streets.

Who enforces the standard for “legal” routing? Or are you saying that it is the standard for more sophisticated devices and is self-enforced?

Whether or not Google or anyone else wants to include this in their terms of use, the public has a reasonable right to assume that the maneuvers Google provides in its routing directions (or the routing directions provided by anyone else) will not require the execution of vehicle maneuvers that are illegal or potentially harmful.

The map database used by Google does not simply represent a connected graph, but a graph attributed with street properties that allow a router to build a path across the network "knowing" when it can turn a vehicle right or left, where to access the on and off ramps, and which other maneuvers are permitted, so that the resulting route does not cause drivers to violate laws or endanger themselves. I suspect the final arbiter of whether a company has created and provided a database that is inherently flawed will be a court of law.
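To make the idea of an attributed routing graph concrete, here is a minimal sketch (not Google's implementation; the tiny network, attribute names and the single restriction are invented for illustration) of a directed street graph carrying one-way and turn-restriction attributes, with a shortest-path search that refuses illegal maneuvers:

```python
import heapq
import itertools
from collections import defaultdict

# Directed edges: a one-way street is simply an edge with no reverse counterpart.
# Each entry is (from_node, to_node, length_in_meters).
EDGES = [
    ("A", "B", 100), ("B", "A", 100),   # two-way street
    ("B", "C", 80),                      # one-way, B -> C only
    ("C", "D", 120), ("D", "C", 120),
    ("B", "D", 250), ("D", "B", 250),
]

# Turn restrictions stored as banned (incoming_edge, outgoing_edge) pairs,
# e.g. "no turn from A->B onto B->D".
BANNED_TURNS = {(("A", "B"), ("B", "D"))}

graph = defaultdict(list)
for u, v, w in EDGES:
    graph[u].append((v, w))

def legal_route(start, goal):
    """Dijkstra over (node, incoming_edge) states so turn restrictions are honored."""
    order = itertools.count()                       # tie-breaker for the heap
    pq = [(0, next(order), start, None, [start])]   # (cost, _, node, in_edge, path)
    best = {}
    while pq:
        cost, _, node, in_edge, path = heapq.heappop(pq)
        if node == goal:
            return cost, path
        if best.get((node, in_edge), float("inf")) <= cost:
            continue
        best[(node, in_edge)] = cost
        for nxt, w in graph[node]:
            out_edge = (node, nxt)
            if in_edge is not None and (in_edge, out_edge) in BANNED_TURNS:
                continue                            # skip the illegal maneuver
            heapq.heappush(pq, (cost + w, next(order), nxt, out_edge, path + [nxt]))
    return None

# The direct A -> B -> D move is banned by the turn restriction,
# so the router detours: prints (300, ['A', 'B', 'C', 'D']).
print(legal_route("A", "D"))
```

The same pattern extends to ramp access, vehicle-type restrictions and time-of-day rules by attaching more attributes to edges and adding entries to the banned-maneuver set.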

Urban Mapping has been helping/investing in a routing platform (I believe for transit)—is that something Google may be using?

To my knowledge, Google is using Urban Mapping’s neighborhood names database, but not its transit routing platform or transit data. Google has been collecting the same types of transit data of interest to Urban Mapping and would likely not believe it needed these data.

Clearly they have an advanced geo-OCR application that they are using to analyze and interpret their Street View image data and convert the road signs and similar features (they spoke about this briefly in their article)?

That is likely and would appear to be the most efficient manner of converting the Street View data into a useful source of attribution for the streets and roads in their databases. One interesting question about Street View is how good the original imagery is (especially the recent imagery from their new platform) and how superior it is to what we see on the Web. In some areas, if the original imagery is as bad as it appears online, Google will be forced to obtain information on street names and other attributes by using the positional information gathered by the Street View vans to set control points for conflating other databases to their own data.
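As a rough sketch of the conflation idea mentioned above (my own illustration, not a description of Google's pipeline; all coordinates are invented), positional control points gathered by a survey vehicle can be used to estimate an affine transform that pulls a second, less accurate dataset into alignment:

```python
import numpy as np

# Control points: the same intersections observed in two datasets.
# (x, y) from the accurately positioned survey (e.g. van GPS) ...
reference = np.array([[10.0, 20.0], [50.0, 22.0], [48.0, 80.0], [12.0, 78.0]])
# ... and the corresponding points in the dataset we want to conflate.
source = np.array([[11.5, 18.0], [51.2, 20.5], [49.9, 78.2], [13.4, 76.1]])

# Solve reference ~= [source | 1] @ A for a 2D affine transform (least squares).
ones = np.ones((len(source), 1))
A, *_ = np.linalg.lstsq(np.hstack([source, ones]), reference, rcond=None)

def conflate(points):
    """Map points from the source dataset into the reference coordinate frame."""
    pts = np.asarray(points, dtype=float)
    return np.hstack([pts, np.ones((len(pts), 1))]) @ A

# Any feature from the source dataset (e.g. a street centerline vertex)
# can now be repositioned into the reference frame.
print(conflate([[30.0, 50.0]]))
```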

Obviously, Google has not driven all of the streets in the U.S. (at least to my knowledge), so they may be using data from the Census and other sources to collect street and road attributes in rural and other areas they have not yet mapped themselves. (This is a supposition on my part; perhaps they have driven the 4,000,000 miles of road in the US, but I think they have not yet completed this task.)

It is my belief that the major issue for Google will be updating their map/navigation database. I suspect they plan to try to accomplish the majority of this through harnessing the power of UGC.

(If you are interested in Street View, pedestrians, navigation and landmarks, you might be interested in a five-part blog exploring some of the ins-and-outs on Google’s interest in these topics. https://blog.telemapics.com/?p=129 , https://blog.telemapics.com/?p=136 , https://blog.telemapics.com/?p=138 , https://blog.telemapics.com/?p=140 and https://blog.telemapics.com/?p=142 )

They are now getting geospatial data primarily from five places to fulfill their expansive geospatial vision:

  • Renting what they don’t yet have.
  • Streetview
  • MapMaker
  • “Partnering” with public entities
  • Cell data

I would also add to your list: Correction data from users who opt to click the “report a problem” tag on the map.

There is no question that their data gathering effort for map database compilation is expansive, expensive and comprehensive. In the process of creating and maintaining their map database, I am sure that Google will have spent an enormous sum of money creating the quality of the data that meets their strategic objectives. Unfortunately, assembling the data is just the “first drop” in filling up a seemingly bottomless reservoir of data. Once you have collected the geospatial data necessary to meet your objectives, you will certainly know better than anyone else where it needs improvement and while you are fixing your “problem areas”, the rest of your data will age and require recertification—especially if you are using it to route people driving vehicles.

With the new "Report Errors" capability that Google implemented, when errors are submitted to them, how exactly does Google verify them? E.g., a road is a dead-end: how does Google know that the person is not lying? Or are they again just applying algorithms to the submitted correction?

Oh come on, the Great Google Knows All. Wait, maybe that was the Wizard of Oz.

Google has not directly revealed its methodology, but the company certainly relies on algorithmic solutions to map updates where possible. It is clear to me that conflation and data mining across redundant sources are major components of their update process.

Some map features are relatively easy to check. For example, a "new" street or a "dead-end" street should be visible in the most current imagery available to Google or possibly in probe data of the area. Obtaining or verifying the street name is more difficult, but it can often be discovered in files created by local planning offices, since authorization at some level of government must precede the building of a street. In addition, Street View could be a prime source of confirmation for many map elements. Of course, one of the tenets of crowdsourcing is that the frequency of errors decreases with increased inspection. So, Google might make a wrong change from time to time, but the odds are that someone will correct it. On the other hand, if the data that Google is replacing was considered worse than what they have now, does anyone really lose? Well, that's what Google is hoping you will think. Time will tell.
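Google has not described its verification method, but as a toy example of the kind of algorithmic cross-check implied here (entirely my own sketch, with made-up coordinates and thresholds), a reported dead-end could be tested against recent probe traces: if vehicles are still routinely observed driving along the supposed dead-end, the report is suspect:

```python
from math import hypot

def near_segment(point, seg_start, seg_end, tolerance=10.0):
    """True if a GPS fix lies within `tolerance` meters of a street segment
    (coordinates assumed to be in a local, meters-based projection)."""
    (px, py), (ax, ay), (bx, by) = point, seg_start, seg_end
    dx, dy = bx - ax, by - ay
    seg_len_sq = dx * dx + dy * dy
    t = 0.0 if seg_len_sq == 0 else max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / seg_len_sq))
    cx, cy = ax + t * dx, ay + t * dy
    return hypot(px - cx, py - cy) <= tolerance

def report_is_plausible(reported_segment, recent_traces, max_hits=2):
    """A 'dead-end / road closed' report is plausible if almost no recent
    probe traces still pass along the reported segment."""
    hits = sum(
        1 for trace in recent_traces
        if any(near_segment(p, *reported_segment) for p in trace)
    )
    return hits <= max_hits

# Example: one trace still drives the segment, one does not.
segment = ((0.0, 0.0), (100.0, 0.0))
traces = [[(10.0, 2.0), (60.0, 1.5)], [(10.0, 500.0), (60.0, 480.0)]]
print(report_is_plausible(segment, traces))  # True: only one trace uses it
```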

In the US they are gathering real-time traffic information. Google's probe data is coming from Google Maps on smartphones, but not the iPhone. Android will be a huge help there, no?

Probe data can be of huge benefit to companies building navigable map databases. TeleAtlas, for example, recently announced that they added 1.25 million updates to their database that had been gathered through Map Share and from their fleet of probes.

At present, Google's footprint in the smartphone market is very small and the benefits it currently derives from these data are limited. Although Google has indicated that they are using their probe data to generate travel time and traffic information, it is difficult to find out whether or not they are attempting to use probe data from Android phones for updating street geometry or other attribute information.

There are a number of issues that complicate the use of probe data, including: the accuracy of probe tracking using cell phone-based GPS, whether the probe data is derived from GPS or A-GPS, the geographic variability in signal scattering (i.e., reflectivity caused by urban canyons, trees, the position of the receiver relative to your body, etc.), consumers agreeing to permit the use of the GPS trace of their smartphone, and the potential "cost" of obtaining the required permission to use the GPS data from the network or device operator.

While confidentiality issues seem under control, I suspect that people will eventually become a little more hesitant about sharing information that tracks their location. Yes, I know that it is collected anonymously, but if that track starts out at the same house each day and returns to the same house each night, doesn't it suggest where you live? For that reason, at least one of the major PND companies, whose users have agreed to tracking, shaves the first two minutes and the last two minutes off of every path. Does everyone follow this standard? I don't know, but everyone should be interested in how the DNA of their GPS traces is "neutered" by the companies using them. Further, the four minutes of data thrown away probably contain really useful information about local streets, but so it goes.
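A minimal sketch of the trace-trimming practice described above (the two-minute figure comes from the interview; the data layout and function name are my own assumptions): every GPS fix recorded within the first and last two minutes of a trip is dropped before the trace is used:

```python
TRIM_SECONDS = 120  # shave two minutes off each end of the trip

def trim_trace(fixes):
    """`fixes` is a chronologically ordered list of (timestamp_seconds, lat, lon).
    Returns the trace with the first and last two minutes removed, so the
    start and end locations (typically someone's home or office) are not kept."""
    if not fixes:
        return []
    start_t, end_t = fixes[0][0], fixes[-1][0]
    return [f for f in fixes if start_t + TRIM_SECONDS <= f[0] <= end_t - TRIM_SECONDS]

# A 10-minute trip sampled every 30 seconds: only the middle six minutes survive.
trip = [(t, 40.0 + t * 1e-5, -75.0) for t in range(0, 601, 30)]
print(len(trip), len(trim_trace(trip)))  # 21 fixes in, 13 fixes out
```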

Do you think their Maps app circumvents this need for obtaining the required permission to use the GPS data from the network or device operator to some extent? Do you think their recent agreements with Verizon are relevant to this?

Yes, relevant, but small potatoes at the moment. Their map app soliciting updates is of much greater value, but the problem here (as with all UGC) is that the responses they receive are going to be spatially autocorrelated with population density.

I suspect that the number of map changes contributed by people living in rural areas is a very small portion of the map update information Google receives from UGC. In addition, if you look at road lane miles driven in the US (and in Europe), you will find that a high proportion of the mileage driven is on local streets, rural routes (state and county level highways) and rural streets. It is unlikely that UGC (which operates on the idea that more observations smooth out the errors in the reporting) will be able to solve the problems with updating roads throughout the extent of the geographic coverages Google provides.

However, coverage quality is a known problem that Navteq and TeleAtlas already deal with, so Google is just joining the group. It is not well known by most analysts that the vast majority of roads in the world are classified by the vendors as Category 5, also known as local streets, and these data are often hard to update or confirm, especially when they occur in locations distant from urban centers (the classic distance decay function applied to map correction reporting).

If TomTom has made great strides with their probe data then why has the Tele Atlas data been such poor quality in the US for Google?

Your observation is interesting. It is my take (and I may not be correct in this, but I think I am) that TeleAtlas has bad data because they fell for the corporate line conveyed by their acquisition GDT (Geographic Data Technologies) that data mining could solve most map database compilation problems. TeleAtlas never fully committed to field vans and field research in the U.S. and fell far behind Navteq, whose data collection efforts remain firmly rooted in the field because they have been unable to find anything better.

As noted previously, TA added 1.25 million updates to its database related to probe data and MapShare-contributed corrections, so their data may be improving.

The interesting questions here are "Can user-generated content improve the quality of the TeleAtlas database, and how long will it take to reach acceptable levels of accuracy?" Whether UGC can be used to create comprehensive map databases of consistent quality at the required accuracy levels across the desired coverage areas is a challenge for the future and, I think, is a question mark for both TeleAtlas and Google.

Although Google has assembled a host of cartographic resources, and perhaps more high-quality brain power than the rest of the world combined, I doubt that they have enough experience with the compilation of spatial databases and the creation of map databases to truly comprehend the hellhole they have gotten themselves into. Of course, old farts like me may just be out of date (but I doubt it).

Those companies from which Google rents data seem doomed from my perspective. And the others? (See this analysis from WeoGeo.)

Not sure that the others licensing data are doomed. For example, the IP laws in other countries favor government map copyrights—such as the powerful Ordnance Survey in the UK. However, if OpenStreetMap and Google can successfully break their stranglehold, the same may happen throughout the rest of the world.

Tele Atlas, even in the hands of TomTom, is not that profitable; they are slow to revise data and they lack the scale of Google… are they toast?

I suspect so, but a white knight might be in the shadows. In addition, it is too soon to tell. TeleAtlas was founded on the belief that in-car navigation would be a huge market. Although the uptake rates for in-car systems remain low, it is possible that TomTom, who is trying to enter the in-car market through ventures with Fiat and Renault, may be able to make headway in this market. If so, TeleAtlas could prosper. In addition, TeleAtlas may have a role to play in the market for Advanced Driver Assistance Systems and fuel-efficient vehicles (although Intermap Technologies and Navteq may win that market).

What would a typical candidate that could take over Tele Atlas/TomTom look like?

It could be any of several players who might consider owning a map database company a strategic advantage. For example, although Microsoft is currently allied with Navteq, you may remember that Microsoft sued TomTom over a patent issue earlier this year that was settled very quietly. Perhaps a future option was discussed? Who knows? TomTom is now selling for a fraction of what it paid for TeleAtlas, so the right buyer could acquire two interesting assets for a reasonable price. With a fix here and there and the right management, perhaps TeleAtlas could be competitive.

Navteq is protected by Nokia, but Nokia does not seem to have the same vision or ability to execute in the maps world.

Navteq may have better data than Google due to its expertise in field collection, but it is likely slowly losing that lead. They may be able to rebound. Also, Navteq serves markets that are likely unappealing to Google (in-car integrated navigation, fuel efficiency, ADAS, etc.). Finally, Nokia views Google as a competitor and is unlikely to work with them. The same is true of others in the growing anti-Google camp who want to use spatial data and need a map data supplier, as well as any company that simply chooses to remain independent of Google for strategic reasons.

Microsoft? Still a babe in the geospatial woods. They seem to have a grand vision but a late start.

Microsoft is always a wild card. Look for some interesting announcements from them later this year, or perhaps next, with respect to mapping and new technologies. I suspect they will raise the ante in their game with Google.

Would it be worthwhile for MSN/Yahoo to team up with OSM? They have some fantastic data.

Probably better to ask this of Steve Coast, since he is more familiar with the current status of the OSM license and how it might be applied to commercial entities. However, MSN and Yahoo should be looking for sustainable competitive advantages and I am not sure that the quality, coverage and consistency of OSM data would be of benefit to them at this stage. Perhaps CloudMade might be an alternative?

One of the important issues in changing map database suppliers is "Does the potential new supplier have the equivalent coverage at an equivalent accuracy level, with data attributes as comprehensive as your current supplier's?" OSM data is not yet the equivalent of that provided by Navteq or TeleAtlas in terms of accuracy, coverage or comprehensiveness, although it is being improved day by day. At present, I see no compelling reason for either Yahoo or MSN to team with OSM, although they are obviously monitoring the situation. Unfortunately, the proportion of the search market served by either company indicates to me that improved mapping is not going to be the key to attracting more searchers. Finally, Google chose not to use OSM data, presumably because it could not maintain the freedom to use the data as it needed under the OSM license.

Google has been at this a good long while and has built up a lead while no one was looking.

Speakers representing Google at conferences I have attended have always been vocal in their complaint that the map data and the business listings data being supplied to them were inaccurate, erroneous and of unacceptable quality. Google's expense for data licensed from either Navteq or TeleAtlas was very modest; so modest, in fact, that they could not get the vendors' ear by complaining.

So, I think that everyone was looking at Google while they developed their mapping machine, but did not understand the scope of the effort Google was mounting. In addition, many have failed to understand that Google’s mapping efforts and interests in geospatial data are driven by the need to expand the footprint and success of the company’s advertising business.

I consider Google's MapMaker effort an indication that the company is thinking globally at the expense of short-term economic gain. Google, unlike Navteq or Tele Atlas, is focused on developing base data not just for industrialized nations with good road systems, but for every place that has cell phone coverage. Other companies will not be able to procure data the equivalent of Google's in many countries, because neither TeleAtlas nor Navteq has collected it (as a consequence of their focus on navigation by automobiles). I believe that the MapMaker data is a significant competitive advantage for Google.

Their recent developments in the US indicate that Google can develop the underlying geospatial and routing information from their Street View road data. Their last push on that was an expansion that was "filmed" last fall and rolled out late last year. Within about 9 months they were able to create a reasonably accurate geospatial representation of the US. No small task. Are they buying the data for all of the back-country roads with a liberal enough license to claim copyright ownership?

I doubt that they are buying or licensing any substantial amount of map data in the United States, other than some parcel data. They are probably relying on TIGER 2010 from the Census, which is in the public domain, as well as other public sources of data. In some cases they are relying on data sharing agreements, but in all cases, Google is attempting to preserve its right to use the data in any manner it desires.

It does, however, seem to indicate that once they have enough Street View data they can pull off switching out other countries in less than a year, even a country the size of the US. That would put their five-year deal with Tele Atlas in perspective, wouldn't it?

In countries where the government owned mapping data is freely shared with private companies, as it is in the United States, Google can likely supplement their core map data with the additional information needed to create comprehensive, spatially complete, map databases. However, fee-free government sourced spatial data is generally not available in Europe and some other areas of the world (particularly in former colonies of the British Empire).

Further, in still other countries the street and postal addressing systems are very difficult to map and geocode.

I suspect Google will be slower to roll out map data in Europe, unless they choose to find a way to work with the governments involved. Thanks to Mapmaker, they may be able to make great progress in Third World countries, but address information may be a problem in many of these locales. All in all, expect Google to roll out new maps where the money in advertising is to be found.

If you think about their desire to have every bit of web based location data geo-tagged with KML, could one extrapolate this very same logic above to Google’s plan for POI and business data as well?

I think Google has collected map data for two reasons. First, it meets their corporate mission, which is “To organize the world’s information and make it universally accessible and useful.” Second, having a current, comprehensive and complete map database will benefit local search advertising, which should become an even bigger business for Google when mobile local search takes off.

It is my belief that Google's current local advertising business suffers from both the low quality of the business listings data they use and the mismatch between business listing addresses and Google's ability to plot them at accurate map locations. In other words, Google has been leaving advertising money on the table because they have not been able to deliver potential customers to the stores that these customers are trying to find, which in turn limits the business owners' appetite to advertise with Google. If Google plans to roll their advertising model out in the rest of the world, they need accurate map data and accurate business listings data, two things that they are trying to develop around the world.

How does the local advertising business suffer from the low quality of the business listings data? My experience is more that folks just don't understand the bid system, and the new flat-rate product that they are testing might resolve that education gap.

Not sure we are on the same page. I agree that advertisers may not understand how to be effective when using AdWords to represent their business. Conversely, the larger problem, in my view, is that Google does not accurately represent the contact information of many of the local businesses that are returned in a search result (due to some of the problems you mentioned above). Users are frustrated when the locations for which they are searching cannot be found at the locations where they are shown on the map.

I have blogged about this "targeting" problem numerous times (mainly under the topic of geocoding; if interested, see my blog https://blog.telemapics.com/?p=101 ) and the quality of the results does not seem to be improving. In addition, Google's business listings are often incomplete and non-comprehensive in a spatial sense. It is my take that users don't have faith that Google can show them local shopping opportunities with a reliable degree of accuracy. In turn, people are not inclined to click the local ads or consider the whole effort with enough conviction to make the system as productive for Google as it should be. Local is one case where the correct phrase is "Google it?" as opposed to "Google it."

They have noted that their aggregate data is statistically more accurate than any of their suppliers', although not enough so and not necessarily to their satisfaction. The old YP system, and to some extent InfoUSA, created "false positives," i.e., claiming a business was still open when it was in fact closed. My observation is that Google's methodology is creating exactly the opposite problem with "false negatives" (for lack of a better phrase… maybe you have one), where they will have three locations for one business, all geocoded differently, or worse, will merge two business entities that are still in business into one mixed-up record. Or, in the absolute worst scenario, they have created new vectors that allow nefarious hijacking of legitimate listings with redirected 800 numbers and websites.

It’s important to rehash a little history here. When the yellow page industry ruled the world, there were books for cities, regions and neighborhoods across the United States, but there was not one unified source of YP data for the entire country, because there was no need for one (and there was no truly national YP company). Only with the advent of the Internet and the development of online business search capabilities was there a need for an authoritative, comprehensive, seamless, source of business listings for the entire United States. The response to that challenge has been less than satisfying and Google suffers from the limitations of the data collection methods of the companies that provide its business listings.

I think Google will have many problems in this area as they go forward. I suspect that they would have been better served using one supplier and prodding them to improve than using a mix of suppliers and data from their own business listings registry. Many instances of the "multiple" business mix-up are caused by Google merging listings from various sources and not knowing how to identify multiple instances of the same business that are described with minor changes in the name or address. The business registry approach they are using could help resolve some of this, but most small business owners don't have the time to understand or screw with all of the problems they can experience when listing with Google. A central registry is a better idea, but a tough position to win support for from the industry.
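To illustrate the duplicate-merging problem described above, here is a deliberately naive sketch (my own, not Google's approach; names and thresholds are invented) that flags listings whose names and addresses differ only slightly as merge candidates, which is exactly where minor variations of the same business and genuinely distinct records become hard to tell apart:

```python
from difflib import SequenceMatcher

def similarity(a, b):
    """Crude string similarity in [0, 1] after normalizing case and whitespace."""
    norm = lambda s: " ".join(s.lower().split())
    return SequenceMatcher(None, norm(a), norm(b)).ratio()

def likely_same_business(a, b, threshold=0.8):
    """Flag two listings as merge candidates when both name and address are close."""
    return (similarity(a["name"], b["name"]) >= threshold
            and similarity(a["address"], b["address"]) >= threshold)

listings = [
    {"name": "Joe's Pizza", "address": "123 Main St"},
    {"name": "Joes Pizza",  "address": "123 Main Street"},  # same place, minor variations
    {"name": "Joe's Pizza", "address": "980 Elm Ave"},       # a second, distinct location
]

for i in range(len(listings)):
    for j in range(i + 1, len(listings)):
        if likely_same_business(listings[i], listings[j]):
            print("merge candidate:", listings[i]["name"], "/", listings[j]["name"])
```

In this toy run only the first two listings are flagged; tighten the threshold and legitimate variants slip through, loosen it and distinct branches of the same chain get wrongly merged, which is the trade-off behind the "false negative" problem discussed above.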

In addition to replacing Tele Atlas, Google has also added a “parcel” data layer to Maps that shows property boundaries for many locations in the US. Do you have any speculation as to who is providing the parcel data?

My understanding is that Google is relying on a variety of sources, some of which are public (e.g., the city of San Francisco) and others private. The "purported" license with a commercial firm in the business of supplying such data is being held in confidence by both parties. There are only a few possibilities, but I would prefer not to speculate on this issue.

Can this parcel data be used to improve the accuracy of the road data in some way?

My belief is that Google is interested in parcels, at least at present, as a method of increasing the accuracy of their attempts to match addresses with map locations (geocoding). The concept of “roof-top” geocoding is based on knowing the actual location and shape of a property, which, at least conceptually, provides a better geocoding solution than is possible with most other methods.

Unfortunately, Google is finding out that the solution is not foolproof (although it is better than the alternatives). For example, consider an elongated parcel along the edge of the road. It is likely that you would choose the centroid of the parcel to represent the location of the address, but this could result in a significant error if the house is located off-center. Similarly, consider a very large, irregular parcel surrounded by several streets. The centroid of the parcel might be closer to one street than another, but the house could be located on yet another of the bounding streets.
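A small sketch of centroid-based "roof-top" geocoding and its pitfall (illustrative only; the parcel shape, coordinates and helper function are invented): the polygon centroid is used as the geocode, which can land far from the actual building on an elongated parcel:

```python
def polygon_centroid(vertices):
    """Area-weighted centroid of a simple polygon via the shoelace formula.
    `vertices` is an ordered list of (x, y) pairs in a local meters-based frame."""
    area2 = cx = cy = 0.0
    n = len(vertices)
    for i in range(n):
        x0, y0 = vertices[i]
        x1, y1 = vertices[(i + 1) % n]
        cross = x0 * y1 - x1 * y0
        area2 += cross
        cx += (x0 + x1) * cross
        cy += (y0 + y1) * cross
    return cx / (3 * area2), cy / (3 * area2)

# A long, thin parcel fronting a road along y = 0; the house sits near one end.
parcel = [(0, 0), (200, 0), (200, 30), (0, 30)]
house_location = (15, 15)

geocode = polygon_centroid(parcel)            # (100.0, 15.0)
error_m = abs(geocode[0] - house_location[0])
print(f"centroid geocode {geocode}, about {error_m:.0f} m from the actual house")
```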

What are the commercial and social implications of Google having this parcel data on line? Will it be used somehow in their Real Estate Listing plans? Will its general availability (if not accuracy) make this information more usable by the general public?

I think it is too soon to tell where Google will go with parcel data. At present, they (as well as Microsoft) are interested in using parcel data to improve geocoding.

Leaving aside the question of politics (if that is possible), wouldn't a mapping effort like the one Google has undertaken be better served by a monopoly? Don't the cost, the efficiencies of scale, and the ability to leverage people and a single technology lend themselves to a single solution rather than multiple solutions? Does the need for ongoing updates, and the cost of those updates, indicate that a shared effort might be more productive than a competitive one? Should this information and its maintenance be put in trust or handled by a single entity for use by all?

Wow, this topic provides the horns of a dilemma for a conspiracy theorist. Is a monopoly ever better than competition, and for whom? But supposing this happened, who would control access to the data? And who would regulate the supplier? Oh, wait, it would probably have to be a government or quasi-government agency (e.g., the United Nations, ISO, etc.) and then all of these problems would go away. Right, and so would accurate, timely and up-to-date data. By the way, one of the reasons the approvals for the acquisitions of TeleAtlas and Navteq were held up a little longer than expected was that the EU was concerned about the reduction in competition that might result from these deals (did you know that Google formally complained to the U.S. government about Nokia's proposed acquisition of Navteq?).

I guess I am just too oriented towards free markets to think a monopoly would be of advantage in the market for map data. Google has taken the lead in map databases because they industriously endeavored to create a database that meets their unique needs. Yes, their financial strength is the reason they were able to fund this effort, and their technology is the reason they were able to enter quickly and trump others. However, there is nothing in their behavior that precludes others from doing the same.

Another view of the monopoly market might be found in studying the Ordnance Survey of the UK. You can license really great data at an enormous expense and under restrictive licensing terms. OpenStreetMap is one response to this “quasi-monopoly” and purpose-built street maps of London by Bartholomew, Lovell Johns and others reflect the notion that monopolies are not always beneficial.

Does Google perceive the collection and generation of geospatial information as a key competitive differentiator or as a necessary evil?

Another dilemma! I suspect that Google began their map compilation efforts as a necessary evil to remedy their dissatisfaction with the maps being provided to them by Navteq and TeleAtlas. Google passed on the opportunity to acquire both Navteq and TeleAtlas because they believed that they could produce something better. Once they started development, mapping really got into their corporate culture and they began to realize that map compilation, presentation and distribution was a key competency within the grasp of their organization. It was only a few steps higher on the ladder before they realized that Google could be a market leader in map data quality and coverage.

The bottom line? "All information is meant to be free"… free for Google, but once in, always in, leveraged by all of their other data and technology to create a scale and functionality that cannot be matched. They essentially own the foundational data and data points for the whole next generation of user experience.

I have often heard that the happiest day in a boat owner's life is the day he buys his boat, while the second happiest is the day he sells his boat. Now that Google "owns" a big slug of geospatial data, they may learn the meaning of "buyer's remorse," since they will have to maintain and update the data to the standards required by their advertising business.

Updating geospatial data is a pain. Google may believe that it has a "better idea" for doing this (I presume UGC is at the core of it), but I doubt that Google yet understands the ongoing complexity of this challenge. Relying on UGC here may be very unsatisfying. Google may know that it needs better data from its users in East Nogadoches, Texas. Now if only the users contributing data in East Nogadoches, Texas knew about this need and had time to do something about it in the next month or two. Of course, these types of concerns go away if you have enough probes, but will they? When?

As to your comment on Google and the user experience, I agree that they are hard to beat. But it is more than just their technology; it is their outlook and willingness to share their successes with their community. For example, any of us can obtain a free license to use the Google Maps API. In addition, we can use AdSense to publish Google maps with Google AdSense ads at no cost from Google and make money in the process. Recently Google added the powerful GoogleBar to maps (see my blog on the topic: https://blog.telemapics.com/?p=178 ), allowing users to search the maps for items of interest, such as kilts in Edinburgh, chocolates in Brussels or anything anywhere. The advantages that Google holds are not just from its technology, but are based on its awareness that putting technology in the hands of its users will result in successes that even Google had not imagined.

I would like to thank Bill Slawski, Ahmed Farooq, Barry Hunter & Gary Price for help in understanding some of the background material, finding resources and figuring out the right questions to ask. I also want to thank Mike Dobson, President of TeleMapics, for his incredibly generous sharing of time and information that so helped me understand the digital mapping world that is affecting us all.


Opinions expressed in this article are those of the guest author and not necessarily Search Engine Land. Staff authors are listed here.


About the author

Mike Blumenthal
Contributor
