Google opens up about how Maps review moderation works
Google’s overview details why it updates its policies, how it enforces them and the roles that machines and humans play in content moderation.
Google has published an overview of how Maps reviews work. The overview includes details about how the company develops and enforces policies and the role of machines and humans in content moderation.
Policy creation and enforcement. “As the world evolves, so do our policies and protections,” Google said in the blog post. Regular revisions help Google adapt its policies to better protect businesses from abuse.
As an example, the company pointed back to when governments and businesses began requiring proof of COVID-19 vaccination before entering: “We put extra protections in place to remove Google reviews that criticize a business for its health and safety policies or for complying with a vaccine mandate,” Google said.
In addition, “Once a policy is written, it’s turned into training material — both for our operators and machine learning algorithms,” the company said.
Review moderation powered by machine learning. User reviews are sent to Google’s moderation system as soon as they’re submitted. If no policy violations are detected, then the review can be live in a matter of seconds.
To determine whether a review might violate policy, Google’s machine learning-powered moderation systems evaluate reviews from multiple angles, such as:
- Whether the review contains offensive or off-topic content.
- Whether the account that left the review has any history of suspicious behavior.
- Whether there has been uncharacteristic activity at the place or business being reviewed (e.g., an abundance of reviews over a short period of time, or recent media attention that might motivate people to leave fake reviews). A sketch of how signals like these might combine appears below.
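Google hasn’t published how these signals are weighed or combined, but as a rough illustration, a check that evaluates a review from the three angles above might look like the following minimal Python sketch. Every class, function and field name here is hypothetical; this is not Google’s implementation.

```python
# Hypothetical sketch of a multi-signal review-moderation check.
# All names and logic are invented for illustration; Google has not
# published its actual implementation.

from dataclasses import dataclass
from typing import Callable

@dataclass
class Review:
    text: str
    author_flagged: bool      # account has a history of suspicious behavior
    place_review_spike: bool  # uncharacteristic activity at the place reviewed

def violates_policy(review: Review, classifier: Callable[[str], bool]) -> bool:
    """Combine content and context signals, mirroring the angles above."""
    if classifier(review.text):        # offensive or off-topic content
        return True
    if review.author_flagged:          # suspicious account history
        return True
    if review.place_review_spike:      # unusual activity at the business
        return True
    return False

def moderate(review: Review, classifier: Callable[[str], bool]) -> str:
    # If no policy violations are detected, the review goes live quickly.
    return "held for review" if violates_policy(review, classifier) else "live"

# Toy content classifier standing in for a machine learning model.
toy_classifier = lambda text: "scam" in text.lower()
print(moderate(Review("Great coffee!", False, False), toy_classifier))  # -> live
```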
The role of human moderators. Google’s human moderators are responsible for running quality tests and completing additional training to remove bias from the machine learning models. Training the models on the various ways words and phrases can be used helps improve Google’s ability to screen for fake reviews while also reducing the chances of false positives.
Human moderators also review flagged content. When Google finds fake reviews, it removes them and, in certain cases, will also suspend the user accounts they came from.
Proactive measures. Google’s systems continue to analyze reviews and monitor for abnormal patterns even after reviews are live. Such patterns include groups of people leaving reviews for the same cluster of businesses, or a business receiving an unusual number of 1- or 5-star reviews over a short period of time.
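As a hedged illustration of the kind of pattern check described above, the sketch below flags a business whose recent ratings are dominated by 1- and 5-star reviews. The thresholds and function name are invented; Google hasn’t disclosed how it detects these patterns.

```python
# Hypothetical post-publication pattern check: flag a business whose recent
# ratings skew heavily toward 1 and 5 stars. Thresholds are illustrative.

from collections import Counter

def rating_spike(recent_ratings: list[int],
                 min_reviews: int = 20,
                 extreme_share: float = 0.9) -> bool:
    """Return True if a short window of ratings is dominated by 1s and 5s."""
    if len(recent_ratings) < min_reviews:
        return False  # too little data to call it abnormal
    counts = Counter(recent_ratings)
    extreme = counts[1] + counts[5]
    return extreme / len(recent_ratings) >= extreme_share

# Example: 25 recent ratings, almost all 5-star -> flagged for closer review.
print(rating_spike([5] * 23 + [3, 4]))  # True
```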
In addition, the human moderation team works to identify potential abuse risks. “For instance, when there’s an upcoming event with a significant following — such as an election — we implement elevated protections to the places associated with the event and other nearby businesses that people might look for on Maps,” Google said.
Why we care. Reviews are a local ranking factor. Understanding how Google handles reviews can help you abide by its policies and improve your business’s visibility.
While the information above isn’t new, it does give us an under-the-hood look at Google’s moderation system in greater detail than the company has previously shared.
RELATED: How Google and Yelp handle fake reviews and policy violations