Google reveals how reviews are analyzed on Maps

Google has explained exactly how reviews are moderated on its Maps service in a detailed blog post, noting that much of the “work to prevent inappropriate content is done behind the scenes.” The post walks through exactly what happens when a user posts a review of a business, such as a restaurant or local store, on Maps, and outlines the measures taken to ensure that false and abusive reviews are not published. In the past, Google has also explained how recommendations work on YouTube.

The post was written by Ian Leader, Group Product Manager for User Generated Content at Google. “Once a policy is written, it becomes training material, both for our operators and machine learning algorithms, to help our teams detect content that violates the policy and ultimately keep Google reviews helpful and authentic,” Leader wrote.

According to the company, the moment a review is submitted, it is sent to the company’s “moderation system” to ensure there is no policy violation. Google relies on both machine learning-based systems and human reviewers to handle the volume of reviews it receives.
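Google does not publish code for this pipeline, but the two-stage flow it describes, an automated first pass with humans handling the harder calls, might look roughly like the sketch below. Every name, score, and threshold here is an illustrative assumption, not Google’s actual system.

```python
# Illustrative sketch of the described two-stage flow: an automated first
# pass, with ambiguous cases escalated to human reviewers. All names and
# thresholds are assumptions for illustration, not Google's actual system.
from dataclasses import dataclass


@dataclass
class Review:
    text: str
    author_id: str
    place_id: str


def automated_score(review: Review) -> float:
    """Stand-in for an ML model: returns a violation probability in [0, 1]."""
    banned = {"scam", "spam-link"}  # hypothetical content signals
    hits = sum(word in review.text.lower() for word in banned)
    return min(1.0, 0.4 * hits)


def moderate(review: Review) -> str:
    score = automated_score(review)
    if score >= 0.8:           # clear violation: block before publication
        return "rejected"
    if score >= 0.3:           # ambiguous: route to a human operator
        return "escalated_to_human"
    return "published"         # no violation found: the review goes live


print(moderate(Review("Great tacos, friendly staff", "u1", "p1")))  # published
```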

Automated systems are “the first line of defense because they are good at identifying patterns,” the blog post explains. These systems look for signs of false or fraudulent content and remove it before it is ever published. The signs they look for include whether the content contains anything offensive or off-topic, and whether the Google account posting it has a history of suspicious behavior.

They also look at the place the review is posted about. Leader explains that this is important because an “abundance of reviews in a short period of time” could indicate that fake reviews are being posted. Another scenario is a place that has drawn attention in the news or on social media, which could also encourage people to leave fraudulent reviews.
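The exact signals are not disclosed. As a rough illustration only, a pre-publication check combining the signals named in the last two paragraphs (off-topic content, account history, and review velocity at a place) could be aggregated along these lines; the window, limit, and indicators are all invented.

```python
# Hypothetical aggregation of the signals described above: content flags,
# the posting account's history, and review velocity at the target place.
# The window, limit, and indicators are invented for illustration.
from collections import deque
import time

RECENT_WINDOW_SECS = 3600    # assumed meaning of "a short period of time"
VELOCITY_LIMIT = 50          # reviews per hour before a place looks hot

place_timestamps: dict[str, deque] = {}


def place_velocity_flag(place_id: str, now=None) -> bool:
    """True if the place has received an unusual burst of recent reviews."""
    now = now or time.time()
    q = place_timestamps.setdefault(place_id, deque())
    while q and now - q[0] > RECENT_WINDOW_SECS:
        q.popleft()          # drop timestamps outside the window
    q.append(now)
    return len(q) > VELOCITY_LIMIT


def risk_signals(text: str, account_strikes: int, place_id: str) -> list[str]:
    """Collect the names of any triggered signals for downstream scoring."""
    signals = []
    if "http://" in text or "https://" in text:
        signals.append("off_topic_link")        # assumed off-topic indicator
    if account_strikes > 0:
        signals.append("suspicious_account_history")
    if place_velocity_flag(place_id):
        signals.append("review_burst_at_place")
    return signals
```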

However, training the machines also requires maintaining a delicate balance. An example given is the word “gay,” which can be used as a derogatory term, and that is not allowed in Google reviews. But, Leader explains, if Google teaches its machine learning models that the word is only used in hate speech, “we could mistakenly remove reviews that promote a gay business owner or LGBTQ+ safe space.”

That’s why Google has “human operators” who “regularly perform quality tests and complete additional training to remove bias from machine learning models.”
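Google’s actual debiasing process is not public, but the failure mode Leader describes is easy to reproduce with a naive keyword filter. The toy comparison below, with both rule sets invented, shows why context has to enter the decision.

```python
# Toy illustration of the failure mode described above. A keyword-only
# filter removes benign identity mentions; a crude context check keeps
# them. Both rule sets are invented for illustration only.
DEROGATORY_PATTERNS = ["that's so gay"]   # assumed slur-usage pattern
BENIGN_CONTEXTS = ["gay-owned", "gay business owner", "lgbtq+ safe space"]


def keyword_only_filter(text: str) -> bool:
    """Naive model: flags any occurrence of the word at all."""
    return "gay" in text.lower()


def context_aware_filter(text: str) -> bool:
    """Crude context check: allow known benign usages, flag slur patterns."""
    lowered = text.lower()
    if any(ctx in lowered for ctx in BENIGN_CONTEXTS):
        return False
    return any(pat in lowered for pat in DEROGATORY_PATTERNS)


review = "Proud to support this gay-owned coffee shop!"
print(keyword_only_filter(review))    # True  -> review wrongly removed
print(context_aware_filter(review))   # False -> review correctly kept
```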

If the systems find no policy violations, the review goes live within a matter of seconds. However, Google says that even after a review is live, its systems “continue to analyze contributed content and observe questionable patterns.”

These “patterns can be anything from a group of people leaving reviews on the same cluster of Business Profiles to a business or place receiving an unusually high number of 1- or 5-star reviews in a short period of time,” according to the blog post.
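One way to picture that second pattern: a monitor could compare the share of 1- and 5-star ratings in a recent window against the place’s long-run baseline. The sketch below is an assumption about how such a check might look, not Google’s method; the thresholds are invented.

```python
# Illustrative post-publication monitor for one described pattern: an
# unusually high share of 1- or 5-star ratings in a short period of time,
# compared with the place's historical baseline. Thresholds are invented.
def extreme_share(ratings: list[int]) -> float:
    """Fraction of ratings that are 1 or 5 stars."""
    if not ratings:
        return 0.0
    return sum(r in (1, 5) for r in ratings) / len(ratings)


def burst_is_suspicious(recent: list[int], baseline: list[int],
                        min_recent: int = 20, jump: float = 0.3) -> bool:
    """Flag if the recent extreme-rating share jumps well above baseline."""
    if len(recent) < min_recent:     # too little data to judge
        return False
    return extreme_share(recent) - extreme_share(baseline) > jump


history = [3, 4, 4, 5, 3, 2, 4, 4, 3, 5] * 10   # mixed long-run ratings
burst = [5] * 25                                # sudden wave of 5-star reviews
print(burst_is_suspicious(burst, history))      # True
```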

The team also “works proactively to identify potential abuse risks, reducing the likelihood of successful abuse attacks.” One example is an upcoming event, such as an election. The company then implements “elevated protections” for venues associated with the event and for other nearby businesses, and it monitors those places and businesses until the risk of abuse has subsided.
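The blog does not say how “nearby” is determined. One plausible reading is a simple distance cutoff around the event venue, as in this hypothetical sketch using the haversine formula; the radius and the flagging mechanism are assumptions.

```python
# Hypothetical sketch of "elevated protections" for places near an event:
# flag any business within a radius of the venue for stricter monitoring.
# The radius and the flagging mechanism are assumptions, not Google's.
from math import radians, sin, cos, asin, sqrt


def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometres."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371 * asin(sqrt(a))


def elevated_protection(business, venue, radius_km=2.0):
    """True if the business should get stricter review monitoring."""
    return haversine_km(*business, *venue) <= radius_km


polling_station = (51.5014, -0.1419)    # example venue coordinates
cafe_next_door = (51.5020, -0.1400)
print(elevated_protection(cafe_next_door, polling_station))   # True
```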
