Ensuring the Credibility of Online Reviews

Michael Fauscette  |  September 21, 2017

People and companies of all sizes depend on the data in online reviews to make personal and business decisions.

The proliferation of online review sites and the use of reviews on marketplace sites have become so ingrained in our decision processes that most of us wouldn’t consider not using them.

Importance of online reviews

Whether you’re looking for a good dinner location (Yelp and OpenTable), buying a vacuum (Amazon, Target, Best Buy, etc.), or researching a new accounting software solution (G2 Crowd), there are reviews from actual users of the product/service to help you make the decision.

But there’s a wide range of risk and value across these decisions. Certainly buying dinner, while perhaps important in the moment, has considerably less value at stake than buying business software (and spending thousands or even millions of dollars).

No matter the level of risk and value, you need to accept those reviews as trustworthy – or they have no use to you.

Can you really trust these reviews, and are there ways to tell how trustworthy a site’s reviews are?

There have been incidents that call into question the trustworthiness of reviews, mostly in a consumer (B2C) context. Businesses have been accused of getting fake reviews posted in any number of ways, ranging from hiring people to write reviews to using some sort of automation to “write” and post reviews.

It is incumbent on the website to provide a method of establishing trust. On Amazon, for example, you will see some reviews with a “verified purchase” label, which indicates that there was validation that the person reviewing did purchase the item. This increases trust levels.

As the value of the decision increases, so does the risk. For reviews related to expensive and/or high-risk transactions, there should be a much higher standard for establishing trust.

3 verification methods for online reviews

Artificial intelligence has been showing up in business processes at an increasing pace over the past couple of years, and the range of activities AI can accomplish is very broad. It’s no surprise that academics and researchers have used AI to generate reviews that are very difficult to recognize as fake.

For review sites with little-to-no vetting and identity management, this is a significant issue. In other words, how does the site establish that:

  • The individual doing the review is an actual person.
  • The individual had the opportunity to have direct experience with the subject of the review.
  • The individual did use/have experience with the subject of the review.

There are a few best practices related to each of these three validation and verification points.
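To make those three checks concrete, here is a minimal Python sketch of how a review site might track them for each review; the field names and labels are illustrative assumptions, not any particular site’s schema.

```python
from dataclasses import dataclass

@dataclass
class ReviewVerification:
    """The three validation points for a single review (hypothetical schema)."""
    identity_confirmed: bool     # the reviewer is an actual, identifiable person
    opportunity_confirmed: bool  # the reviewer plausibly had access to the product
    usage_confirmed: bool        # there is evidence the reviewer actually used it

    def trust_label(self) -> str:
        """Collapse the three checks into a simple display label."""
        if self.identity_confirmed and self.opportunity_confirmed and self.usage_confirmed:
            return "verified user"
        if self.identity_confirmed:
            return "identity verified"
        return "unverified"

# Example: identity and opportunity confirmed, but no proof of use yet.
print(ReviewVerification(True, True, False).trust_label())  # -> identity verified
```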

1. Verification through social media

First, establishing identity is not that difficult to set up, although the review site’s business model can present some challenges.

The simplest method for establishing that the reviewer is indeed a person and verifying identity can be accomplished by using a social login like LinkedIn or Facebook.

The benefit of this approach is that the integration can also provide additional demographics about the reviewer, saving them time and effort in the process.

The downside to this approach is that the reviewer may not have the required social account, may have forgotten the login credentials, or may distrust the site’s use of their personal data.

The social login requirement inherently causes some attrition of reviewers. If the site cares more about getting the review than about certainty of the reviewer’s identity, then this approach is less attractive.

In addition to the social login, the site could use business email addresses. This isn’t as credible a method and doesn’t include the additional demographics, but would likely have much less attrition of reviewers.
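As a rough sketch of that tiered identity check, the logic might look like the following; the free-email domain list, field names, and labels are illustrative assumptions rather than any site’s actual rules.

```python
from typing import Optional

# Illustrative list of free consumer email domains (assumption, not exhaustive).
FREE_EMAIL_DOMAINS = {"gmail.com", "yahoo.com", "outlook.com", "hotmail.com"}

def identity_check(social_profile: Optional[dict], email: Optional[str]) -> str:
    """Classify how a reviewer's identity was established (hypothetical logic)."""
    # Strongest signal: a linked social account (e.g., LinkedIn or Facebook login).
    if social_profile and social_profile.get("id"):
        return "verified via social login"
    # Weaker fallback: a business email address rather than a free consumer domain.
    if email and "@" in email:
        domain = email.rsplit("@", 1)[1].lower()
        if domain not in FREE_EMAIL_DOMAINS:
            return "verified via business email"
    return "unverified"

print(identity_check(None, "jane.doe@examplecorp.com"))  # -> verified via business email
```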

2. Verification through direct experience

Verifying that the reviewer had the opportunity to have direct experience with the review subject is a little more difficult. If the site uses social login then the profile can be used to help with that evaluation.

For example, on a B2B software review site like G2 Crowd that uses LinkedIn login, LinkedIn profile data is used to establish the possibility that the reviewer uses the software product.

If the reviewer is an accountant reviewing accounting software, the match is easy to make. But if that accountant is reviewing mechanical CAD software, the review would be suspect (not impossible, but highly unlikely).
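One way to approximate that plausibility check is to compare the reviewer’s stated job function against the product’s category. The mapping and function below are a hypothetical sketch, not G2 Crowd’s actual logic.

```python
# Hypothetical mapping of job functions to the software categories they plausibly use.
PLAUSIBLE_CATEGORIES = {
    "accountant": {"accounting", "payroll", "expense management"},
    "mechanical engineer": {"cad", "plm", "simulation"},
}

def opportunity_check(job_function: str, product_category: str) -> bool:
    """Return True if the reviewer's role makes direct experience with the product plausible."""
    expected = PLAUSIBLE_CATEGORIES.get(job_function.lower(), set())
    return product_category.lower() in expected

print(opportunity_check("Accountant", "Accounting"))  # True  -- plausible reviewer
print(opportunity_check("Accountant", "CAD"))         # False -- flag for closer moderation
```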

3. Ensuring direct experience

That leads to the next problem: Did the reviewer use the product? For a B2C site like Amazon, it’s easy to check to see if the reviewer purchased the product that they are reviewing. For a B2B site, that is more difficult, but could be established if the review is written on the seller’s site.

Otherwise, there would need to be additional evidence that the reviewer used the review subject. Using G2 Crowd as an example again, we have the reviewer submit a screenshot showing them logged into the latest version of the software being reviewed. That leads to a “verified current user” label.
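Putting those two kinds of evidence together, a site might assign usage labels roughly like this sketch; the label strings echo the ones mentioned above, while the evidence fields are assumptions for illustration.

```python
def usage_label(has_purchase_record: bool, has_current_screenshot: bool) -> str:
    """Map evidence of actual use onto a reviewer-facing label (illustrative logic)."""
    if has_current_screenshot:
        # e.g., a screenshot of the reviewer logged into the current version of the product
        return "verified current user"
    if has_purchase_record:
        # e.g., an order for the product found in the marketplace's own records
        return "verified purchase"
    return "unverified"

print(usage_label(has_purchase_record=True, has_current_screenshot=False))  # -> verified purchase
```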

There is a role for AI and/or algorithms in review verification, though.

Scoring a review at the beginning of the moderation process, using an algorithm tuned to find common indicators of fraudulent or questionable reviews, makes the moderation process quicker and increases the likelihood that fraudulent reviews are identified.

Right now, I don’t believe that such an algorithm is a replacement for human moderation as part of the process. That doesn’t mean the algorithm couldn’t, in the near future, become accurate enough to at least approve the reviews that score over a defined threshold. That would leave moderators free to focus on reviews that score lower and have a higher potential for fraud.
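A threshold-based routing step along those lines could be sketched as follows; the indicator weights and cutoff are placeholders, not a tuned fraud-detection model.

```python
def fraud_score(review: dict) -> float:
    """Score a review from 0 (suspicious) to 1 (trustworthy) using placeholder signals."""
    score = 1.0
    if not review.get("verified_identity"):
        score -= 0.4                              # unverified reviewers are riskier
    if review.get("account_age_days", 0) < 7:
        score -= 0.3                              # brand-new accounts are riskier
    if len(review.get("text", "")) < 40:
        score -= 0.2                              # very short reviews carry less signal
    return max(score, 0.0)

def route(review: dict, auto_approve_at: float = 0.8) -> str:
    """Auto-approve reviews above the threshold; send the rest to a human moderator."""
    return "auto-approve" if fraud_score(review) >= auto_approve_at else "human moderation"

example = {
    "verified_identity": True,
    "account_age_days": 400,
    "text": "Solid accounting package that our whole finance team uses daily.",
}
print(route(example))  # -> auto-approve
```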

Wrapping up

As you can see, review validation and verification isn’t a single, simple activity. To be most effective, businesses must have a way to verify identity, show potential for the reviewer to have used the reviewed product or service, and validate that the reviewer did use the product.

Algorithms and human moderation are integral to the vetting process. This set of processes has a high probability of identifying questionable reviews and should drastically increase the trust level of site visitors.

Michael Fauscette

Michael is an experienced technology executive with a diverse software background that includes experience as a software company executive and leading a premier marketing research team. Michael is a published author, blogger, photographer and accomplished public speaker on emerging trends in business software, digital transformation and customer experience strategies.