## 2013-01-29

### 5 + 5 = 7.7

I am sure they have some logic somewhere to explain this, but as a visitor to their site the idea that two reviews scoring a full 5 stars somehow make a total of 7.7 out of 10 is a tad, err, special. I am not sure I trust their maths.

We were wondering if older reviews somehow count for less - but that would give them less weighting in an average, not turn them into a worse review.

There really is no sane way to turn two reviews of 5 stars into a total of 7.7 out of 10.

Very strange.

#### 9 comments:

1. Maybe they are using one of BT's algorithms

2. 5+5 is now "up to" 10 :)

3. What I've seen with some sites is that you can give a star rating, but not write a review. Maybe it's counting these unseen ratings.

4. I wonder if they're displaying the lower bound of a confidence interval?

http://www.evanmiller.org/how-not-to-sort-by-average-rating.html discusses why you'd want to do that instead of some straight averaging. Basically, though, the point is to avoid the paradox where two 5-star ratings are considered "better" than ten thousand 5-star ratings and one 4-star rating; you do this by using statistics to answer the question "if I treat the ratings I have as input to a statistical model, what is the lower bound on the 'real' rating they'd have if everyone rated them, given a 95% confidence interval?"

If they are doing that, what they're really showing you is that their model suggests that AAISP's real rating is somewhere between 7.7 and 10, given two ratings so far, both of which are tens. As the number of ratings increases, the range will decrease - if you had a thousand 5-star ratings, the range might be 9.99 to 10, and displayed as 10/10.
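The Wilson score lower bound that Evan Miller's article describes applies to binary up/down votes, so as a rough sketch (assuming each 5-star review counts as one "positive" vote and the displayed score would be the bound times ten - both pure guesses, not anything Trustpilot has said), it looks like this:

```python
import math

def wilson_lower_bound(positive, total, z=1.96):
    """Lower bound of the Wilson score interval for a binomial
    proportion; z = 1.96 corresponds to ~95% confidence."""
    if total == 0:
        return 0.0
    phat = positive / total
    centre = phat + z * z / (2 * total)
    margin = z * math.sqrt((phat * (1 - phat) + z * z / (4 * total)) / total)
    return (centre - margin) / (1 + z * z / total)

# Two reviews, both positive: the bound is still very cautious.
print(wilson_lower_bound(2, 2))        # ~0.34, i.e. about 3.4 out of 10
# A thousand positive reviews: the bound approaches the true rate.
print(wilson_lower_bound(1000, 1000))  # ~0.996, i.e. about 10 out of 10
```

Interestingly, two 5-star votes give a lower bound of only about 3.4/10 here, not 7.7 - so whatever the site is doing, it doesn't appear to be the plain Wilson bound.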

5. Interesting.

I added another 5 star review, it's now showing 7.9 out of 10.

1. 8.1 now.

If it's going up .2 per review we need another 9 people :p

6. If they know what they're doing, they're probably using something called Bayesian statistics. This is a method where prior information (or a prior assumption) gets updated as more data gets added. With no reviews at all, the prior assumption might be that AAISP would have an average score of 5 out of 10. However, after the first person provides a 5 star review, you can't say the average would be five stars, but now based on this one data point, you can infer that the odds are skewed slightly in favour of getting higher scores. You can then update the prior assumption with real information, and your best guess at the mean score creeps up (you can't say it's 10/10, as this might be a one off). As more 5-star reviews come in, your confidence rises, and your best guess at the mean score goes up, but you wouldn't be fully confident of 10/10 being the mean score unless you'd had a huge number of reviews.

I suspect the site isn't showing the bottom end of a possible range, it's showing the current best estimate with limited information. This is based on the starting assumption of a mediocre mean, but gradually being updated as real data comes in. Of course, with our knowledge of AAISP, our prior assumption might be a score of 9.9 out of ten, and not five out of ten, so we'd much more quickly hit a realistic rating.

7. Thanks for raising this question - this is Joakim from Trustpilot.

42jon is right. The scoring algorithm (also known as the TrustScore) is based on Bayesian statistics. But if a company has no reviews the score is 7.0 - not 5.0 as you may assume.

The TrustScore is the wisdom of the crowd concentrated into a single number. We need enough data to soundly make bold claims about customer satisfaction. A single angry customer should be able to make his/her voice heard on Trustpilot, but not rip an entire business apart.

If you are interested, there's more info on our blog:
Review ranking

Hope this solves the riddle.

1. "...the wisdom of the crowd..."

Now there's an oxymoron if ever I've heard one!
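Putting comments 6 and 7 together: a minimal Bayesian-average sketch, with a prior mean of 7.0 (as Joakim states) and a guessed prior weight of 7 phantom reviews, happens to reproduce the scores reported above (7.7, 7.9 and 8.1 for two, three and four 5-star reviews). This is not Trustpilot's published formula, just an illustration that fits the numbers:

```python
def bayesian_average(ratings, prior_mean=7.0, prior_weight=7):
    """Shrunken mean on a 0-10 scale: behaves as if `prior_weight`
    phantom reviews at `prior_mean` were averaged in with the real
    ones. prior_mean=7.0 is taken from the Trustpilot comment above;
    prior_weight=7 is a guessed tuning constant, not a published one."""
    return (prior_mean * prior_weight + sum(ratings)) / (prior_weight + len(ratings))

# Two, three and four 10/10 reviews (i.e. 5 stars each):
print(round(bayesian_average([10, 10]), 1))          # 7.7
print(round(bayesian_average([10, 10, 10]), 1))      # 7.9
print(round(bayesian_average([10, 10, 10, 10]), 1))  # 8.1
```

With no reviews at all this returns exactly the 7.0 prior, and each extra 5-star review pulls the score a little further towards 10 - matching the "going up .2 per review" observation in the thread.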
