Hello Brickset Gurus,
Love Brickset, and thanks Huw and everyone that helps make this site so wonderful.
One thing I've been confused about many times is the rating system, which I've discovered is explained in this thread: http://www.bricksetforum.com/discussion/9009/how-are-the-review-ratings-calculated. Basically, a pretend 2.5 rating is mixed in with the real reviews when the average rating is calculated. A benefit of this approach is that the Top Rated query avoids treating a single 5 star review the same as fifty 5 star reviews. I understand the motivation for not letting models with only a handful of reviews overtake, say, Black Seas Barracuda's 4.89 (weighted) average, and the weighted average is a handy way to treat fifty 5 star reviews as better than forty.
With the current model, a single 5 star review gives you a weighted average of 3.75, which will display 4 stars and show 3.75 on hover (or 3.75 on the detail page for the model). A single 4 star review gives you a weighted average of 3.25, which will display 3 stars and show 3.25 on hover (and detail page).
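If I've understood the forum thread correctly, the calculation works out to something like the following sketch (the function and constant names are mine, not Brickset's, and the single-phantom-review assumption is my reading of the numbers above):

```python
PHANTOM_RATING = 2.5  # the pretend review mixed into every average
PHANTOM_COUNT = 1     # assumption: exactly one phantom review is added

def weighted_average(ratings):
    """Average of the real star ratings plus one phantom 2.5 rating."""
    total = sum(ratings) + PHANTOM_RATING * PHANTOM_COUNT
    return total / (len(ratings) + PHANTOM_COUNT)

print(weighted_average([5]))       # 3.75 -- a lone 5 star review
print(weighted_average([4]))       # 3.25 -- a lone 4 star review
print(weighted_average([5] * 50))  # ~4.95 -- fifty 5 star reviews
```

This reproduces the 3.75 and 3.25 figures above, and also shows why the phantom rating barely matters once a set has dozens of reviews.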
I'm concerned with the false impression this weighted average presents. Consider:
* As I write this, there is a single 2013 Creator model with more than 5 reviews, and most that have reviews have only 1 or 2. No 2014 Creator model has 5 reviews, and most that have any reviews have just 1.
* It's when a model is first released (or otherwise still in stores) that ratings are most sought after. At least that's my impression for the majority of Lego consumers (who aren't buying old sets on Ebay), though certainly not all Brickset users fit this model. And that's precisely when the weighted average is least accurate and gives a false impression of what the reviewers actually thought of a model.
I often see a model that appeals to me with a 3 star rating (3.25 on hover) and think to myself that the reviewer must not have liked it. Sometimes that causes me to skip over a model. Other times I'll click through to the review and find glowing feedback, just not quite good enough to warrant 5 stars. A 4 star review is very different from a 3 star review, and seeing so many one-review models incorrectly showing 3 stars is, I think, concerning.
I also think I must not be the only person who, before coming across the forum post, just assumed Brickset's rating system was broken. If I see a rating of 3.75 I think "OK, probably three 4's and one 3". But that's what a single 5 star rating gives you at Brickset. On a number of occasions I've tried to work out the math and given up, just assuming the rating was not to be trusted.
Are there other options that allow the Top Rated query (and related queries) to work OK that don't result in an incorrect average rating? Such as:
1) Require models to have at least X reviews to be included in Top Rated, maybe X = 10 (which the top 25 all have), and then sort by the real average. That would work for Top Rated on its own, but not when combined with other query parameters.
2) Separately store (or compute) the real average and the weighted average, displaying the real average when I view a particular model, but using the weighted average to determine items to display (and in what order) in the Top Rated query.
I also considered making the Top Rated query first sort by real average and secondarily sort by number of reviews, but a problem there is a single 5 star review would be enough to bump Barracuda.
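To make option 2 concrete, here's a rough sketch of what I mean by keeping both numbers around: show users the real average, but rank by the weighted one. All the names and the sample review counts are illustrative, not Brickset's actual data or code:

```python
PHANTOM_RATING = 2.5  # assumed: one phantom 2.5 review, per the forum thread

def real_average(ratings):
    """Plain mean of the actual reviews -- what users would see displayed."""
    return sum(ratings) / len(ratings) if ratings else None

def weighted_average(ratings):
    """Mean including the phantom review -- used only for ranking."""
    return (sum(ratings) + PHANTOM_RATING) / (len(ratings) + 1)

# Display: the detail page shows the real average.
print(real_average([5]))  # 5.0, not a misleading 3.75

# Ranking: Top Rated sorts by the weighted average, so a single
# 5 star review still can't leapfrog a well-reviewed classic.
sets = {"brand new set": [5], "Black Seas Barracuda": [5] * 40 + [4] * 10}
top_rated = sorted(sets, key=lambda s: weighted_average(sets[s]), reverse=True)
print(top_rated)  # Barracuda first, despite the new set's perfect real average
```

The point is just that the two averages serve different purposes, so there's no need for one number to do both jobs.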
I'm not sure what the right solution is, but I did want to raise (as some past posters have) my concerns with the current system, which can cause users to a) lose trust in Brickset (the rating system appears broken because the math doesn't add up), or b) skip models that are new or have few reviews based on the incorrect assumption that they were rated lower than they actually were.