0 like 0 dislike
3 views
There is a site where users rate different companies.

Let K be the number of votes and N the average rating, which falls in the interval [1, 5].

I need some measure T(K, N) that is a function of the company's average rating and its number of votes.

Companies will be sorted by this indicator.

The indicator should account for the fact that the smaller the number of votes, the greater the likelihood that the rating deviates from the objective one (for simplicity, let's treat the average rating by all users on earth as objective :)).

I came up with the following formula:

T = N + (N - 3) * (K - 1) * k / Kmax, where Kmax is the maximum number of votes and k is a coefficient chosen by common sense, looking at real data.
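As a quick sketch, the formula above might look like this in code (the variable letters in the original post appear garbled in translation, so this reading of which symbol is the rating versus the vote count is my assumption):

```python
def ranking_score(N, K, Kmax, k=1.0):
    """Sketch of the ad-hoc formula from the question.

    N    -- average rating of the company, in [1, 5]
    K    -- number of votes for the company
    Kmax -- maximum number of votes over all companies
    k    -- coefficient tuned by eye on real data (assumed default 1.0)
    """
    # With K == 1 the correction term vanishes and T equals the raw average;
    # as K grows, ratings above/below the midpoint 3 are pushed further
    # from it, so heavily-voted companies dominate the sort.
    return N + (N - 3) * (K - 1) * k / Kmax
```

Note that with this shape the score can leave the [1, 5] range for large K, which is harmless for sorting but confusing for display.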

Maybe I'm reinventing the wheel and there is some mathematically justified, battle-tested formula for this kind of thing?

0 like 0 dislike
There are ready-made, working algorithms for this, for example the IMDb rating, which is based on Bayes' theorem. The formula is explained very well here: www.wowwebdesigns.com/formula.php
```
WR = (v / (v + m)) * R + (m / (v + m)) * C

R = average rating of the given object
v = number of votes for this object
m = (optional) the minimum number of votes needed to display in the top 25
C = average rating of all objects
```
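The IMDb-style weighted rating above translates directly into code; this is a minimal sketch using the variable names from the formula:

```python
def weighted_rating(R, v, m, C):
    """Bayesian weighted rating (IMDb-style).

    R -- average rating of the given object
    v -- number of votes for this object
    m -- minimum number of votes required to be listed
    C -- average rating across all objects
    """
    # With few votes (v << m) the score stays close to the global mean C;
    # as votes accumulate (v >> m) it converges to the object's own mean R.
    return (v / (v + m)) * R + (m / (v + m)) * C
```

The nice property is that a 5.0 average from 10 votes ranks below a 4.5 average from thousands of votes, because the thin sample is pulled toward the global mean.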
Also worth reading is an old article on Collective Choice: www.lifewithalacrity.com/2005/12/collective_choi.html It describes the most popular ranking systems.
0 like 0 dislike
I would use something like:
weighted average rating for the criterion * a reliability coefficient.
Reliability coefficient = the item's number of votes relative to the weighted average number of votes across all items, normalized to 1.
I.e., if an item received more votes than the weighted average number of votes, the weight of its rating increases.
The idea: the more votes, the more weight the result carries.