| TVC 2012 scoring system|
|LMI Tests -> Annual Competitions||4 posts • Page 1 of 1 • 1|
|The first TVC was held on the OAPC website, and we used a scoring method based on the fixed competition time. We tried to encourage continuous participation, but the method had some flaws. In the second TVC we added a "best of 3" approach to this method, with the goal of reducing the effect of a single bad result. But the overall scoring system ended up even more flawed. |
So please share your ideas about the scoring system, so we can decide on a fair method. We will again hold four competitions, and there will be a "Tapa master" at the end. What would be the fairest scoring system for such a competition?
|Hi Serkan, |
I think we should also confirm whether the following will apply to TVC 2012:
1) Will the 4 contests be the same length, with puzzle difficulty increasing with each contest?
2) Will the point value per minute remain the same or increase across contests?
3) Will there be any bonus points for finishing early? Or will there be any optimization puzzles?
One thing that is given: all contests will be open for a minimum period of 48 hours.
Also, there will not be any monthly puzzle tests at LMI during this period (Feb and March), so I am hoping participation will not be a problem for most players.
Considering this, we can drop the "best of 3" rule that we introduced last year.
|I wrote up my comments last year after the contests, so I'll just repeat the concerns I had here: |
1) 3 out of 4 scoring is fine on its own, but not with weighting. It doesn't make sense to have solvers drop 1 of 4 tests when the tests are worth 4 different point values. You can use 3 out of 4 if all are out of 1000 normalized points, but not if they are worth 1000, 1100, 1200, and 1400. Solvers will probably drop the first test, or fall well behind others if they have to drop the last one.
2) No time bonus. Solvers finishing early, particularly very early, should receive some reward for the time saved, as is done on all LMI tests.
3) No curving. Related to the previous point: the tests were not normalized to the top solver's score, so not every test was on a 0 to 1000 scale. When you have a very easy test (without a time bonus), or a very hard test (with no one coming close to finishing), the results cannot easily be compared, certainly not by total score. But if the top solver's score becomes 1000 in every case, and the rest are normalized as LMI does on its tests, then you can compare relative performances much better.
I'm fine with using just 3 of 4 tests, of any difficulty, but the three problems above should be addressed. This means equal weight, a time bonus for finishing early, and normalization as LMI uses in general for the Puzzle/Sudoku Ratings.
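The weighting concern above can be shown with a quick sketch. The four point values below are the ones mentioned in the post; the computation simply shows the best total a solver can still reach after dropping each test in turn:

```python
# Point values of the four tests, as mentioned above (assumed for illustration).
weights = [1000, 1100, 1200, 1400]

# Best achievable total for a solver who drops each test in turn.
ceilings = [sum(weights) - w for w in weights]
print(ceilings)  # → [3700, 3600, 3500, 3300]
```

A solver who has to skip the last test caps out 400 points below one who skips the first, before any puzzles are even solved, which is the unfairness described above.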
|As mentioned in the IB, best 3 of 4 will be used to determine the final winner. |
Here are the changes since last year:
1) No weightages to individual contests
2) Bonus points will be applicable for early finishers.
3) Scores will be normalized. The top player gets 1000, and everyone else's normalized score is computed proportionally from their raw score. (Unlike LMI ratings, ranks will not play a role.)
Hope that addresses all the issues.