@ 2015-11-19 5:59 AM (#19991 - in reply to #19833)
Posts: 152 Country : United Kingdom | detuned posted @ 2015-11-19 5:59 AM

Standardising a series of puzzle contests is a very interesting problem, and one that was considered at the 2013 WPC. Assuming a range of 0-100, the median rank by raw score was given 50, and the lower half of the field was given a score between 0 and 50 in proportion to the ratio of their raw score to the median raw score. For the top half it's a similar story, except that 100 points were given to the 10th (I think) rank, with ranks 1-9 getting more than 100 points. This was done firstly to fairly reward exceptional solving whilst not penalising anyone else - the thinking being that by the time you get to 10th, it doesn't matter so much who was in the contest.

That WPC had the additional problem of calibrating those standardised points to the rest of the competition, but it seemed to more or less work out. Since then I've had thoughts on how to improve this, and I have a system which I think would work really well, which I never got around to trying beyond playing with some croco puzzle data. I'd be happy to play around with the scoring data from this series this weekend if I get a chance...
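If I've read that scheme right, it can be sketched in a few lines. Everything below is my own reconstruction of the description above - the function name, the `anchor_rank` parameter, and the linear interpolation between the median and the 10th-ranked score are assumptions, not the actual WPC 2013 implementation:

```python
import statistics

def standardise(raw_scores, anchor_rank=10):
    """Map one round's raw scores onto a common scale.

    Reconstruction of the scheme described above (assumptions, not the
    real WPC 2013 code):
      - the median raw score maps to 50 points;
      - the lower half scales linearly between 0 and 50;
      - the anchor_rank-th best raw score maps to 100 points, with the
        ranks above it extrapolated past 100, rewarding exceptional
        solving without penalising anyone else.
    """
    med = statistics.median(raw_scores)
    ranked = sorted(raw_scores, reverse=True)
    anchor = ranked[min(anchor_rank, len(ranked)) - 1]

    points = []
    for raw in raw_scores:
        if raw <= med:
            # lower half: 0 at zero raw score, 50 at the median
            points.append(50.0 * raw / med if med else 0.0)
        elif anchor <= med:
            # degenerate field (anchor at or below the median): cap at 100
            points.append(100.0)
        else:
            # upper half: linear from (median -> 50) to (anchor -> 100),
            # extrapolated beyond 100 for the ranks above the anchor
            points.append(50.0 + 50.0 * (raw - med) / (anchor - med))
    return points
```

On a toy field, the anchor rank scores exactly 100, the top solver lands above it, and the below-median half scales down towards 0, so the mapping stays monotone in the raw scores.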
@ 2015-11-19 2:01 PM (#19993 - in reply to #19990)
Posts: 774 Country : India | rakesh_rai posted @ 2015-11-19 2:01 PM

ghirsch - 2015-11-18 8:50 PM: .... In addition, this leads to another problem for the Indian results, which is that scores may be too dependent on who is taking the test rather than the difficulty (this isn't really present for the international results since there are so many top scorers). For instance, since Rohan authored this test and could not take it (it is hard to say how he would have done), the top score was lower and thus everyone's scores will be inflated. Maybe a better way to calculate scores would be to include some measure of one's ranking in the round. Or maybe include the median score in the calculation somehow.

We had the same discussions when we created the LMI Ratings ( http://logicmastersindia.com/forum/forums/thread-view.asp?tid=119 ) five years ago. I agree with the points raised by you. As a matter of fact, we already have normalized scores (NS) available for these tests, which take into consideration the score, rank and median. It would be interesting to see if the results are any different, though.
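The NS formula itself isn't spelled out in this thread, so purely as a hypothetical illustration of combining the three ingredients mentioned - score, rank and median - something like the following could be imagined. The function name, the 50/50 weighting, and both component formulas are inventions for illustration, not LMI's actual NS calculation:

```python
def normalized_score(score, rank, n_players, median):
    """Hypothetical illustration only -- NOT LMI's actual NS formula.

    Blends the raw score (anchored so the median maps to 50) with the
    finishing rank (anchored so 1st maps to 100 and last to 0), giving
    each ingredient equal weight.
    """
    score_part = 50.0 * score / median if median else 0.0
    rank_part = (100.0 * (n_players - rank) / (n_players - 1)
                 if n_players > 1 else 100.0)
    return 0.5 * score_part + 0.5 * rank_part
```

The rank component ties part of the result to relative standing in that round, which dampens the effect ghirsch describes of scores depending on whichever top solvers happened to sit the test.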
@ 2015-11-19 4:26 PM (#19994 - in reply to #19990)
Posts: 739 Country : India | vopani posted @ 2015-11-19 4:26 PM

ghirsch - 2015-11-18 8:50 PM: In addition, this leads to another problem for the Indian results, which is that scores may be too dependent on who is taking the test rather than the difficulty (this isn't really present for the international results since there are so many top scorers). For instance, since Rohan authored this test and could not take it (it is hard to say how he would have done), the top score was lower and thus everyone's scores will be inflated.

Yes, we're aware of this. In fact, the Math round is high scoring for most players and wouldn't get discarded, so the only person who doesn't benefit from this is me. But that's fine, considering these scores are only a preliminary selection for the national finals.

But standardizing a series is not easy, like Tom mentioned, and some of the ideas that have been tried out seem to have worked (with WPC 2013 as a classic example). It will be interesting to see whether these work under different circumstances or whether they are specific to a particular series or type of contest.
@ 2015-11-20 11:03 AM (#20009 - in reply to #19833)
An LMI player posted @ 2015-11-20 11:03 AM