Mock Test 14
amitsowani posted @ 2010-03-01 12:54 PM (#131 - in reply to #128)

Thanks for the awesome analysis. Nice way to rate the classics.

I think all the techniques you used are absolutely acceptable in competitive sudokus. Participants can identify these techniques without racking their brains too much or being tempted to resort to trial and error. :)




debmohanty posted @ 2010-03-01 2:06 PM (#132 - in reply to #10)

The Score / Results page has been updated to show the split between Round 1 and Round 2.

A couple of observations:
1. The average score in Round 1 is much higher than in Round 2, so Round 2 should probably have been longer.
2. Many players didn't attempt Round 2 (so either many players are interested only in the classics, or some players gave up after finding Round 1 a little tough).

Only Zafer completed Round 1, with 4 minutes of bonus time. A really good show from him, considering many other good players solved only 5-7 puzzles in Round 1.
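For anyone who wants to recompute this split, here is a minimal sketch in Python. The scores in it are made-up placeholders, not the actual results; a missing Round 2 score is treated as "did not attempt".

# Minimal sketch: per-round averages and the Round 2 skip count.
# The scores below are PLACEHOLDERS, not the actual results.
results = [
    # (player, round1_score, round2_score or None if not attempted)
    ("player_a", 320, 60),
    ("player_b", 250, 90),
    ("player_c", 180, None),
]

r1 = [s for _, s, _ in results]
r2 = [s for _, _, s in results if s is not None]

print("Round 1 average:", sum(r1) / len(r1))
print("Round 2 average (attempted only):", sum(r2) / len(r2) if r2 else "n/a")
print("Skipped Round 2:", len(results) - len(r2))
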
rakesh_rai posted @ 2010-03-01 7:54 PM (#136 - in reply to #132)

As already noted, scores in this mock are very low, implying that the test turned out tougher than the organisers had perhaps planned. We can perhaps infer that a format of two short 50-minute rounds works well in an offline competition but is not optimal for online contests.

Round 1: Two contestants completed all 10 puzzles, so timing-wise it was planned OK. If we use the grades from Scanraid, we arrive at the following:

Puzzle   Rating   Grade
1        119      moderate
2        126      moderate
3        101      moderate
4         82      moderate
5        106      moderate
6         84      moderate
7        154      tough
8        306      tough
9        166      tough
10       194      tough

I am not sure about the accuracy of this system, but it does imply that the 20-pointer was tougher than the 40- and 50-pointers, and that the 90-pointer was not tougher than the 70-pointer. In fact, classic 8 was attempted by the fewest contestants (another reason could be that it is difficult to reach number 8, whether you start from 1 or from 10). But overall the points distribution was good, as all the "moderate"s were worth fewer points than the "tough"s; a quick consistency check is sketched below.
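The sketch just checks that every "moderate" puzzle is worth fewer points than every "tough" one. The ratings are the ones from the table above; the per-puzzle point values are assumed placeholders (only the 20/40/50/70/90 values are mentioned in this thread), so substitute the actual ones.

# Ratings from the table above; point values are PLACEHOLDERS.
scanraid = {1: 119, 2: 126, 3: 101, 4: 82, 5: 106,
            6: 84, 7: 154, 8: 306, 9: 166, 10: 194}
grade = {n: "tough" if r >= 150 else "moderate" for n, r in scanraid.items()}

points = {1: 20, 2: 30, 3: 40, 4: 50, 5: 45,
          6: 35, 7: 60, 8: 90, 9: 70, 10: 80}   # placeholders

max_moderate = max(p for n, p in points.items() if grade[n] == "moderate")
min_tough = min(p for n, p in points.items() if grade[n] == "tough")

# Consistent if no "moderate" is worth as much as any "tough".
print("points distribution consistent:", max_moderate < min_tough)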

Round 2: The consecutive sudoku was nice, with logical steps needed to proceed. That's the only one I can comment on, since I got stuck in most of the others I tried, and I solved the diagonal the hard way. Here we probably tried to fit in more sudokus than the allotted time allowed. In 50 minutes, 5 or 6 sudokus (of the level in the mock) would have been perfect. The main regret is that most participants were unable to even try a majority of the sudokus.

Another reason for people skipping Round 2 could be the rigid "20 minute gap" constraint between the rounds. If the two rounds had been independent, we would have seen more scores in Round 2.
Ours brun posted @ 2010-03-02 7:53 PM (#137 - in reply to #10)

I had some computer problems during Round 1, so I could only try the easier puzzles, which were very pleasant to solve. I think 50 minutes was a good length for this round.

Then I tried Round 2, but made mistake after mistake, so I decided to stop playing and relax. :bleh:

I came back to the puzzles later, and my impression is that they were hard overall, with some really tough. It would have been great to have the round last about an hour and a half.

Finally, the 20-minute break between the two rounds didn't bother me.
cnarrikkattu posted @ 2010-03-03 4:54 AM (#139 - in reply to #136)

I would be a bit careful about relying on Scanraid ratings for human solving. They don't seem to correspond too well with actual testing. For example, see Nick Baxter's remarks about the USSC puzzles here (and from direct experience, the third puzzle of the third round was pretty much the toughest one of the bunch).
Ours brun posted @ 2010-03-03 7:25 PM (#141 - in reply to #136)

I agree with this; Scanraid's ratings are sometimes far from what we would expect. Of course, any software will sometimes produce ratings that seem strange to us, but Sudoku Explainer is generally considered more reliable. Anyway, I haven't tried the tough puzzles yet, so I can't speak from my own experience... One way to quantify how well a rating system tracks human difficulty is sketched below.
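The idea is a rank correlation between the solver's ratings and actual human solve times. A minimal sketch: the ratings are the Scanraid values quoted earlier in the thread, while the solve times are invented placeholders, since no timing data is given here.

# Spearman rank correlation between solver ratings and human solve times.
# Ratings are the Scanraid values quoted earlier; the median solve times
# are INVENTED placeholders for illustration.
from scipy.stats import spearmanr

ratings = [119, 126, 101, 82, 106, 84, 154, 306, 166, 194]
solve_times = [210, 240, 190, 150, 200, 160, 400, 380, 330, 420]  # seconds

rho, p = spearmanr(ratings, solve_times)
print(f"Spearman rho = {rho:.2f} (p = {p:.3f})")
# rho near 1 means the ratings order the puzzles the way humans
# experienced them; a low rho would support the caveat above.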