DWBH — LMI October Puzzle Test #3 — 26th-28th October 2013
debmohanty posted @ 2013-10-27 10:12 PM (#13262 - in reply to #13260)
Country : India

achan1058 - 2013-10-27 10:00 PM
That's what I like to hear. I am surprised there weren't more of these. In fact, that one happened long before I started doing contests on LMI.
Well, none are planned at the moment either. That also means potential test authors know what they can target to write.
wicktroll posted @ 2013-10-27 11:28 PM (#13263 - in reply to #13211)
Posts: 16
Country : Hungary

 How balanced do you think the puzzle types of this test were? Perfectly balanced
 What was your opinion of the distribution of easy/hard puzzles? Just right
 What did you think about the puzzle quality of the test? Fairly Nice
 What was your opinion about the answer key extraction? Answer keys could have been better
 How did you feel about the length / time limit for this test? A bit too many puzzles
 Of the puzzles you solved/attempted, how well did the point values reflect the difficulty? Most puzzles were worth the right amount
 What was your opinion of the booklet formatting and printing? Just right


murat posted @ 2013-10-28 12:51 AM (#13264 - in reply to #13250)
Posts: 2
Country : Turkey

Why pinpoint this test as being timed poorly, when it followed the general guidelines?

To be clear, my comment was not specifically about this test. I think any test with instant grading can run longer than usual to give a fairer ranking of the average solvers. If instant grading were not available, the current duration of the test would be perfect, because making it longer would cause early finishers to risk losing a lot of bonus points to possible mistakes. However, I don't see any disadvantage to making the test longer when instant grading is available.
motris posted @ 2013-10-28 2:06 AM (#13266 - in reply to #13264)
Posts: 199
Country : United States

Just my two cents:

Team USA had a similar discussion to this at the recent WPC -- how rounds can feel much friendlier and fairer by letting more solvers finish. We agreed that an underappreciated feature of Instant Grading is that it really means you can now run any test indefinitely with some form of "Twist" scoring (here, that might be 80% after 60 minutes, 50% after 90 minutes) because solvers are entering solutions throughout the test.
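As a rough sketch (not anything LMI actually runs), that kind of time-decayed scoring could be as simple as a multiplier keyed to the submission time, using the 80%-after-60-minutes and 50%-after-90-minutes figures above purely as example breakpoints:

```python
# Hypothetical "Twist" scoring sketch; the breakpoints are the example
# figures from the post above, not an actual LMI rule.

def twist_multiplier(minutes_elapsed: float) -> float:
    """Fraction of a puzzle's points awarded at the moment it is submitted."""
    if minutes_elapsed <= 60:
        return 1.0   # full value within the nominal time limit
    if minutes_elapsed <= 90:
        return 0.8   # reduced value in the first extension window
    return 0.5       # further reduced, but the test never has to "end"

def score(points: int, minutes_elapsed: float) -> float:
    return points * twist_multiplier(minutes_elapsed)

# e.g. a 95-point puzzle submitted at the 70-minute mark:
print(score(95, 70))  # 76.0
```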

I would not have minded this test running more like the WPC Mini Marathon or LMI Puzzle Marathon tests, which seem to be the best solving format for all solvers, from beginners to experts. It certainly removes some of the frustration around puzzle selection, puzzle scoring, and round timing. Of course, rules decisions should be left to the authors and LMI organizers, but I wonder if a general trend towards "friendlier" puzzle sets and rules is one way to help grow the audience of competitors.

Edited by motris 2013-10-28 2:09 AM
Joo M.Y posted @ 2013-10-28 7:18 PM (#13270 - in reply to #13211)
Posts: 72
Country : South Korea

 How balanced do you think the puzzle types of this test were? Perfectly balanced
 What was your opinion of the distribution of easy/hard puzzles? Just right
 What did you think about the puzzle quality of the test? Very nice
 What was your opinion about the answer key extraction? Mostly perfect answer keys
 How did you feel about the length / time limit for this test? Just right
 Of the puzzles you solved/attempted, how well did the point values reflect the difficulty? Most puzzles were worth the right amount
 What was your opinion of the booklet formatting and printing? Just right


An LMI player posted @ 2013-10-28 10:18 PM (#13273 - in reply to #13211)

 How balanced do you think the puzzle types of this test were? Fairly balanced
 What was your opinion of the distribution of easy/hard puzzles? Just right
 What did you think about the puzzle quality of the test? Very nice
 What was your opinion about the answer key extraction? Mostly perfect answer keys
 How did you feel about the length / time limit for this test? A bit too many puzzles
 Of the puzzles you solved/attempted, how well did the point values reflect the difficulty? Most puzzles were worth the right amount
 What was your opinion of the booklet formatting and printing? Just right


joshuazucker posted @ 2013-10-28 10:25 PM (#13274 - in reply to #13211)
Posts: 31
Country : United States

 How balanced do you think the puzzle types of this test were? Perfectly balanced
 What was your opinion of the distribution of easy/hard puzzles? Just right
 What did you think about the puzzle quality of the test? Very nice
 What was your opinion about the answer key extraction? Mostly perfect answer keys
 How did you feel about the length / time limit for this test? Just right
 Of the puzzles you solved/attempted, how well did the point values reflect the difficulty? Most puzzles were worth the right amount
 What was your opinion of the booklet formatting and printing? Just right


Beautiful theme! Lots of the puzzles were very nice to solve, too, which I know is challenging when the theme constrains the placement of the givens so much.
esther59 posted @ 2013-10-29 4:49 AM (#13276 - in reply to #13211)
Posts: 8
Country : Switzerland

 How balanced do you think the puzzle types of this test were? Fairly balanced
 What was your opinion of the distribution of easy/hard puzzles? Just right
 What did you think about the puzzle quality of the test? Very nice
 What was your opinion about the answer key extraction? Perfect answer keys
 How did you feel about the length / time limit for this test? Way too many puzzles (too little time)
 Of the puzzles you solved/attempted, how well did the point values reflect the difficulty? Most puzzles were worth the right amount
 What was your opinion of the booklet formatting and printing? Just right


Grizix posted @ 2013-10-29 6:02 AM (#13277 - in reply to #13211)
Posts: 30
Country : France

 How balanced do you think the puzzle types of this test were? Perfectly balanced
 What was your opinion of the distribution of easy/hard puzzles? Just right
 What did you think about the puzzle quality of the test? Average
 What was your opinion about the answer key extraction? Perfect answer keys
 How did you feel about the length / time limit for this test? A bit too many puzzles
 Of the puzzles you solved/attempted, how well did the point values reflect the difficulty? Many puzzles were worth too much or too little
 What was your opinion of the booklet formatting and printing? I have a different complaint


Regarding the booklet complaint:
the grid lines were too dark for the Slitherlink, Rectangles and Fillomino; it was really hard to draw visible borders onto the printed grid.
debmohanty posted @ 2013-10-29 6:42 AM (#13278 - in reply to #13277)
Country : India

Grizix - 2013-10-29 6:02 AM

Regarding the booklet complaint:
the grid lines were too dark for the Slitherlink, Rectangles and Fillomino; it was really hard to draw visible borders onto the printed grid.

achan1058 - 2013-10-26 8:33 AM

Maybe it's just my laser printer, but the given lines are dark enough that I have trouble drawing lines on top of the given lines in the slitherlink and fillomino.
Certainly agree with that. In fact we recognized all 3 images to be a problem, but didn't get the PB corrected before the test started. I should have notified Matej earlier. Sorry about that.
chaotic_iak posted @ 2013-10-29 3:24 PM (#13279 - in reply to #13278)
Posts: 241
Country : Indonesia

Yay for Top 20 finish :D /end self bragging

It's interesting that this test has both "Way too many puzzles" and "Way too few puzzles" at the same time. (Also "Too many easy puzzles" and "Too many hard puzzles".) Hm...

Edited by chaotic_iak 2013-10-29 3:25 PM
macherlakumar posted @ 2013-10-29 6:18 PM (#13281 - in reply to #13211)
Posts: 123
Country : India

 How balanced do you think the puzzle types of this test were? Fairly balanced
 What was your opinion of the distribution of easy/hard puzzles? Just right
 What did you think about the puzzle quality of the test? Very nice
 What was your opinion about the answer key extraction? Mostly perfect answer keys
 How did you feel about the length / time limit for this test? Just right
 Of the puzzles you solved/attempted, how well did the point values reflect the difficulty? Most puzzles were worth the right amount


detuned posted @ 2013-10-29 7:57 PM (#13282 - in reply to #13250)
Posts: 152
Country : United Kingdom

prasanna16391 - 2013-10-27 7:16 AM

I still don't understand this, because it's like every test. For anyone who can't finish a test, puzzle selection matters. That's not luck. It would be luck if the points distribution was not given and someone went and attempted the Kakuro thinking it to be easy, or the Akari thinking they'd get more points for it. But it's known which are the lower pointers and the higher pointers; it's up to the solver to select the puzzles in a way that optimizes their performance. That's a part of competing. Every Monthly test, barring exceptions, is typically designed so that the top 3-4 players can finish and there are a few easy puzzles for the rest. In fact, this test has more easy puzzles than others, in my opinion. It's not like 75 minutes is an unheard-of time; in fact, the ever-successful TVCs are also 75 minutes.

The guidelines LMI gives to the authors were followed here. There were a few easy puzzles, and it was finishable for top solvers. It was never claimed that it would be otherwise. Why pinpoint this test as being timed poorly, when it followed the general guidelines?

Obviously, if the discussion is that the general guidelines themselves should be revisited, then that's a completely different matter. I'll again point to the existence of Beginners' Contests and the need to make the distinction that the Monthly tests are set based on general WPC difficulty and finishability.


I think you are missing something here, Prasanna, and that is how the points are distributed. Puzzle selection for a competition can indeed come down to luck if you have high-variance puzzles (often the case on monthly tests, as authors want to show off novel designs and tricks) or if you haven't taken into account a sufficient number of testers. When puzzle selection has a bigger impact on ranking than overall puzzle-solving skill, it can cause a lot of frustration, and it would be a mistake for anyone running a competition not to at least consider this issue.

My own personal preference when running a competition is to have more like 20 finishers, and to make sure there are no puzzles with "outlier" levels of difficulty or points which could potentially shake things up if you don't solve them, or if you attempt them and waste lots of time scoring 0.

I have to say I didn't really find that too much of an issue with this test, which I enjoyed (despite having to solve in Paint); my comments are of a more general nature.
prasanna16391 posted @ 2013-10-29 8:30 PM (#13283 - in reply to #13282)
Posts: 1780
Country : India

detuned - 2013-10-29 7:57 PM

I think you are missing something here, Prasanna, and that is how the points are distributed. Puzzle selection for a competition can indeed come down to luck if you have high-variance puzzles (often the case on monthly tests, as authors want to show off novel designs and tricks) or if you haven't taken into account a sufficient number of testers. When puzzle selection has a bigger impact on ranking than overall puzzle-solving skill, it can cause a lot of frustration, and it would be a mistake for anyone running a competition not to at least consider this issue.



Well, if the criticism was about the points distribution being off, I wouldn't have argued, because most of the time that's down to personal preference. The criticism I argued with was basically that this test was not timed properly.

Obviously it could come down to luck if the distribution is not done right. Points distribution is always an important aspect of organizing a contest. But that becomes relevant only once the choice is made by the participant. It's unfair to say there's luck involved in the selection process itself. You know you are choosing a puzzle that is "supposed to be" of high/low difficulty based on the points assigned. If it then proves otherwise, it's the points distribution that can be criticized, not the timing of the test, which pretty much followed the usual Monthly Test standards (whether the standards are ideal is an entirely different topic), and any deviation was on the easier side.
achan1058 posted @ 2013-10-29 10:13 PM (#13285 - in reply to #13279)
Posts: 80
Country : Canada

chaotic_iak - 2013-10-29 4:24 AM

Yay for Top 20 finish :D /end self bragging

It's interesting that this test has both "Way too many puzzles" and "Way too few puzzles" at the same time. (Also "Too many easy puzzles" and "Too many hard puzzles".) Hm...

It's simple, really. If you scored well on it (or even if you have the impression that you did well, as I have personally found), you will find it easy and say that the test should have had more puzzles. If you did poorly, you will say the test had too many puzzles. While more experienced contest takers will try to factor out their personal performance in their review, this process isn't perfect. The more casual contest takers aren't going to consider that at all. If they only managed to do 1/4 of the set, of course they are going to say there are way too many puzzles.

Edited by achan1058 2013-10-29 10:17 PM
greenhorn posted @ 2013-10-29 10:25 PM (#13286 - in reply to #13211)
Posts: 164
Country : Slovakia

Hi everybody,
I am really pleased that Matej fulfilled his promise to prepare a set of puzzles that would make us happy. He often tries to put smiles into his puzzles, which sometimes makes me worry, because the level of difficulty is closer to tears than to laughter.
In this set, most of the puzzles were human-friendly, even though Akari and Masyu were too easy; but in comparison with the Kakuro, some easier puzzles were needed. Despite all that, I was not able to finish the Kakuro within the original time of 60 minutes, and I was a little scared, because solving just the rest of the puzzles took me 58 minutes or so, and that does not include printing and submitting answers. That is why we discussed the time limit, and finally Deb and Matej came up with a compromise of a 75-minute length. From my view, this seemed ideal, because 10 puzzlers were able to finish and 59 puzzlers got half of the points. If we had kept the initial time limit, Thomas would have been the only person to finish, and the rest would have felt that they had participated in a badly timed sprint test.
I agree with Deb and some of you who mentioned instant grading, which allows a longer competition, but in this case I do not think that adding 30 minutes (instead of 15) would improve the image of this competition - a competition full of great and smiling puzzles, do not forget. If some of you attempted to solve the Kakuro and failed, it is not because of bad timing (remember that 10 players had no such problem), but because of overestimating your skills and bad tactics, which possibly makes the main difference between average players (like me) and the top solvers.
To finish, I want to express my admiration for all contestants, especially those who finished all the puzzles in time. I am only concerned about the low participation of Slovak puzzlers, due to the Slovak Puzzle and Sudoku Team Championships being held the same weekend. I hope you enjoyed Matej's puzzles.

Matus
jhrdina posted @ 2013-10-30 2:24 AM (#13288 - in reply to #13211)
Posts: 8
Country : Czech Republic

Very nice

compe tition
Thanks Matej
Jirka
jhrdina posted @ 2013-10-30 2:32 AM (#13289 - in reply to #13288)
Posts: 8
Country : Czech Republic

jhrdina - 2013-10-30 2:24 AM

....Very..........nice...

compe.............tition
..Thanks......Matej...
...........Jirka........



This is what I had intended :) J.
detuned posted @ 2013-10-30 5:00 PM (#13291 - in reply to #13283)
Posts: 152
Country : United Kingdom

prasanna16391 - 2013-10-29 3:30 PM
It's unfair to say there's luck involved in the selection process itself. You know you are choosing a puzzle that is "supposed to be" of high/low difficulty based on the points assigned. If it then proves otherwise, it's the points distribution that can be criticized, not the timing of the test, which pretty much followed the usual Monthly Test standards (whether the standards are ideal is an entirely different topic), and any deviation was on the easier side.


OK - let's leave the timing of tests out of this discussion. I'm interested in the claim that there is no luck involved in puzzle selection for a test which won't be finished by all but a handful.

I have to disagree with this, to some extent.

For example, what about high-variance puzzles? You haven't addressed this, despite the fact that you will almost certainly have at least one or two of these on every monthly test. Trying to grade those is an interesting topic by itself.

You have to take into account that different people solve puzzles in different ways, and high-variance puzzles can amplify the problem. Suppose you had four testers of roughly equal solving ability who test-solved a puzzle in 2 minutes, 5 minutes, 7 minutes and 8 minutes. How many points do you assign the puzzle? The median of 6 minutes seems a good choice, but this is clearly a puzzle which has some intuitive shortcuts, and if you happen to be the right kind of intuitive solver on the test and choose this puzzle, I think it's impossible to argue that you won't be receiving some pretty fortunate points.

The trouble is, if you try to adjust the points total down slightly, then you'll probably be undervaluing the puzzle for a lot of other people. Of course things can work out in exactly the same way in the other direction, which can obviously be more frustrating if you're on the wrong end of things.
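To make the calibration problem concrete, here is a minimal sketch of the kind of arithmetic a setter might do with those tester times; the 10-points-per-minute rate and the variance threshold are assumptions for illustration only, not LMI guidelines:

```python
# Hypothetical calibration sketch: turn a handful of tester times into a
# suggested point value and flag puzzles with a wide spread of times.
import statistics

def suggest_points(tester_minutes: list[float], rate: float = 10.0) -> dict:
    med = statistics.median(tester_minutes)
    spread = max(tester_minutes) - min(tester_minutes)
    return {
        "suggested_points": round(med * rate),
        "median_minutes": med,
        "spread_minutes": spread,
        # spread much larger than half the median -> "high variance" puzzle
        "high_variance": spread > 0.5 * med,
    }

# the four-tester example above: 2, 5, 7 and 8 minutes
print(suggest_points([2, 5, 7, 8]))
# {'suggested_points': 60, 'median_minutes': 6.0, 'spread_minutes': 6, 'high_variance': True}
```

Whichever value comes out of a calculation like this, a solver who happens to match the 2-minute tester is getting a bargain, which is exactly the luck being described.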
prasanna16391 posted @ 2013-10-30 9:49 PM (#13294 - in reply to #13211)
Posts: 1780
Country : India

I agree with all of this, actually. A puzzle, once chosen, can have lucky/unlucky results, depending on the solver's strengths versus the required logic and a whole lot of other factors.

BUT, again, my main point is that this has to do with points distribution, and it is something that can be noted only after the puzzles are selected and solved and turn out to give unexpected experiences. It is possible there will be some luck in the puzzle selection. It is also possible that there won't be and all the points are actually pretty accurate, at least for most individuals. The point is, as a solver, no one should go in "expecting" a puzzle to be wrongly valued.

The context here means I wasn't saying luck will never be a factor. I was arguing against it being a given that luck will come in during puzzle selection. But, I hope you agree, IF the points are assigned right for an individual, then puzzle selection for that individual becomes more about playing to their own strengths and strategies. And as a solver, one should go in expecting that to be the case.
motris posted @ 2013-10-31 12:43 AM (#13296 - in reply to #13294)
Posts: 199
Country : United States

Because all individuals solve differently, I'll argue it is impossible to assign points "right" for more than a few individuals on a given test even when we do our best as test-setters. With several test-solvers (Grandmaster Puzzles currently uses about 6-8), you can get a view of consistent puzzle times and high or low variance puzzles but your points are really just a best estimate of the mean value and some puzzles have very wide spreads and some have very small spreads even for individual solvers.

As I see tests, any loop puzzle or number placement puzzle will probably be overvalued for me, and any arithmetic puzzle will probably be undervalued. Looking at this test, for example, I took longer on the Tapa ? (over 3 minutes for 30 points) than on the Different Neighbours (2.5 minutes for 95 points) and the Indirect Yajilin (2.25 minutes for 75 points). Some of this was a mix of good and bad luck, but there is quite a difference in returns from this scoring. If I had to leave one of these puzzles unsolved at the end, would I make the right choice? On the other hand, the Kakuro took me what looks like 15 minutes for 110 points. I actually feel all four of those puzzles were in about the right score range for the general solver (the Tapa maybe wanting to be 45). But still, the points do not come close to matching my results.
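For what it's worth, the per-minute returns from those (approximate) times can be checked with a few lines; the figures below just restate the numbers already given in this post:

```python
# Points-per-minute for the four puzzles mentioned above, using the
# approximate solve times reported in this post (the Tapa time is a lower
# bound, "over 3 minutes", so its rate is really "at most 10").
puzzles = {
    "Tapa ?":               (30, 3.0),    # (points, minutes)
    "Different Neighbours": (95, 2.5),
    "Indirect Yajilin":     (75, 2.25),
    "Kakuro":               (110, 15.0),
}
for name, (points, minutes) in puzzles.items():
    print(f"{name}: {points / minutes:.1f} points/minute")
# Tapa ?: 10.0, Different Neighbours: 38.0, Indirect Yajilin: 33.3, Kakuro: 7.3
```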

Tom's point, and mine, is that score/time guidelines are useful but by their very nature imprecise things and hoping for "perfect scoring" will always be a challenge. The fairest ranking of players will always be the time it takes them to complete the entire set of tasks before them. That way solvers will all have cleared the same hurdles, the easy and hard ones, and so if you want a good ranking of 10-50 solvers you should try to get 10-50 solvers to finish the test.

That we had 10 solvers finish is very good and first and second are clear. The effects of different penalty systems will change the order of 3-5 but we have the clearest sense of the relative solving results of these solvers. And no, I am not using the final scores for this comparison. Just the raw times.
joshuazucker posted @ 2013-10-31 1:07 AM (#13297 - in reply to #13211)
Posts: 31
Country : United States

Changing the subject a lot here ... the puzzles I struggled with the most on this test were the Battleship and the Different Neighbours, and the Tents I feel I solved more by guessing/intuition than by really knowing how it was going to work; none of these were puzzles I ended up submitting during the 75 minutes (I wasted most of my last 15 on failing to solve the neighbors and on failing to count to 6 properly with the snake). Any tips on how to get started on those three puzzles? Once I got a little bit of a start on the neighbors (by checking a couple of different cases) and on the tents (by guessing), I could see how to finish the solve pretty straightforwardly. But that first step was hard/lucky to find.

Back to the original topic, the kakuro took me about the same time as the median of the other puzzles I solved -- in other words, for me it was massively overvalued. The hamle was also some relatively quick points for me. The slitherlink and the tapa took me a lot longer than their point values would suggest, in fact taking almost as long as the kakuro and the hamle took me, certainly way over half the time despite being worth less than half the points. I see, though, that other solvers spent a lot of time on the kakuro, so I'm not suggesting it was valued wrongly, just that there's some combination of strategy and luck in puzzle selection when you're not going to solve all the puzzles.

I used to be able to go into these tests figuring that I wouldn't solve even half of the puzzles, which meant I could pretty much focus on types I know I'm fast with; even if that meant I ended up not looking at a puzzle that was overvalued on average for most solvers, chances are the ones I didn't do would have turned out to be inefficient for me anyway! Now it seems like I have to plan to at least look at all the puzzles and go for it on the ones that I know how to start quickly, and the strategy/luck is more often in knowing when to give up early versus sticking with a hard puzzle to the end.

Maybe a lot of the variance (luck?) is in the design of the test -- a high-value kakuro is likely to mean some easy points for me, and gains relative to most other solvers, whereas a high-value slitherlink is likely to be a puzzle I don't even try to earn the points on unless I'm in danger of finishing the test. I agree with motris's point that the best measure of my overall solving ability on the test would be the time taken to solve all the puzzles. I may be able to get 2/3 of the points in the allotted time but that doesn't mean I would get all the points in 3/2 of the time!
greenhorn posted @ 2013-10-31 1:23 AM (#13298 - in reply to #13297)
Posts: 164
Country : Slovakia

joshuazucker - 2013-10-31 1:07 AM

Changing the subject a lot here ... the puzzles I struggled with the most on this test were the Battleship and the Different Neighbours, and the Tents I feel I solved more by guessing/intuition than by really knowing how it was going to work; none of these were puzzles I ended up submitting during the 75 minutes (I wasted most of my last 15 on failing to solve the neighbors and on failing to count to 6 properly with the snake). Any tips on how to get started on those three puzzles? Once I got a little bit of a start on the neighbors (by checking a couple of different cases) and on the tents (by guessing), I could see how to finish the solve pretty straightforwardly. But that first step was hard/lucky to find.


For example, try to solve the Different Neighbours with letters. At the end you will be able to identify which letter stands for which number.
The "eyes" in the Tents have only two possibilities - to form a "square" or a "tetromino". It is obvious that one construction should not be used twice.

Edited by greenhorn 2013-10-31 1:25 AM
prasanna16391 posted @ 2013-10-31 1:44 AM (#13299 - in reply to #13296)
Posts: 1780
Country : India

motris - 2013-10-31 12:43 AM

The fairest ranking of players will always be the time it takes them to complete the entire set of tasks before them. That way solvers will all have cleared the same hurdles, the easy and hard ones, and so if you want a good ranking of 10-50 solvers you should try to get 10-50 solvers to finish the test.



Define "good ranking". Are we then saying that, in this test for instance, the top 10 was a completely fair reflection of skill, but the rest of them were all dependent on luck? Maybe in one test if a person has a bad day due to puzzle selection, then you'd say that one performance was on luck, but that is about the same level of luck as someone solving 9 out of 10 puzzles the fastest, and then getting completely stuck on one to the extent that they can't finish. Over a bunch of tests, both of these factors get negated and generally the rankings turn out fair. Its why in the WSC/WPC the preliminary rounds usually end up being a fair reflection of skill, even though they probably have the same finishing level of top few.

In general, I disagree with this point that a bunch of solvers clearing the same set of tasks minimizes the luck factor. There's always luck involved that a person with the most skill overall still might not spot that one deduction necessary to solve one puzzle. There's also then the choice issue, where you know this test is finish-able and should be done by you, but you're stuck, so do you guess, and then you enter into more luck territory. I made this last point because its the same as a slower solver taking a chip at a high pointer even if they're not sure of finishing within time and then choosing to guess through with time running out. In the end, I just feel both scenarios even out for everyone and its just down to the individual and the relevant competitors having a good or bad day, as it is in most competitive environments.
motris posted @ 2013-10-31 3:54 AM (#13300 - in reply to #13299)
Posts: 199
Country : United States

I'm saying "good ranking" in the sense of robust to changing conditions. Let's say two different test solvers were used for the test, and new scores were applied. Would Murat's 13/15 without Akari and Slitherlink still be better than all of the solvers with 14/15 solved? What if the test ran for 70 minutes. Or 80 minutes. How one sets the rules and scores for a test solvers cannot complete can cause large swings in rank position in these ranks. Regardless of how one sets these things when a test can be finished, the top 10 are basically unaffected. That is what I mean by having a good ranking for those solvers. It doesn't matter if player A was a tester or X was the time and not Y.

To Josh's question -- I definitely found the test rewarded me for being able to sense where intuition and logic would work well. I did use letters with different neighbours to link a whole bunch of cells, but then went to intuition and using uniqueness to get the assignments quickly. The Battleship, surprising enough, was really done by logic which is unusual for the type and made it one of my favorite puzzles. There is a really neat consequence of the logic of the two "1" rows and the two columns with three available spots that each need to take "2"s. Once I caught onto that, it was a much different puzzle.

Edited by motris 2013-10-31 3:55 AM