Fillomino Fillia — LMI June Puzzle Test — 4/5th June · 87 posts · Page 3 of 4
Poll: Should the individual submission time for each puzzle be displayed on the score page for every participant?
This is for all LMI tests, not specific to this test.
Option | Added by | Results
Yes, it will be interesting to see. | Administrator | 17 votes (89.47%)
No, it will not be much useful | Administrator | 2 votes (10.53%)

debmohanty posted @ 2011-06-06 9:25 AM (#4727, in reply to #4726)
Country : India

Thanks for the detailed walkthrough. It is indeed more of a 'Star Battle' variant than a Fillomino variant.
Some very beautiful logic there, and I can only recommend that everyone solve the Star Fillomino first, before looking at the document.
motris posted @ 2011-06-06 9:36 AM (#4728, in reply to #4727)
Posts: 199 · Country : United States

I certainly wasted most of my time on the one non-fillomino here (the reason for Melon's point about my score looking bad after 55 minutes was that I'd taken 7 minutes to finish the classics and then 48 to knock off the two stars and the first sum with my second submission). I immediately knew how the 20-pointer would work (80 cells accounted for by the givens, with 20 stars to find), but really struggled to get the logic going my way. And even when I'd intuited the right things, I made an error or two, so it took a second copy to finish it off. Certainly a high-variance puzzle.
debmohanty posted @ 2011-06-06 10:30 AM (#4730, in reply to #4712)
Country : India

MellowMelon - 2011-06-05 10:45 PM
Something to note is that the manual override system is done by entering the person's wrong answer as an alternate correct answer, so this is why I bring up the idea of someone else giving the same answer. One person's typo, like perhaps in your case, could be another's mistake on the page.

Although I can't reveal details until after the test ends, your sample rule about "allow a single digit only to be deleted/inserted/substituted if accompanied by a promise that it was a typing error not a puzzle mistake" may result in the problems of the above paragraph for a particular puzzle in this test. There is a common wrong answer being submitted that is plausible as a typing mistake but also very likely to be an error on the page. If we follow this rule and accept one person's promise that this commonly mistaken digit was a typo, the manual override system forces us to credit every single person that made the error. Whether this is a problem with the system itself or not could be argued, although my opinion is that it's fine.

Since Palmer mentioned it, let me explain why the score page works the way it does.
Every puzzle has a perfect solution key, and it may have 0 or more alternate solution keys which the authors decide to accept. When a player claims credit for a puzzle, the authors validate the request and decide whether or not to give credit. If they decide to give points, any other player who made the same submission gets points too. That other player could have made a typo or a genuine solving mistake.

The question is why we don't just give points only to the player who claimed. After running the tests for close to 1 year, we have realized that most players don't claim points. We might see a few claims in the forum, but the authors spend a lot of time verifying each and every wrong submission. So we really can't go by who claimed and who didn't. If we are giving points to X for an imperfect submission, we must give points to Y & Z who made the same submission mistake.

Like every system, this may be debatable. If there are strong objections to how this works, or there are alternate solutions, let us know.
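The regrading behaviour described above can be sketched in a few lines. This is only an illustration of the policy, not LMI's actual code; the function name and data shapes are invented for the example.

```python
def regrade(submissions, perfect_key, accepted_alternates, points):
    """Score every player's answer against the perfect key plus any
    manually accepted alternate keys. Accepting one player's claim
    automatically credits everyone who made the same submission."""
    valid = {perfect_key} | set(accepted_alternates)
    return {player: points if answer in valid else 0
            for player, answer in submissions.items()}
```

For example, if X's claim for the wrong answer "132" is accepted as an alternate key, Y (who submitted the same string) is credited as well, whether Y's "132" was a typo or a solving mistake.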
rakesh_rai posted @ 2011-06-06 10:50 AM (#4731, in reply to #4730)
Posts: 774 · Country : India

One change which I would like to see: in all these cases where players submitted a wrong answer for whatever reason (transcription error, bad handwriting, keyboard issue, typo, etc.), they do not deserve "full points" for those puzzles. As someone mentioned earlier, it is slightly unfair to those who spent time ensuring their answer keys were correct, by double-checking for example. So, while I am in favour of giving some credit in such cases, they should get only a percentage of the points (80%, 75%, 50%, whatever seems appropriate, but not 100%).
MellowMelon posted @ 2011-06-06 11:08 AM (#4732, in reply to #4579)
Country : United States

I think I would be in favor of that. 75 or 80 sounds about right.
deu posted @ 2011-06-06 2:50 PM (#4734, in reply to #4579)
Posts: 69 · Country : Japan

Thanks for a really good competition!
I especially liked Classic 4 (I spent about 5 minutes finding where to start it) and the 3 puzzles with >10 points.
I think Even-Odd (Bottom) is a difficult puzzle, but I solved it smoothly thanks to Mathgrant's practice puzzle, which reminded me of some techniques from Yin-Yang.

This is the first monthly puzzle test which specializes in only one puzzle type.
I am interested in whether this trend will continue or not.

About partial credit: as Logic Masters Deutschland has already adopted, around 80 percent seems good.
euklid posted @ 2011-06-06 2:50 PM (#4735, in reply to #4732)
Posts: 28 · Country : Austria

80% has been used at the most recent German Logic Masters contests. This would surely be a good idea to implement for all LMI contests as well.

Stefan
debmohanty posted @ 2011-06-06 3:26 PM (#4736, in reply to #4734)
Country : India

deu - 2011-06-06 2:50 PM
This is the first monthly puzzle test which specializes in only one puzzle type.
I am interested in whether this trend will continue or not.
First of all, congratulations on so good a finish. It will be interesting to see what effect it has on LMI ratings.

So far, we've never required authors to include puzzles from different types.
It was completely Grant and Palmer's idea to present a Fillomino-based set. Credit to them, because some of the puzzles needed strategies from other puzzle types.

Whether we'll have more such contests, well, that depends upon what authors can come up with.
Nikola posted @ 2011-06-06 4:09 PM (#4737, in reply to #4736)
Posts: 103 · Country : Serbia

Applause for the authors, and congratulations to deu!

I also want to point out my favourites. These are certainly the second star puzzle and the math variants, but the best puzzle, and the hardest at the same time, was the second odd/even. A very fun and enjoyable test!

Nikola
GaS posted @ 2011-06-06 4:17 PM (#4738, in reply to #4719)
Posts: 24 · Country : Italy

ronald - 2011-06-06 2:53 AM

I never would have thought Fillomino could be so enjoyable.


Same for me, excellent puzzles for a great contest; many thanks to the authors and the organization.
I like starbattle puzzles very much, and yet I lost 25+ minutes on the difficult star puzzle without success. The first step was no problem, I checked it within 30-60 seconds, but I didn't see the second step (the four red rectangles in Mellow's walkthrough) at all… It was really a great puzzle!

As usual, I lost some points for very, very stupid errors but, indeed, my target is not the top positions and so... who cares? :-)

Waiting for Fillomino Fillia 2 :-)

GaS


Edited by GaS 2011-06-06 4:44 PM
yureklis posted @ 2011-06-06 4:45 PM (#4739, in reply to #4579)
Posts: 183 · Country : Turkey

deu - 2011-06-06 2:50 PM
This is the first monthly puzzle test which specializes in only one puzzle type.
I am interested in whether this trend will continue or not.


When I first saw Roland's (Roland Voigt) "Hochhausrätsel-Wettbewerb" (Skyscrapers and Variations, 2009) at LM Deutschland, I thought the contest idea was brilliant: all the puzzles in the test are based on one classic type, which helps solvers get better results, because there is one solid core rule from the classic type and the variant rules are easy to understand. After this contest, Nils Miehe prepared a "Rundweg-Wettbewerb" (Slitherlink and Variations) on the same web page. As I got familiar with this contest type, it started to seem better and better to me.

After WPC 2009 in Antalya, Gulce and I were thinking about a Tapa contest, but back then we didn't know what form it would take. Maybe it could contain Tapa and some variations. But after I saw Roland's contest idea, everything was clear in my mind, so we decided to make a Tapa Variations Contest based on this contest type. After making the first four TVCs we were sure that we would repeat it the next year, and we did TVC 2011 here under LMI.

Roland also made a second Hochhausrätsel-Wettbewerb, and a second Rundweg-Wettbewerb was held at LM Deutschland. I thought I should make a contest at LMD to contribute to this series. Jörg Reitze and I made a contest named "Schlangenrätselwettbewerb" (Snake and Variations); probably the second in the series will be held in August. The Voigt brothers also made a "Pentomino-Wettbewerb" in 2011.

Andrey Bogdanov has recently been making variations series at Forsmarts and Diogen; so far he has made Domino and Variations, Yin-Yang Variations and Scrabble Variations. He will probably continue these variation contests.

And finally Palmer and Grant made this beautiful Fillomino and Variations.

I am sure that other authors will follow this path, and we will see a lot of contests based on this contest type, because as a puzzle community we have a lot of classic puzzles, and we have wonderful puzzle designers all over the world.

I want to thank Roland and his followers for starting a very fun contest habit, and of course I want to thank LM Deutschland, because LMD always tries different things: contest types, concepts, applications, etc.

Best

Serkan

* LMD contests: http://www.logic-masters.de/Meisterschaften/liste.php (to see the puzzles, you need to register)
* Andrey Bogdanov contests: http://forsmarts.com/forum/viewtopic.php?id=302
* TVC 2010 series: http://oapc.wpc2009.org/archive.php
MellowMelon posted @ 2011-06-06 7:19 PM (#4743, in reply to #4739)
Country : United States

Thank you everyone for the positive comments. I didn't imagine this test would be received so well.

On the flipside to the points that Serkan brings up, throughout the whole process I had several worries about doing a themed contest like this one, especially as LMI tests gain more and more prominence. My reason for feeling this way can be summed up by the time I was going through a past WPC (2003?), came upon a Dominoes round, and thought "Ah crap... this is gonna suck". Both LMI and the UK are using these tests for rating systems, the latter for WPC qualification, and although the LMI one doesn't serve such an end, at least some stock seems to be put in it. A contest like this one will throw off the results of people who are really good or really bad at Fillomino.

That said, I think picking Fillomino was probably a good choice, as it is not so common. In discussing this point on the UK forums, drsteve brought up the point that a similar contest with Slitherlink would probably make the above problem much worse. The reasons for this are probably identical to the reasons that Sudoku tests here, which you could think of as themed puzzle tests, are considered entirely separately: there is so much emphasis on Slitherlink/Sudoku that the correlation between skill on general puzzles and skill on these types is too low. In fact Sudoku has developed into its own brand of competitions, with skill on them and skill on general puzzles separated quite a bit.

In any case, mathgrant and I had considered these issues at some point in the process, and we tried to ensure that people with strength in a certain subset of WPC skills would be able to put them to use. For people good at word fill-ins, we had Shape. If Black and White / Yin-Yang is your thing, we had Even-Odd. For arithmetic, we had Sum and perhaps Shikaku. Latin square type puzzles weren't possible though, so the Star variation was included to have long-range row/column deductions, which was the closest we felt we could get. I can say for sure that Sum and Star would not have appeared if we hadn't been considering these things. So although Serkan may have a point in saying that it's a bit easier to make a fun and enjoyable themed contest, one still ought to be careful and keep things like this in consideration.

So I think there are reasons for LMI to avoid hosting too many themed contests, and it will be better if the majority of their tests are variety like Evergreens or the Decathlon. I suppose this is a bit selfish to say since we just took up one spot in the quota, but I have reasons for believing it that I just explained. And there's also the wonderful habit of LMI to trust the authors to deliver a high quality contest without telling them what to do or not to do, so I think this will probably have to stay a guideline rather than anything enforced.

Also, for these reasons, sorry, I doubt a Fillomino-Fillia 2 is coming. A second mathgrant/MellowMelon collab is likely, but not for a long time, as I want to stick to competing for a while.
motris posted @ 2011-06-06 8:23 PM (#4744, in reply to #4743)
Posts: 199 · Country : United States

I certainly agree with an 80% standard for manually fixed solutions.

If the code/interface allows, I might even propose a more radical change to the system. When a person submits an answer, it is instantly graded and returns points. If the submission is wrong, the value is now presented as 80% of the value and the solver has to retype. If they are wrong again, it goes to 60% and so on down. In this way, solvers will know when they are "done" with a puzzle and similarly done with the whole test.

Also, if they've made a really stupid entry mistake (which won't even be fixed by manual checking), they'll have an opportunity to fix that mistake to regain credit, although they won't get full points. This kind of instant grading (with penalty) is used in some programming contests and would be interesting to try in one of these. I was certainly going to propose trying it for the next test I write, but I'd be interested to hear opinions on it now.

I know some people will say this is different from live test grading, and therefore a bad idea since they prefer the live competition format, but these tests are not live tests, and so running it more like a site like croco-puzzle, where you get instant feedback with your solution, makes sense to me (at least as a change from the ordinary). It would probably be ideal to test first on a sudoku contest where applet-solving is common and answer entry is standardized (rows/columns of 9 numbers).
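The decaying credit schedule proposed above (100% → 80% → 60% → …) can be sketched as follows. This is a hypothetical illustration of the proposal, not an implemented LMI feature; the function name and the 20%-per-attempt step are assumptions taken from the description.

```python
def remaining_credit(full_points, wrong_attempts, step_percent=20):
    """Points still available for a puzzle after a number of wrong
    submissions: each wrong attempt knocks `step_percent` off the
    remaining percentage, never going below zero."""
    percent = max(0, 100 - step_percent * wrong_attempts)
    return full_points * percent // 100
```

So a 20-point puzzle would be worth 16 points after one wrong submission and 12 after two, and a solver who is genuinely wrong simply stays at 0 until they enter the correct answer.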

Edited by motris 2011-06-06 8:30 PM
MellowMelon posted @ 2011-06-06 8:29 PM (#4745, in reply to #4744)
Country : United States

In the spirit of keeping LMI tests closer to an online version of WPC rounds (although I guess the playoffs are similar), I think that change might be a bit too much. Also, it would probably make me scared to death of clicking the Submit button at any point. I would be willing to try it for at least one test, but I'm not too confident that I would be fond of it.
mathgrant posted @ 2011-06-06 8:35 PM (#4746, in reply to #4744)
Posts: 15 · Country : United States

Despite having competed only once, I really think penalizing the solver for changing answers is a bit much. Giving partial credit instead of full credit for mistyped answers sounds fair, though.
motris posted @ 2011-06-06 8:35 PM (#4747, in reply to #4579)
Posts: 199 · Country : United States

As your score will only ever improve with this system (unless you ever get to a time where you can check all your answers, which for most solvers is rare), it's interesting that this would make you more scared. I think the degradation of scores doesn't have to be as fast (everyone can always get one free change), if that is the concern, but the goal is to get solvers who have finished puzzles, but have issues with typing, to not lose points. If they are legitimately wrong with the puzzle, they will not regain points and will stay at 0. If they are right, they will eventually enter what is intended. For solvers at all levels I think the disappointment of typos can be removed with changes in the system. I prefer to start at 80% as some "mistakes" give information to the solver, but when the current system would give most solvers 0 points (or a manual regrade) in these cases, 80% is a lot more than 0%, and this removes both the need and challenge to do manual regrading.

As a different change, I've spoken with Deb about adding an "I'm done with this test" button. Right now the clock continues to run while a few of us frantically check everything we entered. I sometimes spend a long period of time just checking work, which isn't fun, while I wait to be able to check my score. To match the live tournament structure, you should not be getting a bonus if you are still working on things, and that includes checking your work. So add in intermediate checking, or add in a "finished with test" option to start the bonus clock.

Edited by motris 2011-06-06 8:45 PM
mathgrant posted @ 2011-06-06 8:48 PM (#4748, in reply to #4747)
Posts: 15 · Country : United States

You have to remember that I have no competition experience, and thus no idea what a real-life (non-electronic) competition's supposed to feel like.

I'm all for giving partial (but not full) credit for typos, but I'm not sure how much I like being penalized for entering a wrong answer and then fixing it before the time limit, as opposed to getting it right the first time. Certainly, I don't feel like these two systems belong together (unless the penalty for a wrong answer is steeper than the penalty for a fixed answer, thus encouraging people to fix their answers).

Edited by mathgrant 2011-06-06 8:48 PM

debmohanty posted @ 2011-06-06 8:50 PM (#4749, in reply to #4579)
Country : India

mathgrant: The real question is how many players get the time and chance to change an answer once submitted. It would be only those players who finish all puzzles ahead of time. Maybe a few players double-check what they have typed, and they can still do that before they submit.
Unfortunately, I don't have any real data to share on how many times submissions have been changed for a particular puzzle by a particular player.

motris: Yes, the "I'm done" button is pending. I don't think it can be done before the next Sudoku test, but certainly before July puzzle test #1, which will be yet another Nikoli test, and I'm sure we'll see frantic submissions from some.
debmohanty posted @ 2011-06-07 10:03 AM (#4751, in reply to #4579)
Country : India

Given that there is a lot of support for 80% for obvious typos, we'll implement that right from the next test.

One question I have: should we also allow this for Sudoku tests? So far in Sudoku tests, we don't allow any manual override, as I posted here.

Regarding motris's radical suggestion: personally, I think we have to try this at least once before we know exactly what to expect.
It is not so much a technical challenge; the bigger challenge is for authors/organizers to come up with a complete list of valid alternate solution codes for each puzzle before the test starts. With a Sudoku test, that is much easier, but not necessarily so in a puzzle test. Although the answer keys are strictly defined in all tests and the LMI submission system flags answers that are not in the expected format, in every test there are many submissions which are otherwise valid except for the entered format.
I would certainly be interested to try this in motris's forthcoming test, whenever that is planned.
Administrator posted @ 2011-06-07 1:41 PM (#4752, in reply to #4751)
Country : India

There were more votes in favour of displaying the submission time for each puzzle. The score page now displays that - http://logicmastersindia.com/M201106P/score.asp
Para posted @ 2011-06-07 9:59 PM (#4760, in reply to #4751)
Posts: 315 · Country : The Netherlands

debmohanty - 2011-06-07 10:03 AM
The bigger challenge is for authors/organizers to come up complete list of valid alternate solution codes for each puzzle before the test starts.


This will be a hassle for genres like, say, battleships, where coordinates are asked for, because someone might put MA instead of AM, or enter them out of the intended order.
I don't really like the idea of giving people the chance to correct mistakes during the test time after they have submitted, though. I think people should get the chance to have their typos corrected, which is normal in a puzzle championship, but you never get the chance to completely re-solve a puzzle after submitting, unless it's in a playoff format where it's just about trying to finish all puzzles as fast as possible. I think it should just remain like any main puzzle round: you submit your answers, they get checked, and if you think your mistake should still get points, you can submit it to the judges for evaluation to see if they feel you deserve the points.
debmohanty posted @ 2011-06-07 10:45 PM (#4763, in reply to #4760)
Country : India

Para - 2011-06-07 9:59 PM

debmohanty - 2011-06-07 10:03 AM
The bigger challenge is for authors/organizers to come up complete list of valid alternate solution codes for each puzzle before the test starts.


This will be a hassle for genres like say battleships where coordinates will be asked. Because someone might put MA instead of AM, or enter them out of the intended order.
The current score page handles this already: AM or MA will be handled fine.
The problem is when someone enters A1 or M1.
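For illustration, the order-insensitive handling described here could work by normalizing each answer before comparison. This is a sketch of the idea, not LMI's actual code; the function name is invented.

```python
def normalize(answer):
    """Sort the characters of a per-row answer so that reordered
    coordinate letters compare equal: 'MA' matches 'AM'. A genuinely
    different answer like 'A1' still differs from 'AM'."""
    return "".join(sorted(answer.strip().upper()))
```

Under this scheme `normalize("MA") == normalize("AM")`, which handles transposed letters automatically, while a wrong coordinate such as "A1" still fails to match and would need a manual claim.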
mathgrant posted @ 2011-06-07 10:53 PM (#4764, in reply to #4763)
Posts: 15 · Country : United States

debmohanty - 2011-06-07 11:45 AM
Para - 2011-06-07 9:59 PM
debmohanty - 2011-06-07 10:03 AM
The bigger challenge is for authors/organizers to come up complete list of valid alternate solution codes for each puzzle before the test starts.
This will be a hassle for genres like say battleships where coordinates will be asked. Because someone might put MA instead of AM, or enter them out of the intended order.
The current score page handles this already. AM or MA will be handled fine.
The problem is when someone enters A1 or M1.

I'm tempted to use the same answer format motris used in 20/10 (contents of rows/columns).

motris posted @ 2011-06-07 10:54 PM (#4765, in reply to #4760)
Posts: 199 · Country : United States

There are certainly other battleship entry modes that work. I used rows/columns with 0 = water, N = ship size for my test, which gives a uniquely gradable string. The only common entry error was not getting the sense of N in there, so something like 1000101111 instead of 1000104444 appeared, which I accepted at the time since the information about ship connectedness was in that row.
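The acceptance rule described here, ignore the ship-size digits but keep the water/ship pattern, could be expressed as a simple reduction. This is an illustrative sketch of the grading decision, not the actual grader.

```python
def ship_mask(row_key):
    """Collapse a battleship row key (0 = water, N = ship size) to its
    water/ship pattern, ignoring sizes. Keys that differ only in
    ship-size digits, e.g. '1000101111' vs '1000104444', then match."""
    return "".join("0" if ch == "0" else "1" for ch in row_key)
```

Comparing `ship_mask` values accepts the "all ship cells marked 1" entry, since the positions (and hence connectedness) of the ships in the row are identical; a key with a ship cell in the wrong place still fails.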

My discussions with Deb on improving the "finish" experience of a test are specifically so I can run a test that more than 2-3 people can finish. Right now I think there is a bit of a hole in the solver experience when you finish very early but cannot receive results until the clock runs out. There is neither a "turn in" functionality, as exists in live tournaments to start your bonus clock, nor a partial-check functionality, as exists on all the online sites I play at, but either would improve the experience. If I'm running a test where I expect 15 solvers to finish, I wouldn't mind it feeling more like a WPC playoff, where time to finish is the only relevant measure and losing 30 seconds to a minute for turning in something wrong is an appropriate penalty. For the solvers who would finish, it is very rare to turn in a completely wrong paper, so I expect the sense of "giving another chance" is less relevant for the podium.
Para posted @ 2011-06-08 4:38 AM (#4770, in reply to #4765)
Posts: 315 · Country : The Netherlands

motris - 2011-06-07 10:54 PM

Right now I think there is a bit of a hole in the solver experience when the test ends very early but you cannot receive results until the clock runs out. There is neither a "turn in" functionality, as there exists in live tournaments to start your bonus clock, nor a partial check functionality, as exists on all the online sites I play at, but either would improve the experience.


The difference there is that with online applets, a rejected solution is definitely wrong: you'll definitely have made a mistake in solving the puzzle (even if it is like your WPC-in-Brazil mistake), so instant feedback is fine in that setting. My point is more that I think it's unfair to give the same points to someone who makes a typo in filling in the answer key but solved the puzzle correctly as to someone who makes a mistake in the puzzle and then gets to re-solve it. The solution there might be to evaluate manually whether the initial mistake was an answer-key or a solving problem, and award, for example, 80% or 50% of the points accordingly.

I agree though that it would be handy to have a finish button to check your scores quicker.
