Yes, but there are common methods. If I show someone 4x4 and they say 20, I know what the mental error probably is and can test it pretty quickly. Once I know the error I can pick from a wide array of tools to help fix it. People who use something totally different are rare enough that you won't often encounter them; there's usually only a handful of methods people use.
True, but that handful tends to get screwed hard precisely because there are no well-known methods of teaching them, and the rarity of their optimal learning style makes the chance of any developing remote.
I think overall it is more representative of a threshold between conscious and subconscious thought. There is still subconscious thought and memorization going on, because people tend to remember 4x4=16 rather than 4+4+4+4=16, which explains why removing one or two dots from a 4x4 grid would significantly slow down their answer. They can 'count' the stars on an 8x8=64 grid a lot faster than on the US flag, which is two overlapping grids: a 6x5 and a 5x4, 30+20=50. Of course, I see the flag as two overlapping rectangular grids, to be counted separately and then added; others might view it as 4 double rows of 11 stars plus half a row left over, 4x11+6=50. The first method will likely be the better one, except where the grid has a very high ratio of width to length, in which case the double-row method is probably faster.
For the record, I tend to conceive of it best as alternating lines with one less in the second, especially if I must reproduce it; then I need only remember which comes first, and my visual memory is at least good enough to say, "the long one." I just do not consciously think of them as rectangles; obviously your conceptualization as two rectangles offers the advantage of knowing from the start where to stop, though mine is probably easier to remember.
The thing is, though, with very rare exceptions most people draw a blank with any arrangement of >5 objects. I actually stumbled across that observation in one of my old Encyclopedia Americanas, in the article on "awareness," but usage of the word and understanding of what it conveys have altered so much in the fifty-odd years since that article that it may be hard to link the phenomenon with that term now. I meant to post a thread about this a year or two ago, but a search for "awareness" turned up nothing except the temptation to necro, so I guess it never went beyond a thought. Type a series of lines like:
....................
..
......
.
..........
....
........
.....
and most people will be able to instantly identify every other one, but ONLY every other one. A few people might get 6, but I doubt even 1% of people would spot 10 without counting, despite the fact that we see ten objects every time we glance at our hands. Even choosing an easily recognizable way to represent >5 objects is not easy unless we put them in multiple rows (so we can count the rows); we could use the vertices of a hexagon for 6, and add a central point for 7, but after that, well, OK, we could do the same with a heptagon, but how many people recognize even a regular heptagon without counting the sides/vertices?
We just do not do well with large numbers, which is why most people have a very difficult time conceptualizing truly huge ones. A million is a thousand thousands, a trillion is a million millions; that is about as precise as most of us can get. Subconsciously or otherwise it just does not register as anything but "a crapton," or perhaps "a crapton more than a crapton." About the only people who begin to grasp the difference between $10 million in assets and $10 billion are the ones who actually possess one or the other.
I wasn't even aware geometry included a logic section. I'm also in the bad handwriting club, but I never had remedial classes, just bad grades.
Plane geometry is SOLELY logic; what frustrated me for a long time was its utter lack of numbers. Telling a 13-year-old to do math without numbers produces exactly the blank look you would expect.
I only got remedial handwriting in sixth grade; after that I think most teachers gave me up as a lost cause. I will never forget the time in APUS History when our teacher went around telling each of us the one area she thought we should brush up on just before the test; her comment to me was basically "you know ALL of it; concentrate—hard—on writing neatly and legibly." And that was despite printing everything; I do not even bother with cursive, because it is as painful for me to write as for others to read. I got a 5, so I guess block letters were good enough.
Absolutely on the latter: the best way to master a subject is to teach it. But I only agree in part on standardization; I think it better to teach all major standards: more flexibility, fewer 'oops' mental barriers to trip over.
Well, the more standards known for teaching the same principle, the more accessible it is. And you are right that more valid approaches to the same principle reduce the prevalence of blind spots.
Though I feel there is something confused, rambling, and very much digressive about everything I wrote here. Do not feel obliged to reply point for point.
It is something of a sickness with me. I will say THIS card player does NOT think in base 13, superbase 4, even when playing. Perhaps I SHOULD, but my memory is not in good enough shape to count every card; usually I just count honors so I know what is high in each suit, and distribution so I do not lead anything CERTAIN to be trumped (or worse, give the bad guys a ruff-and-sluff). Sometimes that gets me in trouble once all the honors are gone and I cannot remember whether an 8 or 7 or whatever is good or a higher non-honor spot is still out there. One such occasion proved especially embarrassing because I had lost count of the distribution as well, which left me wondering if my heart 8 (or 7, I forget which) was high when it was not just the high heart, but the LAST heart.
From what I can tell, most people tend to think in terms of "un/somewhat/very likely," and do not go further absent the incentive you reference. In AD&D a natural 20 is a crit success and a 1 is a crit fail (or vice versa), while in GURPS a 3 or 4 is a crit success and a 17 or 18 is a crit fail*. Most people will look at that and think "makes sense; criticals are supposed to be rare, or at least uncommon," and some might even opine that the ability to produce either with two rolls rather than one makes them more common in GURPS. However, the chance of rolling a 20 (or a 1) on a d20 is a fairly respectable 5%, while the chance of rolling 17 or 18 on 3d6 is under 2%, even though there are 4 times as many ways to do it! People who are not veteran gamers (or mathematicians) seldom realize that.
Anyway, to see it written out and annotated, try the below link.
*GURPS further complicates things because a natural 17 is only a "normal" failure for skills above 15, and any natural roll 10+ below an unmodified skill is a crit success. Both incentivize buying skills past 15, which would otherwise be almost pointless since there are only 10/216 ways to roll above 15 on 3d6.
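If anyone wants to check that arithmetic rather than take my word for it, here is a quick brute-force sketch in Python (nothing edition-specific assumed, just counting equally likely outcomes):

from itertools import product

# A d20 has exactly one face each for a natural 20 and a natural 1.
print("d20: chance of a 20 (or a 1):", 1 / 20)          # 0.05, i.e. 5% per end

# Enumerate all 6^3 = 216 outcomes of 3d6.
rolls = [sum(dice) for dice in product(range(1, 7), repeat=3)]
crit = sum(1 for total in rolls if total >= 17)          # 17s and 18s
over_15 = sum(1 for total in rolls if total > 15)        # 16s, 17s, and 18s
print("3d6: rolls of 17-18:", crit, "/ 216 =", round(crit / 216, 4))        # 4/216, under 2%
print("3d6: rolls over 15:", over_15, "/ 216 =", round(over_15 / 216, 4))   # 10/216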
Ah, GURPS... the only RPG mentioned in the conversation thus far.
As it should be, since it is the only one WORTH mention. I am a bit surprised you have no further comment on cards though; bridge is far more mathematically fascinating than dice are. I only know one person with the math and card background to debate my position that the "8 ever, 9 never" finesse rule is wrong, and he refuses to take the bait. If you care to google "8 ever 9 never finesse" you should see quickly what I mean:
With 9 cards, missing the queen, playing for the drop is better than an IMMEDIATE finesse, but playing the king or ace first drops all singleton queens (on the left 1/2^4 of the time and on the right 1/2^4 of the time) while preserving the ability to THEN finesse the jack and 10 into the king (in the other 14 cases). A finesse is 50/50 by definition (the queen is either "onside" or not), so playing the ace and then finessing wins (1+1+7)/16 of the time, or 56.25%.
On the other hand, playing for the drop throughout only works when the queen is singleton (12.5%) or doubleton (37.5%), which ironically makes it "the inferior 50% chance." We can hedge a bit by noting that playing the ace immediately exposes a 4-0 onside split in time to take the finesse, but that still only brings it to parity with the delayed finesse, and even then it is the finesse, NOT the drop, that wins the queen. Essentially, it is a different way of looking at the same process first described; if one played the ace, saw the suit split 4-0 with the queen onside, but then led the king anyway to play for the impossible drop, the queen would be lost.
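To make the bookkeeping explicit, here is a minimal sketch in Python of the same simplified model used above, treating each of the four missing cards (the queen and three small ones) as an independent coin flip between the two defenders; it is not the full vacant-spaces math, just the sixteen-case count:

from itertools import product

# Treat each of the 4 missing cards as a 50/50 coin flip: 2^4 = 16 layouts.
cards = ["Q", "x", "x", "x"]
delayed_finesse = drop = drop_with_fallback = 0

for layout in product(("L", "R"), repeat=4):     # "L" = onside, in front of the J-10
    left = [c for c, side in zip(cards, layout) if side == "L"]
    right = [c for c, side in zip(cards, layout) if side == "R"]
    q_onside = "Q" in left
    holder_len = len(left) if q_onside else len(right)

    # Cash the ace, then finesse (a stiff queen on either side drops under the ace).
    if holder_len == 1 or q_onside:
        delayed_finesse += 1
    # Cash the ace and king, playing for the drop: wins only if the queen's holder has <= 2 cards.
    if holder_len <= 2:
        drop += 1
    # Play for the drop, but switch to the finesse if the right hand shows out on the ace (4-0 onside).
    if holder_len <= 2 or len(right) == 0:
        drop_with_fallback += 1

print("delayed finesse:", delayed_finesse, "/ 16")        # 9/16 = 56.25%
print("pure drop:", drop, "/ 16")                          # 8/16 = 50%
print("drop with 4-0 fallback:", drop_with_fallback, "/ 16")   # 9/16 = 56.25%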
Such has been my contention for about 20 years, but I am inclined to view it differently after looking over the following link, which weighs the probability of an opponent's holdings in the OTHER three suits against the probability in the suit missing the queen: http://www.durangobill.com/BrSplitHowTo.html
He concludes, rightly I think, that factoring in the three seemingly "irrelevant" suit distributions actually alters the all-important "percentage play." Not by much (<1.7%), but by enough to break what had been dead even: suddenly the delayed finesse only wins 56.2% of the time, less than before, while the drop works 57.9% of the time, much more. What is particularly interesting is that the difference owes to the chance of a 4-0 or 3-1 split decreasing and that of a 2-2 split increasing, even though bridge "conventional math" says, "suits missing an even number of cards most likely divide unevenly; suits missing an odd number of cards most likely divide as evenly as possible." That rule (which Pascal's Triangle quickly demonstrates) still holds (3-1 splits are still more likely than 2-2s overall), but by less than before, which makes the difference.
GENERALLY SPEAKING, the chance of a given suit distribution can be roughly calculated at the table as nCr/2^n, where n is the number of cards missing and r the number in a given opponent's hand. Once play begins, each player can always see two hands (his own and dummy's), leaving declarer to ask, "which defender has x?" and the defenders, "does declarer or my partner have x?" Thus it is a nice game for those who enjoy math, because the percentage play is set in stone from the first card led, though it is often hard to find (and is rarely assured of success; nothing infuriates me more than a low-percentage holding setting my contract, or making an opponent's, when the most probable one would do the opposite).
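As a rough check on that rule of thumb, a couple of lines of Python will print the coin-flip approximation nCr/2^n for the common cases (again, this ignores the fact that each defender holds exactly thirteen cards):

from math import comb

# Coin-flip approximation: chance a named defender holds exactly r of the n missing cards.
for n in (3, 4, 5):
    for r in range(n + 1):
        print(f"missing {n}, defender holds {r}: {comb(n, r)}/{2 ** n} = {comb(n, r) / 2 ** n:.3f}")

For n=4 this reproduces the 1/16 stiff-queen and 6/16 doubleton figures above; the standard a priori tables (2-2 about 40.7%, 3-1 about 49.7%, 4-0 about 9.6%) shade those numbers precisely because the defenders' hands are thirteen cards each, which is the sort of refinement the linked page pursues.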
The first thing they taught me was "if you can count to 13, you can play bridge;" the second was "play is easy: BIDDING is hard." In fact, many playing rules of thumb exist for those with neither the card nor the math experience to derive them. The best BIDDING rule is "always trust your partner; never save your partner," because there is nothing more annoying (or fun to watch) than two partners pushing their contract ever higher while arguing over the trump suit. "Gee, y'all make 4 Spades easily; 6 Diamonds is kinda hopeless though. Or 6 Spades. Six of anything, really. Can we play for money next time...?" The only other bidding rule I esteem as highly is "never open a four-card major," but the Grim Reaper should take care of the last four-card-major player soon (if he has not already).
Rerolls or dropping the lowest die greatly changes things; I picked up the latest edition of Talisman a few months ago and it now allows rerolls via "fate," which is game BENDING at the very least. Many confrontations I would have blithely skipped when possible before are now far less intimidating. I think many games offer that drop vs. add variety just to make min/maxing that much more difficult, but min/maxing is like that phrase Chris Berman used to love: you can't stop it, only contain it. I have to admit, though, once I found bridge, dice lost a lot of appeal. It did not stop me working out a formula to calculate the odds of rolling an x on ydz, but I am usually all about the cards when I play these days. Maybe I should try the hopelessly dated Twilight 2000 again; it used cards as RNGs a lot.
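For anyone who would rather brute-force that ydz question than re-derive a closed form, here is a minimal sketch in Python (the function name is my own, purely illustrative; enumeration is plenty fast for tabletop-sized handfuls of dice):

from itertools import product

def ways_to_roll(x, y, z):
    """Count how many of the z^y equally likely outcomes of y z-sided dice total exactly x."""
    return sum(1 for dice in product(range(1, z + 1), repeat=y) if sum(dice) == x)

hits = ways_to_roll(17, 3, 6)        # e.g. a total of 17 on 3d6
print(hits, "/", 6 ** 3, "=", round(hits / 6 ** 3, 4))   # 3/216, about 1.4%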
Last First in wotmania Chat
Slightly better than chocolate.
Love still can't be coerced.
Please Don't Eat the Newbies!
LoL. Be well, RAFOlk.