People are bad at estimating

09-26-2009 , 01:39 PM
Quote:
Originally Posted by VanVeen
yes, it is. it was so insightful i almost posted that it was insightful. it overlaps hugely w/my own ideas re: goals and sub-goals and defining "win-states" and so forth. absolutely essential to understanding human behavior. more.. eventually.

for now, playing to win. my ebook will be better
Interesting link. Those concepts are definitely suggestive, but I don't quite see how to apply them to behavior in general. Have to wait for the e-book I guess...
09-26-2009 , 02:42 PM
Quote:
Originally Posted by madnak
Honestly, sometimes I think running too far with the results we do have is the only viable option. Especially in a situation where everyone already does that, and not doing so is a significant handicap (in terms of ability to get published, be taken seriously as something other than an uber-nit, etc).
I think that's the appropriate approach to the test that started this whole thread, as long as we don't get too precise with it: any reasonable person would look at the results and say that, though it's hard to say exactly how big the effect is, people really do suck at evaluating their own ability to get things right, and specifically that almost all of them overestimate that ability.
09-26-2009 , 02:59 PM
Quote:
Originally Posted by Subfallen
Interesting link. Those concepts are definitely suggestive, but I don't quite see how to apply them to behavior in general. Have to wait for the e-book I guess...
Really? That was interesting? I thought it was terrible, and the comments are hilarious. VanVeen, are you this Sirlin guy?
09-26-2009 , 03:27 PM
I didn't read the comments. The interesting concept, IMO, is how often people have conscious "goals" that are disconnected from underlying emotional priorities. Zero chance that VV = Sirlin though.
09-26-2009 , 03:50 PM
My impression was that the Streetfighter article was trying to show that there is a positive correlation between winning, according to a given rule set and victory criterion, and trying to win according to that same rule set and victory criterion. Groundbreaking stuff.

Then the Akuma example, contrasted with the bugs, supports the idea that agreed-upon rule sets are a matter of social custom, and that a custom is appropriate when the author thinks it's appropriate.
09-26-2009 , 04:03 PM
Quote:
Originally Posted by Subfallen
I didn't read the comments. The interesting concept, IMO, is how often people have conscious "goals" that are disconnected from underlying emotional priorities. Zero chance that VV = Sirlin though.
I hope not.
09-26-2009 , 04:07 PM
Quote:
Originally Posted by atakdog
My impression was that the Streetfighter article was trying to show that there is a positive correlation between winning, according to a given rule set and victory criterion, and trying to win according to that same rule set and victory criterion. Groundbreaking stuff.
Whether or not this accurately summarizes Sirlin's intent, I doubt VanVeen linked the article to reference those ideas.

But obviously he'll elaborate if he wants to, so w/e.
09-26-2009 , 05:00 PM
Quote:
Originally Posted by Subfallen
Whether or not this accurately summarizes Sirlin's intent, I doubt VanVeen linked the article to reference those ideas.

But obviously he'll elaborate if he wants to, so w/e.
In case it wasn't clear: I didn't mean that was the author's actual intent, just that that's all he seemed to manage to show.
09-26-2009 , 07:31 PM
Quote:
My impression was that the Streetfighter article was trying to show that there is a positive correlation between winning, according to a given rule set and victory criterion, and trying to win according to that same rule set and victory criterion. Groundbreaking stuff.
where have i heard something like this before? oh, right, from a hypothetical scrub: "that guy didn't do anything new, so he is no good." the author did not say he'd discovered a heretofore unheard-of concept or phenomenon. what he did was restate a very general and abstract conceptual relationship and apply it to the specifics of a novel and popular domain in a clear and accessible way. i think that qualifies as praiseworthy.

as for your (mis)interpretation of the article, do you think you failed to achieve the "victory criterion" of applying what the author was trying to say to the topic of this thread because you're incapable of doing so, or was it because your intentions (your evolving goal-states) when reading the article were in conflict with conceptual cross-pollination? how would you go about answering that question? incidentally, how we should test the more general and inane conjectures of research psychologists depends on your answer to the latter.

Quote:
VanVeen, are you this Sirlin guy?
no, but sirlin is probably the smartest dude talking about games (of any kind) on the internet in a way that is accessible to the typical gamer.

Quote:
The interesting concept, IMO, is how often people have conscious "goals" that are disconnected from underlying emotional priorities
at least someone can read.

intra-psychic and interpersonal goal conflicts are ubiquitous and self-evident. how do we determine where they exist and to what degree? how do we fix conflicts between goals when our meta-goals dictate that we should? this is very far from trivial esp for those whose meta-goal (often this means long-term) is truth discovery or a domain-specific competency or the avoidance of self-defeating behavior ("utility maximization" through "goal modification and realignment" is a favorite topic of mine). it's also very far from obvious since virtually no one uses "goal-states" to frame their discussions and arguments except in the most vague and disingenuous ways.

that's why this article was so popular with blogging decision-theorists and programmers of self-modifying AI (...). it's a (very) well written introduction to an extremely useful conceptual framework.
09-26-2009 , 07:56 PM
as for the OP:

people cannot reliably and consistently map symbols to internal states without having practiced doing so. the data therefore suggest that: a) the referent of "90% confidence" is inter-subjectively correlated by default (sans feedback/practice - curious), and b) what "90% confidence" means to most test takers is very poorly correlated with the formal mathematical definition of "90% confidence". why b) is so is a question for the stamp collectors of psychological research.

i would imagine that with the right incentives and feedback (and some other things mentioned by the more rigorous posters), subjects would achieve the "victory criterion" relatively quickly. in the real world things are never so easy. the "victory criteria" of most tasks do not demand super accurate self-assessments, and they often include all sorts of criteria only relevant to maintaining internal goals (and therefore largely inaccessible to today's researchers). sorry to be a curmudgeon. i just find this sort of research ridiculously general, vague, and stupid.
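
to make the formal definition concrete: a batch of intervals offered at "90% confidence" is calibrated iff roughly 90% of them contain the true values, so a test like the OP's is really just measuring a hit rate. a quick sketch - the intervals and answers below are invented purely for illustration, not taken from the actual test:

Code:
# score a calibration test: what fraction of the subject's intervals contain the truth?
def hit_rate(intervals, truths):
    hits = sum(lo <= t <= hi for (lo, hi), t in zip(intervals, truths))
    return hits / len(truths)

# hypothetical 10-question test (intervals and true values are both made up)
intervals = [(1900, 1950), (0, 100), (5, 10), (1000, 2000), (300, 400),
             (2, 4), (50, 60), (10000, 20000), (1, 3), (700, 800)]
truths = [1912, 39, 13, 1513, 330, 6, 55, 14494, 2, 650]

print(hit_rate(intervals, truths))  # 0.7 -- a calibrated "90%" subject should land near 0.9

with feedback after each batch, i'd expect that number to drift toward 0.9 pretty quickly.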
09-26-2009 , 08:58 PM
Good points. I find it odd that you link to people like Sirlin and Roissy. You are smarter than them, as far as I can see.
09-26-2009 , 10:12 PM
Quote:
Originally Posted by madnak
Good points. I find it odd that you link to people like Sirlin and Roissy. You are smarter than them, as far as I can see.
I was a bit surprised when VV mentioned Roissy was one of his favorite blogs, but after reflecting on it for a moment decided it was perfect.
09-27-2009 , 12:46 AM
Am I the only one who found the instructions contradictory?

Quote:
Don't make your ranges too narrow or too wide, but be sure they're wide enough to give you a 90 percent chance of hitting the correct value.
In order to get a 90% chance I think my ranges had to be very wide, but I was ambiguously instructed not to make them "too wide."
09-27-2009 , 08:41 AM
I suspect that if you included the line "It doesn't matter if your estimate is so wide that it's useless" in the test intro, you'd get very different results. When people are asked for an estimate, they assume they're being asked to provide something useful, and that necessity precludes them from offering estimates which span 10 orders of magnitude.
09-27-2009 , 08:50 AM
By the way, my answer to all the questions is "plus or minus infinity, excluding any number which is divisible by 10 if rounded to the nearest integer". Did I win?
09-27-2009 , 10:39 AM
Quote:
Originally Posted by VanVeen
this is very far from trivial esp for those whose meta-goal (often this means long-term) is truth discovery or a domain-specific competency or the avoidance of self-defeating behavior...
No kidding...it's light years from trivial. I doubt there's been a single day in my conscious life that I could characterize as anything less than self-destructive.

Bleh. If you write a good book on this, you will help a LOT of people.
09-27-2009 , 01:14 PM
Quote:
Originally Posted by ChrisV
I suspect that if you included the line "It doesn't matter if your estimate is so wide that it's useless" in the test intro, you'd get very different results. When people are asked for an estimate, they assume they're being asked to provide something useful, and that necessity precludes them from offering estimates which span 10 orders of magnitude.
I don't understand why people keep saying this; extremely wide ranges, while not VERY useful, are certainly more useful than explicitly incorrect ranges. Is your point that it's better to just take a 1% shot at getting lucky? That's more useful than making a very wide range? I can't see how.

We are talking about two strategies with very small amounts of utility, but a huge range is still more useful than a narrow, incorrect range.
09-27-2009 , 02:33 PM
Quote:
Originally Posted by vhawk01
I don't understand why people keep saying this; extremely wide ranges, while not VERY useful, are certainly more useful than explicitly incorrect ranges.
Actually, I don't think you're right about this. I got the Alexander question wrong by answering 500-450 BC; however, I think this is a more useful range than that of someone who puts 1000 BC-AD 500, in terms of giving someone a concept of when Alexander lived and what impact he had on the world that developed after him.

A range of estimates is just that; it's every data point in the range. Just because a range includes the actual value doesn't change the fact that it also includes values that are orders of magnitude away, and someone using the range for a practical purpose has to use all the values in the range.
09-27-2009 , 02:42 PM
Upon further reflection, I think there could exist a formula for an estimate range's usefulness based on the average difference between each point in the range and the actual value. Ranges that include the true value have the advantage of containing points very close to it, but if the range is very large the average difference will increase accordingly. A narrow range that comes close to the exact value but misses it will still score well for usefulness.
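
Something like this, maybe (just a sketch of one way to cash that out: treat every point in the range as equally likely and take the average distance to the true value, so lower is better; 323 BC, Alexander's death, is used as the target purely for illustration):

Code:
# one possible "usefulness" penalty: the average distance |x - truth| over the range,
# treating every point in the range as equally likely. lower = more useful.
def avg_distance(lo, hi, truth):
    if truth <= lo:
        return (lo + hi) / 2 - truth
    if truth >= hi:
        return truth - (lo + hi) / 2
    # truth inside the range: average of |x - truth| for x uniform on [lo, hi]
    return ((truth - lo) ** 2 + (hi - truth) ** 2) / (2 * (hi - lo))

# BC years written as negative numbers
print(avg_distance(-500, -450, -323))   # narrow miss: 152.0
print(avg_distance(-1000, 500, -323))   # huge range that contains the truth: ~378.6

By this particular metric the narrow miss actually comes out ahead of the huge correct range, which is the point.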
09-27-2009 , 02:44 PM
Quote:
Originally Posted by JohnnyHumongous
Actually, I don't think you're right about this. I got the Alexander question wrong by answering 500-450 BC; however, I think this is a more useful range than that of someone who puts 1000 BC-AD 500, in terms of giving someone a concept of when Alexander lived and what impact he had on the world that developed after him.

A range of estimates is just that; it's every data point in the range. Just because a range includes the actual value doesn't change the fact that it also includes values that are orders of magnitude away, and someone using the range for a practical purpose has to use all the values in the range.
Ok, I suppose in specific circumstances that might be true, but I don't think it holds generally. If I want to know where a hurricane is going to hit, I'm better off knowing "somewhere on the East Coast" than I am knowing "along the SC coast" and then having it hit FL.
09-27-2009 , 08:13 PM
Quote:
Originally Posted by vhawk01
I don't understand why people keep saying this; extremely wide ranges, while not VERY useful, are certainly more useful than explicitly incorrect ranges. Is your point that it's better to just take a 1% shot at getting lucky? That's more useful than making a very wide range? I can't see how.

We are talking about two strategies with very small amounts of utility, but a huge range is still more useful than a narrow, incorrect range.
I was talking about why people do badly at this. My point was about psychology, not the utility of wide answers.

Put it this way - suppose you substituted the words "useful estimate" for "estimate" in the problem. Even though the terms of the problem remain exactly the same, do you think this would influence people to put narrower ranges down? I think the phrasing of the problem is already doing that. What people are being asked to do here is not estimate the answers, but provide limits on their own ignorance, which is not really made clear. I understand that really it's the same thing, but psychologically it isn't.

I offer this up as a competing explanation to "people are bad at estimating" and "people don't want to admit their own ignorance", neither of which I think is on the money here. If you asked me "If I asked you to give me an estimate for the volume of the Great Lakes, what are the chances you would get the order of magnitude correct?" I would have no hesitation in answering about 5-10%. I have a lot more psychological resistance to providing you with an "estimate" spanning 15 orders of magnitude, even though it amounts to the same thing.

If this were a real-world estimation problem, there would be a penalty for submitting wide ranges. Here there's no reason for me to even spend any time doing calculations to come up with an answer: rather than think about the problem, I'll just widen all my answers by 50%. It's not really intuitive, when asked to do an "estimation problem", that there are no prizes for having a narrower range.
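
For what it's worth, one standard way to build in that penalty is an interval score of the kind used to grade prediction intervals: you always pay for the width of your range, and you pay extra, scaled by the confidence level, whenever the truth falls outside it. A rough sketch (nothing like this is in the OP's test, obviously):

Code:
# interval score for a central (1 - alpha) interval; lower is better.
# width always costs you; misses cost extra in proportion to how far off you were.
def interval_score(lo, hi, truth, alpha=0.1):
    score = hi - lo
    if truth < lo:
        score += (2 / alpha) * (lo - truth)
    elif truth > hi:
        score += (2 / alpha) * (truth - hi)
    return score

# the Alexander example from earlier in the thread, scored as a 90% interval (323 BC = -323)
print(interval_score(-1000, 500, -323))   # huge but correct: 1500
print(interval_score(-500, -450, -323))   # narrow but wrong: 2590

With a rule like that in play, padding every answer by 50% stops being free, which is exactly the incentive missing from the test as given.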