Rachel Maddow Shows Stupidity

07-28-2015 , 12:23 PM
Quote:
Originally Posted by Trolly McTrollson
If the MOE is +/-3%, there's about a 2.5% chance that the two percent guy is actually at >5% I think. I'd need to math it out a bit to get >4%.
Thanks. I think I get it.
07-28-2015 , 12:23 PM
Quote:
Originally Posted by NMcNasty
Link

Seems like she was making a clear error but was corrected by her guest towards the end of the segment (Lee Miringoff, Marist Polling director). Also, despite her error, her point still stands because candidates are polling within 1% of each other.

Paging Nate Silver...
She said "I'm not a pollster," and asked an expert. The expert corrected her and she accepted the correction. The expert also said, "I agreed with 99.03% of what you just said." I don't see the need to point and laugh.

A better opportunity to call someone stupid comes when politicians and FOX hosts say, "I'm not a scientist" and then proceed to ignore what scientists say.
07-28-2015 , 12:24 PM
Quote:
Originally Posted by Jbrochu
This is why we shouldn't chase away all the nerds and racists...
I listen to Maddow 5 days a week.

Maybe listen to Rush once a week

Maybe listen to Alex Jones once a week

Only watch O'Reilly to see if he's making fun of Jon Stewart for anything

So it's 5 Maddows to 2-3 others....

In this forum that qualifies me as a racist, at least in Unchained.

I mean, the only way to not be called loony or be insulted is to have 5 Maddows to nothing else.
07-28-2015 , 12:27 PM
That margin of error mistake is ubiquitous in politics.
07-28-2015 , 01:02 PM
Quote:
Originally Posted by Low Key
"A lesbian read a cue card that may not have been perfectly cromulent," seems to be the point of this thread.
I mean, she made a mistake and it was directly related to the point she was making. She said "I can do the math," and she actually couldn't. So I get the criticism.

I don't see how her inability to spot her mistake here means that she has a math illiteracy problem that greatly affects her analyses.
07-28-2015 , 01:14 PM
Quote:
Originally Posted by goofball
That margin of error mistake is ubiquitous in politics.
Can you explain what the mistake is? I'm still not quite getting it.
07-28-2015 , 01:16 PM
Quote:
Originally Posted by govman6767
Maybe listen to Rush once a week

Maybe listen to Alex Jones once a week
Listening: not a problem.
Nodding along enthusiastically: kind of a problem.
07-28-2015 , 01:16 PM
Quote:
Originally Posted by goofball
That margin of error mistake is ubiquitous in politics.
Can you explain what exactly the mistake is? I'm still not getting it.
07-28-2015 , 01:37 PM
I'd argue that DS's claim that this has a non "little" impact on her ability to analyze other subjects is a more egregious error than hers.
07-28-2015 , 01:45 PM
Quote:
Originally Posted by Trolly McTrollson
Can you explain what exactly the mistake is? I'm still not getting it.
The mistake is looking at polling data with a margin of error of, say, +/- 3% and thinking that means, for every candidate, that their true support level lies somewhere between x-3% and x+3%.

For example, if there are only two candidates in the race, candidate 1 has 51% and candidate 2 has 49%, and the poll has a +/- 3% error, it is fine to say that the two candidates are within the margin of error, so that candidate 2 could actually have more support than candidate 1.

Maddow was trying to apply this same logic to a larger field and to candidates in the tail of the distribution, to argue that, for example, a candidate who received 4% in a poll could actually have more real support than a candidate who polled at 7%. Her argument was that if the margin of error was +/- 3%, then the true support level for the first candidate could be as high as 7% (4% in the poll plus the maximum possible error) while the true support for the second could be as low as 4% (7% minus the maximum possible error). The problem, and the mistake that newspeople often make, is that the math is not as simple as just adding or subtracting the MOE, especially with many candidates and with candidates in the tails. You can tell this is a mistake because a candidate polling at 1% in this poll could never have a true support rate of -2%, right?
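
If anyone wants to sanity-check that, here's a quick Python sketch using the usual normal approximation for a proportion. The sample size of 1,000 is just an illustrative assumption, not from any particular poll:

Code:
# Per-candidate 95% margin of error vs. the headline "+/- 3 points".
# Assumes one poll of n = 1000 respondents (illustrative) and the usual
# normal approximation: margin = 1.96 * sqrt(p * (1 - p) / n).
from math import sqrt

n = 1000
headline = 1.96 * sqrt(0.5 * 0.5 / n)  # worst case at p = 50%, about 3.1 points

for p in (0.51, 0.07, 0.04, 0.01):
    margin = 1.96 * sqrt(p * (1 - p) / n)
    print(f"candidate at {p:.0%}: real margin +/- {margin:.1%}, "
          f"headline margin +/- {headline:.1%}")

The 1% candidate comes out at roughly +/- 0.6%, which is exactly why "1% minus 3% = -2%" can't be right.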
07-28-2015 , 01:46 PM
I saw something about this last time I watched her show but what I vaguely remember (I was also playing Hearthstone at the time) was that she was calling out Fox for not including more candidates in the debates (ie the saner ones who are polling dismally) rather than the MOE. I think this was on Friday. Was this from last night's show?
07-28-2015 , 01:48 PM
Quote:
Originally Posted by Trolly McTrollson
Can you explain what exactly the mistake is? I'm still not getting it.
There are actually two issues:

1) Standard error is a function of both p (the proportion of respondents choosing the candidate) AND N (the number of respondents). For a poll of a fixed sample size, standard error is highest at p=0.50 and falls as p approaches either 0 or 1. The rule of thumb for calculating the confidence interval uses a p=0.50 assumption, so she's overstating the standard error for the candidates polling poorly.

2) She appears to be assuming that "confidence interval" implies a uniform probability within the interval (and if she's not assuming that, it's the ubiquitous mistake I was referring to; plenty of people do assume it). In other words, when she says "Christie is polling at 4 plus or minus 2 points," that doesn't mean "Christie's true number is as likely to be 2, 4, 6, or anything else that falls within the interval." What it does mean is that Christie is 95% to be between 2 and 6, but Christie being at 4 is still the most likely outcome.

So to sum up: longshot candidates have tighter MOEs and tighter confidence intervals, AND those means/confidence intervals are articulating a normal distribution, not a uniform distribution.

So for a sample size of 1000, let's use an example of Christie 4%, Paul 3%, Cruz 2.5%.

What it doesn't mean is
Christie: ANYWHERE between 1 and 7
Paul: ANYWHERE between 0 and 6
Cruz: ANYWHERE between -0.5 and 5.5

What it does mean is that each candidate's outcome is a normal distribution (peaking at the mean and falling off toward the tails) with the following parameters:
Christie: mean of 4% with stdev of 0.63%
Paul: mean of 3% with stdev of 0.55%
Cruz: mean of 2.5% with stdev of 0.50%
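
(If anyone wants to reproduce those numbers, here's a minimal Python sketch using the standard error of a sample proportion; small rounding differences from the figures above are expected.)

Code:
# Standard error of a sample proportion: sqrt(p * (1 - p) / n), here n = 1000.
from math import sqrt

n = 1000
for name, p in (("Christie", 0.04), ("Paul", 0.03), ("Cruz", 0.025)):
    se = sqrt(p * (1 - p) / n)
    print(f"{name}: mean {p:.1%}, stdev ~{se:.2%}, "
          f"95% interval ~{p - 1.96 * se:.1%} to {p + 1.96 * se:.1%}")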


Also, this whole thing just refers to sampling error (the error from polling a subset of the population); there's also pollster bias, as well as (obviously) temporal uncertainty - Christie at 4% now doesn't say a ton about where Christie will be a year from now.

Add to all that, I don't know why FNC/CNN using national polls bugs her so much. I think anything that marginalizes Iowa/NH/SC in the primary process is a good thing.
07-28-2015 , 01:50 PM
Thanks, nit, that makes sense.
07-28-2015 , 01:59 PM
Quote:
Originally Posted by AlexM
Uhm, everyone who gives daily performances on TV is reading from cue cards. Even if they wrote the cards themselves and know what they're talking about down cold, it's still going to be easier to read from a card.

It was just an important part of my argument. She does a lot more than read cue cards.
07-28-2015 , 04:03 PM
Quote:
Originally Posted by uke_master
I'd argue that DS's claim that this has a non "little" impact on her ability to analyze other subjects is a more egregious error than hers.
It's not the actual error. It's the fact that the error is not obvious to her.
07-28-2015 , 04:28 PM
Quote:
Originally Posted by David Sklansky
It's not the actual error. It's the fact that the error is not obvious to her.
Why would it be, exactly?

If someone explained how margin of error worked to her and she still didn't understand it, then you could start to talk about her inability to analyze ****.
07-28-2015 , 04:44 PM
Quote:
Originally Posted by Autocratic
Why would it be, exactly?

If someone explained how margin of error worked to her and she still didn't understand it, then you could start to talk about her inability to analyze ****.
But she didn't ask for it to be explained before doing a whole segment on it, because it didn't feel wrong to her. So why wouldn't similar issues with a math/probability component that she misunderstands also be analyzed improperly, given that she doesn't know what she doesn't know?

Again, though, I think the same could be said for the FOX and CNN guys (except for Chris Wallace).
07-28-2015 , 05:15 PM
Only on this forum would anyone ever give a **** about this. Seriously, who cares.
07-28-2015 , 05:37 PM
Fortunately her job doesn't hinge on statistical analyses.
07-28-2015 , 05:43 PM
Seems like pollsters present uncertainties in a way that's different from most other fields. I would just assign each candidate their own standard deviation instead of having a global MOE. By oversimplifying it, they've made it easy to misinterpret.
07-28-2015 , 05:50 PM
Quote:
Originally Posted by PeteMesquite
Only on this forum would anyone ever give a **** about this. Seriously, who cares.
You cared enough to post in the thread.
07-28-2015 , 05:55 PM
Quote:
Originally Posted by Trolly McTrollson
Seems like pollsters present uncertainties in a way that's different from most other fields. I would just assign each candidate their own standard deviation instead of having a global MOE. By oversimplifying it, they've made it easy to misinterpret.
Honestly this probably wouldn't really help that much (though I agree it would be better), for the reasons goofball pointed out. Hearing "37% with a MOE of 3%" is "misleading" to most people, who read it as "He has between 34-40% of the vote." He has between 0 and 100% of the vote if that's how you are going to interpret it. Each value becomes less and less likely the further (in either direction) you get from 37%, and there isn't really anything special or non-arbitrary about plus/minus 3%, except for convention. There are parallels to using p = 0.05 uniformly in scientific significance testing. It is an honest convention, but it falls afoul of the fact that people like black/white and have a real hard time with continuous things. It would be just as illuminating, and less likely to be misleading, to simply report the n of your polling sample. It conveys essentially the same information, without the false confidence of a confidence interval and without the false interval of a confidence interval.
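
To put a number on how non-uniform that "34-40%" reading really is, here's a rough sketch. The sample size is backed out from the headline +/- 3 MOE (so it's an assumption, not a real poll), and it uses the usual loose reading of the interval as a normal distribution around the point estimate:

Code:
# How likely is the true value near 37% vs. near the edge of a "37 +/- 3" interval?
# n is backed out from the headline MOE via 0.98 / sqrt(n) = 0.03 (an assumption).
from math import sqrt
from scipy.stats import norm

p, moe = 0.37, 0.03
n = (0.98 / moe) ** 2                      # roughly 1067 respondents
se = sqrt(p * (1 - p) / n)                 # about 1.5 points at p = 37%
dist = norm(loc=p, scale=se)

print("P(within 1 point of 37%):", round(dist.cdf(0.38) - dist.cdf(0.36), 2))
print("P(in the outer bands, 34-35% or 39-40%):",
      round(dist.cdf(0.35) - dist.cdf(0.34) + dist.cdf(0.40) - dist.cdf(0.39), 2))

The middle two points of the interval carry about half the probability; the outer two points carry only around 13% combined.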
07-28-2015 , 06:10 PM
Thanks goofball
07-28-2015 , 06:55 PM
(I will try to answer some of the issues raised properly, so don't let the length discourage you again lol)

She made multiple bad errors here. I like her in general because she is intelligent, but she is also very partisan at times (generally on the correct side of things, but not always, and it is better to be more objective in order to appeal to more people with your logic). Here she screwed up the math, but her argument still has some merit: excluding the bottom 6 can be a mistake, and at the very least, who is number 10 in reality is a question that easily has over a 75% chance of not being the observed number 10, with 6 others left below (at least 3 of them relevant).



Look here
https://en.wikipedia.org/wiki/Multinomial_distribution

Basically, when you have many choices (candidates), each with probability Pi (i from 1 to k, say), all adding up to 1, then if you sample the population N times you expect candidate i to get

N*Pi as the average result, with sd = (N*Pi*(1-Pi))^(1/2).

You would then select a confidence interval to report. For example, if you wanted a 95% chance that the true probability is within the observed value +/- the error, you would be looking for a deviation (a radius around the mean, say) that has only a 5% chance of landing you outside the range of the reported average +/- the error. This is how you can find the error that corresponds to that confidence interval. At a 95% confidence interval you want a chance of only half of that 5%, i.e. 2.5% (one tail in each direction, adding up to 5%), of being higher than the average plus the error. So you are looking for a number x of standard deviations away from the average such that the chance of being below avg + x*sd is 97.5% (1 - 5%/2). This is true at 1.96 standard deviations, as can be found from cumulative probability tables or software packages.

The error therefore is 1.96*(Pi*(1-Pi)*N)^(1/2), and in relative terms (i.e. divided by the N people you sampled) it is 1.96*(Pi*(1-Pi)/N)^(1/2).

Now keep in mind, I think the polling error quoted in the news is generally defined, at some confidence interval, e.g. 95%, as the error you would have if Pi = 50% (which is clearly the maximum the above error can be). That would give 1.96/2/N^(1/2) = 0.98/N^(1/2).

This is how they arrive at the result, for example, in this link https://en.wikipedia.org/wiki/Margin_of_error in the section on different confidence intervals.

If they wanted a 99% confidence interval it would be 1.29/N^(1/2) (using 2.576 standard deviations instead of 1.96).
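
(A quick check of those z-values with the inverse normal CDF, just to confirm the 0.98 and 1.29 constants:)

Code:
# z for a two-sided interval is Phi^{-1}(1 - alpha/2); the worst-case (p = 50%)
# error is then z * 0.5 / sqrt(N), i.e. z/2 divided by sqrt(N).
from scipy.stats import norm

for conf in (0.95, 0.99):
    z = norm.ppf(1 - (1 - conf) / 2)
    print(f"{conf:.0%}: z = {z:.3f}, worst-case error = {z / 2:.2f} / sqrt(N)")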


I am not sure which polls she is talking about (or at what confidence interval, so I will assume 95%).


In that case a poll error of 4% would suggest that 0.98/N^(1/2)=4% or N=600 people asked.

If they use 5-6 different polls and take an average, though, the error will be much smaller than the 4% she used as a reference. E.g., if they used 5 polls of 600 each, you technically have N=3000, and then at a 95% confidence interval the error is 0.98/3000^(1/2)=1.8%, much smaller than the 4% of each poll.

Now when you are talking about small Pi in the tails, you clearly cannot use 4% again (or the 1.8% if many polls are added up), as David Sklansky and others noted.

You need to then go to 1.96*(Pi*(1-Pi)/N)^(1/2).

For example, if N=3000 (5 x 600 polls) and Pi=2.6%, then the error is only

1.96*(0.026*(1-0.026)/3000)^(1/2)=0.57% not 4% not even 1.8%.

So when she says 2.6% +/- 4%, she should be saying 2.60% +/- 0.57%.
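
(Here is a short sketch tying those numbers together; the 4% headline error, the 5-poll pooling and the 2.6% figure are the illustrative numbers above, not real poll data.)

Code:
# Back out the sample size from a 4-point worst-case MOE, pool 5 such polls,
# then compute the corrected 95% error for a candidate polling at 2.6%.
from math import sqrt

n_single = (0.98 / 0.04) ** 2                     # about 600 respondents per poll
n_pooled = 5 * n_single                           # about 3000 if 5 polls are pooled

worst_case_pooled = 0.98 / sqrt(n_pooled)         # about 1.8 points (p = 50% case)
p = 0.026
tail_error = 1.96 * sqrt(p * (1 - p) / n_pooled)  # about 0.57 points at 2.6%

print(f"n per poll ~ {n_single:.0f}, pooled n ~ {n_pooled:.0f}")
print(f"pooled worst-case error: +/- {worst_case_pooled:.2%}")
print(f"error for a candidate at {p:.1%}: +/- {tail_error:.2%}")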

You can then ask questions like how likely it is that the 2.2% guy is actually ahead of the 2.6% guy, making it an injustice for him to be left out of the top 10, or even generalize it to ask what the chance is that any of the other 6 are better in reality than the 10th-place cutoff guy.

That is a harder question to answer. You can answer it easily only if you know the true probabilities, in the sense of asking what the chance is that a candidate who is in reality higher will appear lower in the sample.

In general, if you have pi, pj and you want the chance that i gets more voters/samples than j, you can treat both counts as approximately normal (central limit theorem) and roughly independent (strictly, multinomial counts are slightly negatively correlated, but for such small pi the correction is tiny) and take the difference (also normal). That difference has an expected value of (pi-pj)*N, and its sd is (si^2+sj^2)^(1/2) (see addition or subtraction of normal distributions),

with si^2 = pi*(1-pi)*N, sj^2 = pj*(1-pj)*N.

So sd = ((pi*(1-pi) + pj*(1-pj))*N)^(1/2).

I mean, if you have 2.60% +/- 0.57% and 2.20% +/- 0.53%, then the difference is 0.40% +/- 0.77%.

So the chance that the 2.2% guy comes out ahead of the 2.6% guy in a sample is roughly 16%: the +/- 0.77% is a 95% margin (1.96 sd), so one sd of the difference is about 0.39%, and the observed 0.40% gap is therefore about 1 standard deviation.


Now, we do not have this here: we do not know the true probabilities; these are sample estimates out of, say, 3000 people. The reverse question is the interesting one: what is the chance that the true order is different from the observed order in the samples? I think the crude estimate will remain about the same (near 16%), but I need to check properly how to answer such a reverse question.
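
(For the forward version of the question, i.e. given true support of 2.6% vs 2.2%, how often a 3000-person sample shows them in the wrong order, here's a quick Monte Carlo sketch; reusing the observed percentages as the "true" ones is of course an assumption.)

Code:
# Forward check: if true support really is 2.6% vs 2.2%, how often does a
# 3000-respondent multinomial sample put the 2.2% candidate ahead?
# Expect something like 15-16%, matching the rough normal calculation above.
import numpy as np

rng = np.random.default_rng(0)
n, trials = 3000, 200_000
probs = [0.026, 0.022, 1 - 0.026 - 0.022]   # candidate A, candidate B, everyone else

counts = rng.multinomial(n, probs, size=trials)
reversal_rate = np.mean(counts[:, 1] > counts[:, 0])
print(f"P(2.2% candidate samples ahead of 2.6% candidate) ~ {reversal_rate:.1%}")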

Also, one can easily see from the accumulation of so many candidates near 2% that the probability that someone other than the observed 10th is the real 10th is probably well over 75%.

This is the legitimate criticism of selecting only 10 for the debates. Yes, you have to cut off somewhere, because e.g. 16 makes for a more cluttered debate than 10, which is already big.

You would likely need either much more accurate polls or a format of 3 or more debates that also includes the bottom 6 or 10, a few at a time, or maybe with some drawing based on how they scored, so that they still get the chance, if not to be in all the debates, to at least be in a few of them (say 1, 2 or 3 debates for the bottom 10 out of 3 or 4 total, keeping the top 6-7 fixed and always there).


(You can consider this an expansion, to follow the logic better, of what goofball and DS already posted earlier on the first page, to give proper credit.)

Last edited by masque de Z; 07-28-2015 at 07:20 PM.

      