Polling

11-07-2020 , 05:36 PM
Quote:
Originally Posted by Former DJ
Winners and Losers From 2020 Election

According to this article:

https://www.msn.com/en-us/news/polit...cid=uxbndlbing

the polling industry is due for a major overhaul of their methodology.

<begin>

The polls

The polls got a battering after the 2016 election — in some ways unfairly, in my estimation. They missed in some crucial states, but overall (and nationally) they weren’t that bad, and the decisive states didn’t have much quality polling.

The polls in the 2020 election, though, have no such excuses. They missed especially badly in the Midwest (again), in Iowa, Ohio and Wisconsin. But they also missed Florida by about five points and badly missed Sen. Susan Collins’s (R-Maine) clear win. Collins trailed in virtually every poll; as of now, she not only won, but she also avoided an instant runoff by taking more than 50 percent and leading Democrat Sara Gideon by more than eight points. Texas’s presidential race and the South Carolina Senate race also weren’t nearly as close as we were led to believe.

It’s time for a reckoning when it comes to how these polls are conducted. It’s difficult when political coalitions are changing, yes. But it’s getting to the point where even leads that are outside the margin of error in many cases can’t be trusted.

All of this comes with the caveat, as in 2016, that national polls weren’t nearly so off. Biden led in them 51.8 percent to 43.4 percent, according to the final FiveThirtyEight poll average. Biden currently leads by about four points, and that’s expected to grow, especially with California always counting its votes late. The margins could also creep somewhat closer to the polls in key states, given most of the late-counted votes are friendly for Biden.

But the poll-doubters have been vindicated, to a significant degree. And any coverage in the future should reflect that increasing uncertainty.

<end>

Personal Comment/Observation

I'm lousy at math - which probably explains why I have difficulty understanding how statistics work. (All those Greek letters confuse me.) This lack of comprehension probably explains why I've never been feared at the poker table.

Be that as it may, I wonder if the real problem [with polling] is that - in really close elections with a sharply divided electorate - a poll needs a much larger sample size in order to obtain a more accurate result? Not being an expert on any of this, it just seems to me that a sample size of 1,076 registered voters (or "likely" voters) is woefully inadequate when it comes to predicting the behavior of a million (or several million) actual voters. (Is this what some commentators are referring to when they state that a certain demographic - such as Cuban-Americans in South Florida - was undersampled?)

Could this be the real problem?
HI DJ:

Concerning your last question, the answer is no. Without going into a full explanation, this is what statistical theory tells us: what seem like small sample sizes relative to the size of the population are sufficient, provided the sampling itself is done properly.

Best wishes,
Mason
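Mason's point can be sketched with the standard margin-of-error formula for a sample proportion (a textbook illustration, not anything he specifically computed): the margin depends almost entirely on the sample size n, and the finite population correction involving the population size N is negligible for any realistic electorate.

```python
import math

def margin_of_error(n, N=None, p=0.5, z=1.96):
    """95% margin of error for a sample proportion.

    n: sample size; N: population size (None = treat as infinite);
    p: assumed proportion (0.5 is the worst case).
    """
    se = math.sqrt(p * (1 - p) / n)
    if N is not None:
        # Finite population correction -- nearly 1 whenever N >> n.
        se *= math.sqrt((N - n) / (N - 1))
    return z * se

# A sample of 1,076 gives essentially the same ~3-point margin whether
# the electorate is one million or one hundred million voters:
print(round(margin_of_error(1076) * 100, 2))                 # → 2.99
print(round(margin_of_error(1076, N=1_000_000) * 100, 2))    # → 2.99
print(round(margin_of_error(1076, N=100_000_000) * 100, 2))  # → 2.99
```

This is why a properly drawn sample of about a thousand voters can, in principle, describe an electorate of millions: the population size drops out of the formula almost entirely. The catch, as the thread goes on to discuss, is the "provided the sampling itself is done properly" clause.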
11-07-2020 , 05:40 PM
Quote:
Originally Posted by rivercitybirdie
But you could ask kindergarten students to make predictions and some would do a great job

Trafalgar is biased garbage
If they're biased garbage, can you tell us where the biases appear? I bet you can't.

It's my opinion that response bias has crept into many of these polls, and remember, I do have experience doing work in survey design. But exactly how these biases creep in and to what degree would only be speculation on my part.

Mason
11-07-2020 , 06:29 PM
Quote:
Originally Posted by Mason Malmuth
If they're biased garbage, can you tell us where the biases appear? I bet you can't.

It's my opinion that response bias has crept into many of these polls, and remember, I do have experience doing work in survey design. But exactly how these biases creep in and to what degree would only be speculation on my part.

Mason
They produce consistently biased results, and unlike most pollsters, they hide their crosstabs so that people cannot see where their biases have crept in. Any outfit that is consistently biased and far more secretive than everyone else should not be trusted.
11-07-2020 , 06:37 PM
Let's update with more results coming in:

Quote:
Originally Posted by goofyballer
Need more info:

Nevada:
RCP: Biden +2
Trafalgar: Trump +1
Actual: Biden +??

Bad:

Georgia:
RCP: Trump +1
Trafalgar: Trump +5
Actual: Biden +??

Pennsylvania:
RCP: Biden +1
Trafalgar: Trump +2
Actual: Biden +??

Arizona:
RCP: Biden +1
Trafalgar: Trump +3
Actual: Biden +??
Nevada can go in the "bad" category, as it will finish about Biden +2

We know Georgia, Pennsylvania, and Arizona are all bad and now we can attach numbers to them: Biden +a hair in GA, Biden +1 in PA, Biden +1 in AZ
11-07-2020 , 06:53 PM
Quote:
Originally Posted by Mason Malmuth
If they're biased garbage, can you tell us where the biases appear? I bet you can't.

It's my opinion that response bias has crept into many of these polls, and remember, I do have experience doing work in survey design. But exactly how these biases creep in and to what degree would only be speculation on my part.

Mason
You can't either until you get their datasets and look over the questions they ask. So this is a really weird jibe.
11-07-2020 , 08:03 PM
Quote:
Originally Posted by MrWookie
They produce consistently biased results, and unlike most pollsters, they hide their crosstabs so that people cannot see where their biases have crept in. Any outfit that is consistently biased and far more secretive than everyone else should not be trusted.
They also just tell you how they are biased

Quote:
"What we’ve noticed is that these polls are predominantly missing the hidden Trump vote, what we refer to as the shy Trump voter," he said.

"There is a clear feeling among conservatives and people that are for the president that they’re not interested in sharing their opinions readily on the telephone. These people are more hesitant to participate in polls. So if you’re not compensating for this ... you’re not going to get honest answers," he added.
So they go in thinking that Trump's support is going to be understated because people don't like to freely admit they are voting for Trump. Whatever secretive method they use to get Trump supporters to admit it just ended up undercounting Biden's support by 5-7 in AZ, PA, MI and WI. Completely predictable and not super surprising.
11-07-2020 , 08:51 PM
Quote:
Originally Posted by MrWookie
They produce consistently biased results, and unlike most pollsters, they hide their crosstabs so that people cannot see where their biases have crept in. Any outfit that is consistently biased and far more secretive than everyone else should not be trusted.
And you're a scientist? But certainly not a businessman. If they're doing things better, or at least think they are, why would they release information to let their competitors catch up?

MM
11-07-2020 , 08:56 PM
Quote:
Originally Posted by Paul D
You can't either until you get their datasets and look over the questions they ask. So this is a really weird jibe.
This is correct. In fact, even if I had their data as well as their questions, I wouldn't be able to tell. That's not how population sampling works.

To accurately answer things like response variance, response bias, and conditioning, reinterviews will need to be done and this might include asking similar but somewhat different questions.

Mason
11-07-2020 , 11:36 PM
Quote:
Originally Posted by Mason Malmuth
This is correct. In fact, even if I had their data as well as their questions, I wouldn't be able to tell. That's not how population sampling works.

To accurately answer things like response variance, response bias, and conditioning, reinterviews will need to be done and this might include asking similar but somewhat different questions.

Mason
This is incorrect. If you saw questions like "Are you voting for our president or the socialists?" you would obviously concede that the poll is biased. There are more subtle and unintentional types of questions where people with proper training could determine biases. I would have to go back and look in my old statistics books to remember their specific names, as I haven't dealt with polls outside of college.

Then there's the case where you find a poll sample that's 50% Republican inside a county that's 70% registered Democrats, or vice versa. That would obviously look like a sample that wasn't random at all. You could also find errors made by whoever used the data in the first place.
11-08-2020 , 03:03 AM
Quote:
Originally Posted by Paul D
This is incorrect. If you saw questions like "Are you voting for our president or the socialists?" you would obviously concede that the poll is biased.
No. This would show that the questions have been worded in such a way that the respondents are being conditioned to produce certain answers. That's not the same as a bias.

Quote:
There are more subtle and unintentional types of questions where people with proper training could determine biases.
I don't think this is true either. Based on my experience, to determine whether a bias actually exists, a reinterview of many of the respondents would be required.

Quote:
I would have to go back and look in my old statistics books to remember their specific names, as I haven't dealt with polls outside of college.
I doubt that this is true either. Very few schools, unless there have been a lot of changes over the years, have courses in finite sampling theory. When I worked for the Census Bureau, I had to go at night to American University to take a graduate level course in finite sampling theory. My school, Va Tech, at least at that time, did not offer such a course in their statistics department.

Quote:
Then there's the case where you find a poll sample that's 50% Republican inside a county that's 70% registered Democrats, or vice versa. That would obviously look like a sample that wasn't random at all.
Not necessarily. This could mean that while the sample was random, the population from which the sample came was not the total population from which it should have come.

Quote:
You could also find errors by whoever used the data in the first place.
This might be correct depending on exactly what you mean, which isn't completely clear to me.

Mason
11-08-2020 , 11:55 AM
Quote:
Originally Posted by Mason Malmuth
Specifically, when someone goes against what everyone else is saying, they're not likely to be correct. However, in this case, the Trafalgar people have already been right when most everyone said they had it wrong. To me, that implies that more data needs to be collected before a strong opinion should be formed.
We have more data now. In 2020 it seems the Trafalgar people got it wrong when most everyone got it right, especially with respect to Biden's vote % in key swing states. Consensus was ~50% and right on the money. Trafalgar was in the 44-46% range, which is about a 20:1 shot to occur without systematic error/bias. This was the result predicted by many here and exactly what you would expect if Trafalgar got it "right" in 2016 by luck rather than something that carries over election to election like skill. Maybe you can ask that professional statistician friend of yours how to interpret this new, but wholly predictable, data.

Last edited by ecriture d'adulte; 11-08-2020 at 12:09 PM.
11-11-2020 , 09:47 PM
Quote:
Originally Posted by Mason Malmuth
This is correct. In fact, even if I had their data as well as their questions, I wouldn't be able to tell. That's not how population sampling works.

To accurately answer things like response variance, response bias, and conditioning, reinterviews will need to be done and this might include asking similar but somewhat different questions.

Mason
Remember when America rejected Trump?
11-12-2020 , 12:22 PM
Quote:
Originally Posted by Mason Malmuth
And you're a scientist? but certainly not a businessman. If they're doing things better, or at least think they are, why would they release information to let their competitors catch up?

MM
This is just an absolutely incredible misconstruing of my post. Believe me, I understand pollster business models. There are two.

1. Pollsters sell themselves as a means for getting an accurate assessment of public opinion, so that people in politics and marketing have actionable information that they can use for targeting and messaging. In order to convince people that they are accurate, they show a long track record of successes and disclose sampling and methodological information that shows that they are doing everything on the up and up and that their samples are representative. They do not have to disclose proprietary sample weighting methodologies, which they understandably would keep close to the vest.

2. Pollsters can also sell themselves as a source of telling the customer what they want to hear, regardless of the truth of the matter. These people will trumpet their successes, hide their misses, and focus only on the top line without demonstrating that their samples are indeed representative of public opinion instead of just what the customer wants to hear.

Trafalgar is clearly in category 2. There's plenty of room for opprobrium for pollsters in category 1, like Quinnipiac, who had some major whiffs, but that doesn't mean that Trafalgar is anything other than a category 2 pollster.
11-12-2020 , 12:44 PM
In addition to keeping their cross tabs secret, Trafalgar should consider keeping the rest of the polling data a trade secret to keep the competition from learning their top-flight methodologies.
11-13-2020 , 11:34 AM
Quote:
Originally Posted by Mason Malmuth
I never said that Trafalgar was my favorite polling outfit. What I said was that it appeared they were using a different methodology than most other pollsters and that might explain their differences. The election itself should determine whether their methods are superior or not.
Again, are their methods superior or not? This is almost exactly what the dishonest companies like Trafalgar do: constantly hype up how they have it correct before an election, then disappear if they were wrong. They'll make no changes and be back next cycle to do the same thing despite the huge miss.
11-13-2020 , 04:27 PM
Quote:
Originally Posted by ecriture d'adulte
Again, are their methods superior or not?
How would I know? I would have to have access not only to their methodology but also to the methodology of other pollsters, and this assumes I would even understand it.

Why do you ask such silly questions?

MM
11-13-2020 , 04:37 PM
Quote:
Originally Posted by Mason Malmuth
There have been a number of statements about the Trafalgar Group doing a poor job of predicting in 2018. So, I found this:

https://www.realclearpolitics.com/ar...in_138621.html

and it looks like Trafalgar did a good job of predicting in 2018.
Quote:
Originally Posted by Mason Malmuth
How would I know? I would have to have access not only to their methodology but also to the methodology of other pollsters, and this assumes I would even understand it.

Why do you ask such silly questions?
Heh
11-13-2020 , 04:56 PM
Quote:
Originally Posted by Mason Malmuth
How would I know? I would have to have access not only to their methodology but also to the methodology of other pollsters, and this assumes I would even understand it.

Why do you ask such silly questions?

MM
You said pretty clearly before the election

Quote:
Originally Posted by Mason Malmuth
The election itself should determine whether their[Trafalgar] methods are superior or not.
What changed, other than Trafalgar missing badly? Seems like a classic "heads I win, tails we really need to delve into the philosophical and scientific justification of coin flipping"
11-13-2020 , 08:53 PM
I really don't understand. If Trafalgar's results are worse than the top polls over 50+ pieces of data, why are you spending so much time entertaining the very slim possibility that they are a great polling firm? It's very contrarian/Davidian/long-tail of you to continue to engage in that manner.
11-14-2020 , 04:15 AM
Quote:
Originally Posted by Mason Malmuth
HI DJ:

Concerning your last question, the answer is no. Without going into a full explanation, this is what statistical theory tells us: what seem like small sample sizes relative to the size of the population are sufficient, provided the sampling itself is done properly.

Best wishes,
Mason
Don't the calculations of sample size suppose spatial homogeneity to a certain degree? Miami is blue but with very localized sharp red peaks. That variation could be missed by insufficiently fine sampling.

There's a big difference between sampling from a uniform distribution and sampling from an almost uniform distribution with highly localized concentrations of mass.
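A quick simulation can make this exchange concrete. All the numbers below are invented for illustration: a toy electorate where one side's support is heavily clustered in a few "precincts". A simple random sample drawn from the whole population remains unbiased despite the clustering; the estimate only goes systematically wrong when the sampling frame silently omits the clusters.

```python
import random

random.seed(7)

# Toy electorate (all numbers invented): 90 "blue" precincts at 40% red
# support and 10 "red" precincts at 90%, with 10,000 voters each.
precinct_rates = [0.40] * 90 + [0.90] * 10
population = [1 if random.random() < rate else 0
              for rate in precinct_rates for _ in range(10_000)]
true_share = sum(population) / len(population)  # ~0.45 overall

# Simple random sampling over the WHOLE population: the clustering does
# not bias the estimate; it only affects the variance.
estimates = [sum(random.sample(population, 1000)) / 1000 for _ in range(200)]
mean_estimate = sum(estimates) / len(estimates)

# A frame that silently omits the 10 red precincts: biased low no matter
# how large the sample gets.
blue_frame = population[:900_000]
biased_estimate = sum(random.sample(blue_frame, 1000)) / 1000
```

Under these toy assumptions, mean_estimate lands near the true 45% while biased_estimate sits near 40%: localized concentrations by themselves are not the problem, but an incomplete sampling frame is, which is consistent with Mason's "provided the sampling itself is done properly" caveat.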
11-14-2020 , 09:50 PM
Quote:
Originally Posted by ChicagoRy
I really don't understand. If Trafalgar's results are worse than the top polls over 50+ pieces of data, why are you spending so much time entertaining the very slim possibility that they are a great polling firm? It's very contrarian/Davidian/long tail discussion interest of you to continue to engage in that manner.
He's not willing to admit that Trafalgar is pretty bad despite all the obvious evidence. Before the election it was "the election will determine whether they are good" and now it's "silly question, how could I possibly determine whether or not they are good". FWIW I don't think he's trolling given his history and he actually can't reason his way out of the obvious self-made trap he fell into.
11-15-2020 , 02:31 AM
Quote:
Originally Posted by d2_e4
Joined 2005 and asking that? Is this the real cardsharkk04 or a troll account?
Fwiw I still can't tell if Mason is just trolling you guys lol
11-15-2020 , 11:43 AM
Quote:
Originally Posted by cardsharkk04
I'd also take < 28% of black support in MI for Trump.

I'd also take JJ vs AQ
In a real shocker, it looks like you would have won.

Quote:
Originally Posted by wiki
Biden was able to boost minority turnout, winning 93% of [MI] blacks.
11-15-2020 , 01:47 PM
Quote:
Originally Posted by goofyballer
Let's update with more results coming in:



Nevada can go in the "bad" category, as it will finish about Biden +2

We know Georgia, Pennsylvania, and Arizona are all bad and now we can attach numbers to them: Biden +a hair in GA, Biden +1 in PA, Biden +1 in AZ

Cherry-pick data much? When you cherry-pick you can make Jingle Bells sound like Che Gelida Manina...

Quote:
FL
RCP: Biden +0.9
Trafalgar: Trump +2
Result: Trump +3.4

WI
RCP: Biden +6.7
Trafalgar: Biden +1
Result: Biden +0.7

NC
RCP: Trump +0.2
Trafalgar: Trump +2
Result: Trump +1.3

OH
RCP: Trump +1.0
Trafalgar: Trump +5
Result: Trump +8.2

And of course what you call a whiff (PA) Trafalgar was within the MOE of their survey, so an objective person cannot possibly call it a whiff. And AZ fell a whopping 0.3% outside Trafalgar's MOE error bars.

In actuality, the only battleground state they really missed was GA. Compare that vs. other polling companies this cycle.
11-15-2020 , 02:10 PM
They were not within the margin of error in PA. They had Biden at 46 and he is at 49.9; they used a 1,000-person sample, which generates a margin of error of ~3. And for Arizona they had Biden at 46 when he really got 49.4, so they were off by 3.4.
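The arithmetic here is easy to check with the standard worst-case formula (a sketch only; the pollster's actual MOE may be computed differently, e.g. with design effects or a different confidence level):

```python
import math

def moe(n, p=0.5, z=1.96):
    # 95% margin of error, in percentage points, at the worst case p = 0.5
    return z * math.sqrt(p * (1 - p) / n) * 100

n = 1000
pa_miss = 49.9 - 46.0   # PA: Biden polled at 46, actual 49.9
az_miss = 49.4 - 46.0   # AZ: Biden polled at 46, actual 49.4

print(f"MOE for n={n}: ±{moe(n):.1f} points")  # ±3.1
print(pa_miss > moe(n))   # True -- the PA miss falls outside the MOE
print(az_miss > moe(n))   # True -- the AZ miss falls outside the MOE
```

So under the usual formula, both the PA miss (3.9 points) and the AZ miss (3.4 points) on Biden's own number fall outside a ~3.1-point margin of error for a 1,000-person sample.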