Polling

11-05-2020 , 05:38 PM
Nate Silver doesn't predict states. He takes the data (polls) and gives you a range of possibilities/chances of winning each state.

Yes, it looks great when all the 51%+ states on their model happen to realize, but that doesn't necessarily make the model more accurate than if a few 51%+ don't realize.

He said the polls would have to historically be off in order for Trump to win, but given that he measured the chances of polls being that far off at around 9%, that's pretty impressive to me (and we're not even in his bottom 9% predictions, we're more in his 10-25% range it seems). How do you predict that something that has never been off by nearly this much is going to be off around 9% of the time? The answer is that you know how wide the results can diverge from polling and you model it as such.

People overhyped him for "getting every state right" 8 years ago or whatever, so he got too much credit then, and he'll get buried too much during this election too, but to my eyes he seems to be pretty good at election modeling.
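For what it's worth, here's a minimal sketch in Python (made-up lead and error distribution, not 538's actual parameters) of what "model how wide the results can diverge from polling" looks like in practice: feed a polling lead plus a fat-tailed error distribution into a simulation and read the tail probabilities off the output.

Code:
# Toy sketch, NOT Silver's actual model: the lead, the scale and the
# degrees of freedom below are all made-up illustrative numbers.
import numpy as np

rng = np.random.default_rng(0)

polling_lead = 8.0      # hypothetical national polling lead, in points
n_sims = 100_000

# Assume the true polling error is fat-tailed rather than normal.
error = 4.0 * rng.standard_t(df=5, size=n_sims)
simulated_margin = polling_lead + error

win_prob = np.mean(simulated_margin > 0)
historic_miss = np.mean(error < -polling_lead)   # an error big enough to flip the race

print(f"P(polling leader wins):             {win_prob:.2f}")
print(f"P(error larger than the full lead): {historic_miss:.2f}")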
11-05-2020 , 05:47 PM
Yeah and he got Florida "wrong". But if he doesn't miss Florida 30% of the time his model is definitely wrong because those are the odds he gave Trump.
11-05-2020 , 05:58 PM
If I say there is a 99% chance it will rain tomorrow and it doesn't, was I wrong?
11-05-2020 , 06:22 PM
Quote:
Originally Posted by d2_e4
If I say there is a 99% chance it will rain tomorrow and it doesn't, was I wrong?
It depends on where you live.
11-05-2020 , 06:40 PM
Quote:
Originally Posted by d2_e4
If I say there is a 99% chance it will rain tomorrow and it doesn't, was I wrong?
Not necessarily, because you avoided making a prediction. Once you go from stating chances to making predictions, interesting and horrible things happen.

For example, if you always predicted tomorrow's weather by copying today's weather into the forecast, you would (in most parts of the world) have a model that makes pretty good predictions. It would also be useless compared to many models with a worse hit rate but better prediction of changes.
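A quick illustration of that, with simulated data (the 20% change rate is just a number picked for the example):

Code:
# Persistence baseline: forecast tomorrow's weather by copying today's.
# It scores a high hit rate yet never predicts a single change.
import numpy as np

rng = np.random.default_rng(1)

days = 1000
flips = rng.random(days) < 0.2          # the weather changes on ~20% of days
weather = np.cumsum(flips) % 2          # 0 = dry, 1 = rain

today, tomorrow = weather[:-1], weather[1:]
persistence = today                     # the "copy today" forecast

accuracy = np.mean(persistence == tomorrow)
changed = tomorrow != today
caught = np.mean(persistence[changed] == tomorrow[changed])   # 0 by construction

print(f"Overall hit rate:            {accuracy:.0%}")   # roughly 80%
print(f"Changes correctly predicted: {caught:.0%}")     # 0%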
11-05-2020 , 06:52 PM
Quote:
Originally Posted by tame_deuces
Not necessarily, because you avoided making a prediction. Once you go from stating chances to making predictions, interesting and horrible things happen.

For example, if you always predicted tomorrow's weather by copying today's weather into the forecast, you would (in most parts of the world) have a model that makes pretty good predictions. It would also be useless compared to many models with a worse hit rate but better prediction of changes.
That's kind of my point. If the methodology is sound and predicts a small chance of an event occurring and that event occurs (or vice versa), we can't really say it's a bad prediction, especially with an event where the sample size is 1 trial every 4 years. To make any judgement of whether the prediction was "good" or "bad" one would have to review and understand the methodology behind it, as the actual results are basically irrelevant.
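A small simulated sketch of that "sample size of 1" problem: with one event, a proper score like the Brier score is mostly luck, and it takes many forecasts before a calibrated forecaster reliably separates from an overconfident one (every number below is made up).

Code:
# Brier score = mean squared difference between forecast probability and outcome
# (lower is better). One event tells you almost nothing; many events reveal calibration.
import numpy as np

rng = np.random.default_rng(2)

n = 500
true_probs = rng.uniform(0.05, 0.95, size=n)            # hypothetical true chances
outcomes = (rng.random(n) < true_probs).astype(float)   # what actually happened

calibrated = true_probs                                  # knows the true chances
overconfident = np.where(true_probs > 0.5, 0.99, 0.01)  # rounds everything to near-certainty

def brier(forecast, outcome):
    return np.mean((forecast - outcome) ** 2)

print("1 event:    calibrated", round(brier(calibrated[:1], outcomes[:1]), 3),
      "vs overconfident", round(brier(overconfident[:1], outcomes[:1]), 3))
print(f"{n} events: calibrated", round(brier(calibrated, outcomes), 3),
      "vs overconfident", round(brier(overconfident, outcomes), 3))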
11-05-2020 , 06:57 PM
I agree with the poster that Nate Silver got way too much positive hype 8 years ago, and he's tended to get too much grief more recently.
11-05-2020 , 08:10 PM
Quote:
Originally Posted by Trolly McTrollson
Looks like Trafalgar’s predicted 44% of African-Americans voting Trump was utter nonsense.
Didn't this come up before and they never said this number?

MM
11-05-2020 , 08:16 PM
Quote:
Originally Posted by Mason Malmuth
Didn't this come up before and they never said this number?

MM
they have lots of crazy cross-polls (word?)...... whether the quoted number is accurate or not.

at best, trafalgar is right for the wrong reasons.

at worst, it's just meaningless biased blather... like Tucker Carlson doing a poll
11-05-2020 , 08:18 PM
Quote:
Originally Posted by ChicagoRy
Nate Silver doesn't predict states. He takes the data (polls) and gives you a range of possibilities/chances of winning each state.

Yes, it looks great when all the 51%+ states on their model happen to realize, but that doesn't necessarily make the model more accurate than if a few 51%+ don't realize.

He said the polls would have to historically be off in order for Trump to win, but given that he measured the chances of polls being that far off at around 9%, that's pretty impressive to me (and we're not even in his bottom 9% predictions, we're more in his 10-25% range it seems). How do you predict that something that has never been off by nearly this much is going to be off around 9% of the time? The answer is that you know how wide the results can diverge from polling and you model it as such.

People overhyped him for "getting every state right" 8 years ago or whatever, so he got too much credit then, and he'll get buried too much during this election too, but to my eyes he seems to be pretty good at election modeling.
Hi Chicago:

In fairness, Nate Silver is not a pollster. He's someone who evaluates the polls and then comes to a conclusion. But where he has an issue today is that some of the polls he gave poor ratings to seem to have done a better job than some of the polls he gave high ratings to. And, in my opinion, that's something that Silver needs to address.

While not directly related to your post, there was a poll, I believe done by the Washington Post, that said Biden had a 17-point lead over Trump in Wisconsin. I think some of our posters would try to argue that this was a better poll than Trafalgar's because it got Wisconsin right, while Trafalgar had Trump winning that state.

Best wishes,
Mason
11-05-2020 , 08:21 PM
Quote:
Originally Posted by rivercitybirdie
they have lots of crazy cross-polls (word?)...... whether the quoted number is accurate or not.

at best, trafalgar is right for the wrong reasons.

at worst, it's just meaningless biased blather... like Tucker Carlson doing a poll
Hi River:

But you don't know whether either of these conjectures is true. However, over time, as more information comes out about the exact differences between the polling techniques, we might have a better idea as to the answer, and that would have to include the possibility that Trafalgar was doing things better than their competition.

Best wishes,
Mason
11-05-2020 , 08:23 PM
Quote:
Originally Posted by Mason Malmuth
I think some of our posters would try to argue that this was a better poll than Trafalgar's because it got Wisconsin right, while Trafalgar had Trump winning that state.
As others have explained, that's actually the kind of logic required to think Trafalgar did such a great job in the first place:

Quote:
Originally Posted by LektorAJ
Yes, the polls are off in some places, but the criterion on which we have been invited to admire Trafalgar's efforts in 2016 is that they called states correctly, not that they got closest to the actual result.
Which is it, states or margins? Pick one and tell us how Trafalgar did on that metric, because what I see here is a lot of hand-waving, which surely is not how professional statisticians arrive at opinions.
11-05-2020 , 08:44 PM
Quote:
Originally Posted by Mason Malmuth
Hi Chicago:

In fairness, Nate Silver is not a pollster. He's someone who evaluates the polls and then comes to a conclusion. But where he has an issue today is that some of the polls he gave poor ratings to seem to have done a better job than some of the polls he gave high ratings to. And, in my opinion, that's something that Silver needs to address.

While not directly related to your post, there was a poll, I believe done by the Washington Post, that said Biden had a 17-point lead over Trump in Wisconsin. I think some of our posters would try to argue that this was a better poll than Trafalgar's because it got Wisconsin right, while Trafalgar had Trump winning that state.

Best wishes,
Mason
I don't know definitively, but I would imagine the poll ratings are based on prior results. As such, wouldn't they be updated after every major sample (election)? Thus, you'd expect pollsters that did a better job than their current rating indicates to receive a bump.

So if Trafalgar Group did a good job this go-around, you'd expect to see them get a boost in Silver's model moving forward, correct?

Prior to today, I would imagine that their rating is based on prior results. If that's incorrect, someone please correct that.
11-05-2020 , 09:01 PM
It should be noted that Trafalgar bases their polling on the theory that people specifically won't say they are voting for Trump because of things specific to Trump. But there was no reason to think that after 2016, and in 2020 they not only missed the overall result that everyone else got right, they also understated Biden by 5+ points in key swing states.

The biggest polling misses of this cycle were underpolling House and Senate Republicans, not Trump. Congressional Republicans almost universally outperformed Trump, so "shy Trump voters" don't explain the error in the good polls.

Last edited by ecriture d'adulte; 11-05-2020 at 09:07 PM.
11-05-2020 , 09:13 PM
Quote:
Originally Posted by ChicagoRy
I don't know definitively, but I would imagine the poll ratings are based on prior results. As such, wouldn't they be updated after every major sample (election)? Thus, you'd expect pollsters that did a better job than their current rating indicates to receive a bump.

So if Trafalgar Group did a good job this go-around, you'd expect to see them get a boost in Silver's model moving forward, correct?

Prior to today, I would imagine that their rating is based on prior results. If that's incorrect, someone please correct that.
Hi Chicago:

I don't know how Silver came up with his ratings. But I suspect that if Trafalgar was using methods that Silver felt were not good, he would lower their rating and consider past good results as more fluke than good substance.

As an example, and I won't mention names, but there are certain poker tournament players who have good results, yet in my opinion, at that time, I didn't think they played very well and thought their results were more fluke than substance. But now I understand that there may be certain things they were doing which were very effective against bad players, and only bad players, which led to their good results.

Best wishes,
Mason
11-05-2020 , 09:42 PM
Quote:
Originally Posted by Mason Malmuth
Hi Chicago:

I don't know how Silver came up with his ratings. But I suspect that if Trafalgar was using methods that Silver felt were not good, he would lower their rating and consider past good results as more fluke than good substance.
I didn't think Trafalgar let their methods be known, or do you mean methods as in online vs cell phone vs landline type methods?

However, on this topic, I've looked at the 538 pollster ratings page. The ABC News polling group (which Nate Silver gives top grades to), the one people keep talking about for that +17 poll, has a simple average error of 2.8 points over 73 polls. Trafalgar Group's simple average error is 5.6 points over 48 polls.

Source - https://projects.fivethirtyeight.com/pollster-ratings/

Now, there seems to be a lot more that goes into the grade than just those simple errors, so you can spend a lot of time figuring out why one group is rated more highly than another. If that's you, this link has a lot more info too - https://github.com/fivethirtyeight/d...llster-ratings
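For anyone who wants to poke at the numbers themselves, here's a rough sketch of pulling that comparison out of the ratings data. It assumes you've downloaded the CSV from that repo as pollster-ratings.csv, and the column names are guesses that may need adjusting to match the actual file.

Code:
# Sketch only: the file name and column names are assumptions about the 538 data.
import pandas as pd

df = pd.read_csv("pollster-ratings.csv")

cols = ["Pollster", "538 Grade", "Simple Average Error", "# of Polls"]   # assumed headers
subset = df[df["Pollster"].str.contains("Trafalgar|ABC News", case=False, na=False)]

print(subset[cols].to_string(index=False))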

Silver seems pretty transparent about this though. I don't think he changes the ratings based on grudges or anything.

And I don't think 538 is beyond criticism; it just seems that most of the criticism on this forum looks at 538 too narrowly (not directed at Mason on that comment).

But to you Mason, I'm not sure what you're getting at with Trafalgar Group. They seem perfectly average to below average on results, but they use private, unique methodology and make a lot of noise in the media. Would you argue that they deserve a B or an A on Silver's poll ratings based on the data in those links?
11-05-2020 , 09:48 PM
Quote:
Originally Posted by d2_e4
If I say there is a 99% chance it will rain tomorrow and it doesn't, was I wrong?
Probably. It's so much easier to be wrong
11-05-2020 , 11:49 PM
Quote:
Originally Posted by Mason Malmuth
Hi Chicago:

I don't know how Silver came up with his ratings. But I suspect that if Trafalgar was using methods that Silver felt were not good, he would lower their rating and consider past good results as more fluke than good substance.

As an example, and I won't mention names, but there are certain poker tournament players who have good results, yet in my opinion, at that time, I didn't think they played very well and thought their results were more fluke than substance. But now I understand that there may be certain things they were doing which were very effective against bad players, and only bad players, which led to their good results.

Best wishes,
Mason
America is rejecting trump, and his buffoonery.
11-06-2020 , 08:16 AM
By the time all the votes are counted, with the majority of the remaining uncounted ones coming from massively blue California, the popular vote margin will be over 4% for Biden.
11-06-2020 , 10:43 AM
Quote:
Originally Posted by Mason Malmuth
Hi Chicago:

I don't know how Silver came up with his ratings. But I suspect that if Trafalgar was using methods that Silver felt were not good, he would lower their rating and consider past good results as more fluke than good substance.
Silver dings Trafalgar for not documenting whether they call cell phones as part of their polls, and for not being as transparent as other polling firms, since they are not part of either the National Council on Public Polls or the American Association for Public Opinion Research Transparency Initiative.
11-06-2020 , 02:12 PM
Circling back on this now that we have more results in - there are a few states where Trafalgar looks good, a couple where we need a little more info, and a larger list of states where their polling looks quite bad.

Good:

Wisconsin:
RCP: Biden +6.7
Trafalgar: Biden +1
Actual: Biden +1

Florida:
RCP: Biden +1
Trafalgar: Trump +2
Actual: Trump +3

Ohio:
RCP: Trump +1
Trafalgar: Trump +5
Actual: Trump +8

Need more info:

Nevada:
RCP: Biden +2
Trafalgar: Trump +1
Actual: Biden +??

North Carolina:
RCP: Trump +0.2
Trafalgar: Trump +2
Actual: Trump +??

Bad:

Minnesota:
RCP: Biden +4
Trafalgar: Biden +3
Actual: Biden +7

Georgia:
RCP: Trump +1
Trafalgar: Trump +5
Actual: Biden +??

Pennsylvania:
RCP: Biden +1
Trafalgar: Trump +2
Actual: Biden +??

Michigan:
RCP: Biden +4
Trafalgar: Trump +2
Actual: Biden +3

Arizona:
RCP: Biden +1
Trafalgar: Trump +3
Actual: Biden +??



It's not exactly a surprise that in every single instance the Trafalgar polling is shifted towards Republicans relative to the average. Seems like Trafalgar can tell you what the race will look like *if* it's gonna be a good election for the GOP, but they don't seem to be very good at measuring whether it will be.

Anyway, looking forward to the 2024 version of this thread where more fawning RCP articles whitewash all this year's misses away.

On the bright side, their 303-235 prediction was pretty good if you just ignore which names go next to those numbers!
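If you want rough numbers on just the five states above that already have a listed margin (the still-counting "+??" states are excluded for now), a quick sketch:

Code:
# Positive = Biden lead, negative = Trump lead, all in points, taken from the list above.
polls = {
    #  state        (RCP,  Trafalgar, Actual)
    "Wisconsin": ( +6.7,   +1.0,      +1.0),
    "Florida":   ( +1.0,   -2.0,      -3.0),
    "Ohio":      ( -1.0,   -5.0,      -8.0),
    "Minnesota": ( +4.0,   +3.0,      +7.0),
    "Michigan":  ( +4.0,   -2.0,      +3.0),
}

def summarize(label, idx):
    errors = [vals[idx] - vals[2] for vals in polls.values()]
    avg_abs = sum(abs(e) for e in errors) / len(errors)
    avg_signed = sum(errors) / len(errors)     # positive = overstated Biden
    print(f"{label}: avg abs error {avg_abs:.1f} pts, avg signed error {avg_signed:+.1f} pts")

summarize("RCP average", 0)
summarize("Trafalgar  ", 1)

Whatever it spits out for this slice, the picture will shift once the remaining states (mostly ones where Trafalgar had Trump ahead) are finalized.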
11-06-2020 , 02:43 PM
My Dear Friend and Fellow Poster Goofy:

You seem to have again failed to think like a professional statistician. While traditional polls and liberal aggregators such as Nate Silver handle uncertainty mathematically with well-understood, easy-to-follow calculations, Trafalgar uses a different approach: nonsense conspiracy theories about voter fraud. We will need more data to decide which method is superior.

Best to You and Yours,
EdA
11-06-2020 , 03:16 PM
A pollster that is biased towards one party or the other can still be quite useful, as long as the amount of bias is consistent. I believe 538 refers to it as the “house effect.”

If a pollster regularly leans Republican by about 2 points, then as long as you are aware of that and take that lean into account, they are still incredibly useful.
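A minimal sketch of that adjustment (the 2-point lean and the poll number are made up):

Code:
# House-effect correction: subtract the pollster's known historical lean
# before using or averaging their numbers. Positive margins = Democrat lead.
def adjust_for_house_effect(poll_margin: float, house_effect: float) -> float:
    return poll_margin - house_effect

raw_poll = -1.0        # hypothetical poll showing the Republican up 1
house_effect = -2.0    # this pollster historically leans 2 points Republican

print(adjust_for_house_effect(raw_poll, house_effect))   # 1.0, i.e. Democrat +1 after adjustment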

Note: In no way should this be construed as making any kind of comment concerning the possible worth or validity of Trafalgar or any other specific pollster; it's just a general observation that bias doesn't necessarily make a poll useless.


Sent from my iPhone using Tapatalk
11-06-2020 , 05:13 PM
Quote:
Originally Posted by Mason Malmuth
Hi River:

But you don't know whether either of these conjectures is true. However, over time, as more information comes out about the exact differences between the polling techniques, we might have a better idea as to the answer, and that would have to include the possibility that Trafalgar was doing things better than their competition.

Best wishes,
Mason
But you could ask kindergarten students to make predictions and some would do a great job

Trafalgar is biased garbage
11-07-2020 , 05:09 PM
Winners and Losers From 2020 Election

According to this article:

https://www.msn.com/en-us/news/polit...cid=uxbndlbing

the polling industry is due for a major overhaul of its methodology.

<begin>

The polls

The polls got a battering after the 2016 election — in some ways unfairly, in my estimation. They missed in some crucial states, but overall (and nationally) they weren’t that bad, and the decisive states didn’t have much quality polling.

The polls in the 2020 election, though, have no such excuses. They missed especially badly in the Midwest (again) in Iowa, Ohio and Wisconsin. But they also missed Florida by about five points and badly missed Sen. Susan Collins's (R-Maine) clear win. Collins trailed in virtually every poll; as of now, she not only won, but she also avoided an instant runoff by taking more than 50 percent and leading Democrat Sara Gideon by more than eight points. Texas's presidential race and the South Carolina Senate race also weren't nearly as close as we were led to believe.

It's time for a reckoning when it comes to how these polls are conducted. It's difficult when political coalitions are changing, yes. But it's getting to a point where even leads that are outside the margin of error in many cases can't be trusted.

All of this comes with the caveat, as in 2016, that national polls weren’t nearly so off. Biden led in them 51.8 percent to 43.4 percent, according to the final FiveThirtyEight poll average. Biden currently leads by about four points, and that’s expected to grow, especially with California always counting its votes late. The margins could also creep somewhat closer to the polls in key states, given most of the late-counted votes are friendly for Biden.

But the poll-doubters have been vindicated, to a significant degree. And any coverage in the future should reflect that increasing uncertainty.

<end>

Personal Comment/Observation

I'm lousy at math - which probably explains why I have difficulty understanding how statistics work. (All those Greek letters confuse me.) This lack of comprehension probably explains why I've never been feared at the poker table.

Be that as it may, I wonder if the real problem [with polling] is that - in really close elections with a sharply divided electorate - a poll needs a much larger sample size in order to obtain a more accurate result? Not being an expert on any of this, it just seems to me that a sample size of 1,076 registered voters (or "likely" voters) is woefully inadequate when it comes to predicting the behavior of a million (or several million) actual voters. (Is this what some commentators are referring to when they state that a certain demographic - such as Cuban-Americans in South Florida - was undersampled?)

Could this be the real problem?
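For what it's worth, a quick back-of-envelope on the sample-size question: the pure sampling margin of error for a sample of about 1,076 is roughly plus or minus 3 points, and it depends almost entirely on the sample size, not on whether the electorate is one million or a hundred million people.

Code:
# Sampling margin of error for a proportion near 50%, ignoring all non-sampling error.
import math

def margin_of_error(n: int, p: float = 0.5, z: float = 1.96) -> float:
    return z * math.sqrt(p * (1 - p) / n)

for n in (1_076, 4_000, 10_000):
    print(f"n = {n:>6,}: +/- {100 * margin_of_error(n):.1f} points")
# n =  1,076: +/- 3.0 points
# n =  4,000: +/- 1.5 points
# n = 10,000: +/- 1.0 points

So a bigger sample helps, but only slowly (quadrupling it merely halves the error), which is part of why the undersampling question you raise in the parenthetical, i.e. who ends up in the sample, tends to matter more than the raw head count.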