The science of polling explained

Hugh Hewitt interviewed the director of the Marist poll:

HH: The Marist poll out of Ohio that showed President Obama with a 7% lead among likely voters over Governor Romney used a sample that had 10% more Democrats than Republicans — a 2% greater advantage than occurred in 2008 at the height of Obamamania! I’ll ask the poll’s director, Professor Lee Miringoff, for an explanation of the sample…

LM: Among land line respondents, our numbers were 49-45 favoring Obama over Romney, a four point difference, which is within a point or two of what, I think you were saying, the Rasmussen and ARG polls were doing, both of whom, I should point out, are generally seen as not using thorough methods, but that’s a different question entirely for a different time. Obama’s ahead by 14 points among cell phone only. If they aren’t doing cell phones, then they’re missing that group. Our number is four points difference between the two candidates. And I think also, you know, let’s look at, these are not done in isolation from each other and from the times in which they’re done. This poll is clearly done coming off of the Democratic convention. All the polls, I think, with few exceptions, all the reliable polls are showing that there’s been a significant bounce as a result of the Democratic convention. Some are showing two points, some are showing five points. The Gallup poll has been showing Obama now with a six or seven point lead nationally. No reason to think that the battleground states would be less than what the national numbers are. Does the bounce last? We’ll find out in a couple weeks. Is the bounce real right now? Almost all the polls are showing it. To look at four years ago and say that was Hope and Change, and that was the peak, well, he’s getting less right now, Obama is, than he got four years ago. It’s that Romney is getting a lot less than McCain got. So you’re talking about a point spread. If you were in my class, I’d tell you don’t just look at the margin. Look at what the incumbent is getting, getting 50%…
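The professor's blended-sample arithmetic can be sketched with a weighted average. The subgroup margins come from the interview (Obama +4 among landline respondents, +14 among cell-phone-only); the 75/25 weight split between the two groups is purely an illustrative assumption, since Marist's actual weighting is not given here.

```python
# Sketch of how blending landline and cell-only subsamples moves the topline margin.
# Subgroup margins (+4 landline, +14 cell-only) are from the interview; the 25%
# cell-only share is a hypothetical illustration, not Marist's actual weighting.

def blended_margin(landline_margin, cell_margin, cell_share):
    """Weighted average of subgroup margins, in percentage points."""
    return landline_margin * (1 - cell_share) + cell_margin * cell_share

print(blended_margin(4, 14, 0.25))  # 6.5 -- close to the poll's 7-point lead
print(blended_margin(4, 14, 0.0))   # 4.0 -- a poll that skips cell-only shows less
```

This is the professor's point in miniature: a pollster who omits the cell-only group entirely lands near the landline-only +4, which is roughly where he places Rasmussen and ARG.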

HH: Okay, last question, what percentage do you think Democrats will vote over Republicans in Ohio this year?

LM: Oh, probably about 8 percent, plus or minus three or four.

HH: So it could be four percent?

LM: And it could be twelve.

We’re not quite sure we understand what the professor is saying, but it appears to be that random sampling of cellphone-only respondents produces a heavily Democratic sample and they just let it be, which makes sense from a certain perspective. Hugh Hewitt, on the other hand, believes that turnout by party in 2012 is going to be a lot more like 2004 than 2008, which also makes sense. They are not really talking about the same things. We’re skeptical of the professor’s approach, and note that, when 91% of those contacted no longer participate in polls, some big anomalies probably lie ahead. Still, we don’t see anything wrong per se in his methodology, and in fact the interview was rather enlightening in that there appear to be sound methodological reasons for pollsters to arrive at results that strike us as counter-intuitive.
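The Hewitt/Miringoff disagreement over the D+10 sample can also be put in numbers: how much a poll's topline depends on the assumed party split of the electorate. The within-party support rates below are hypothetical round numbers chosen for illustration, not Marist data; only the sensitivity to the D-vs-R split is the point.

```python
# Illustrative sketch: sensitivity of a poll's topline to the assumed party split.
# Hypothetical assumptions (not from the Marist poll): 90% of Democrats back Obama,
# 90% of Republicans back Romney, and independents split evenly.

def topline(dem_share, rep_share):
    """Obama-minus-Romney margin (points) given party shares of the electorate."""
    ind_share = 1 - dem_share - rep_share
    obama = 0.90 * dem_share + 0.10 * rep_share + 0.50 * ind_share
    romney = 0.10 * dem_share + 0.90 * rep_share + 0.50 * ind_share
    return round(100 * (obama - romney), 1)

print(topline(0.39, 0.29))  # D+10 electorate -> Obama +8.0
print(topline(0.35, 0.35))  # even electorate -> 0.0
```

Under these assumptions the margin is simply 0.8 times the party-ID gap, so moving the assumed Democratic advantage from the professor's "probably about 8, plus or minus three or four" down to Hewitt's 2004-like turnout swings the topline by several points on its own, with no change in how any individual says they will vote.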

2 Responses to “The science of polling explained”

  1. gs Says:

    A few months ago I got a phone call purporting to be a Rasmussen survey of economic sentiment. As the questions progressed, they became so intrusive that I hung up.

    It is plausible to me that people avoid polls because of growing privacy concerns. Add to that the thuggishness of some Obama supporters.

    Nevertheless, it is disquieting to watch Obama surging on Intrade while the jobs news is bad, the overseas news is worse, and gas prices are near highs. It’s possible that a deep-pocketed superPAC is manipulating Intrade; it’s possible that Intrade speculators are mistaken or irrationally exuberant; and it’s possible that the conservative blogosphere, by ignoring Intrade, is ignoring a warning that the race has elements of which conservatives are not taking adequate account.

  2. chris16 Says:

    I tried to listen to Hewitt discuss the polling difference with Prof. Miringoff, but Hewitt refused to accept why pollsters could have different results. Endlessly, the Prof. tried to explain that samples which include cell phones could vary. Samples weighted to percentages based on exit numbers produce different results because their base is different.
    I finally gave up. It was like listening to a student who refused to accept a grade of C when he felt he deserved an A.
