By Dave Andrusko
Last week, when NRL Political Director Karen Cross wrote about the November 3 victory of pro-lifer Matt Bevin in the Kentucky gubernatorial contest, she mentioned that, once again, the polls were way off.
Those off-by-a-mile polls were, unfortunately, not an anomaly. Author and political analyst Michael Barone wrote about that today for the Wall Street Journal, and in doing so offered examples and possible explanations.
It’s important to us, as pro-lifers, because so often the candidate whose support is underestimated is one of ours. Not always, but often. And the prospect of losing can, and often does, depress turnout.
Barone begins with Kentucky, where Bevin defeated Democrat Jack Conway 53%-44%, a nine-point margin. Just a few days earlier, the Real Clear Politics average of polls showed Conway leading Bevin by three points, 44%-41%.
Likewise, “The RCP average of polls just before Kentucky’s 2014 Senate race showed Mitch McConnell ahead of Democrat Alison Grimes, 49%-42%. Mr. McConnell won, 56%-41%.” Those are massive discrepancies–but, as Barone immediately notes, they are not unique to Kentucky.
Barone offers several possible explanations for why the polls are off. Remember, the whole point is to obtain a representative sample, and that is increasingly difficult to do.
For example, there are fewer landlines (in 2003 only 2% of Americans did not have a landline phone, but the number is now about 40%), and fewer people answer calls on them.
And when people are contacted by pollsters, many more of them refuse to answer than used to. Barone tells us that, according to the Pew Research Center, the response rate–already a dismal 36% in 1997–was down to only 9% in 2012. What else?
Some pollsters are placing more cellphone calls, but federal law requires that a live person, not an automated machine, place those calls, which increases the cost. Barone continues:
Some polls are conducted partially or entirely over the Internet, which risks producing unrepresentative samples because Internet usage is higher among certain groups than others and because Internet respondents are in effect volunteers. Pollsters typically weight their results so that demographic groups underrepresented in most samples—the young, racial minorities—constitute the percentage they will form of the actual electorate. But that’s inevitably an educated guess, since one thing pollsters have always had trouble projecting is turnout.
As Barone notes, some pollsters are throwing in the towel–or seem inclined to. Gallup, for instance, won't conduct presidential-primary polls this cycle.
So perhaps, he suggests, what we should ask of polls is less “who is ahead?” than “what is on voters’ minds?”
“Poll numbers, despite their seeming precision, are not hard data,” Barone writes. “They are clues to the mysteries that lie in human hearts.”
Particular poll numbers are like daubs of pigment on an Impressionist’s canvas, which by themselves don’t convey a sense of reality but which, taken together with many others, can give the aesthetically sensitive viewer a more vivid sense of the underlying reality than the most accurate photograph. Technological change may be making polls less scientifically reliable, but reading polls has never been entirely a science; it has also been an art and seems to be getting more so.