The “quants” have taken quite a hit this year, most notably Nate Silver’s mea culpa. I’m not going to summarize the discussions chapter and verse; I’ll just refer people to the excellent commentators at Huffington Pollster (political science PhD!) and Monkey Cage, among others. Where they differ from many media outlets is that they almost never trumpet the result of a single poll. The results of a single poll are seldom newsworthy and are much more prone to error.
What is certainly wrong is the kind of muddled, ostrich-head-in-the-sand thinking from Virgil and Carl, who have decided that since they stuck their fingers in the air (and, by the way, were clearly reading newspaper coverage and polls) and did a better job this one year in a few primaries than Nate Silver, therefore all polls and all quantitative analysis of elections are pure bunk.
That’s hogwash, but what worries me is how many of my friends and colleagues seem to believe this kind of claptrap, mainly because they are acting like regular old human beings. They remember the polls this year that were off–and therefore newsworthy–without remembering the vast majority that were right on target.
Which brings me to a mild defense of my good friends at DHM Research and a mild criticism of my good friend at the Willamette Week, Aaron Mesh.
Aaron lists DHM as one of the “losers” in the May primary:
DHM Research
The Portland pollsters not only failed to predict Sen. Bernie Sanders’ win in the Oregon Democratic primary—they missed it by a whopping 28 percentage points.
Yep, that poll result for the Clinton/Sanders race was a real boner, and John Horvick of DHM deserves credit for being up front about the bad estimate.
But just like one poll is not the best way to predict a race, one race within a larger poll is not the best way to evaluate a firm. If you look across all the candidate races that DHM asked about in their May poll, things look a lot different. Mesh focused on the tree–the presidential contest–while ignoring the forest.
In the GOP contest for President and the races for Secretary of State and Governor, the average “miss” was between 1.7% and 2.4% (all estimates shown below allocate the “don’t knows” proportionally–thus understating any last-minute shifts in sentiment). In the mayor’s race, even with a large pool of candidates and a high percentage of “don’t knows,” the average miss was just 1.5%, and Wheeler’s margin was off by just 3%.
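For readers curious about the mechanics, here is a minimal sketch of what “allocating the don’t knows proportionally” and computing an average miss look like. The numbers are invented for illustration–they are not DHM’s actual poll or election figures, which live in the spreadsheet mentioned below.

```python
# Sketch: redistribute "don't know" responses proportionally among the
# named candidates, then compute the average absolute "miss" against
# the actual results. All figures are made up for illustration.

def allocate_dont_knows(poll):
    """Give each candidate a share of the undecideds proportional
    to their share of the decided respondents."""
    dk = poll.pop("don't know", 0.0)
    decided = sum(poll.values())
    return {cand: pct + dk * (pct / decided) for cand, pct in poll.items()}

def average_miss(adjusted, actual):
    """Mean absolute difference, in percentage points, between the
    adjusted poll numbers and the actual results."""
    return sum(abs(adjusted[c] - actual[c]) for c in actual) / len(actual)

# Hypothetical three-way race with 10% undecided.
poll = {"A": 45.0, "B": 30.0, "C": 15.0, "don't know": 10.0}
actual = {"A": 52.0, "B": 33.0, "C": 15.0}

adjusted = allocate_dont_knows(poll)
print({c: round(s, 1) for c, s in adjusted.items()})  # {'A': 50.0, 'B': 33.3, 'C': 16.7}
print(round(average_miss(adjusted, actual), 2))       # 1.33
```

Note the caveat from the paragraph above: proportional allocation assumes undecideds break the same way as everyone else, so any late swing toward one candidate shows up as a larger apparent miss.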
Something was going on in the Clinton/Sanders race, but the pattern of the other results indicates that it was probably something about that contest, about respondents’ willingness to provide answers, or about volatile sentiment, rather than something about response rates, survey methodology, or firm bias.
More importantly, coverage of the poll points out a weakness in Oregon’s political and media environment–we really have just a single dominant polling firm with only a scattered set of other polls being conducted, mostly by national firms using robo-calls, without many of the detailed questions that a local or regional pollster would ask.
We’d all be better informed, and less likely to focus on a single result, if there were a few more players in the field.
Anyone who wants the spreadsheet used to create these figures, just drop me a line.