My good friend Tova Wang sent me this headline from the Columbus Dispatch:
Early Voting Hasn’t Boosted Ohio Turnout
In support of this headline, the reporter compares turnout in only three elections, only statewide, and only in presidential contests. This analysis is about as unrevealing–and potentially misleading–as imaginable.
The key to understanding a complex process like voter turnout is to try to maximize, to the degree feasible, variation and covariation among all the important causes (variables). Political scientists often consider dozens of different influences on turnout and estimate highly sophisticated multivariate models.
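As a toy illustration of what such a multivariate model looks like, here is a minimal sketch with entirely synthetic voters and invented coefficients: a logistic turnout regression fit by gradient ascent. Real studies use many more covariates and standard statistical packages; this is only meant to show the basic idea of estimating several influences at once.

```python
import math
import random

random.seed(1)

# Synthetic electorate: standardized age and education, plus a campaign-contact
# indicator. The "true" turnout process below is invented for illustration.
def simulate_voter():
    age = random.gauss(0, 1)
    edu = random.gauss(0, 1)
    contact = 1.0 if random.random() < 0.3 else 0.0
    logit = -0.2 + 0.5 * age + 0.4 * edu + 0.8 * contact
    voted = 1.0 if random.random() < 1 / (1 + math.exp(-logit)) else 0.0
    return (1.0, age, edu, contact), voted  # leading 1.0 is the intercept term

data = [simulate_voter() for _ in range(1500)]

# Fit the logistic regression by gradient ascent on the average log-likelihood.
w = [0.0, 0.0, 0.0, 0.0]  # intercept, age, education, contact
for _ in range(400):
    grad = [0.0] * 4
    for xs, y in data:
        p = 1 / (1 + math.exp(-sum(wi * xi for wi, xi in zip(w, xs))))
        for j, xj in enumerate(xs):
            grad[j] += (y - p) * xj
    w = [wi + 0.5 * gj / len(data) for wi, gj in zip(w, grad)]

print("estimated effects (intercept, age, education, contact):",
      [round(wi, 2) for wi in w])
```

The point is not the numbers but the structure: each coefficient is an estimate of one influence on turnout, holding the others constant, which is exactly what a three-election, statewide-only comparison cannot do.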
But even a relatively simple exploration can be done far better than the one conducted by the Dispatch.
Let’s start with the presidency. There are obvious reasons that the nation, and the world, focuses on the American presidential election. It is almost always the most consequential election held in this country for the most powerful and influential political leader in the world.
But all these reasons are why the presidential contest may be the worst election in which to discern the turnout effects of something like early voting. In the face of a billion or more dollars in campaign spending, blanket media coverage, and organizational mobilization, the impact of early voting is going to be small. We may be able to uncover turnout effects, but the context makes it difficult.
At a bare minimum, compare midterm and presidential contests, and if at all possible, include off-cycle elections.
Next, even if limited to a study within one state, there is no good reason not to compare trends across counties. In a large, heterogeneous state like Ohio, not only do the conditions for voting change across the state, but the voters change as well.
The reporter seems to recognize that African Americans responded differently to the 2008 Obama/McCain contest than they did to the 2004 Kerry/Bush campaign. And the reporter notes that, due to legal uncertainties, the hours and days of early voting varied across counties in 2012.
So why not compare turnout effects across counties? By not doing so, the reporter–whether realizing it or not–assumes that all voting rules and procedures in the state of Ohio are identical and, more importantly, that all Ohioans are identical in how they responded, in different years, to different candidates and to different election laws and procedures.
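One simple version of such a county-level comparison is a difference-in-differences: compare the change in turnout in counties where early-voting hours expanded against the change in counties where they did not. A sketch with entirely invented county figures:

```python
# Hypothetical illustration of a county-level difference-in-differences.
# All counties and turnout figures below are invented.

# (county, expanded_early_voting, turnout_2008, turnout_2012)
counties = [
    ("County A", True,  0.67, 0.66),
    ("County B", True,  0.62, 0.63),
    ("County C", False, 0.65, 0.62),
    ("County D", False, 0.60, 0.58),
]

def avg_change(expanded):
    """Average turnout change, 2008 to 2012, for one group of counties."""
    changes = [t12 - t08 for _, e, t08, t12 in counties if e == expanded]
    return sum(changes) / len(changes)

# The difference-in-differences estimate: how much better (or worse) did
# turnout hold up where early voting expanded, relative to where it didn't?
did = avg_change(True) - avg_change(False)
print(f"difference-in-differences estimate: {did:+.3f}")
```

Even this crude design uses the within-state variation the Dispatch article ignores: statewide turnout could fall in both groups while early voting still made a measurable difference at the county level.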
Two esteemed political scientists are quoted in the article and seem to have tried to educate the reporter on these points.
Paul Beck’s quote starts with a general point that, I think, does not accurately reflect the current state of the literature on early voting, but more important is the end of Beck’s quote, where he highlights the most consequential reasons that turnout may be higher or lower:
“People who vote early are people who are typically going to vote anyway,” said Paul Beck, a political science professor at Ohio State University. “So, early voting hasn’t really succeeded in turning out more people to vote. We’ve made it a lot easier to vote, but on the other hand, some people are very discouraged about politics and might not care how easy it is to vote.”
John Green’s quote, on the other hand, is exactly on point in my view:
“If all things are equal, early voting would increase voter turnout, but all things aren’t equal,” said John Green, a political science professor at the University of Akron. “But there are many factors in each election: the closeness of the race, the excitement to vote for a candidate or the degree of anger in the electorate.”
I could not have put it better. Early voting may not have increased turnout in Ohio, but without at least considering these other factors, the title and thrust of the story are not accurate.
Apropos of Doug Chapin’s recent posting on the proposed change to a “postmark” deadline in California, this study by the Washington Policy Center may be of interest.
The Center looked at the five largest counties in Washington and Oregon, both of which have full vote by mail with drop boxes, but Washington allows ballots to arrive if postmarked by Election Day, while Oregon requires ballots to arrive by 8 pm on Election Day.
The results are reproduced below, and show higher late ballot rates in Washington.
I will have more to say about this later, but at first glance, it appears that an Election Day deadline may serve voters better, prompting more to return their ballots on time.
It has been a peaceful morning of balloting in Kherson, Ukraine. I am here monitoring elections as part of an international mission. I’ve met hundreds of other observers from the United States, Canada, Germany, and many other countries. All are hard working and dedicated individuals who are interested in helping to cement democratic development in the country.
Kherson is in the south of the country, and is best known as the dying place of John Howard, famous British prison reformer. (I haven’t visited the pub named after Howard just yet.)
Because Kherson is located just west of Crimea, and because more than half its population reports Russian as their native language, you’d think this region would be tense. We had to sit through extra security briefings before we were deployed to the area.
But the two words that would describe the election thus far are busy and calm. The election is busy because the lines are long and voter interest is high. These lines aren’t helped by the economic crisis in the country which has resulted in understaffed polling places and too few voting booths. Things aren’t so different in the United States!
Nonetheless, voters seem to be in good spirits, perhaps helped by the beautiful, warm, sunny summer Sunday, and generally calm–except when they’ve had to wait for an hour to vote!
I hope for a free and fair outcome, one that may help the country move forward. I’m sure everyone here hopes for the same.
The MonkeyCage features a nice post by Pippa Norris, Richard Frank, and Ferran Martinez i Coma on new research coming out of the Electoral Integrity Project. The post reports on a recent international survey of election experts ranking 66 countries on a variety of measures of election conduct and administration.
Unfortunately, someone made an ill-advised choice to tag the post “election fraud.”
It may be that Pippa and her colleagues indirectly invited this provocative tag. The first line of their posting reads:
In many countries, polling day ends with disputes about ballot-box fraud, corruption and flawed registers.
Followed in the next paragraph by:
Where there are disputes, however, which claims are accurate? And which are false complaints from sore losers?
The report does not really evaluate the validity of election disputes, however, nor does it provide a measure of election fraud. What is being reported by the EIP is innovative and valuable: evaluations of perceptions of electoral integrity (this is the accurate title of the dataset available from Harvard’s Dataverse) by 855 election experts.
This is not the same thing as “election fraud,” and the report at the EIP website says this (emphasis added):
To address this issue, new evidence gathered by the Electoral Integrity Project compares the risks of flawed and failed elections, and how far countries around the world meet international standards.
EIP shows that there is a strong correlation between expert assessments and liberal democracy (measured by Freedom House and Polity IV indicators), thus validating the measure. But it’s important to be clear what the measure is, and is not. For instance, the US ranks relatively low because international experts (and the ODIHR) don’t like the way we draw our district lines or our system of campaign finance.
Neither do many American observers, but I’ve never seen any claims that our no-holds-barred campaign finance system translates into election fraud. Our highly politicized redistricting system distorts the translation of public preferences into legislative seats, but it similarly does not, to my mind, have any relationship to fraud.
This is not a criticism of the EIP or of MonkeyCage. It simply brings to mind Rick Hasen’s description of the ongoing disputes over election fraud and voter suppression in The Voting Wars.
Both grab the headlines and fire up activists, but there is little empirical evidence of either occurring much in the United States.
The recent EIP report says a lot about “election integrity,” “election administration,” and simply “elections” (the appropriate tags), but “election fraud”? The answer to that lies in the future.
Jeff Mapes of the Oregonian writes about a watershed political moment in Oregon: more than 30% of Oregonians now do not affiliate with one of the two major parties on the voter registration rolls.
http://www.oregonlive.com/mapes/index.ssf/2014/01/oregons_new_landmark_more_than.html
I’m often asked, particularly by junior faculty, about how social scientists get involved in litigation. A good way to learn how statistical reasoning gets used in legal cases is to review cases. The “All About Redistricting” website maintained by Justin Levitt at Loyola Law School, and the “litigation page” at the Moritz College of Law at The Ohio State University, for example, are treasure troves of case materials.
The recent Pennsylvania trial court ruling striking down the state’s voter ID law is only the most recent instance where a court relied heavily on evidence produced by social scientists and statisticians. (All the page numbers referenced below refer to this decision.)
The full set of documents pertaining to the case can be found at the Moritz site (unfortunately many of the documents are low quality scans and can’t be searched).
The Findings of Fact are a good place to start (pg. 54 of the decision) because they summarize the source of the evidence submitted to the court and can often be used to quickly identify expert witnesses.
The Moritz site is pretty comprehensive for this case, including most of the expert witness reports. The social scientists used in the case were:
- Dr. Bernard Siskin did most of the statistical analysis for the plaintiffs. Siskin was a longtime professor of statistics at Temple and now appears to be a full-time expert witness, mainly working on employment discrimination.
- Dr. William Wecker is another statistician who works exclusively as an expert witness; previously he was a tenured professor of business at the University of Chicago. Unfortunately, I could not find Wecker’s report on the website.
- Dr. David Marker, a senior statistician at Westat, a statistics and data collection firm headquartered in Rockville, Maryland, that has grown worldwide. Marker was hired solely to evaluate the survey methodology used by Dr. Matthew Barreto in research that has often been cited as demonstrating racial and ethnic disparities in access to voter ID in Pennsylvania.
- Dr. Lorraine Minnite is an associate professor at Rutgers-Camden and a well-known expert on vote fraud. Minnite was brought in by the plaintiffs to examine the content of the legislative debates regarding the need for voter ID and the prevalence (or not) of voter fraud in Pennsylvania. Her report starts at page 20 here and is an entertaining read for anyone interested in legislative intent. The court determined, for instance, that there appeared to be a “legislative disconnect from reality” (pg. 41 of the decision), and Minnite shows that, whatever the merits of voter ID, the speculations of legislators often outstripped reality.
- Dr. Diana Mutz is a professor of political science at University of Pennsylvania best known for her work on political communication, political psychology, and public opinion. I was surprised to find Mutz’s name among the witnesses; her testimony was used by the plaintiffs to try to show that the state had insufficiently advertised how citizens could obtain an ID.
Aspiring expert witnesses can learn at least two lessons from this case.
Learn the tools: Siskin’s report is a virtual manual for matching complex databases to estimate racial and ethnic disparities. A key piece of evidence was Siskin’s estimate of the number of PA citizens who did not have valid photo IDs. The work involved fuzzy set matching of Penn DOT and voter registration databases (pg. 62 “Scope of Need”; pg. 17 of the expert witness report); he used “BISG” methodology to estimate racial disparities even though his data sources did not contain racial or ethnic identifiers (pg. 20 of the report); and he relied on OpenStreetMap data to estimate drive times for residents without IDs to the closest drivers license office (pg. 27 of the report).
You don’t necessarily need to use the most advanced technology (Siskin uses SPSS for all of his statistical estimation), but your methodology must be scientifically sound.
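For readers curious what database matching of this kind involves, here is a minimal sketch (not Siskin's actual procedure) of fuzzy matching two administrative lists using only Python's standard library. The names, record layout, and similarity threshold are all invented for illustration:

```python
# Toy fuzzy record matching between two administrative lists.
# Records are (last_name, first_name, date_of_birth); all data invented.
from difflib import SequenceMatcher

voter_rolls = [
    ("JOHNSON", "MARIA", "1985-03-12"),
    ("OCONNOR", "PATRICK", "1950-11-02"),
    ("SMITH", "JAMES", "1972-07-30"),
]

dot_records = [
    ("JOHNSTON", "MARIA", "1985-03-12"),    # likely typo of JOHNSON
    ("O'CONNOR", "PATRICK", "1950-11-02"),  # punctuation variant
    ("BROWN", "ANGELA", "1990-01-15"),
]

def similarity(a, b):
    """String similarity in [0, 1] via difflib's ratio."""
    return SequenceMatcher(None, a, b).ratio()

def best_match(voter, records, threshold=0.85):
    """Block on exact birth date, then score fuzzy name similarity."""
    last, first, dob = voter
    candidates = [r for r in records if r[2] == dob]
    scored = [
        (0.5 * similarity(last, r[0]) + 0.5 * similarity(first, r[1]), r)
        for r in candidates
    ]
    scored.sort(reverse=True)
    if scored and scored[0][0] >= threshold:
        return scored[0][1]
    return None  # no candidate clears the threshold

for v in voter_rolls:
    print(v[0], "->", best_match(v, dot_records))
```

Real matching work, as the report illustrates, involves far messier data (nicknames, address changes, missing fields) and careful sensitivity analysis of the matching thresholds, since the estimate of "registered voters without ID" rises or falls with every rule.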
Honor scientific standards of evidence: Wecker was hired by the defendants solely to, in the words of the Court, “refute Dr. Siskin’s work.” The court’s treatment of Wecker’s evidence is illustrative of what happens if your evidence can be criticized for not following conventional scientific practice. The court refers to the testimony as “flawed and assumption laden”.
Compare this to the court’s treatment of Marker, and by implication Barreto, both of whom followed valid scientific standards.
A nice introduction to expert witness work was penned by Dick Engstrom and Mike McDonald in 2011. It’s a useful exercise to read their essay and then review the expert witness reports in this and other cases.
Michael Hanmer, Antoine Banks, and Ismail White have a new paper in Political Analysis that returns to a longstanding problem in voting and survey research: overreporting bias among survey respondents.
From the abstract:
Voting is a fundamental part of any democratic society. But survey-based measures of voting are problematic because a substantial proportion of nonvoters report that they voted. This over-reporting has consequences for our understanding of voting as well as the behaviors and attitudes associated with voting. Relying on the “bogus pipeline” approach, we investigate whether altering the wording of the turnout question can cause respondents to provide more accurate responses. We attempt to reduce over-reporting simply by changing the wording of the vote question by highlighting to the respondent that: (1) we can in fact find out, via public records, whether or not they voted; and (2) we (survey administrators) know some people who say they voted did not. We examine these questions through a survey on US voting-age citizens after the 2010 midterm elections, in which we ask them about voting in those elections. Our evidence shows that the question noting we would check the records improved the accuracy of the reports by reducing the over-reporting of turnout.
What is neat about this paper is that the authors suggest a relatively simple way to reduce (but not eliminate–see the attached graphic) the bias.
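To make the over-reporting problem concrete, here is a sketch, with entirely invented numbers, of the comparison at the heart of such an experiment: self-reported turnout versus validated records under a standard question wording and a "records will be checked" wording.

```python
# Hypothetical illustration of turnout over-reporting. All counts invented:
# each group is (respondents, said they voted, validated as having voted).
groups = {
    "standard wording": (500, 400, 310),
    "records-check wording": (500, 355, 305),
}

for name, (n, reported, validated) in groups.items():
    overreport = (reported - validated) / n
    print(f"{name}: reported {reported / n:.0%}, "
          f"validated {validated / n:.0%}, "
          f"over-report gap {overreport:.0%}")
```

In a pattern like this, the records-check wording shrinks the gap between reported and validated turnout without changing who actually voted, which is the paper's point: the bias is reduced, though not eliminated.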
It’s also notable that the research comes out of TESS (Time-sharing Experiments for the Social Sciences), an innovative and low-cost project funded by the Political Science Program of the National Science Foundation (Congress: are you listening?).
The rest is here: http://www.pamplinmedia.com/wlt/96-opinion/226666-89111-dont-extend-time-for-turning-in-oregon-ballots