Courtesy of California Civic Engagement Project (regionalchange.ucdavis.edu)

Courtesy of Los Angeles County Clerk/Recorder Dean Logan’s Twitter feed comes word that researchers at the University of California, Davis’s California Civic Engagement Project have released a fascinating analysis of vote by mail usage in the Golden State.

Some of the patterns will not surprise anyone who has followed vote by mail for a while: by-mail voters tend to be older, white, and Asian.  The report pays particularly close attention to lower Hispanic usage rates of VBM, but I’m a bit disappointed that there is no report of African American usage, which Charles Stewart and I have shown has grown enormously in Florida and other southeastern states.

Party differences are, as always, complex.  A greater proportion of Republican affiliators use vote by mail, but because Democrats hold such an enormous registration advantage in the state, the vote by mail electorate overall is more Democratic (43%) than Republican (33%) or No Party Preference (18%).

The Monkey Cage features a nice post by Pippa Norris, Richard Frank, and Ferran Martinez i Coma on new research coming out of the Electoral Integrity Project.  The post reports on a recent international survey of election experts ranking 66 countries on a variety of measures of election conduct and administration.

Unfortunately, someone made an ill-advised choice to tag the post “election fraud.”

It may be that Pippa and her colleagues indirectly invited this provocative tag.  The first line of their post reads:

In many countries, polling day ends with disputes about ballot-box fraud, corruption and flawed registers.

Followed in the next paragraph by:

Where there are disputes, however, which claims are accurate? And which are false complaints from sore losers?

The report does not really evaluate the validity of election disputes, however, nor does it provide a measure of election fraud.  What the EIP reports is innovative and valuable: evaluations of perceptions of electoral integrity (the accurate title of the dataset available from Harvard’s Dataverse) by 855 election experts.

This is not the same thing as “election fraud,” and the report at the EIP website says this (emphasis added):

To address this issue, new evidence gathered by the Electoral Integrity Project compares the risks of flawed and failed elections, and how far countries around the world meet international standards.

EIP shows that there is a strong correlation between expert assessments and liberal democracy (measured by Freedom House and Polity IV indicators), thus validating the measure.  But it’s important to be clear about what the measure is, and is not.  For instance, the US ranks relatively low because international experts (and the ODIHR) don’t like the way we draw our district lines or our system of campaign finance.

Neither do many American observers, but I’ve never seen any claims that our no-holds-barred campaign finance system translates into election fraud.  Our highly politicized redistricting system distorts the translation of public preferences into legislative seats, but it similarly does not, to my mind, have any relationship to fraud.

This is not a criticism of the EIP or of the Monkey Cage. It simply brings to mind Rick Hasen’s description of the ongoing disputes over election fraud and voter suppression in The Voting Wars.

Both grab the headlines and fire up activists, but there is little empirical evidence of either occurring much in the United States.

The recent EIP report says a lot about “election integrity,” “election administration,” and simply “elections” (the appropriate tags), but “election fraud”?  The answer to that lies in the future.

Nice posting by Nate Persily on Monkey Cage: http://www.washingtonpost.com/blogs/monkey-cage/wp/2014/01/22/american-elections-need-help-heres-how-to-make-them-better/

The Presidential Commission on Election Administration, also known as the Bauer-Ginsberg Commission, has issued its final report.  Rick Hasen, waking and working before all of us, has already provided a great summary of findings and recommendations.  I’m particularly excited to see the Election Toolkit produced by the Voting Information Project.

I testified before the Commission in Denver, accompanied by Jacob Canter (exp. ’14).  Our work last summer was partially supported by the Alta S. Corbett Summer Research Program of Reed College.

Congratulations to Nate, Charles, Tammy, Ann, Chris, Ben, Bob, Trey, and all the commission members and staff!

Jeff Mapes of the Oregonian writes about a watershed political moment in Oregon: more than 30% of Oregonians now do not affiliate with one of the two major parties on the voter registration rolls.

http://www.oregonlive.com/mapes/index.ssf/2014/01/oregons_new_landmark_more_than.html

Courtesy of Pennsylvania DMV

I’m often asked, particularly by junior faculty, how social scientists get involved in litigation.  A good way to learn how statistical reasoning gets used in legal cases is to review case materials.  The “All About Redistricting” website maintained by Justin Levitt at Loyola Law School and the “litigation page” at the Moritz College of Law at The Ohio State University, for example, are treasure troves of such materials.

The recent Pennsylvania trial court ruling striking down the state’s voter ID law is only the most recent instance where a court relied heavily on evidence produced by social scientists and statisticians.  (All the page numbers referenced below refer to this decision.)

The full set of documents pertaining to the case can be found at the Moritz site (unfortunately, many of the documents are low-quality scans and can’t be searched).

The Findings of Fact are a good place to start (pg. 54 of the decision) because they summarize the sources of the evidence submitted to the court and can often be used to quickly identify expert witnesses.

The Moritz site is pretty comprehensive for this case, including most of the expert witness reports.  The social scientists used in the case were:

  • Dr. Bernard Siskin did most of the statistical analysis for the plaintiffs. Siskin was a longtime professor of statistics at Temple and now appears to be a full-time expert witness, working mainly on employment discrimination.
  • Dr. William Wecker is another statistician who works exclusively as an expert witness; previously he was a tenured professor of business at the University of Chicago.  Unfortunately, I could not find Wecker’s report on the website.
  • Dr. David Marker is a senior statistician at Westat, a statistical survey and data collection firm headquartered in Rockville, Maryland, that now works worldwide.  Marker was hired solely to evaluate the survey methodology used by Dr. Matthew Barreto in research that has often been cited as demonstrating racial and ethnic disparities in access to voter ID in Pennsylvania.
  • Dr. Lorraine Minnite is an associate professor at Rutgers-Camden and a well-known expert on vote fraud.  Minnite was brought in by the plaintiffs to examine the content of the legislative debates regarding the need for voter ID and the prevalence (or not) of voter fraud in Pennsylvania.  Her report starts at page 20 here and is an entertaining read for anyone interested in legislative intent.  The court determined, for instance, that there appeared to be a “legislative disconnect from reality” (pg. 41 of the decision), and Minnite shows that, whatever the merits of voter ID, the speculations of legislators often outstripped reality.
  • Dr. Diana Mutz is a professor of political science at the University of Pennsylvania, best known for her work on political communication, political psychology, and public opinion.  I was surprised to find Mutz’s name among the witnesses; her testimony was used by the plaintiffs to try to show that the state had insufficiently advertised how citizens could obtain an ID.

Aspiring expert witnesses can learn at least two lessons from this case.

Learn the tools: Siskin’s report is a virtual manual for matching complex databases to estimate racial and ethnic disparities.  A key piece of evidence was Siskin’s estimate of the number of PA citizens who did not have valid photo IDs.  The work involved fuzzy matching of PennDOT and voter registration databases (pg. 62, “Scope of Need”; pg. 17 of the expert witness report); he used the “BISG” methodology to estimate racial disparities even though his data sources did not contain racial or ethnic identifiers (pg. 20 of the report); and he relied on OpenStreetMap data to estimate drive times for residents without IDs to the closest driver’s license office (pg. 27 of the report).  A minimal sketch of the first two techniques appears below.
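To make the mechanics concrete, here is a minimal Python sketch (not Siskin’s actual code) of two of those techniques: fuzzy name matching across two databases, and a BISG-style posterior that combines surname-based race probabilities with the racial composition of a voter’s neighborhood.  Every record, probability, and threshold below is invented for illustration.

```python
# Illustrative sketch only; not Dr. Siskin's code. All names, records,
# and probabilities are invented. Python standard library only.
from difflib import SequenceMatcher


def name_similarity(a: str, b: str) -> float:
    """Fuzzy similarity in [0, 1] (difflib's Ratcliff/Obershelp ratio)."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()


def fuzzy_matches(voter_name: str, dot_names: list[str],
                  threshold: float = 0.8) -> list[str]:
    """DOT records whose names plausibly match one voter-registration record.

    A real match would also block on date of birth, address, and so on;
    this compares names only.
    """
    return [n for n in dot_names if name_similarity(voter_name, n) >= threshold]


def bisg(p_race_given_surname: dict, p_geo_given_race: dict) -> dict:
    """BISG posterior: P(race | surname, geography).

    Combines P(race | surname) from the Census surname list with
    P(geography | race), the share of each group living in the voter's
    block group, then normalizes (Elliott et al.).
    """
    joint = {r: p_race_given_surname[r] * p_geo_given_race[r]
             for r in p_race_given_surname}
    total = sum(joint.values())
    return {r: v / total for r, v in joint.items()}


# A registration record with a typo still matches the DOT record:
print(fuzzy_matches("Jon Smiht", ["John Smith", "Jane Doe"]))

# A surname that alone suggests 70% white shifts sharply once we know
# the voter lives in a block group where the Black population share is
# ten times the white share:
print(bisg({"white": 0.70, "black": 0.25, "other": 0.05},
           {"white": 0.001, "black": 0.010, "other": 0.002}))
```

In practice, both steps are tuned against hand-checked samples and run over millions of records, but the core logic really is this simple, which is part of why courts find it persuasive.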

You don’t necessarily need to use the most advanced technology (Siskin uses SPSS for all of his statistical estimation), but your methodology must be scientifically sound.

Honor scientific standards of evidence:  Wecker was hired by the defendants solely to, in the words of the Court, “refute Dr. Siskin’s work.”  The court’s treatment of Wecker’s testimony illustrates what happens when evidence can be criticized for not following conventional scientific practice: the court refers to his testimony as “flawed and assumption laden.”

Compare this to the court’s treatment of Marker, and by implication Barreto, both of whom followed valid scientific standards.

A nice introduction to expert witness work was penned by Dick Engstrom and Mike McDonald in 2011.  It’s a useful exercise to read their essay and then review the expert witness reports in this and other cases.

Courtesy of Oxford University Press

Michael Hanmer, Antoine Banks, and Ismail White have a new paper in Political Analysis that returns to a longstanding problem in voting and survey research: overreporting bias among survey respondents.

From the abstract:

Voting is a fundamental part of any democratic society. But survey-based measures of voting are problematic because a substantial proportion of nonvoters report that they voted. This over-reporting has consequences for our understanding of voting as well as the behaviors and attitudes associated with voting. Relying on the “bogus pipeline” approach, we investigate whether altering the wording of the turnout question can cause respondents to provide more accurate responses. We attempt to reduce over-reporting simply by changing the wording of the vote question by highlighting to the respondent that: (1) we can in fact find out, via public records, whether or not they voted; and (2) we (survey administrators) know some people who say they voted did not. We examine these questions through a survey on US voting-age citizens after the 2010 midterm elections, in which we ask them about voting in those elections. Our evidence shows that the question noting we would check the records improved the accuracy of the reports by reducing the over-reporting of turnout.

What is neat about this paper is that the authors suggest a relatively simple way to reduce (but not eliminate; see the attached graphic) the bias.
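To see the logic of the validation step, here is a hypothetical Python sketch (invented data, not the authors’ code or their survey data) of how over-reporting is typically measured: among respondents whose public record shows no vote, what share nonetheless claim to have voted, compared across question wordings.

```python
# Hypothetical illustration: all survey records below are invented.

def overreport_rate(respondents: list[dict]) -> float:
    """Share of validated nonvoters who nonetheless reported voting."""
    nonvoters = [r for r in respondents if not r["validated_vote"]]
    return sum(r["reported_vote"] for r in nonvoters) / len(nonvoters)


standard_wording = [
    {"reported_vote": True,  "validated_vote": False},   # over-reporter
    {"reported_vote": True,  "validated_vote": True},
    {"reported_vote": True,  "validated_vote": False},   # over-reporter
    {"reported_vote": False, "validated_vote": False},
]

# The "bogus pipeline" variant tells respondents their answer can be
# checked against the public record:
pipeline_wording = [
    {"reported_vote": False, "validated_vote": False},
    {"reported_vote": True,  "validated_vote": True},
    {"reported_vote": True,  "validated_vote": False},   # still over-reports
    {"reported_vote": False, "validated_vote": False},
]

for label, sample in [("standard", standard_wording),
                      ("bogus pipeline", pipeline_wording)]:
    print(f"{label}: {overreport_rate(sample):.0%} of nonvoters over-report")
```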

It’s also notable that the research comes out of TESS (Time-sharing Experiments for the Social Sciences), an innovative and low-cost project funded by the Political Science Program of the National Science Foundation (Congress: are you listening?).

Just kidding.  

Doug Chapin at the Election Academy highlights a report out of Ohio showing that, of the 210 cases described, 40 (all from Franklin County) were “referred for more investigation” and only 2 resulted in any prosecution: one of a man who voted for President in another state but for local initiatives in Ohio, and a second of a petition gatherer who falsified names on a petition.

The latter case, of course, does not constitute voting fraud.

The result: 1 of 210 cases, or less than half of one percent, constituted actual voter fraud.  Zero cases of voter impersonation at the polls.  Zero cases of illegal immigrants voting.  Zero cases of organized voter fraud at all.  As one Republican county prosecutor put it: “There’s a couple of isolated incidents of people making bone-headed decisions.”

I don’t expect to see many news stories helping to educate skeptical Americans that vote fraud is not, in fact, rampant in Ohio or in other states.