Tracking the Early Voter

Michael McDonald has a new column reporting early voting rates in 2012, compared to previous years, using the Current Population Survey’s Voting and Registration Supplement. The trends in early voting have been picked up by Rick Hasen and Doug Chapin, but Doug rightly highlights this caveat from Mike’s column:

[I]t is instructive to keep in mind that the Census Bureau statistics are drawn from a survey. We will get more information later this summer when the United States Election Assistance Commission reports early voting statistics from election officials. An interesting difference between these two sources is that election officials do not report when a voter cast their mail ballot; in some states, voters can return their mail ballots on Election Day. So, election officials will likely report a higher early voting rate than the Census Bureau as their statistics in some states include persons who voted by mail on Election Day.

Doug (and Mike) speculate that the differences may be due in part to no-excuse absentee voters who turn in their ballots on Election Day and tell the CPS that they voted “on Election Day”.

This is possible, but the postings highlight the ongoing challenge of collecting consistent and reliable information on the American elections system, especially information on things like early voting that were not part of elections reporting systems just a decade ago.

Tracking the early vote has been a part of EVIC’s mission since our founding in 2004. A decade ago, few states or jurisdictions tracked no-excuse or early in-person ballots, and most polling organizations didn’t pay much attention either. Early voting crept up quietly on survey organizations; in 2000, our best estimate is that 16.26% of citizens cast a ballot prior to Election Day. That figure jumped roughly 40% in 2004, when 22.7% of citizens cast a ballot prior to Election Day. Anyone interested in American elections had to pay attention to the early vote.

There are three separate sources of information on the early vote. The good news is that the sources correlate highly, both across states and over time. The bad news is that there is a persistent gap among them: the Current Population Survey’s Voting and Registration Supplement (VRS) sits at the low end, while data drawn from the Associated Press’s Election Services Unit sit at the high end.

In the table below, we report early voting totals from these three sources for 2008 and 2010. The first column reports the CPS (the source McDonald uses in his column). The second column reports data from the Election Assistance Commission’s Election Administration and Voting Survey (EAVS). The third column reports information graciously provided by the Associated Press’s Elections Tabulation and Research Unit.

        CPS       EAC       AP
2008    28.45%    32.40%    34.00%
2010    24.95%    28.50%    30.50%
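For readers who want to work with these figures directly, here is a minimal Python sketch that computes the gap between the survey-based CPS estimate and the AP tabulation. The numbers are simply hard-coded from the table above; nothing here touches the underlying data files.

```python
# Early voting rates by source, copied from the table above.
early_vote_rates = {
    2008: {"CPS": 28.45, "EAC": 32.40, "AP": 34.00},
    2010: {"CPS": 24.95, "EAC": 28.50, "AP": 30.50},
}

for year, rates in early_vote_rates.items():
    gap = rates["AP"] - rates["CPS"]
    print(f"{year}: CPS {rates['CPS']}% vs. AP {rates['AP']}% "
          f"-> gap of {gap:.2f} percentage points")
```

In both years the survey-based estimate runs more than five percentage points below the AP tabulation, which is the gap the rest of this post tries to explain.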


The CPS is a survey, one of the largest and best available, and it is the only way we have to track voter participation by mode of balloting prior to 2004, when almost no other survey organizations were asking about alternative modes of voting.

Prior to 2004, the CPS asked the question in a different way. PES4 (2002 and prior) asked the respondent whether they “voted in person on election day or before, or by mail.” From 2004 onwards, the CPS asked (PES5) “Did you vote in person or did you vote by mail?” and followed up with (PES6) “Was that on election day or before election day?”

McDonald’s and Chapin’s comments zero in on the number of respondents who cast a “by-mail” ballot but returned it on Election Day. In some jurisdictions, the number of citizens who cast a ballot this way is not inconsequential. According to Dean Logan, Registrar-Recorder/County Clerk of Los Angeles County, 202,584 vote-by-mail ballots were delivered in person to polling places in 2012. These totaled 6.3% of all ballots and 20.8% of no-excuse absentee ballots. Nationwide, however, the totals are far lower. In the 2012 CPS, only 458 of 59,713 respondents, or 0.76% of all voters, said they “voted by mail” and “voted on Election Day.” To Doug’s point about California, Californians gave this response at roughly 1.7 times the national rate (8.27% vs. 4.83% of vote-by-mail respondents).
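For the curious, here is a rough sketch of how that tabulation might be run against a CPS microdata extract. The file name, the recoded column labels, and the absence of survey weights are all simplifying assumptions for illustration; only the PES5 and PES6 item names come from the supplement itself.

```python
import pandas as pd

# Hypothetical 2012 CPS Voting and Registration Supplement extract,
# with PES5 ("in person or by mail") and PES6 ("on or before election
# day") already recoded to readable labels. Unweighted counts only.
cps = pd.read_csv("cps_vrs_2012_extract.csv")
voters = cps[cps["voted"] == "Yes"]

# Respondents who say they voted by mail *and* on Election Day:
# the group McDonald and Chapin single out.
mail_on_eday = voters[(voters["PES5"] == "By mail") &
                      (voters["PES6"] == "On election day")]

share_of_all_voters = len(mail_on_eday) / len(voters)
share_of_mail_voters = len(mail_on_eday) / (voters["PES5"] == "By mail").sum()
print(f"{share_of_all_voters:.2%} of all voters")       # ~0.76% in 2012 (458 of 59,713)
print(f"{share_of_mail_voters:.2%} of by-mail voters")  # ~4.83% nationally
```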

It doesn’t appear that the question wording made a material difference in how citizens reported voting (see the graphic below), although the terrain was changing so rapidly that it would be nearly impossible to separate question-wording effects from actual changes in rates.

[Figure omitted. Source: Current Population Survey Voting and Registration Supplement]

Our second source is the EAC. This is generally the best source for early voting returns, especially since 2008, when state response rates improved dramatically. There is only one minor quibble with the EAC, and it’s not with their data but with their reports. The reports calculate statistics exactly as the states report them, even though the agency continues to wrestle with non-response problems. The careful user needs to be attentive to this fact. For instance, in 2010, Table 28A in the EAVS has 4,678 reporting jurisdictions and 90,810,679 total voters. The same table reports that 8.2% of votes were cast early in-person, basing that result on only 2,318 jurisdictions.

This means that the EAC data probably slightly underreport the number of early in-person and no-excuse absentee voters.
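To make the denominator issue concrete, here is a sketch of how a careful user might recompute the early in-person rate so that the numerator and denominator cover the same jurisdictions. The file name and column names are illustrative assumptions; the actual EAVS layout differs by year.

```python
import pandas as pd

# Hypothetical jurisdiction-level extract from the 2010 EAVS.
eavs = pd.read_csv("eavs_2010_extract.csv")

# "Naive" rate: early in-person totals come only from the jurisdictions
# that report the item, but the denominator includes total voters from
# every reporting jurisdiction. The mismatch depresses the rate.
naive_rate = eavs["early_in_person"].sum() / eavs["total_voters"].sum()

# Matched rate: restrict both numerator and denominator to the
# jurisdictions that actually report early in-person totals.
matched = eavs.dropna(subset=["early_in_person", "total_voters"])
matched_rate = matched["early_in_person"].sum() / matched["total_voters"].sum()

print(f"naive rate:   {naive_rate:.1%}")
print(f"matched rate: {matched_rate:.1%}")
```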

Our third source is the AP. The Associated Press’s Elections Services Unit is an interesting entity. The AP is a not-for-profit cooperative of news organizations, and the Elections Unit begins to compile data from states and counties on early in-person and no-excuse ballots as soon as this information is available. They use this information on election night to help supplement their vote tabulation work. Post-election, they shift into a different mode, assembling official results on total ballots cast, total vote, early votes cast/counted, mail absentee ballots cast/counted, and provisional ballots cast/counted. This includes digging down into precinct-level certified results to compile information not available from counties or states (including valuable breakouts of “advance” vs. Election Day votes by candidate for key races).

In my past work, I’ve relied heavily on the AP’s data, because, unlike the EAC, they have reported results in a relatively consistent fashion back to 2000, and unlike the CPS, the data are ideally based on certified election returns and not on survey results.

These different data sources reflect different perceptions of the same reality. The Census Bureau data are invaluable because they tell us what voters did, behaviorally, over 20 or more years. The EAC provides insight into how states and local jurisdictions have categorized their ballots three to six months after the election.

If I were to make one suggestion, it would be to look closely at what is being done by the AP. The AP has a strong incentive to keep going back to local jurisdictions and make sure that its final figures are correct.

The same incentives should be applied to whatever entity continues to collect the data for the EAVS. The EAC currently is focused on producing a congressionally mandated series of post-election reports. After that point, there is little incentive, and no funding, to go back and correct or amend the data. It would not be hard to design a data collection and funding model that would incentivize not just timely reporting, but also ongoing data maintenance.

