Another day, another poll in California.
by Ben Christopher
CALmatters
This latest batch of numbers comes from UC Berkeley’s Institute of Governmental Studies, which echoes other recent independent polls in showing Democratic Lt. Gov. Gavin Newsom leading Republican businessman John Cox by a healthy margin in the governor’s race, U.S. Sen. Dianne Feinstein fending off a challenge from fellow Democrat state Sen. Kevin de León, and defeat in store for ballot measures to repeal the gas tax hike and allow more rent control. National polls largely suggest that the U.S. House will flip to the Democrats while the U.S. Senate will remain in Republican control.
Whether those data points lift your spirits or fill you with political dread might influence how seriously you take such polls—and what lessons you drew from the jaw-dropping surprises of the 2016 election. Regardless, brace yourself for news about more polls in the final countdown to election day: phone polls and online polls, independent polls and hired-by-one-side polls, red polls and blue polls.
You may rightly wonder: What kind of statistical black magic is performed behind the scenes to produce any given poll? Keep these tips in mind:
Tip 1: Consider the Pollster
Not all pollsters are equal.
Generally speaking, political pollsters who have been hired by candidates have plenty of reasons to make their clients look good or say what they want to hear. If you can’t tell who paid for a poll, ask yourself whether you’ve ever heard of the organization and whether the numbers it’s providing seem roughly in line with others you may have seen. If not, said Dean Bonner, an associate survey director at the Public Policy Institute of California, those are “red flags.”
Transparency is also a good sign, he added. See if you can find the survey’s crosstabs—the detailed numerical breakdowns of how different subgroups answered each question. And look for the actual questions that respondents were asked, because it can be easy to get the answer you want with some clever framing. Which brings us to…
Tip 2: Read the Questions
This month, the polling firm SurveyUSA found that 58 percent of likely voters support Proposition 6, the ballot measure to repeal a recent 12-cents-per-gallon boost in the state gas tax, at a cost of roughly $5 billion in lost revenue per year.
Contrast those results with the recent Institute of Governmental Studies poll, which found that only 40 percent support the repeal. That’s an 18 percentage point gap between the two surveys—the difference between an anti-tax landslide and a resounding defeat.
What explains the contradictory results? See if you can spot the difference in how each pollster asked about the ballot measure:
SurveyUSA: Proposition 6, a constitutional amendment which would repeal gasoline and diesel taxes, and vehicle fees, that were enacted in 2017 and would require any future fuel taxes be approved by voters. A YES vote on Prop 6 would repeal fuel tax increases that were enacted in 2017, including the Road Repair and Accountability Act of 2017. A NO vote on Prop 6 would keep the fuel taxes imposed in 2017 by the California legislature in place, and would allow the legislature to impose whatever fees and taxes it approved in the future, provided 2/3 of the CA House and 2/3 of the CA Senate approved. On Proposition 6, how do you vote?
IGS: Proposition 6: Eliminates certain road repair and transportation funding. Requires certain fuel taxes and vehicle fees be approved by the electorate. Initiative constitutional amendment. Repeals a 2017 transportation law’s taxes and fees designated for road repairs and public transportation. Fiscal impact: Reduced ongoing revenues of $5.1 billion from state fuel and vehicle taxes that mainly would have paid for highway and road maintenance and repairs, as well as transit programs.
Half of the UC Berkeley poll respondents were also given the hint that Prop. 6 is frequently called the “gas tax repeal initiative.” Even so, the dramatically different framing seems to have steered voters toward different opinions on the issue. While the Berkeley poll mostly sticks to the language of the proposition itself (which emphasizes how the measure would take away transportation funding), the SurveyUSA poll describes the proposition primarily as a repeal of “gasoline and diesel taxes.”
Last September, the Public Policy Institute of California was able to produce a similar split in opinion among the same group of people by framing Prop. 6 as either a gas tax repeal or a funding cut.
Tip 3: Don’t Sweat Big Differences Among Small Groups (or Small Differences Among Big Groups)
Anyone who really decided to dig into recent polling from the Public Policy Institute of California would have come across a startling fact.
In September, 34 percent of naturalized citizens surveyed reported that they support the Republican candidate for governor, Cox, over his opponent, Newsom. But just a month later, Cox’s support among naturalized citizens had fallen to 24 percent. Why would one-third of Cox-backing immigrants abandon their candidate in his time of need?
The thing is, they probably didn’t. As Bonner points out, the sampling error in a poll—how much you might reasonably expect the estimates in a survey to be off—increases as the number of people surveyed shrinks. A 5 percentage-point decline among all California voters, for example, could be meaningful. Devastating, even. But a reported 5 point decline among Asian-American voters over the age of 75 living in Imperial Valley—not so much. The pollster likely doesn’t have very many people in the sample who fit all of those demographic descriptions, so the odds that a single respondent with a statistically out-of-character opinion throws off the average are pretty high.
Likewise, it’s easy to over-interpret very small changes, even with very big samples. Last month, 39 percent of likely voters said that they would be voting for Cox, according to PPIC. This month it was 38 percent. Was there a real change in public opinion? There’s no good reason to think so. This slight difference in how surveyed voters responded probably just comes down to random chance.
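To put rough numbers on that intuition, here’s a minimal Python sketch of the standard 95 percent margin of error for a polled proportion. The sample sizes are invented for illustration, not drawn from any actual PPIC survey:

```python
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """Approximate 95% margin of error for a proportion p
    estimated from a simple random sample of size n."""
    return z * math.sqrt(p * (1 - p) / n)

# Invented sample sizes, roughly the scale of a statewide poll
# and of one small demographic slice within it.
for label, n in [("full sample", 1700), ("small subgroup", 90)]:
    moe = 100 * margin_of_error(0.38, n)
    print(f"{label:14} n={n:5} -> +/- {moe:.1f} points")
```

Under those made-up numbers, the full sample carries a margin of about plus-or-minus 2.3 points, so a one-point dip from 39 to 38 percent is indistinguishable from noise, while the 90-person subgroup carries a margin of roughly plus-or-minus 10 points, big enough to swallow the entire apparent swing among naturalized citizens.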
Tip 4: Ask What’s Being Measured
One common takeaway from the 2016 election is that the victory of Donald Trump represented a catastrophic failure of political polling.
But it didn’t. Not really, anyway. Polls try to measure popular sentiment, and on election day the polling average on the political website FiveThirtyEight put Hillary Clinton roughly 3.5 points ahead of Donald Trump. In fact, she won the popular vote by about 2 points. Not bad.
Trump won the election, of course, because he won a majority of Electoral College votes (he had, in other words, fewer votes total, but his were in the right places). The polls were only monumentally wrong if you assess their performance by a metric that most weren’t measuring.
It’s an easy mistake to make. Take the most recent poll from the Public Policy Institute of California, which found that 49 percent of likely voters in the state’s 11 most competitive congressional districts plan to vote for a Republican candidate, compared to 44 percent who are leaning toward the Democrat. One would be tempted to conclude from that information that Democratic hopes of flipping red seats blue are doomed. But that would be wrong. The result isn’t a measure of any given race, but an average across nearly a dozen possibly very different ones.
That kind of aggregate measure “says absolutely nothing about what’s going to happen in Duncan Hunter’s district, Dana Rohrabacher’s district or Devin Nunes’ district,” said Jane Junn, a political science professor at the University of Southern California and an expert on polling methodology.
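A toy Python sketch, with entirely invented vote shares, shows how an average across districts can point one way while most of the individual races point the other:

```python
# Invented GOP shares of the two-party vote in four hypothetical
# competitive districts. One lopsided race drags the average up.
districts = {
    "District A": 0.62,
    "District B": 0.49,
    "District C": 0.48,
    "District D": 0.47,
}

average = sum(districts.values()) / len(districts)
print(f"Average GOP share across districts: {average:.1%}")  # 51.5%

for name, gop in districts.items():
    leader = "Republican" if gop > 0.5 else "Democrat"
    print(f"{name}: {leader} leads, {max(gop, 1 - gop):.0%} to {min(gop, 1 - gop):.0%}")
```

In this made-up example the Republicans lead “on average” even though Democrats lead in three of the four races, which is exactly why a pooled number can’t tell you who wins any particular seat.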
Tip 5: Be Aware of the Conjuring Behind the “Likely Voter”
Polling isn’t all about crunching numbers and interrupting strangers while they try to finish dinner. Being able to envision who is actually going to bother to vote this year and then sifting through your results until your data matches that vision? That takes imagination.
The problem, of course, is that nobody—not even stat geeks at polling outfits—can predict the future.
Coming up with a workable turnout model, said Junn, is “sort of like throwing spaghetti against the wall.” You mess around with mathematical weights until your sample of likely voters “comports with what you think it’s supposed to look like…It’s more like an art than a science—and it’s a very ugly art,” she said.
But some pollsters make that art look pretty scientific.
The Public Policy Institute of California, for example, calls state residents at random and then filters their responses through a likely voter algorithm. That sorting process is based on how they answer a series of questions about past voting behavior, their intention to vote and other factors that have, historically, been pretty good predictors of electoral participation.
Similarly, the Institute of Governmental Studies determines its “likely voter” pool not by guessing at the demographic composition of the electorate beforehand, but by applying a formula which takes into account whether a person says they plan on voting, how often they’ve voted before and how interested they are in the upcoming election. Then they cross-check the results with commercially available databases of registered voter data (called voter files), so they can tell if respondents have already cast their ballots (making them the most likely voters of all) and which ones are lying.
But those two approaches—systematic and grounded in political science research though they are—still amount to a “judgment call on the part of the pollsters,” said Mark DiCamillo, director of the Berkeley poll. It still comes down to deciding who counts as a voter and who doesn’t, without knowing for sure.
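For a sense of what such a judgment call might look like in practice, here is a heavily simplified Python sketch. The questions, weights and cutoff are all invented for illustration and don’t reproduce PPIC’s or IGS’s actual models:

```python
from dataclasses import dataclass

@dataclass
class Respondent:
    says_will_vote: bool   # stated intention to vote
    past_votes: int        # elections voted in, out of the last 4
    interest: int          # self-reported interest, 1 (low) to 5 (high)
    already_voted: bool = False  # confirmed against a voter file

def likely_voter_score(r: Respondent) -> float:
    """Hypothetical additive score; real likely-voter models are
    more involved and calibrated against historical turnout."""
    if r.already_voted:
        return 1.0  # a ballot already cast is the surest signal of all
    score = 0.4 if r.says_will_vote else 0.0
    score += 0.1 * r.past_votes   # contributes up to 0.4
    score += 0.04 * r.interest    # contributes up to 0.2
    return score

sample = [
    Respondent(says_will_vote=True, past_votes=4, interest=5),
    Respondent(says_will_vote=True, past_votes=1, interest=2),
    Respondent(says_will_vote=False, past_votes=0, interest=1),
]
likely = [r for r in sample if likely_voter_score(r) >= 0.5]
print(f"{len(likely)} of {len(sample)} respondents counted as likely voters")
```

Notice the judgment calls baked in: somebody has to pick the weights and decide that 0.5, rather than 0.4 or 0.6, is the line between a voter and a bystander.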
Another approach is to predict many possible outcomes at once. In a recent poll of California’s 25th congressional district, the New York Times published results under seven different “turnout scenarios,” ranging from an electorate composed of “people who say they are almost certain to vote, and no one else” (in this version of reality, GOP Rep. Steve Knight of northern Los Angeles County leads Katie Hill, a Democrat, by 3 points) to “the types of people who voted in 2014” (in which case Knight is up by 9). And of course, other surveys using other turnout models show Hill leading him.
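To see how one set of interviews can yield several toplines, here is a toy Python sketch with invented numbers (it doesn’t reproduce the Times’ methodology) that reweights the same two groups of respondents under two turnout scenarios:

```python
# Invented candidate margins (Republican minus Democrat, in points)
# among two groups of respondents from the same set of interviews.
margins = {"habitual voters": +6.0, "irregular voters": -8.0}

# Each turnout scenario assumes a different mix of the electorate.
scenarios = {
    "2014-style, low-turnout electorate": {"habitual voters": 0.80, "irregular voters": 0.20},
    "high-turnout electorate":            {"habitual voters": 0.55, "irregular voters": 0.45},
}

for name, shares in scenarios.items():
    topline = sum(shares[g] * margins[g] for g in margins)
    leader = "Republican" if topline > 0 else "Democrat"
    print(f"{name}: {leader} +{abs(topline):.1f}")
```

Same interviews, different imagined electorates, opposite leaders.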
Why offer so many results? Radical transparency might be one explanation. Pollsters operate in a sea of uncertainty, so why not express that to the reader?
Another explanation, from DiCamillo: “they’re hedging their bets…that’s how you could do it if you didn’t want to stick your neck out.”
Tip 6: Don’t Freak Out About Early Vote Numbers
In California, roughly 7 in 10 voters are registered to cast their ballots by mail. In some particularly lopsided races, that might mean the election is effectively over before November 6.
Paul Mitchell, vice president of Political Data Inc. and the man responsible for a compulsively addictive interactive absentee vote tracker, wrote this week about the perils of using early voting results as a prognostication tool. For one, he said, most pollsters ask respondents whether they’ve already voted. So if, for example, you see a surge of Democrats in the early vote in a district where most polls had the two major parties tied, that isn’t necessarily new information. Odds are those Democrats were already counted by the pollsters, and the Republicans are probably on their way.
Likewise, skyrocketing (or lackluster) turnout in the early days doesn’t necessarily say anything about overall turnout. It’s possible voters are just a little quicker or slower to the punch this year. Or maybe certain county registrars’ offices are speedier than others.
So enjoy watching the numbers coming in, as Mitchell wrote, but “viewers of this data should take it with a grain of salt and not fall into the trap of over-analyzing it.”
For a deeper dive into what’s on your California ballot, check out the CALmatters voter guide.
CALmatters.org is a nonprofit, nonpartisan media venture explaining California policies and politics.