PPP, Public Policy Polling, has never made a secret of its Democrat leanings and progressive affiliation. This inherent political bias has been balanced, in large part, by a generally accurate polling record. By one measure, PPP was the most accurate polling firm in the 2012 elections. A good record across a wide spectrum of polls, however, doesn’t preclude serious problems with individual polls.
In late 2013, PPP published surveys of the Senate and Governor races in Georgia that diverged sharply from other polls. PPP, which was polling on behalf of progressive groups in the state, found Democrat candidates in surprisingly competitive positions in the deeply red state. (Both would lose by large margins the following year.)
The month before, Nate Cohn of the New Republic had done a deep analysis of earlier PPP polls and found that the pollster actively weighted each poll to reflect its subjective assessment of the probable racial breakdown of the electorate. PPP didn't, however, weight its polls to conform to a single estimate of the racial mix; its assumed composition often varied widely from poll to poll.
These shifts are basically impossible to justify, and even harder to explain. PPP gives itself discretion to make judgments about the projected composition of the electorate, which it achieves through a highly unusual, even “baffling” process known as “random deletion.”1 PPP “randomly” deletes respondents until the remaining respondents fall into “target ranges” for race, gender, and age based on Census data and prior exit polls.
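PPP has not published the details of this procedure, so the following is only a rough sketch of what "random deletion" toward a target range might look like. The raw sample, the single "white share" dimension, and the target range are all hypothetical (real targets would also cover gender and age):

```python
import random

random.seed(0)
# Hypothetical raw sample of 1,000 respondents, 71% white.
respondents = ["white"] * 710 + ["nonwhite"] * 290

# Hypothetical target range for the white share of the final sample.
LOW, HIGH = 0.60, 0.66

def white_share(sample):
    return sample.count("white") / len(sample)

sample = respondents[:]
# "Random deletion": delete randomly chosen white respondents until
# the remaining sample falls inside the target range.
while white_share(sample) > HIGH:
    whites = [i for i, r in enumerate(sample) if r == "white"]
    sample.pop(random.choice(whites))

print(f"{white_share(sample):.2%} white, n={len(sample)}")
# → 65.96% white, n=852
```

Note that the pollster's judgment enters entirely through the choice of target range: the deletions are random, but where the sample ends up is not.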
Cohn found that this practice wasn’t confined to Georgia, but occurred in a number of battleground states. More worrying, Cohn found that each individual poll seemed to have its sample weighted in a way that would bring its results in line with other published polls.
In other words, PPP apparently was weighting its polls to ensure the results were generally in line with other poll results. A study by Vanderbilt University found that the type of polls PPP conducts were more accurate when other firms had polled a race first.
All of this gets into discussions about methodology and weighting that quickly become dry and interesting only to statistics nerds. PPP is unique, however, in that it uses its polling, and its perceived record of accuracy, to actively push a Democrat agenda.
The finding in Georgia came at a critical time when outside groups were planning their campaign activity for the upcoming year. The poll of Georgia was also framed in the context of the recently concluded partial federal government shutdown. The political implication was that the shutdown had so damaged the Republican party that even a state like Georgia was competitive.
In an August 2013 PPP survey of Georgia, white voters made up 71% of the electorate in the sample. In the October 2013 sample, which showed Democrats tied or ahead in the Senate and Governor races, white voters only made up 63% of the sample electorate. Only PPP can answer why it believed the racial composition of the electorate had changed so much in just two months. Whatever the justification, it had the benefit of producing a result that showed Democrat support surging in the state.
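A back-of-the-envelope calculation shows how much the assumed racial mix alone can move a topline. The support rates below are invented purely for illustration; only the 71% and 63% white shares come from the two PPP samples:

```python
# Hypothetical Democrat support rates by group (invented numbers).
dem_support = {"white": 0.25, "nonwhite": 0.85}

def topline(white_share):
    """Overall Democrat support implied by a given white share."""
    return (white_share * dem_support["white"]
            + (1 - white_share) * dem_support["nonwhite"])

aug = topline(0.71)   # August sample: 71% white
oct_ = topline(0.63)  # October sample: 63% white

print(f"Aug: {aug:.1%}  Oct: {oct_:.1%}  shift: {oct_ - aug:+.1%}")
# → Aug: 42.4%  Oct: 47.2%  shift: +4.8%
```

Under these assumed support rates, the change in sample composition by itself produces a roughly five-point Democrat "surge" without a single voter changing their mind.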
In August 2012, PPP published a survey showing Missouri GOP Senate candidate Rep. Todd Akin leading Democrat Sen. Claire McCaskill by 1 point. This surprising result came after Akin had been widely condemned for offensive remarks suggesting that women couldn’t get pregnant from an actual rape. The poll was credited with convincing Akin to stay in the race. He went on to lose by 16 points just three months later.
It was also revealed in 2013 that PPP had suppressed a poll finding a Democrat state Senate recall candidate losing her election by 12 points. That candidate did in fact lose the recall election, and PPP was widely criticized for not publishing its survey beforehand. No doubt a survey showing the Democrat trailing didn’t fit whatever objective prompted PPP to poll the recall in the first place.
On Tuesday, PPP made its first effort to affect the GOP primary contest, rather than simply measure its state of play. Its poll of the GOP race largely conformed with other pollsters’ results, finding Trump, Carson, Bush and Fiorina in the first four slots.
It ventured into the world of messaging by also asking voters whether they thought Barack Obama was a Muslim and whether they believed he was born in the United States. This can produce a kind of “fun with cross tabs,” where a certain candidate’s voters are more or less likely to believe outrageous things.
For example, a 2011 PPP poll found that Hispanic voters were three times more likely than white voters to believe the moon landings were faked. Very liberal voters were more than twice as likely as very conservative voters to believe the same. Liberals were also more likely than conservatives to believe that the government added fluoride to the water for nefarious reasons.
A Scripps-Howard survey in 2006 found that more than half of Democrats believed the government either took part in or allowed the 9/11 attacks to happen. Conduct enough surveys and you can find almost any result if you look hard enough.
The thing to keep in mind, though, is that PPP polls aren’t simply snapshots of any particular election. Often, in specific instances, they are part of an effort to establish a particular political narrative. That this narrative is always against Republicans is simply part of PPP’s mission.
If the mission calls for it, PPP can always tweak the sample as needed.