Cross-posted at PolitickerNJ
I recently wrote that a publicly released campaign poll memo was from a message testing survey, with the results presented out of context. I’ve had some experience with message testing polls, specifically working with non-profit organizations on crafting communication strategies.
My most recent experience with message testing polls, though, was as a respondent. A few weeks ago, I was called on my home phone to participate in a message testing poll conducted on behalf of a local campaign. With a plethora of campaign polls now underway, this recent experience provides a good lesson on what goes into a message testing survey – and why the media should be wary of reporting any results from an internal campaign poll. [It also provides a good lesson on the difficulty of avoiding at least a little bias creeping into partisan polls.]
The first question is how my name got chosen for this poll. Simply put, I vote in every general election, so I am very likely to turn out for this off-year election. Furthermore, as an unaffiliated (i.e. independent) voter, I’m part of the “persuadable” electorate for whom campaign messages are specifically crafted.
After establishing that I didn’t work for a political or media organization, the interviewer first asked whether I thought my local area was headed in the right direction or on the wrong track. This was followed by a generic horse-race question, i.e. whether I was likely to vote Democratic or Republican for the local offices up for election this November. This is a standard question to establish a baseline, since most voters use party ID cues as their primary vote decision tool. It was also the first of three times I would be asked to state my vote intention during the course of the interview – a key characteristic of message testing polls.
The next set of questions asked whether I had heard of the incumbent officeholders up for re-election and what my overall opinion of them was. Again, this is standard stuff – incumbent elections are typically referenda on the current officeholders. The questions that followed presented head-to-head matchups, this time naming the two candidates for each office. This was my second shot at expressing a vote choice, because any change from the generic party ballot question asked earlier could indicate underlying strengths or weaknesses of the named incumbents.
The next questions asked me to name my top local issue and assess my local government’s performance. The purpose of these items is to uncover any unknown issues before the poll measures the impact of potential messages already drafted by the campaign.
We then moved on to the meat of the matter. The interviewer read some fairly long positive descriptions, i.e. messages, about both candidates for each of the offices on the ballot. After that, I was asked about my vote choice again – for the third time.
Two things are important to note here. First, an internal poll “memo” that releases the results of this third question without mentioning the context would misrepresent the actual vote intention of the existing electorate – because the poll respondents had more information about the candidates than typical voters have, and that information came from one side only.
Second, this is the point where I figured out who sponsored the poll (i.e. the challengers). As hard as this pollster tried to be balanced in wording the positive descriptions for both parties’ candidates, the descriptions for one slate of candidates had just a little more “zing” in the wording. This subtle difference could have an unintended impact on the results of the third vote choice question.
To be fair, the word choice may not have been the pollster’s. I’ve worked with partners who insist that a particular word or phrase “needs” to be included in the question. Sometimes you can talk them out of it, and sometimes you just go along in order to move the project forward.
Question wording is at the core of the art of polling. It deserves as much scrutiny as the demographic composition of a sample and the poll’s margin of error. This is why reputable pollsters release the full wording of all the questions they ask. And it’s why the media should never report a poll where the pollster refuses to release the complete questionnaire.
Back to the survey interview. The final set of questions – before closing with basic demographic information – presented some negative information about the incumbents (confirming my suspicions about the sponsoring party). I was asked whether knowing this information would influence my vote. Again, this is standard stuff.
Interestingly, very few messages were tested in this poll. In a competitive high-profile race, each campaign will test a variety of pro and con statements to narrow down their communication strategy to the most effective messages. In this instance, only one or two messages about each incumbent were tested. This indicates a race where the decision may not be which messages to choose, but whether spending any resources will be worthwhile and, if so, how to identify the most pliable segments of the electorate.
By the way, this was a pretty good message testing poll given the election in question. The interviewer was excellent and the questionnaire was well-crafted, my observations about the imbalance in the positive candidate descriptions notwithstanding.
There is also an interesting side note to this story. I confirmed the identity of the poll sponsor through an Internet search of the firm name and a review of Election Law Enforcement Commission expenditure reports. When I called representatives of both the pollster and the party organization to corroborate, they were noticeably flustered. One said he’d call me back, but never did. The other answered my questions mainly with “um” or “er.”
Their reaction underscores the fact that campaigns tend to treat their internal polls as state secrets. Typically, they don’t want anyone outside the campaign organization to know what their poll results reveal. Indeed, they usually don’t want anyone to get wind of the fact they are polling at all. All of which makes any publicly released internal poll immediately suspect.
So, my advice to the media is this: if a campaign is suddenly eager to release poll results to a wider audience of “interested parties,” consider the motive. And then just file it away.
[Note: I wish campaign pollsters would be more forthcoming with their contact information at the end of the interview, since their conduct reflects on the whole profession. However, I decided not to identify the sponsor of this poll, since their practices were sound and the primary purpose of this article is to foster a more critical eye toward the public release of internal campaign polls rather than to “out” any particular campaign.]