International Journal of Public Opinion Research Vol. 26 No. 2 2014
© The Author 2013. Published by Oxford University Press on behalf of The World Association
for Public Opinion Research.
This is an Open Access article distributed under the terms of the Creative Commons Attribution
License (http://creativecommons.org/licenses/by/3.0/), which permits unrestricted reuse,
distribution, and reproduction in any medium, provided the original work is properly cited.
doi:10.1093/ijpor/edt010 Advance Access publication 13 June 2013
RESEARCH NOTE
New Attempts to Reduce Overreporting of Voter
Turnout and Their Effects
Eva Zeglovits and Sylvia Kritzinger
Department of Methods in the Social Sciences, University of Vienna, Vienna, Austria
Introduction
Information on voter turnout is often used to draw conclusions on the state of democracy and democratic representation and is thus a crucial measure when studying
electoral behavior in liberal democracies. Turnout is therefore an important variable in
electoral surveys. However, electoral researchers are often confronted with the problem of ‘‘overreporting’’: The proportion of respondents who report that they voted is
higher than the actual turnout in the election (e.g. Traugott & Katosh, 1979). In many
countries, the only possible way to gather knowledge on electoral participation is by
relying on reported turnout, as validated turnout is simply not accessible. The challenge
is therefore to improve the survey questions to reduce potential sources of error in
reporting turnout. In this article, we focus on the problem of misreporting. First, we
test different new question formats capturing both memory failure and social desirability bias. Second, we analyze the impact that a reduction of overreporting has on follow-up questions. Thus, we look at possible spillover effects. As the individual probability
for a nonvoter to overreport turnout is higher in countries with high turnout (Karp &
Brockington, 2005), for our tests we choose a country featuring this characteristic,
namely Austria. Austria has been known for its comparatively high turnout rates,
which were >90% until the mid-1980s but declined in the 1990s as in many other
countries (Franklin, 2004).
All correspondence concerning this article should be addressed to Eva Zeglovits, Department of Methods
in the Social Sciences, University of Vienna, Rathausstr. 19/1/9, 1010 Vienna, Austria. E-mail:
[email protected]
The Challenge: Finding a More Valid Question for Voter Turnout
Our aim is to obtain a more valid question for reporting turnout in surveys. This is
necessary, since we know, for instance, that overreporting is not distributed equally
across the electorate: Overreporting correlates with individual characteristics (e.g.
Bernstein, Chadha, & Montjoy, 2001; Cassel, 2003; Hill & Hurley, 1984; Presser &
Traugott, 1992) and contextual variables (e.g. Karp & Brockington, 2005); moreover,
the impact of individual characteristics is sensitive to context as well (Górecki, 2011).
Conclusions drawn from the analyses of self-reported turnout are thus biased.
Scholars have identified a number of causes for these deviations in reported turnout
in surveys (Holbrook & Krosnick, 2010): Errors that are related to sampling, such as
noncoverage or survey nonresponse (Traugott & Katosh, 1979); effects of pre-election
interviews (Greenwald, Carnot, Beach, & Young, 1987); measurement or reporting
accuracy errors, such as memory errors or unintentional misreporting (Belli, Moore, &
VanHoewyk, 2006; Belli, Traugott, Young, & McGonagle, 1999; Stocké, 2007; Stocké
& Stark, 2007); and, finally, intentional misreporting owing to social desirability
(Bernstein et al., 2001; Silver, Anderson, & Abramson, 1986; Stocké & Stark, 2007).
In this article, we focus on these latter two aspects, memory failure and social
desirability bias, as they might not be independent from each other: Respondents
might choose to remember their past behavior in a more socially desirable way
(Abelson, Loftus, & Greenwald, 1992). There have been several attempts to reduce
the misreporting problem: (1) introducing new ways of wording questions; (2) diversifying response options; and (3) using indirect ways of asking.1 As a result, most
electoral studies do not simply and directly ask about turnout in the previous
elections, but connect the question with some stimulus meant to reduce misreporting.
For instance, the European Social Survey (round 5) uses the statement ‘‘Some people
don’t vote nowadays for one reason or another,’’ while the American National Election
Study (ANES) asks ‘‘In talking to people about elections, we often find that a lot of
people were not able to vote because they were not registered, they were sick, or they
just didn’t have the time.’’ However, findings suggest that this does not reduce
overreporting compared with the simple and direct question on turnout (Abelson
et al., 1992).
Belli et al. (1999) and Belli et al. (2006) developed additional instruments to simultaneously reduce intentional false answers as well as memory failure. They changed
the stimulus by including a long explanatory statement to assist respondents in remembering the election of interest and by diversifying the response options to report
nonvoting, listing different face-saving answers. Importantly, both measures reduced
overreporting compared with the standard question in the ANES.
We extend these findings and explore whether this latest attempt (Belli et al., 2006)
reduces overreporting in other contexts as well. This seems all the more important, as
this instrument has been tested successfully in the US context but failed in Israel
(Waismel-Manor & Sarid, 2011). Additionally, we develop and test a new form of
diversification of response options. This new form tries to capture the ''likelihood'' of
turnout after the election by listing memory-failure options. It provides response
options that allow respondents to say that they simply cannot remember.
1. See, for example, the item count technique (Holbrook & Krosnick, 2010).
Overreporting is sensitive to context (Karp & Brockington, 2005). In countries
where turnout is high, reported turnout is also high. On the one hand, for the
single individual, it might be more difficult to admit nonvoting; on the other hand,
respondents might have been more likely to have thought about voting, resulting in a
higher proportion of misremembering. Thus, we select a country with high levels of
turnout, where the risk of overreporting for every nonvoter is high, namely Austria.
As validated turnout data are not available in Austria, testing reported turnout is a
main challenge. However, it becomes all the more important, as the improvement of
survey questions is the only way to obtain more valid turnout responses. Thus,
we follow Holbrook and Krosnick (2010, 2013) and rely on indirect evidence in testing
these new attempts: We assume that lower levels of reported turnout in general and in
comparison with the actual turnout in particular can be interpreted as a reduction of
overreporting.2
Providing respondents with a number of response options where they can express
more easily both their memory failure and their ‘‘unsocial’’ behavior might also have
an impact on follow-up questions. Questions that make respondents think about their
past behavior can shift responses to questions focusing on future behavior. The content of earlier
questions gives access to information and behavior (e.g. possibilities to admit nonvoting) that will then affect the following questions (e.g. Sudman, Bradburn, & Schwarz,
1996). These latter responses might be ‘‘a function of the questions presented earlier
in a survey’’ (Kaplan, Luchman, & Mock, 2013). Indeed, Presser (1990) found that
the possibility to report socially desirable behavior in the past reduces the need to
present oneself as a good citizen when talking about the most recent election.
Therefore, if new question forms help respondents admit that they did not vote in
the previous election, this might increase the probability that they also admit that they
will not vote in a future election. For the design of pre-election surveys, where the
focus is on gathering the most probable turnout in the upcoming elections, this will be
of great importance. Thus, we look at possible spillover—or sequencing—effects on
other questions (Transue, Lee, & Aldrich, 2009), which, to our knowledge, have not
yet been researched in the field of turnout questions.
Development and Comparison of Question Wording
Testing Different Turnout Questions
To test our different question versions, we set up a survey experiment testing three
different treatments. The experiment ran in January 2011, more than 2 years after the
most recent federal election in Austria. Memory failure is generally known
to increase with time (Saris & Gallhofer, 2007), and time delays are said to increase
instances of overreporting and add to memory failures in the turnout question
(Abelson et al., 1992; Belli et al., 1999; Stocké, 2007; Waismel-Manor & Sarid, 2011).
2. Self-reported turnout is known to consist mainly of voters and overreporters, while underreporting is a
minor problem (e.g. Traugott & Katosh, 1979; Abelson, Loftus, & Greenwald, 1992; Belli, Traugott, Young,
& McGonagle, 1999; Belli, Moore, & VanHoewyk, 2006; Selb & Munzert, 2013). This is why we interpret
the share of self-reported voters as the sum of true voters and overreporters.
To minimize memory failures, we made sure people were thinking of the correct
election. We added an explanatory statement for all treatment groups that introduced
the topic and reminded respondents of the election in 2008 by emphasizing first the
political actors who were involved then, and second, by pointing to the surprising
early collapse of the coalition. Cognitive testing3 confirmed that with this introduction
respondents remembered the requested election.
Introduction: ‘‘The following question deals with the federal elections that took
place in September 2008, after the grand coalition of Gusenbauer and Molterer
collapsed with the words ‘It’s enough’, and that resulted in Werner Faymann being
chancellor.’’4
Treatment A included the standard question version of most election studies, and
thus formed our control group:
Treatment A: Introduction plus ‘‘In this election a lot of people could not vote or
chose not to vote for good reasons. What about you? Did you vote or not?’’
In Treatment B, we used the approach developed by Belli et al. (2006) with a
response scale including four possible answers, three of them offering response options
to report nonvoting, of which two included face-saving options. Although minor
changes were necessary in the question wording, we kept the response options
identical:
Treatment B: Introduction plus ‘‘In talking to people about elections, we often find
that a lot of people were not able to vote because they were sick, did not have the
time, or were just not interested. Which of the following statements best describes
you? [READ ALOUD]
1. I did not vote in the federal election in Sept 2008.
2. I thought about voting this time but didn't.
3. I usually vote but didn't this time.
4. I am sure I voted in the federal election in Sept 2008.
5. [DO NOT READ ALOUD; VOLUNTEERED] I voted by absentee ballot.''
Meanwhile, Treatment C included a new array of response options tackling in
particular memory failure. We derived the idea from the propensity-to-turnout question,
which is usually asked in pre-election surveys and measures the likelihood of electoral
participation in the upcoming elections. For Treatment C, we asked on a 4-point scale
whether the respondent was sure that she voted. Thus, the response options offer
memory-failure and face-saving options. Again, in cognitive testing respondents did
not report any difficulties in understanding the stimulus or the response options.
3. Cognitive testing describes a set of techniques used to gain insight into the process of responding to a
survey question, covering the stages of comprehension, recall, decision, and choosing a response option
(Willis, 2005). We conducted 20 cognitive interviews in 2011; 10 respondents were assigned to the split in
which the turnout question was asked. One person was not eligible to vote in 2008 and thus did not answer
the question.
4. In this article, we present the English translations of the questions. All questions were asked in German,
which has the side effect that this article contributes to enlarging the applicability of the questions to a
non-English speaking country. The German versions are available on request.
Treatment C: Introduction plus ''In this election, a lot of people could not vote or
chose not to vote for good reasons. This election was some time ago now. Which of
the following statements describes you best? [READ ALOUD]
1. I am sure I did not vote in the federal election in September 2008.
2. I am not sure if I voted but I think it is more likely that I did not.
3. I am not sure if I voted but I think it is more likely that I did.
4. I am sure that I voted in the federal election in September 2008.''
To sum up, Treatment A was the standard question and forms the reference point;
Treatments B and C represented question versions that offer further response options.
We expected that both of the alternative question wordings (B and C) should lead to
fewer instances of misreporting, and therefore to lower rates of reported turnout than
the standard question.
Data
We conducted the survey experiment in a telephone survey.5 In our experiment, we
only included respondents who were eligible to vote in the last election: these are
respondents with Austrian citizenship who were at least 16 years old in 2008. The
randomized assignment to the treatment groups led to slightly different subsample
sizes (A: 291, B: 268, C: 290).
We checked whether the subsamples had equal distributions according to the most
common socio-demographic variables. There were no significant differences for the
distribution of age, region, and gender (Chi-square goodness of fit test, all p > .05).
However, education and employment status (in a job yes/no) were not distributed
equally across treatment groups. As age is correlated with both education and employment status, we calculated a post-stratification weight6 that equally distributed
education, employment status, and age across the treatment groups. By doing so, all
treatment groups resembled the population.
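To illustrate the weighting logic described in footnote 6, a minimal Python sketch follows. This is our reconstruction, not the authors' code (they computed the weights in Stata), and the column names are purely illustrative: for each education x employment x age cell, the weight is the ratio of the cell's share in the overall sample to its share within the respondent's treatment group, truncated at 0.5 and 2.

import pandas as pd

def poststrat_weights(df: pd.DataFrame,
                      cells=("education", "employment", "age_group"),
                      group="treatment",
                      lower=0.5, upper=2.0) -> pd.Series:
    """Weight that makes each treatment group match the overall sample's
    joint distribution over the given cells (cf. footnote 6)."""
    cells = list(cells)
    key = cells[0]
    # Share of the row's cell in the overall sample.
    overall_share = df.groupby(cells)[key].transform("size") / len(df)
    # Share of the same cell within the row's own treatment group.
    within_share = (df.groupby([group] + cells)[key].transform("size")
                    / df.groupby(group)[key].transform("size"))
    # Weighting by the ratio equalizes the cell distributions across groups;
    # truncation at 0.5 and 2 (as the authors did) limits the influence of
    # sparsely populated cells.
    return (overall_share / within_share).clip(lower, upper)

# Usage: df["weight"] = poststrat_weights(df)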
Results of the Survey Experiment: Overall Levels of Reported Turnout
In a first step, we performed a descriptive analysis of reported turnout in the different
treatment groups. Using the standard question (A), 82.2% of respondents reported
having voted, whereas in Treatment B, 78.4% reported turnout. Nearly 5% of the
respondents chose one of the two face-saving response options; 14.0% declared openly
that they did not vote. In Treatment C, 74.6% declared themselves to be sure that they
voted, and 14.8% that they did not, which leaves >5% who reported that they were not
sure whether they voted or not (see Table A1).
5. The survey was conducted by the Institute for Panel Research (Vienna), from January 17, 2011 to
February 11, 2011. The population was people living in Austria aged 16 years and over with sufficient
knowledge of German to participate in a survey. The sampling procedure was regionally stratified random
sampling of telephone numbers from the Austrian phone book, including all registered landline and cell
phones. Invalid numbers were dropped and replaced by valid ones. The proportion of completed interviews
(1,510) out of the total number of valid phone numbers (3,000) was 50.3%. The experiment included two
more treatments, not reported here.
6. For all combinations of education, employment status, and age groups, the distribution within each
treatment group was adapted to the distribution in the overall sample. This weight was used as a probability
weight in the Stata procedures used for later analyses. We truncated the weight at 0.5 and 2.
Following Holbrook and Krosnick (2010, 2013), we first compared the real aggregate
turnout in the 2008 federal elections (78.8%) with the aggregate levels of directly
reported turnout for Treatments A, B, and C. As pointed out, this approach was
chosen because cross-validating respondents' reported turnout with their real turnout is
legally not possible in Austria. We are aware that by doing so, we cannot distinguish
sampling or coverage errors; however, by excluding all people not eligible to vote in
the election of interest and carefully comparing socio-demographic distributions, we
ensured that our respondents resembled the electorate quite well. Second, we analyzed
whether the different alternative treatments led to different reported turnout compared
with the standard question.
We compared proportions of reported voters using a simple one-sided z-test,
excluding all respondents who refused to answer the question. For Treatment C,
we chose a conservative approach and counted respondents who said ‘‘they were
not sure but probably voted’’ as voters, whereas respondents who said ‘‘they were
not sure but probably did not vote'' were counted as nonvoters.
Regarding the actual behavior, our findings showed that the standard question
overestimates turnout: The overestimation is significant despite the small sample
sizes (see Table 1). Most importantly, though, the total reported turnout in
Treatments B and C was not significantly different from the actual turnout of
78.8%. Thus, in both alternative question versions, the phenomenon of overreporting
is no longer statistically detectable when compared with actual turnout.
Comparing the different alternative treatments with the standard question,
our analyses revealed that although self-reported turnout in Treatments B and C
was lower than in A, these differences narrowly failed to reach significance (at an
alpha of 0.05), although both p-values are smaller than .10.7 Most likely, we did not
achieve statistical significance owing to the small sample sizes in our treatment
groups.
As Treatments B and C both performed better when taking the actual turnout into
account and had lower self-reported turnout than the standard question, we have first
indications that their question wordings and diversified response options may be
preferable to the standard question if we want to reduce overreporting. Our data
give us indirect evidence that 6 out of 21 nonvoters reported voting in Treatment
A (85% reported turnout compared with 79%), whereas in Treatments B and C fewer
than 2 out of 21 nonvoters did so.
7. One might add that in cases with small n, one could also choose an alpha of 0.1, as there might not be a
negative consequence of wrongly rejecting the null hypothesis and thus assuming that the new question
performs better than the standard question (e.g. Black, 2009, p. 397). Moreover, if we count all the
respondents who chose ''they were not sure but probably voted'' also as nonvoters, the p-value is 0.02,
indicating a difference between Treatments A and C.
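To unpack the ''6 out of 21'' figure above: under the assumption of footnote 2 (self-reported voters are true voters plus overreporters), the back-of-the-envelope arithmetic per 100 respondents runs as follows. This is our reconstruction from the estimates in Table 1, not a calculation printed in the article.

% Per 100 respondents; actual turnout 78.8%, so roughly 21 true nonvoters.
\begin{align*}
  \text{overreporters} &\approx P_{\text{reported}} - P_{\text{actual}}\\
  \text{Treatment A:}  &\quad 85.2 - 78.8 \approx 6 \text{ of } 21 \text{ nonvoters}\\
  \text{Treatments B, C:} &\quad 80.6 - 78.8 \approx 1.8\text{, i.e., fewer than 2 of } 21.
\end{align*}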
Table 1
Results of the Survey Experiment: Estimated Turnout and Z-tests

                                                   A        B        C
Voted (P*100)                                    85.16    80.57    80.73
Did not vote                                     14.84    19.43    19.27
Valid n                                           280      260      276
Standard error of P                              0.021    0.025    0.025
Z-test comparing treatments to overall turnout of 78.8%
  Z = |P(Treat) - .788| / SE_P(Treat)             2.97     0.72     0.77
  p-value                                         .002     .237     .221
Z-test comparing treatments to standard question (Treatment A)
  Z = |P(A) - P(Treat)| / SE_(P(A)-P(Treat))               1.40     1.34
  p-value                                                  .080     .090

Note. Weighted data; (SE_(P(A)-P(Treat)))^2 = (SE_P(A))^2 + (SE_P(Treat))^2.
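For readers who wish to re-run the comparisons, a minimal Python sketch of the two z-tests in Table 1 follows, computed from the published point estimates and standard errors. It is our reconstruction of the table's formulas, not the authors' Stata code; because the published SEs are rounded, the resulting z-values agree with Table 1 only up to rounding.

from math import erfc, sqrt

def one_sided_p(z: float) -> float:
    """Upper-tail p-value of the standard normal, P(Z >= z)."""
    return 0.5 * erfc(z / sqrt(2))

ACTUAL = 0.788  # aggregate turnout, 2008 Austrian federal election
# Treatment: (reported turnout P, standard error of P), from Table 1.
treat = {"A": (0.8516, 0.021), "B": (0.8057, 0.025), "C": (0.8073, 0.025)}

# Test 1: reported turnout in each treatment vs. actual aggregate turnout.
for name, (p, se) in treat.items():
    z = abs(p - ACTUAL) / se
    print(f"{name} vs actual: z = {z:.2f}, one-sided p = {one_sided_p(z):.3f}")

# Test 2: Treatments B and C vs. the standard question (Treatment A).
p_a, se_a = treat["A"]
for name in ("B", "C"):
    p, se = treat[name]
    se_diff = sqrt(se_a**2 + se**2)  # SE of the difference (see the table note)
    z = abs(p_a - p) / se_diff
    print(f"A vs {name}: z = {z:.2f}, one-sided p = {one_sided_p(z):.3f}")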
Consequences on Follow-up Questions: Sequencing Effects
Experimental treatment might not only affect the questions within the survey
experiment but could also have consequences—in the sense of spillover effects—
on later survey questions (Gaines, Kuklinski, & Quirk, 2007; Transue et al., 2009).
In our survey, the question following the treatments was about party choice
in the 2008 election, but was asked only of those who declared themselves voters. The
next question referred to the next election: ''Imagine there were a federal
election next Sunday: which party would you vote for?'' It was the first question after the experimental treatments for those who had admitted nonvoting.
Response options were not read aloud. Interviewers assigned the responses to
one category of a given list, including the names of all parties represented in
the Austrian National Council, a category ‘‘other party,’’ a category ‘‘would not
vote/no party,’’ and a category ‘‘don’t know / no answer.’’ Table 2 shows the
percentages of party voters, nonvoters, and undecided voters in the three treatment
groups.
Using z-scores again, we find that reported nonvoting in upcoming elections was
affected by the different stimuli in the retrospective turnout question. Compared with
respondents of Treatment A, a significantly higher proportion of respondents in
Treatments B and C openly declared themselves to be nonvoters when asked which
party they would vote for: It appears that respondents were less concerned about admitting that they would not turn out to vote if (face-saving) options for reporting
nonvoting were offered previously.
We therefore conclude that reducing overreporting in the turnout question
also has spillover effects on later questions. Once the question makes it easier to
report nonvoting in the last election, this effect seems to persist, as respondents are
then more likely to declare that they will also not turn out in an upcoming
election.
Table 2
Spillover Effects on the Follow-up Question: Prospective Turnout and Z-tests

                                            A        B        C
Declared for a party (%)                  58.16    56.54    53.50
Declared nonvoter (%)                      8.27    13.95    16.09
Declared undecided (neither nor) (%)      33.57    29.51    30.40
n                                          291      268      290
Standard error of p(nonvoter)             0.016    0.021    0.022
Z-test comparing share of nonvoters in treatments to standard question (Treatment A)
  Z = |P(A) - P(Treat)| / SE_(P(A)-P(Treat))       2.13     2.90
  p-value (one-sided)                              .016     .002

Note. Weighted data, percentages in response categories, rounded numbers.
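The same two-sample z-test reproduces the spillover comparisons; a short, self-contained sketch using Table 2's published (rounded) nonvoter shares and standard errors:

from math import erfc, sqrt

# Spillover test (Table 2): share of declared nonvoters in Treatments B and C
# vs. the standard question (A), from the published figures.
p_a, se_a = 0.0827, 0.016                     # Treatment A: declared nonvoters
for name, (p, se) in {"B": (0.1395, 0.021), "C": (0.1609, 0.022)}.items():
    z = abs(p - p_a) / sqrt(se_a**2 + se**2)  # SE of the difference, as in Table 1's note
    p_one_sided = 0.5 * erfc(z / sqrt(2))     # upper-tail normal probability
    print(f"A vs {name}: z = {z:.2f}, one-sided p = {p_one_sided:.3f}")
# Output is approximately z = 2.15, p = .016 (B) and z = 2.87, p = .002 (C),
# matching Table 2 up to rounding of the published standard errors.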
Discussion
Our experiment presented new evidence that alternative question wordings and
response options help to reduce overreporting of turnout, to some extent, also in a
country with high turnout such as Austria. Both the question version adapted
from Belli et al. (2006) and our newly developed question version provided levels
of overall reported turnout that did not differ significantly from the real aggregate
level of turnout. Both question versions successfully offered enough options to induce
people to say that they did not vote, be it because of memory failure or because of
social desirability. Compared with the standard question, both new forms slightly
reduced reported turnout, but the difference narrowly failed to reach statistical
significance.
Based on these results, we argue that the question wordings of Treatments B and C
would improve the measurement of voter turnout, as potential sources of overreporting
are reduced. Should each version be used in different circumstances? This question is difficult to
answer, as both question versions were tested at the same time and in the same country.
We, however, speculate that Treatment B might be a better measure for post-election
surveys, which are usually conducted immediately after an election, where memory
errors will still be minimal but social desirability will be high: This kind of misreporting
is very well captured in the response options of Treatment B. Meanwhile, the
diversification of response options in Treatment C might be particularly useful for
recalling turnout in previous elections, as is often asked in pre-election surveys,
where misreporting owing to memory failure will naturally be higher while social
desirability might have
decreased. Future comparative research taking into account how much time has passed
since the last election should shed more light on this first speculative distinction.
Offering more (face-saving) options for reporting nonvoting also had a spillover
effect on questions regarding upcoming elections. The turnout questions in
Treatments B and C led to different response behaviors in the following connected
question. This further emphasizes the importance of our finding concerning alternative
turnout questions, as the bias in subsequent questions will also be reduced.
Our experiment has revealed interesting insights into research on overreporting.
First, we have successfully tested a new question wording that relies on propensity
measures and might be of particular relevance in pre-electoral contexts. Second, we
have shown that recent attempts to reduce overreporting (Belli et al., 2006) work in
multiparty systems with high turnout as well, and third, that sequencing effects might
be particularly relevant in research on reported turnout.
There are some limitations to our results that have to be considered. First, we
cannot validate the survey responses, but only compare them with the standard question. Second, misreporting is sensitive to survey mode (Tourangeau & Smith, 1996),
and overreporting of turnout is, in particular, higher in interviewer-administered than
in self-administered surveys (Stocké, 2007; Holbrook & Krosnick, 2010). Our findings, therefore, cannot necessarily be transferred to a different mode of questioning.
Third, results might be sensitive to the time span in question. Finally, our results
apply to the Austrian context, a German-speaking environment with high levels of
turnout. Nevertheless, working on more valid questions for capturing voter turnout in
countries where turnout validation is not an option is worth the effort, as our experiment has shown.
Acknowledgments
This research was conducted under the auspices of the Austrian National Election
Study (AUTNES), a National Research Network (NFN) sponsored by the Austrian
Science Fund (FWF) (S10903-G11) and was supported by a Grant from the
Netherlands Institute for Advanced Study in the Humanities and Social Sciences
(NIAS). The authors would also like to thank Dr Richard Költringer, Institute for
Panel Research (Vienna, Austria), for generously conducting our experiment.
Appendix
Table A1
Self-Reported Turnout—Treatments A, B, and C

Treatment A
  Voted                             82.2%
  Did not vote                      14.3%
  Don't know                         0.8%
  Refused                            2.7%
  n                                  291

Treatment B
  Voted                             78.4%
  Usually vote, but this time no     3.6%
  Thought of voting, but no          1.4%
  Did not vote                      14.0%
  Don't know                         0.2%
  Refused                            2.5%
  n                                  268

Treatment C
  Voted (sure)                      74.6%
  Not sure, but think I voted        2.1%
  Not sure, but think no vote        3.5%
  Did not vote (sure)               14.8%
  Don't know                         0.4%
  Refused                            4.7%
  n                                  290

Note. Weighted data, rounded numbers.
References
Abelson, R. P., Loftus, E. F., & Greenwald, A. G. (1992). Attempts to improve the
accuracy of self-reports of voting. New York, NY: Russell Sage Foundation.
Belli, R. F., Moore, S. E., & VanHoewyk, J. (2006). An experimental comparison of
question forms used to reduce vote overreporting. Electoral Studies, 25, 751–759.
doi:10.1016/j.electstud.2006.01.001.
Belli, R. F., Traugott, M. W., Young, M., & McGonagle, K. A. (1999). Reducing
vote overreporting in surveys. Social desirability, memory failure, and source monitoring. Public Opinion Quarterly, 63, 90–108. doi:10.1086/297704.
Bernstein, R., Chadha, A., & Montjoy, R. (2001). Overreporting voting. Why it
happens and why it matters. Public Opinion Quarterly, 65, 22–44. doi:10.1086/
320036.
Black, T. (2009). Doing quantitative research in the social sciences. An integrated approach
to research design, measurement and statistics. London, Thousand Oaks, New Delhi,
Singapore: Sage.
Cassel, C. A. (2003). Overreporting and electoral participation research. American
Politics Research, 31, 81–92. doi:10.1177/1532673X02238581.
Franklin, M. N. (2004). Voter turnout and the dynamics of electoral competition in
established democracies since 1945. Cambridge, New York, Melbourne, Madrid,
Cape Town: Cambridge University Press.
Gaines, B. J., Kuklinski, J. H., & Quirk, P. J. (2007). The logic of the survey
experiment reexamined. Political Analysis, 15, 1–20. doi:10.1093/pan/mpl008.
Greenwald, A. G., Carnot, C. G., Beach, R., & Young, B. (1987). Increasing voting
behaviour by asking people if they expect to vote. Journal of Applied Psychology, 72,
315–318. doi:10.1037/0021-9010.72.2.315.
Górecki, M. (2011). Electoral salience and vote overreporting: Another look at the
problem of validity in voter turnout studies. International Journal of Public Opinion
Research, 23, 544–557. doi:10.1093/ijpor/edr023.
Hill, K. Q., & Hurley, P. (1984). Nonvoters in voters’ clothing: The impact of voting
behaviour misreporting on voting behaviour research. Social Science Quarterly, 65,
199–206.
Holbrook, A. L., & Krosnick, J. A. (2010). Social desirability bias in voter turnout
reports. Public Opinion Quarterly, 74, 37–67. doi:10.1093/poq/nfp065.
Holbrook, A. L., & Krosnick, J. A. (2013). A new question sequence to measure voter
turnout in telephone surveys. Results of an experiment in the 2006 ANES pilot
study. Public Opinion Quarterly, 77(Special Issue), 106–123. doi:10.1093/poq/
nfs061.
Kaplan, S. A., Luchman, J. N., & Mock, L. (2013). General and specific question
sequence effects in satisfaction surveys: Integrating directional and correlational
effects. Journal of Happiness Studies, 14, 1443–1458. doi:10.1007/s10902-012-9388-5.
Karp, J. A., & Brockington, D. (2005). Social desirability and response validity: A
comparative analysis of overreporting voter turnout in five countries. Journal of
Politics, 67, 825–840. doi:10.1111/j.1468-2508.2005.00341.x.
Presser, S. (1990). Can changes in context reduce vote overreporting in surveys?
Public Opinion Quarterly, 54, 586–593. doi:10.1086/269229.
234
INTERNATIONAL JOURNAL OF PUBLIC OPINION RESEARCH
Presser, S., & Traugott, M. (1992). Little white lies and social science models:
Correlated response errors in a panel study of voting. Public Opinion Quarterly,
56, 77–86. doi:10.1086/269296.
Saris, W. E., & Gallhofer, I. N. (2007). Design, evaluation, and analysis of questionnaires
for survey research. Hoboken, NJ: Wiley.
Selb, P., & Munzert, S. (2013). Voter overrepresentation, vote misreporting, and
turnout bias in postelection surveys. Electoral Studies, 32, 186–196. doi: 10.1016/
j.electstud.2012.11.004.
Silver, B. D., Anderson, B. A., & Abramson, P. R. (1986). Who overreports voting?
American Political Science Review, 80, 613–624.
Stocké, V. (2007). Response privacy and elapsed time since election day as determinants for vote overreporting. International Journal of Public Opinion Research, 19,
237–246. doi:10.1093/ijpor/edl031.
Stocké, V., & Stark, T. (2007). Political involvement and memory failure as interdependent determinants of vote overreporting. Applied Cognitive Psychology, 21,
239–257. doi:10.1002/acp.1339.
Sudman, S., Bradburn, N. M., & Schwarz, N. (1996). Thinking about answers: The
application of cognitive processes to survey methodology. San Francisco, CA: Jossey-Bass.
Tourangeau, R., & Smith, T. W. (1996). Asking sensitive questions: The impact of
data collection mode, question format, and question context. Public Opinion
Quarterly, 60, 275–304. doi:10.1086/297751.
Transue, J. E., Lee, D. J., & Aldrich, J. H. (2009). Treatment spillover effects across
survey experiments. Political Analysis, 17, 143–161. doi:10.1093/pan/mpn012.
Traugott, M. W., & Katosh, J. P. (1979). Response validity in surveys of voting
behavior. Public Opinion Quarterly, 43, 359–377. doi:10.1086/268527.
Waismel-Manor, I., & Sarid, J. (2011). Can overreporting in surveys be reduced?
Evidence from Israel’s municipal elections. International Journal of Public Opinion
Research, 23, 522–529. doi:10.1093/ijpor/edr021.
Willis, G. B. (2005). Cognitive interviewing. A tool for improving questionnaire design.
Thousand Oaks, CA: Sage.
Biographical Notes
Eva Zeglovits is a postdoctoral researcher at the Department of Methods in the
Social Sciences, University of Vienna. She holds a PhD in Political Science and an
MSc in Statistics and is working in the Austrian National Election Study. Her research focuses on electoral behavior, political socialization, and survey methods.
Sylvia Kritzinger is Professor and Head of the Department of Methods in the Social
Sciences, University of Vienna. She holds a PhD in Political Science and is one of the
principal investigators of the Austrian National Election Study. Her research focuses
on electoral behavior, public opinion formation, democratic representation, and empirical methods.