Probing of "Don't Know" responses in surveys

Date
Category
NCRM news
Author(s)
Jouni Kuha, London School of Economics and Political Science (LSE); Sarah Butt, City, University of London; Myrsini Katsikatsou and Chris Skinner, LSE

When a respondent’s answer to a survey question is “Don’t know” (DK), we often regard this not as a valid answer but as a form of nonresponse. It is then a problem which we would like to reduce. One possible way of doing this is DK-discouraging questioning, or probing. This means that the survey interviewer, rather than accepting a DK answer immediately, asks the question again with gentle encouragement to give a non-DK response. In our study, for example, probing took the form of the statement “We are interested in your views. If you are not sure please give the answer that comes closest to what you think”, followed by the question being repeated once.

But is this a good idea? Probing can reduce the proportion of DK responses substantially, but this gain comes at a cost. Probing increases the length of the interview and the burden on respondents. Perhaps more importantly, it can also affect the quality of the survey measurement, if answers obtained through probing are of a different quality from those obtained without it. Ideally, probing should provide just enough encouragement for an initial DK-respondent to give a well-considered substantive response. It is, however, also possible that probing will pressure the respondent into giving an ill-considered answer just to satisfy the insistent interviewer.

We compared the responses obtained with and without probing to eight survey questions (on attitudes to welfare) asked of 4,770 respondents in Bulgaria, Hungary and Portugal, using the European Social Survey Innovation Sample. In each country, 75% of the respondents were randomly assigned to the treatment group, where each DK response was probed, and 25% to the control group, where probing was not used.

Probing converted around half of the initial DK responses into substantive answers. Comparing the responses themselves, probed answers were typically more likely to have non-extreme values – such as the neutral “Neither agree nor disagree” – than were unprobed answers. But what does this tell us? There are broadly two possibilities. It could be that the observed differences are due to a measurement effect of probing, for example that some of those neutral responses are hasty replies which do not agree with a probed respondent’s true views on the question. But they could also be a sign of a selection effect, where respondents who need probing are genuinely different, here more neutral in their true views, from those who respond immediately. Probing is beneficial if there is a selection effect, but undesirable if there is a measurement effect.
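To see why the response distributions alone cannot settle this, it helps to write the two explanations in a common form. As a rough sketch (the notation here is illustrative, not taken from the article), let \(\eta\) denote a respondent’s true attitude and \(Y_j\) the recorded answer to item \(j\). Then

\[
P(Y_j = y \mid \text{probed}) = \int P(Y_j = y \mid \eta, \text{probed})\, p(\eta \mid \text{probed})\, d\eta ,
\]

and similarly for unprobed responses. A selection effect corresponds to a difference in the attitude distributions, \(p(\eta \mid \text{probed}) \neq p(\eta \mid \text{unprobed})\), while a measurement effect corresponds to a difference in the measurement part, \(P(Y_j = y \mid \eta, \text{probed}) \neq P(Y_j = y \mid \eta, \text{unprobed})\). Either difference, or both, can produce the same observed gap between probed and unprobed answers to a single item.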

Measurement and selection effects can be distinguished only if we analyse answers to several questions together. The eight questions in our study are used as two multiple-item scales for two latent attitudes, of the kind which are typically analysed using a latent variable model such as factor analysis. We combined this model with a second latent variable model for how a response was obtained (unprobed, probed, or not at all), in a way which allows us to examine the effects of probing. For example, a measurement effect is present if the latent-variable measurement model for a survey item is different for probed and unprobed responses to that item.    
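As an illustration of what such a combined model can look like (a stylised sketch; the exact parameterisation in the article may differ), suppose the answer to item \(j\) is ordinal and is driven by the latent attitude \(\eta\) through a factor-analysis-type measurement model whose parameters are allowed to depend on whether the response was probed:

\[
P(Y_j \le c \mid \eta, g) = F\!\left(\tau_{jc}^{(g)} - \lambda_j^{(g)} \eta\right), \qquad g \in \{\text{unprobed}, \text{probed}\},
\]

where \(F\) is a suitable link function, the \(\tau_{jc}^{(g)}\) are thresholds and \(\lambda_j^{(g)}\) is the loading of item \(j\). No measurement effect for item \(j\) means that the loadings and thresholds are equal across \(g\); a smaller loading for probed responses would mean that those responses are weaker measures of \(\eta\). A separate latent-variable component for how each response was obtained (unprobed, probed, or not at all) captures the selection side.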

The results of the analysis showed that there was indeed a measurement effect. In other words, the observed differences between probed and unprobed responses were not explained only by the fact that these responses came from individuals with different levels of the attitude being measured, but also (and more importantly) by differences in how the responses behaved as measures of that attitude. The magnitude of this measurement effect varied across items and across the three countries, but in most cases it was such that responses obtained after probing were weaker measures than unprobed responses.
When there are measurement effects, the costs of probing are likely to outweigh the gains from reduced nonresponse. For this reason, our results provide evidence against probing of “Don’t know” responses in surveys, at least for the kinds of attitudinal items and respondents considered in this study.

Notes
This work was part of the project Item nonresponse and measurement error in cross-national surveys: Methods of data collection and analysis, which was funded by NCRM under its programme of Methodological Innovation Projects. Fieldwork was conducted using the ESS Innovation Sample as part of The European Social Survey: Data in a Changing Europe project (ESS DACE), supported by the European Union under Framework Programme 7 (Research Infrastructures), GA number 262208.
For a full description of the study, see the article Kuha, J., Butt, S., Katsikatsou, M., and Skinner, C.J. (2018). The effect of probing “Don’t Know” responses on measurement quality and nonresponse in surveys. Forthcoming in Journal of the American Statistical Association.