NCRM International Visitor Exchange Scheme (IVES)
Uses and Applications of Qualitative Research Methods in Policy Evaluations Using Observational Designs: a Review and Synthesis (2017 - 2018)
Dr Alasdair Jones (London School of Economics) (A.Jones@lse.ac.uk) is visiting Professor Jennifer Curtin and Professor Peter Davis (University of Auckland).
The main objectives of the proposal are:
- To study the application of social research methodologies for public policy evaluation using observational research designs (and in particular the role that qualitative research has played and can play in such evaluations);
- To review and synthesise published approaches to research design for public policy evaluation that integrate a qualitative research component, and to use this work as the basis of a research output;
- To engage in capacity development in an applied research methodology/public policy context;
- To use the research conducted during this visit as a starting point for developing a related training workshop to be delivered in association with NCRM;
- To use this visit as an opportunity to instigate a collaborative research proposal for an empirical policy evaluation study that firmly and purposively integrates qualitative methods in the study design.
Evidence-based policy-making is high on the political agenda these days. Rather than proceeding on the basis of a hunch that a policy will have its intended effects, the argument goes, policy-makers should use evidence about the effects of existing and previous policies as the basis for future decisions. To this end, many social scientists have become interested in developing ways of evaluating the effects of government policies on their intended (and unintended) outcomes for society. There has been growing emphasis on generating evidence about which policies do and do not work, and why.
Given the complexities of social life, and multiple competing explanations for changes in social behaviour (from an individual’s changing circumstances to global economic forces), such research is not straightforward. Experimentation is rarely practicable in the social world, and instead social scientists have to carry out their research in real world (or what are known as ‘observational’) settings. In such settings, researchers are unable to control, for instance, who is affected by a policy and who is not.
As an example, in a previous study (entitled ‘On the buses’) colleagues and I investigated the public health impacts of a policy that granted children (12-17 year-olds) in London free bus travel (e.g. how this affected children’s levels of physical activity, the extent to which they were victims of crime and the likelihood they would be involved in a road accident). To measure these things we used various sets of official statistical data (e.g. police crime data and regional travel data). As the free bus pass policy applied to all young people in London we could not compare these measures for young people who received free bus travel against young people who did not. Instead, we considered how levels of physical activity (for instance) changed for children when they received their free bus pass compared to adults in London (25-59 year-olds) who did not receive free bus travel.
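The comparison logic described above resembles what evaluators call a difference-in-differences design: the change in an outcome for the group affected by the policy is compared with the change for an unaffected group over the same period. A minimal sketch of that arithmetic follows; the figures are purely illustrative, not results from the 'On the buses' study.

```python
# Sketch of the difference-in-differences comparison logic: compare the
# change in an outcome for the treated group (young people, who received
# free bus travel) with the change for a comparison group (adults, who
# did not). All numbers below are hypothetical, for illustration only.

def did_estimate(treated_before, treated_after, control_before, control_after):
    """Change in the treated group minus change in the comparison group."""
    return (treated_after - treated_before) - (control_after - control_before)

# Illustrative outcome: mean daily minutes of walking per person.
effect = did_estimate(
    treated_before=30.0, treated_after=32.0,   # young people (12-17)
    control_before=28.0, control_after=27.0,   # adults (25-59), comparison
)
print(effect)  # (32 - 30) - (27 - 28) = 3.0
```

The subtraction of the comparison group's change is what nets out background trends (e.g. city-wide shifts in travel behaviour) that would have affected both groups regardless of the policy.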
These comparisons revealed a number of differences in health-related activities and outcomes for these groups after the introduction of the policy (September 2005). However, these differences did not in themselves reveal why, for instance, rates of accidents started to decline faster for children than for older people after the policy came into effect. Moreover, there were some puzzles in our findings that warranted further investigation. For instance, contrary to the views of many at the time, distances walked by young people did not decline when the bus pass was introduced.
To help move us closer to explanation, the study used interviews with young people who used the free bus passes to better understand how the passes affected their health-related behaviour and outcomes. These interviews were very revealing. They suggested, for instance, that the distance young people walked did not decline after they got their bus pass (as might at face value be expected) because the pass became a means to undertake more activities and visit more places, which often involved additional walking.
Intriguingly, the contribution that more 'qualitative' (e.g. interview-based) research can make to our understanding of the effects of policy has received little attention to date. In light of this, I want to do three main things in the proposed study. First, working with an international expert in health policy evaluation in real-world ('observational') settings, I want to review existing policy evaluations that incorporate qualitative methods, in order to describe how researchers have used these methods to inform the findings of their work. Second, I want to set out some principles for how researchers might better incorporate qualitative methods into their evaluation studies from the outset, so that they can harness these methods' explanatory potential. Finally, I want to develop and deliver a training programme for other social researchers based on this work.
Related resources:
Integrating quasi-experimental and inductive designs in evaluation: A case study of the impact of free bus travel on public health, J. Green, H. Roberts, M. Petticrew, Evaluation, 2015
Data Inference in Observational Settings, P. Davis