What is ... the problem with 'statistical significance'?
Speaker(s):
Peter Martin, University College London
Abstract:
Many published research findings may be false. Attempts to replicate results from high-profile scientific studies too often contradict the original findings. One source of the problem is the way that statistical hypothesis tests are commonly used in contemporary research. In particular, many scientists misunderstand p-values. The 'p < 0.05' threshold was originally intended to protect researchers from over-interpreting random variation. But 'statistical significance' is now often wrongly perceived as indicating the scientific robustness of a finding. When a journal editor rejects a manuscript because 'the result is not significant', publication bias is the logical consequence.
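As a taste of the kind of issue the session will cover, here is a minimal Python sketch (not part of the session materials) illustrating the point about random variation: when the null hypothesis is true, about 5% of studies will cross the p < 0.05 threshold by design, so a literature that publishes only 'significant' results fills up with false positives. The sample sizes and number of simulated studies below are arbitrary illustrative choices.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

n_studies = 10_000   # hypothetical studies, all with a true effect of zero
n_per_group = 30     # hypothetical sample size per group

significant = 0
for _ in range(n_studies):
    # Both groups are drawn from the SAME distribution: the null is true,
    # so any 'significant' difference is pure random variation.
    a = rng.normal(loc=0.0, scale=1.0, size=n_per_group)
    b = rng.normal(loc=0.0, scale=1.0, size=n_per_group)
    _, p = stats.ttest_ind(a, b)
    if p < 0.05:
        significant += 1

# Roughly 5% of studies clear the threshold on noise alone.
print(f"'Significant' results under a true null: {significant / n_studies:.1%}")
```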
This session will clarify the concepts at the heart of the debate about 'statistical significance'. We'll shed light on common misuses of p-values and explore how to avoid them. We'll also discuss proposed solutions to the problem, such as open science and registered research reports, as well as the idea of abandoning 'statistical significance' altogether.