#FOAMPubMed 5: Significance


SIGNIFICANCE MEANS SOMETHING DIFFERENT IN RESEARCH THAN IN LAY LANGUAGE

Often in the media we hear the results of a new trial showing a ‘significant’ result. A company may market a new drug or product that ‘significantly lowers your cholesterol’ for example. Or ‘such and such significantly increases your risk’ of something.

The trouble is that, for most of us, that word implies the effect must be large. The drug or product will make your cholesterol drop by a lot. That’s what ‘significantly lowering’ means to us.

SIGNIFICANCE IN RESEARCH MEANS YOU’VE REDUCED THE CHANCE OF FALSELY REJECTING YOUR NULL HYPOTHESIS

It means you’ve designed your study and recruited enough subjects to reduce the effect of chance. Usually, the more stringent the level of significance we demand, the larger our sample size needs to be.
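To get a feel for that relationship, here is a minimal sketch in Python (not from the original post). It uses the statsmodels power calculator; the effect size of 0.3 and the 80% power target are assumptions chosen purely for illustration.

```python
# Sketch: a stricter significance level (smaller alpha) demands more subjects,
# all else being equal. Effect size and power here are illustrative assumptions.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
for alpha in (0.05, 0.01, 0.001):
    n = analysis.solve_power(effect_size=0.3, alpha=alpha, power=0.8)
    print(f"alpha = {alpha}: about {n:.0f} subjects needed per group")
```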

In a previous blog we looked at how a Type I error means falsely rejecting the null hypothesis, in other words a false positive. We looked at how we show we’ve minimised that chance with a p value. The conventional threshold is p < 0.05, which means that, if the null hypothesis were true, there would be a less than 5% chance of seeing results at least this extreme; in other words, we accept no more than a 5% risk of falsely rejecting the null hypothesis.
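A rough way to see what that 5% means is to simulate it. The sketch below is my own illustration, not part of the original series: it runs many pretend trials in which the drug and placebo groups are identical, so the null hypothesis is true in every trial, and roughly 5% of them still come out with p < 0.05 by chance alone.

```python
# Simulate many trials where there is genuinely no difference between groups.
# Roughly 5% of them will still reach p < 0.05: these are Type I errors.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
false_positives = 0
n_trials = 10_000

for _ in range(n_trials):
    # Both groups drawn from the same distribution: the null hypothesis is true
    drug = rng.normal(loc=0, scale=1, size=50)
    placebo = rng.normal(loc=0, scale=1, size=50)
    _, p = stats.ttest_ind(drug, placebo)
    if p < 0.05:
        false_positives += 1  # we would falsely reject the null hypothesis

print(f"False positive rate: {false_positives / n_trials:.3f}")  # close to 0.05
```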

p<0.05 MEANS OUR RESULTS ARE SIGNIFICANT

That’s what statistically significant means. It’s fairly arbitrary: in reality there is very little difference between a p value of 0.049 and a p value of 0.051, except that the former allows you to use the magic words “my results are significant” and the latter does not.

SIGNIFICANCE DOES NOT DESCRIBE THE SIZE OF THE EFFECT

I could study a new drug for blood pressure and find the average reduction in my volunteers is only 1 mmHg. That doesn’t sound like a lot. But if the p value is less than 0.05, that result is statistically significant, and I could therefore describe my drug as statistically significantly reducing blood pressure.
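Here is a simulated version of that blood pressure example, again just a sketch. None of these numbers come from a real trial: the baseline blood pressure of 140 mmHg, its spread of 15 mmHg, and the 10,000 volunteers per group are assumptions picked to show how a clinically tiny 1 mmHg effect can still reach p < 0.05 once the sample is large enough.

```python
# Sketch: a 1 mmHg average reduction is clinically trivial, but with a large
# enough sample it is "statistically significant". All numbers are illustrative.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 10_000
control = rng.normal(loc=140, scale=15, size=n)   # baseline systolic BP
treated = rng.normal(loc=139, scale=15, size=n)   # true effect: only 1 mmHg lower

t_stat, p = stats.ttest_ind(treated, control)
print(f"Mean reduction: {control.mean() - treated.mean():.2f} mmHg, p = {p:.4f}")
```

The p value comes out well below 0.05, yet no patient would notice a 1 mmHg change, which is exactly why significance alone tells you nothing about how big the effect is.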