First things first, no piece of research is perfect. Every study will have its limitations.
One way we try to make research better is through understanding error.
If we find that the new drug works when it doesn’t, that’s called a false positive. We can’t eliminate false positives entirely; some patients will get better even when given a placebo. But with too many false positives we will ‘find’ an effect that doesn’t actually exist, and wrongly reject our null hypothesis.
Type I Error comes about when we wrongly reject our null hypothesis.
This will mean that we will find our new drug is better than the standard treatment (or placebo) when it actually isn't.
The probability of making a Type I Error is called alpha (α).
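To make alpha concrete, here’s a minimal Python sketch (the trial setup and numbers are invented for illustration, and a simple z-test approximation stands in for a full statistical test): we simulate many trials in which the ‘drug’ is genuinely no better than placebo, so the null hypothesis is true by construction, and count how often a conventional 5% significance test still declares a difference.

```python
import random
import statistics

random.seed(42)

def one_trial(n=50):
    """One trial where the 'drug' is really a placebo: both groups
    are drawn from the same distribution, so any 'significant'
    difference is a false positive (a Type I Error)."""
    drug = [random.gauss(0, 1) for _ in range(n)]
    placebo = [random.gauss(0, 1) for _ in range(n)]
    # Two-sample z statistic for the difference in group means
    se = ((statistics.stdev(drug) ** 2 + statistics.stdev(placebo) ** 2) / n) ** 0.5
    z = (statistics.mean(drug) - statistics.mean(placebo)) / se
    return abs(z) > 1.96  # 'significant' at the conventional 5% level

trials = 10_000
false_positives = sum(one_trial() for _ in range(trials))
print(false_positives / trials)  # roughly 0.05: alpha is the false positive rate
```

Even though there is no real effect at all, about 1 trial in 20 comes out ‘significant’ — that 5% is exactly what alpha means.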
A way I like to look at Type I Error is the influence of chance on your study. Some patients will get better just through chance. You need to reduce the impact of chance on your study.
For instance, I may want to investigate how psychic I am. My null hypothesis would be ‘I am not psychic.’
I toss a coin once. I guess tails. I’m right. I therefore reject my null hypothesis and conclude I’m psychic.
You don’t need to be an expert in research to see how open to chance that study is: one coin toss can’t be enough proof. We’d need hundreds of tosses to see whether I could really predict each one.
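To see how quickly chance stops being a plausible explanation, here’s a minimal Python simulation (the function names are my own, for illustration): each simulated ‘study’ guesses a run of coin tosses, and we count how often pure guessing passes the test.

```python
import random

random.seed(1)

def psychic_study(n_tosses):
    """One 'study': guess n coin tosses; conclude 'psychic' only if
    every guess is correct (each guess is a pure 50/50 chance)."""
    return all(random.random() < 0.5 for _ in range(n_tosses))

def false_positive_rate(n_tosses, n_studies=100_000):
    """Fraction of studies that wrongly reject 'I am not psychic'."""
    return sum(psychic_study(n_tosses) for _ in range(n_studies)) / n_studies

print(false_positive_rate(1))   # roughly 0.5: one toss 'proves' psychic ability half the time
print(false_positive_rate(10))  # roughly 0.001: ten in a row is rare by chance alone
```

A one-toss study wrongly rejects the null hypothesis about half the time; demanding ten correct guesses in a row pushes that error rate down to roughly one in a thousand. That’s the link between chance, sample size, and Type I Error.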
You can see how understanding Type I Error influences how you design your study, including your sample size.
More on that later. The next blog will look at how we actually show, statistically, that we’ve reduced Type I Error in our study.