Research is a great thing (insert shameless plug for my fantastically brilliant Research Review service here!), but it’s often misunderstood. Hopefully this article will form part of a mini-series looking at how to read and interpret research in order to get the most from it. This first piece will examine the concepts of statistical significance and practical relevance.

**Scientific method**

The scientific method is based upon statistics and probability. It goes through three main steps:

- The research question is formed
- An experiment is designed
- A statistical model is used to analyse the results

**The null hypothesis**

It is important to understand that science assumes the null hypothesis; think innocent until proven guilty. Science works from the start-point that there is no relationship between X and Y. The experiment needs to show, beyond ‘reasonable doubt’, that there is indeed a relationship.

**Statistics and reasonable doubt**

The concept of reasonable doubt is where our statistics come in. Once the experiment is complete, data is analysed with an appropriate statistical model. This statistical model should provide two things:

- An effect size to show the magnitude or size of the relationship between X and Y
- A P value that describes how likely the observed relationship between X and Y would be if no true relationship existed

**P values**

P values are an estimate of probability; they tell you how likely it is that you would see a relationship this strong by chance alone, i.e. if the null hypothesis were true. A P value of 1.00 means the result is entirely consistent with chance, i.e. there is no evidence of a relationship. A P value of 0.01 means there is only a 1% chance of seeing a result this strong by chance alone, i.e. a relationship is very likely.
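One way to build an intuition for P values is a simple permutation test: shuffle the group labels many times and count how often chance alone produces a difference at least as large as the one observed. The numbers below are made up purely for illustration; this is a sketch, not a recipe for analysing real data.

```python
import random

random.seed(42)

# Invented data: power output (arbitrary units) for two small groups
placebo    = [10.1, 9.8, 10.3, 9.9, 10.0, 10.2]
supplement = [10.6, 10.4, 10.9, 10.5, 10.7, 10.8]

def mean(xs):
    return sum(xs) / len(xs)

observed = mean(supplement) - mean(placebo)

# Under the null hypothesis the group labels are meaningless, so we can
# shuffle them and see how often a difference at least as large as the
# observed one appears by chance alone.
pooled = placebo + supplement
count = 0
n_shuffles = 10_000
for _ in range(n_shuffles):
    random.shuffle(pooled)
    diff = mean(pooled[6:]) - mean(pooled[:6])
    if abs(diff) >= abs(observed):
        count += 1

p_value = count / n_shuffles
print(f"observed difference: {observed:.2f}, P = {p_value:.4f}")
```

Here the two made-up groups barely overlap, so almost no random shuffle reproduces the observed difference and the estimated P value comes out very small.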

**Statistical significance**

For a relationship to be termed significant, the P value must fall at or below a pre-determined threshold, known as the alpha level. The majority of sciences, sport and exercise included, apply an alpha level of 0.05. This means that for the null hypothesis to be rejected, and therefore the experimental hypothesis accepted, the statistical test must return a P value of 0.05 or less.

So, to summarise the boring part:

- P > 0.05 means not significant
- P ≤ 0.05 means significant

**Significance versus relevance**

All significance can tell you is the likelihood that a relationship exists; it can’t tell you whether or not that relationship is important. In the real world the important question to ask is, ‘so what?’ Are these findings and relationships practically relevant?

**Magnitude and effect size**

The magnitude of the effect is a key consideration when evaluating the relevance of a study. For example, a relationship can be statistically significant but infinitesimally small at the same time.
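To see how this plays out, here is a hedged Python sketch using invented numbers: a tiny true difference (about 0.3% of the mean) becomes statistically significant purely because the sample is enormous, while the effect size (Cohen’s d) stays trivially small.

```python
import random
from statistics import NormalDist, mean, stdev

random.seed(1)

# Invented data: huge samples, and a true difference of only 0.3 units
# on a mean of about 100 (roughly a 0.3% effect)
n = 100_000
control   = [random.gauss(100.0, 10.0) for _ in range(n)]
treatment = [random.gauss(100.3, 10.0) for _ in range(n)]

diff = mean(treatment) - mean(control)
se = (stdev(control) ** 2 / n + stdev(treatment) ** 2 / n) ** 0.5
z = diff / se
p = 2 * (1 - NormalDist().cdf(abs(z)))       # two-tailed z-test

pooled_sd = ((stdev(control) ** 2 + stdev(treatment) ** 2) / 2) ** 0.5
cohens_d = diff / pooled_sd                  # standardised effect size

print(f"P = {p:.4f} (significant: {p <= 0.05})")
print(f"Cohen's d = {cohens_d:.3f} (a 'small' effect is usually d ≈ 0.2)")
```

The test comfortably clears the 0.05 bar, yet the standardised effect is a fraction of what is conventionally called ‘small’: significant, but arguably irrelevant.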

**Significant but irrelevant**

Let’s consider the following mock example:

‘Beta-alanine significantly improved power output…’

Read this on its own and I’m sure you’d be straight down the beta-alanine shop to order a case or two. However, add the following…

‘… by an average of 0.6%’

…and I’m sure you’d not be quite so amazed.

**Absence of evidence is not evidence of absence**

Traditional statistical approaches are, by nature, conservative. Essentially, they set out to disprove the research question and they need to be 95% convinced before accepting otherwise. This means a relationship might very well exist, but our statistical tests won’t allow us to make such a conclusion. Think of it like being pregnant but ‘failing’ a pregnancy test because it’s only 75% sure that you’re pregnant. We call this a ‘false negative’ or a type II error.
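A rough way to see how easily this happens is to calculate statistical power: the probability that a test detects an effect that genuinely exists. The sketch below uses a standard normal approximation and illustrative numbers (a moderate true effect, d = 0.5, and only 8 participants per group); it is not drawn from any specific study.

```python
from statistics import NormalDist

def approx_power(d, n_per_group, alpha=0.05):
    """Normal-approximation power for a two-sample, two-tailed test."""
    z_crit = NormalDist().inv_cdf(1 - alpha / 2)    # critical z, about 1.96
    noncentrality = d * (n_per_group / 2) ** 0.5    # expected z under the true effect
    return 1 - NormalDist().cdf(z_crit - noncentrality)

power = approx_power(d=0.5, n_per_group=8)
print(f"power ≈ {power:.2f}")
```

With these assumed numbers the test detects the real effect less than one time in five; the other four-plus times it returns a ‘non-significant’ result, i.e. a type II error.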

**Non-significant but highly relevant**

Let’s go through another mock ‘finding’:

‘Beta-alanine did not significantly improve power output’

Read this in isolation and you’d assume that there’s nothing going on here. But what if you discover this in the results…

‘Power output following beta-alanine ingestion was improved by an average of 10% (P = 0.14).’

Hang on a minute, is this potential for a 10% improvement not relevant? When you’re reading research remember that authors need to make conclusions based upon the results of the statistical tests. You may need to delve into the results of a study to see the whole picture.

**Wrapping up**

So there’s a little primer for you on statistical significance and practical relevance. Far from an exhaustive discussion, but hopefully we’ve highlighted that these are two very different beasts and it’s really important to differentiate between the two.

Remember to be research savvy when you’re reading studies – don’t blindly follow the conclusions of the authors!