So, we’ve looked at the background behind the use of jump testing to monitor neuromuscular fatigue in part one and the use of the reactive strength index (RSI) as an assessment tool in part two.
This is all very well in theory, but how does it work in practice?
The aim of this article is to show you how I use jump-based monitoring with myself and my athletes.
Who do I use it with?
I use regular jump-based monitoring with only a very small percentage of my more experienced athletes (in terms of either training age or competitive status). The bottom line is that unless the results would make a difference to my programming, it's not worth testing. Only about 2-3% of my current athletes qualify on that basis.
For the majority of athletes I still rely on feel to dictate the session. We have a plan in mind before the session, and it will normally come with an intended dose. Generally, if it's a good day the dose is higher; on a bad day the dose is lower. Not exactly rocket science.
I think of the RSI as the cherry on the cake. This is a tool to fine-tune within-session programming and is therefore a strategy that few athletes need. Focus on putting the right ingredients into the cake first. Mix them well and bake it properly. Now for the icing. Then, finally, pop the cherry on top.
What does RSI monitoring look like?
Following the pre-performance prep, athletes perform two sets of three drop jumps from a height of ~18 cm. Hands are positioned on hips throughout (see video).
The prep work will always have included a) landing focus elements, b) ankle stiffness elements, and c) global power/priming elements. All three are worth standardising prior to testing, as each has the potential to affect the RSI and therefore the reliability of your method.
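As a reference point, here's a minimal sketch of how a session's RSI score could be computed from contact-mat data, assuming the common flight-time divided by contact-time definition (all the times below are made up for illustration):

```python
# Minimal sketch of a session RSI score from drop-jump data.
# Assumes RSI = flight time / contact time (a jump-height / contact-time
# variant also exists). All values below are illustrative, in seconds.

def rsi(flight_time: float, contact_time: float) -> float:
    """Reactive strength index for a single drop-jump rep."""
    return flight_time / contact_time

# Two sets of three drop jumps -> six (flight, contact) pairs
jumps = [(0.52, 0.18), (0.50, 0.17), (0.53, 0.19),
         (0.51, 0.18), (0.49, 0.17), (0.52, 0.18)]

scores = [rsi(f, c) for f, c in jumps]
best_three = sorted(scores, reverse=True)[:3]
session_mean = sum(best_three) / len(best_three)  # ≈ 2.91 here
```

Taking the mean of the best three reps (rather than all six) matches the noise-reduction approach described below.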
How reliable is it?
If a test is not reliable then it's meaningless. For the athletes we use the RSI with, we first establish a baseline over a period of two to three weeks.
- Within-session reliability
First off, we look at the within-session reliability using the three best of the six jumps performed. Using just the best three helps to reduce some of the noise.
We typically see an average coefficient of variation (CV%) of somewhere between 1% and 3% after just a few sessions – meaning there’s about 1-3% variation on the same day.
My CV is just under 2%. Marwick et al. (2015) report ~3% in professional basketball players.
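To make the CV% concrete, here's a quick sketch of the within-session calculation over the best three of six scores (the sample RSI values are invented):

```python
import statistics

# Sketch: within-session CV% over the best three of six RSI scores.
def within_session_cv(scores, keep=3):
    """CV% (sample SD / mean * 100) of the top `keep` scores."""
    best = sorted(scores, reverse=True)[:keep]
    return statistics.stdev(best) / statistics.mean(best) * 100

session = [2.95, 2.88, 2.91, 2.70, 2.93, 2.65]  # six drop jumps (illustrative)
cv = within_session_cv(session)  # ≈ 0.68% for these made-up numbers
```

A perfectly consistent set of jumps would return a CV of 0%; the 1-3% range quoted above reflects normal day-to-day movement variability.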
- Between-session reliability
If the RSI is a sensitive marker of fatigue, we'd expect it to fluctuate considerably over the training week and across the training cycle, which makes pinning down a single value for between-session reliability a little challenging. However, since we start using the RSI during an out-of-competition period, we can get a rough idea of its reliability over a few weeks of fairly consistent GPP-style training and see how well it's working.
My own RSI data
The figure below shows my RSI over the last six to seven months of training.
The blue markers represent the mean score from the best three jumps, the red markers represent the best score. The lines represent the four-point moving average.
It's clear that the RSI certainly isn't flat-lining: there are a large number of peaks and troughs as we move through the year. Even with just a quick eyeball of the graph, we can see these peaks and troughs follow a fairly predictable pattern, typically occurring every three to four weeks or so.
This mirrors my general training cycle of three weeks on, one week deload pretty closely – fatigue accumulates over the three-week block and the RSI subsequently drops; then, following the deload week, the RSI picks up again.
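The four-point moving average used for the trend lines in the figure can be sketched like this (the session scores below are invented):

```python
# Sketch of the four-point (trailing) moving average used to smooth
# a session-to-session RSI series. Values are illustrative.

def moving_average(values, window=4):
    """Trailing moving average; the first smoothed point needs `window` values."""
    return [sum(values[i - window + 1 : i + 1]) / window
            for i in range(window - 1, len(values))]

rsi_by_session = [2.9, 2.8, 2.7, 2.6, 2.9, 3.0, 3.1, 2.8]
trend = moving_average(rsi_by_session)
# first smoothed point averages the first four sessions: (2.9+2.8+2.7+2.6)/4 = 2.75
```

Smoothing like this is what makes the three-to-four-week peak/trough pattern easier to spot against single-session noise.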
My own jump height data
I also monitor my own CMJ height along with my RSI, mainly as a general gauge of lower body power and ‘effectiveness’ of my current programme.
A single set of two maximal CMJs, with hands on hips, is performed after the drop jumps.
The case for RSI
Have a look at how the RSI and CMJ height compare. The RSI is definitely more sensitive to change. For this reason it’s probably a better choice if you’re looking to monitor the accumulation of fatigue over multiple training cycles.
However, I’ve found that when my RSI has really skyrocketed or plummeted, my CMJ score has tended to go with it.
So what about jump height?
If I'm going to use any parameter to adjust training load on a given day, then I'll need to see a pretty decent change. And by decent change I don't just mean anything above the 'smallest worthwhile change' – I'm talking at least 10%.
With this in mind, at least going by my own data, if I were only using this type of monitoring as an indicator of 'red-light' sessions – times when the intended session needs to be changed substantially from the original plan – the CMJ would probably suffice.
Would the broad jump do the same thing as the CMJ? Possibly, although it's not something I've looked at. It may even be a more attractive prospect than the CMJ, as it's even easier to set up.
In the same ballpark, could something like the 3 Hop Test do for horizontal jump testing what the drop jump does for vertical testing? Certainly an interesting thought if the SSC is as important as it seems to be.
I'm really interested to see what people's thoughts and experiences are with this. Please feel free to pop a comment below or drop me an email.
Marwick WJ, Bird SP, Tufano JJ, Seitz LB, Haff GG. The intraday reliability of the reactive strength index calculated from a drop jump in professional men's basketball. International Journal of Sports Physiology and Performance. 2015;10:482-488.