I recently had the pleasure of discussing HRV in football and rugby on the Rugby Renegade Podcast. Soundcloud and iTunes links below.
I’ve recently had the pleasure of peer-reviewing a few very well-written and well-conducted studies from different research groups investigating the duration of the stabilization period required before HRV recordings. I look forward to seeing the published versions, as the quality of the papers was very high.
Reviewing these papers prompted me to reconsider what we have all been using as the criterion period. My colleagues and I have published 5 papers using a 5-min R-R sample preceded by a 5-min ‘stabilization’ period (10 min total duration) as the criterion (as have other groups), which is in line with traditional procedures. But I think we failed to address an important limitation of these procedures…
The issue is that the ‘traditional procedures’ were not devised for establishing LnRMSSD specifically (rather, they needed to accommodate spectral analysis), nor were they devised for reflecting fatigue and adaptation to training programs. For these specific purposes, then, it can be argued that the traditional procedures are less relevant; at the very least, this calls into question whether the 5–10 min period following the 0–5 min stabilization is in fact a criterion within this context.
Some things to consider:
To be clear, I still think that research evaluating stabilization requirements and comparing to the ‘criterion’ is absolutely meaningful and an important starting point. This was not intended to be critical, but rather to open discussion on future research directions.
During spring training camp, we found that Linemen demonstrate the greatest reductions in LnRMSSD at ~20 h post-training, followed by Mid-Skill and Skill, possibly reflecting inadequate cardiovascular recovery between consecutive-day sessions for the larger players, despite lower PlayerLoad values. (Full-text available here)
Our first follow-up study during the early part of the competitive season found the same position-based trend, where Linemen demonstrated the greatest reductions in LnRMSSD at ~20 h post-training, followed by Mid-Skill and Skill. However, the magnitude of the reductions in LnRMSSD during the in-season were smaller relative to spring camp. We speculate that both reduced PlayerLoad values (15-22% lower than spring camp) and adaptation to intense preseason training in the heat and humidity during the preceding weeks account for the smaller LnRMSSD reductions observed during the early part of the competitive season. (Full-text available here)
Cardiac-Autonomic Responses to In-Season Training Among Division-1 College Football Players.
Despite having to endure a rigorous in-season training schedule, research evaluating daily physiological recovery status markers among American football players is limited. The purpose of this study was to determine if recovery of cardiac-autonomic activity to resting values occurs between consecutive-day, in-season training sessions among college football players. Subjects (n = 29) were divided into groups based on position: receivers and defensive backs (SKILL, n = 10); running backs, linebackers and tight-ends (MID-SKILL, n = 11) and linemen (LINEMEN, n = 8). Resting heart rate (RHR) and the natural logarithm of the root-mean square of successive differences multiplied by twenty (LnRMSSD) were acquired at rest in the seated position prior to Tuesday and Wednesday training sessions and repeated over three weeks during the first month of the competitive season. A position × time interaction was observed for LnRMSSD (p = 0.04), but not for RHR (p = 0.33). No differences in LnRMSSD between days were observed for SKILL (Tuesday = 82.8 ± 9.3, Wednesday = 81.9 ± 8.7, p > 0.05). Small reductions in LnRMSSD were observed for MID-SKILL (Tuesday = 79.2 ± 9.4, Wednesday = 76.2 ± 9.5, p < 0.05) and LINEMEN (Tuesday = 79.4 ± 10.5, Wednesday = 74.5 ± 11.5, p < 0.05). The individually averaged changes in LnRMSSD from Tuesday to Wednesday were related to PlayerLoad (r = 0.46, p = 0.02) and body mass (r = -0.39, p = 0.04). Cardiac-parasympathetic activity did not return to resting values for LINEMEN or MID-SKILL prior to the next training session. Larger reductions in LnRMSSD tended to occur in players with greater body mass despite having performed lower workloads, though some individual variability was observed. These findings may have implications for how coaches and support staff address training and recovery interventions for players demonstrating inadequate cardiovascular recovery between sessions.
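The LnRMSSD metric used in the abstract (natural log of the root-mean-square of successive R-R differences, multiplied by 20 to place values on a roughly 0–100 scale) is straightforward to compute from a series of R-R intervals. A minimal Python sketch; the interval values below are made up for illustration only:

```python
import math

def rmssd(rr_ms):
    """Root-mean-square of successive differences between R-R intervals (ms)."""
    diffs = [b - a for a, b in zip(rr_ms, rr_ms[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

def ln_rmssd_x20(rr_ms):
    """LnRMSSD as defined in the abstract: ln(RMSSD) * 20."""
    return 20 * math.log(rmssd(rr_ms))

# Hypothetical 10-beat R-R series (ms), for illustration only
rr = [820, 810, 835, 800, 815, 830, 805, 825, 810, 820]
score = ln_rmssd_x20(rr)  # lands on the same 0-100 scale as the values above
```

In practice the input would be a full artifact-corrected R-R series from the 5-min criterion segment rather than a 10-beat toy example.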
Our next paper, currently in production, will feature HRV responses among positions throughout the entire preparatory and competitive season.
Our latest methodological study suggests that the SDNN:RMSSD ratio derived from a 60-sec recording may be a more convenient alternative to the traditional LF:HF ratio in athletes.
Purpose: The primary purpose of this study was to determine the accuracy of the standard deviation of normal-to-normal intervals (SDNN) to root mean square of successive normal-to-normal interval differences (RMSSD) ratio from 1-min recordings (SDNN:RMSSD1−min) compared to criterion recordings, as well as its relationship to low-frequency-to-high-frequency ratio (LF:HF) at rest and following maximal exercise in a group of collegiate athletes.
Method: Twenty athletes participated in the study. Heart rate variability (HRV) data were measured for 5 min before and at 5–10 and 25–30 min following a maximal exercise test. From each 5-min segment, the frequency-domain measures of HF, LF, and LF:HF ratio were analyzed. Time-domain measures of SDNN, RMSSD, and SDNN:RMSSD ratio were also analyzed from each 5-min segment, as well as from randomly selected 1-min recordings.
Result: The 1-min values of SDNN, RMSSD, and SDNN:RMSSD showed no significant differences from the criterion measures derived from 5-min recordings, with nearly perfect intra-class correlations (ICCs ranged from 0.97 to 1.00, p < 0.001 for all). In addition, SDNN, RMSSD, and SDNN:RMSSD from the 1-min segments provided very large to nearly perfect correlations (r values ranged from 0.71 to 0.97, p < 0.001 for all) with LF, HF, and LF:HF, respectively, at each time point.
Conclusion: The findings of the study suggest that ultra-shortened time-domain markers may be useful surrogates of the frequency-domain parameters for tracking changes in sympathovagal activity in athletes.
Full-text available on Research Gate: Link
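As a rough sketch of the analysis the abstract describes, SDNN, RMSSD, and their ratio can be computed from a randomly selected ~60-s window of a longer recording. This is an illustrative Python sketch, not the paper's code; the synthetic R-R data, the window helper, and the use of the sample standard deviation for SDNN are my own assumptions:

```python
import math
import random
import statistics

def rmssd(rr_ms):
    """Root-mean-square of successive R-R interval differences (ms)."""
    diffs = [b - a for a, b in zip(rr_ms, rr_ms[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

def sdnn_rmssd_ratio(rr_ms):
    """SDNN:RMSSD ratio; SDNN taken as the sample SD of the intervals (assumed)."""
    return statistics.stdev(rr_ms) / rmssd(rr_ms)

def window_60s(rr_ms, start):
    """Contiguous intervals from `start` accumulating to at least 60 000 ms."""
    window, total = [], 0
    for rr in rr_ms[start:]:
        window.append(rr)
        total += rr
        if total >= 60_000:
            break
    return window

# Synthetic 5-min recording (~375 beats around 820 ms), illustration only
rr_5min = [800 + (i % 5) * 10 for i in range(375)]
seg = window_60s(rr_5min, random.randrange(290))  # random 1-min segment
ratio_1min = sdnn_rmssd_ratio(seg)
ratio_5min = sdnn_rmssd_ratio(rr_5min)
```

On this synthetic series the 1-min and 5-min ratios agree closely, which is the kind of agreement the ICCs in the abstract quantify on real athlete data.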
For our first study with football, we wanted to determine if cardiovascular recovery from training varied among positional groups (i.e., Skill, Mid-Skill and Linemen). We also looked at some longitudinal relationships between cardiac-autonomic and training load parameters throughout spring camp.
We found that Linemen take longer to recover between training sessions than the other positions. This may have important implications for the competitive season because despite differences in recovery time among positional groups, football teams train on a fixed schedule. This may make Linemen more susceptible to developing signs and symptoms of overtraining, getting hurt or sick, etc. Fortunately, we captured data from the competitive season, too. That paper is forthcoming.
Here’s a look at our latest methodological paper in collaboration with Dr. Fabio Nakamura and colleagues investigating the suitability of ultra-short (1-min) HRV measures in athletes.
Previous studies of ours on this specific topic are linked below:
Adequacy of the Ultra-Short-Term HRV to Assess Adaptive Processes in Youth Female Basketball Players
Heart rate variability has been widely used to monitor athletes’ cardiac autonomic control changes induced by training and competition, and recently shorter recording times have been sought to improve its practicality. The aim of this study was to test the agreement between the (ultra-short-term) natural log of the root-mean-square difference of successive normal RR intervals (lnRMSSD – measured in only 1 min post-1 min stabilization) and the criterion lnRMSSD (measured in the last 5 min out of 10 min of recording) in young female basketball players. Furthermore, the correlation between training induced delta change in the ultra-short-term lnRMSSD and the criterion lnRMSSD was calculated. Seventeen players were assessed at rest pre- and post-eight weeks of training. Trivial effect sizes (-0.03 in the pre- and 0.10 in the post- treatment) were found in the comparison between the ultra-short-term lnRMSSD (3.29 ± 0.45 and 3.49 ± 0.35 ms, in the pre- and post-, respectively) and the criterion lnRMSSD (3.30 ± 0.40 and 3.45 ± 0.41 ms, in the pre- and post-, respectively) (intraclass correlation coefficient = 0.95 and 0.93). In both cases, the response to training was significant, with Pearson’s correlation of 0.82 between the delta changes of the ultra-short-term lnRMSSD and the criterion lnRMSSD. In conclusion, the lnRMSSD can be calculated within only 2 min of data acquisition (the 1st min discarded) in young female basketball players, with the ultra-short-term measure presenting similar sensitivity to training effects as the standard criterion measure.
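The measurement protocol described in the abstract (discard the first minute for stabilization, then analyze the next minute) can be sketched in Python as follows. The segmentation helper and synthetic data are illustrative assumptions, not the authors' code:

```python
import math

def ln_rmssd(rr_ms):
    """Natural log of the root-mean-square of successive R-R differences (ms)."""
    diffs = [b - a for a, b in zip(rr_ms, rr_ms[1:])]
    return math.log(math.sqrt(sum(d * d for d in diffs) / len(diffs)))

def ultra_short_ln_rmssd(rr_ms, stabilize_ms=60_000, record_ms=60_000):
    """Discard ~1 min of stabilization, then analyze the following ~1 min."""
    elapsed, segment = 0, []
    for rr in rr_ms:
        elapsed += rr
        if elapsed <= stabilize_ms:
            continue  # still within the discarded first minute
        segment.append(rr)
        if elapsed >= stabilize_ms + record_ms:
            break     # analyzed minute complete
    return ln_rmssd(segment)

# Synthetic ~2.5-min R-R stream (ms), illustration only
rr_stream = [800 + (i % 5) * 10 for i in range(200)]
value = ultra_short_ln_rmssd(rr_stream)
```

The returned value is on the same lnRMSSD (ms) scale as the 3.29–3.49 values reported in the abstract.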
Here’s a brief overview of our latest study capturing daily HRV and wellness ratings throughout overload and tapering in collegiate sprint swimmers preceding a championships competition.
The majority of research in the area has primarily focused on endurance athletes. It’s been a goal of mine for a while now to examine HRV responses in athletes participating in anaerobic events such as short-distance swimming.
The athletes completed wellness questionnaires and recorded HRV daily after waking via a smartphone and a validated pulse-wave finger sensor (seated position). The observation period lasted 6 weeks, comprising 1 week of baseline, 2 weeks of overload, and a progressive 3-week taper. The overload was characterized by a substantial increase in training intensity while overall volume varied by only up to 20%. Of the group, 2 athletes went on to compete in the 2016 Olympic Summer Games.
The purpose of this study was to evaluate cardiac-parasympathetic and psychometric responses to competition preparation in collegiate sprint-swimmers. Additionally, we aimed to determine the relationship between average vagal activity and its daily fluctuation during each training phase.
Ten Division-1 collegiate sprint-swimmers performed heart rate variability recordings (i.e., log transformed root mean square of successive RR intervals, lnRMSSD) and completed a brief wellness questionnaire with a smartphone application daily after waking. Mean values for psychometrics and lnRMSSD (lnRMSSDmean) as well as the coefficient of variation (lnRMSSDcv) were calculated from 1 week of baseline (BL) followed by 2 weeks of overload (OL) and 2 weeks of tapering (TP) leading up to a championship competition.
Competition preparation resulted in improved race times (p<0.01). Moderate decreases in lnRMSSDmean, and Large to Very Large increases in lnRMSSDcv, perceived fatigue and soreness were observed during the OL and returned to BL levels or peaked during TP (p<0.05). Inverse correlations between lnRMSSDmean and lnRMSSDcv were Very Large at BL and OL (p<0.05) but only Moderate at TP (p>0.05).
OL training is associated with a reduction and greater daily fluctuation in vagal activity compared with BL, concurrent with decrements in perceived fatigue and muscle soreness. These effects are reversed during TP where these values returned to baseline or peaked leading into successful competition. The strong inverse relationship between average vagal activity and its daily fluctuation weakened during TP.
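The two trend statistics in the abstract, lnRMSSDmean and lnRMSSDcv, are simply the mean and coefficient of variation of the daily values within each training phase. A minimal sketch with made-up daily values (not the study's data):

```python
import statistics

def ln_rmssd_summary(daily):
    """Phase mean (lnRMSSDmean) and coefficient of variation
    as a percentage, CV = SD / mean * 100 (lnRMSSDcv)."""
    mean = statistics.mean(daily)
    cv = statistics.stdev(daily) / mean * 100
    return mean, cv

# Hypothetical daily lnRMSSD values for one athlete (assumed)
baseline = [4.1, 4.0, 4.2, 4.1, 4.0, 4.1, 4.2]      # 1 week BL
overload = [4.0, 3.6, 4.1, 3.5, 3.9, 3.4, 4.0,
            3.5, 3.9, 3.6, 3.8, 3.5, 3.9, 3.6]      # 2 weeks OL

bl_mean, bl_cv = ln_rmssd_summary(baseline)
ol_mean, ol_cv = ln_rmssd_summary(overload)
# Typical overload pattern: lower mean, larger day-to-day CV
```

In this toy example the overload phase shows the pattern the abstract reports: a reduced lnRMSSDmean alongside an elevated lnRMSSDcv.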
While group responses are certainly meaningful, the individual responses provide more meaningful information to practitioners. The figure below shows the individual trends from 3 athletes that exemplify 3 common training responses I’ve observed in a variety of athletes.
Subject B (middle) has the smallest CV at baseline and subsequently handles the overload very well, with minimal reductions in lnRMSSD. This indicates that Subject B is in great shape and could probably handle greater loads.
Subject C (bottom) displays what I would consider a very typical response to overloading. There is a considerable increase in daily lnRMSSD fluctuation (i.e., increased CV) and a progressive but small decrease in the trend. I interpret this type of response to indicate that the loads are sufficient to provoke the fatigue/recovery process but not so high that HRV becomes suppressed. This possibly indicates a training load/dose that is high but within the overall recovery capacity of the athlete.
Subject A (top) has the highest CV of the group at baseline and subsequently responds the least favorably to the overload. lnRMSSD crashes almost immediately and remains suppressed for several days (boxed data points). The coach pulled back on Subject A due to high fatigue, reduced performance, decrements in pulse-rate recovery between sets, etc. The trend immediately improves until about 1 week out from competition, at which point loads were further reduced. Ultimately, this athlete improved upon previous best times from that year at the competition, suggesting that the interventions were effective.
The main take-home would be that the typical response to intensified training includes a reduction and greater daily fluctuation in HRV, along with decrements in wellness scores. Athletes demonstrating different responses (i.e., minimal change in HRV trend or conversely chronic suppression of HRV) may be coping better or worse than expected. Coaches should then investigate and address factors contributing to the poor response.