A H.I.T.-less At-Bat, Part 3: My Response to Carpinelli’s Review of My Strength Meta-Analysis

This post is part 3 of a multi-part series in which I respond to Dr. Ralph Carpinelli's critique of my meta-analysis on single versus multiple sets for increasing muscular strength. In part 1, I gave you some background on Dr. Carpinelli and my meta-analysis. In part 2, I showed you how Carpinelli misleads the reader by omitting important information about my paper. In this part, I show you how Dr. Carpinelli's objections to my inclusion of a study by Rhea et al. are not justified.

Wrong on Rhea

Carpinelli lists a number of reasons why he believes the Rhea study should have been excluded from my analysis; on closer inspection, none of these reasons is justified:

  • No control group.  Carpinelli fails to give any reason why the lack of a control group in the Rhea study is a problem for my analysis, or how it would affect the outcomes of my paper.  I addressed the lack of a control group in some studies in the methods section of the paper, something Carpinelli fails to mention.  I stated:

Becker recommended the ES for the control group be subtracted from the experimental group ES; however, numerous studies in this analysis did not include a control group.  Because it is important to define the ES in a standard way across all studies, the control ES was assumed to be 0 in all studies and was not subtracted from the experimental ES.  To test this assumption, the mean control ES was calculated among all studies that had a control group; the mean ES was -0.04 ± 0.04, which was not significantly different from 0 (p = 0.38) when compared using a 1-sample t-test.

Other papers included in my analysis, such as Hass et al., also lacked a control group, yet Carpinelli did not take issue with those papers.  Hass et al. did not report significant differences between the single- and multiple-set groups.  Perhaps Dr. Carpinelli only has an issue with the lack of a control group when he doesn't agree with a paper's results.

  • Small sample size.  Carpinelli fails to give any reason why a small sample size justifies excluding a paper from a meta-analysis.  Carpinelli also does not state what he feels the minimum sample size for inclusion should be, or why he feels that would be the minimum.  In addition, if Carpinelli had taken the time to understand the statistical methods involved in the paper (something he admittedly did not do, which I will get to in a future post), he would have understood that studies were weighted based on the inverse of their sampling variance.  I state clearly on page 1892:

Observations were weighted by the inverse of the sampling variance.

Studies with small sample sizes and/or large variability had large sampling variances, and thus received less weight in my analysis.  In fact, Rhea had the largest sampling variance of all the studies included in the analysis and therefore received the lowest weight; many other studies in the analysis received nearly 5-10 times the weight that the Rhea paper did.  (A brief numerical sketch of how this weighting works appears after this list.)

  • No control for repetition duration during the 1-RM testing or training.  Carpinelli fails to give any reason how controlling for repetition duration during 1-RM testing would have impacted the results.  Perhaps Carpinelli has never performed a 1-RM himself; if he had, he would realize that trying to control repetition duration (and hence movement speed) during a 1-RM would impair the subject's ability to perform the lift maximally, and thus it would not be a true 1-RM.  When you perform a 1-RM, you are trying to push or pull the weight as hard as you can; because of the force/velocity relationship, the weight will move slowly.  If the subject were to purposely move the weight more slowly, he would no longer be producing maximal force and it would not be a true 1-RM attempt; and he cannot move the weight any faster than the force/velocity relationship allows.  I should also note that there were other papers in my analysis that did not control for movement speed/repetition duration during 1-RM testing, including Hass et al.  Hass et al. did not report significant differences between the single- and multiple-set groups, and Dr. Carpinelli never raised any issues with that paper.  Perhaps Dr. Carpinelli does not have an issue with 1-RM movement speed/repetition duration when he agrees with the results of a study.
  • No statistical comparison between groups of post-training 1-RM, which appeared to be similar.  Dr. Carpinelli is incorrect with this statement.  Rhea performed a repeated-measures analysis of variance (ANOVA), which compares 1-RM between the groups at each time point, including after training.  This is a basic statistical analysis that is taught in any entry-level graduate statistics course.  Even if Dr. Carpinelli's statement regarding the lack of a statistical comparison were true, it is irrelevant to my meta-analysis, because my analysis did not require a statistical comparison of the groups after training.  My analysis involved the calculation of effect sizes, which required only the pre- and post-training 1-RM values and the standard deviation of the pre-training 1-RM (see the sketch after this list).
  • Unprecedented effect sizes.  Dr. Carpinelli takes issue with the fact that Rhea reported extremely large effect sizes, dramatically larger than you would see in any other resistance training study.  Rhea was not clear about how he calculated his effect sizes, and I agree with Dr. Carpinelli that the reported effect sizes are extremely large and difficult to reconcile with other research.  However, it is a moot point, because I did not use the effect sizes that Rhea reported.  I calculated effect sizes differently from Rhea, and my calculated effect sizes ranged from 0.5 to 1.66, which are much more realistic.  Dr. Carpinelli also takes issue with the fact that no confidence intervals were reported with the effect sizes; again, this is irrelevant to my paper because I did not use the effect sizes reported by Rhea.
  • No explanation for the large strength gains in relatively strong experienced trainees who showed no significant change in lean body mass.  I'm not sure where Dr. Carpinelli gets the idea that these trainees were "relatively strong."  The average starting bench press was 141 pounds in the single-set group and 147 pounds in the multiple-set group.  These are 1-RM values; I would not consider these subjects "relatively strong."  Perhaps they are relatively strong compared to someone who has never lifted a weight in his life, but they are certainly not relatively strong compared to more advanced trainees.  These were recreationally trained lifters who had lifted 2 days per week for 2 years.  While certainly not newbies, they are also not what I would call advanced.  Thus, the strength gains observed in these subjects are not out of the ordinary, particularly if they had never followed an organized lifting routine like the one used in this study ("recreationally trained" lifters, particularly college-aged ones, often do not train in a particularly organized way).
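To make the effect-size and weighting points above concrete, here is a minimal numerical sketch in Python. All of the numbers are hypothetical, and the sampling-variance formula is a common textbook approximation for a single-group standardized mean difference, not necessarily the exact formula used in my paper; the point is simply to show why a small, highly variable study receives little weight, and how a set of control-group effect sizes can be tested against zero with a one-sample t-test.

```python
# Illustrative sketch only (not the code from the paper): a pre-to-post
# effect size, a common approximation of its sampling variance, the
# resulting inverse-variance weight, and a one-sample t-test of
# control-group effect sizes against zero. All numbers are hypothetical.

import numpy as np
from scipy import stats

def effect_size(pre_mean, post_mean, pre_sd):
    """Standardized pre-to-post change: (post - pre) / pre-training SD."""
    return (post_mean - pre_mean) / pre_sd

def sampling_variance(d, n):
    """A common textbook approximation for the variance of a single-group
    standardized mean difference (assumed here purely for illustration)."""
    return 1.0 / n + d ** 2 / (2.0 * n)

# Two hypothetical treatment groups: one small and noisy, one larger
groups = {
    "small, noisy group (n=8)": dict(pre=64.0, post=82.0, pre_sd=18.0, n=8),
    "larger group (n=40)":      dict(pre=65.0, post=74.0, pre_sd=15.0, n=40),
}

for name, g in groups.items():
    d = effect_size(g["pre"], g["post"], g["pre_sd"])
    v = sampling_variance(d, g["n"])
    print(f"{name}: ES = {d:.2f}, sampling variance = {v:.3f}, weight = {1 / v:.1f}")

# Checking the assumption that the control-group ES is effectively zero:
# a one-sample t-test of hypothetical control-group effect sizes against 0.
control_es = np.array([-0.10, 0.02, -0.05, 0.01, -0.08])
t_stat, p_val = stats.ttest_1samp(control_es, popmean=0.0)
print(f"mean control ES = {control_es.mean():.2f}, p = {p_val:.2f}")
```

Because each study's weight is the reciprocal of its sampling variance, the small, noisy group contributes far less to a weighted average than the larger group does, which is exactly why a small sample works against, rather than for, a study's influence on the pooled result.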

The fact is, Carpinelli's objections to the inclusion of the Rhea paper are not valid.  The Rhea paper also had very little influence on the outcome.  The study carried little weight in the analysis compared to the other studies, and this is apparent in the sensitivity analysis that I reported in Table 3.  In this sensitivity analysis, I removed each study from the analysis one at a time and then re-analyzed the data, to determine whether any single study was dramatically influencing my results.  It turned out that the Rhea paper had practically no impact on the outcomes of my paper.  Dr. Carpinelli fails to mention any of this in his critique.
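For readers who want to see what a leave-one-out procedure of this kind looks like, here is a minimal sketch, again using hypothetical numbers rather than the actual values behind Table 3. Each study is dropped in turn and the inverse-variance-weighted pooled difference is recomputed; a study with a large sampling variance (and therefore low weight) barely moves the estimate when it is removed.

```python
# Illustrative leave-one-out sensitivity analysis (hypothetical values, not
# the actual data behind Table 3). Drop each study in turn, recompute the
# inverse-variance-weighted pooled effect, and see how far the estimate moves.

import numpy as np

study_names = ["Study A", "Study B", "Study C", "Study D", "Rhea-like study"]
es_diff   = np.array([0.10, 0.15, 0.08, 0.12, 0.60])   # multiple-set minus single-set ES
variances = np.array([0.02, 0.03, 0.025, 0.02, 0.30])  # large variance -> low weight

def pooled(es, var):
    """Inverse-variance-weighted mean effect."""
    w = 1.0 / var
    return np.sum(w * es) / np.sum(w)

full = pooled(es_diff, variances)
print(f"All studies: pooled difference = {full:.3f}")

for i, name in enumerate(study_names):
    keep = np.arange(len(es_diff)) != i
    loo = pooled(es_diff[keep], variances[keep])
    print(f"Without {name}: pooled difference = {loo:.3f} (shift = {loo - full:+.3f})")
```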

In Part 4 of this series, I will address Dr. Carpinelli's objections to the Kemmler paper.  Click here to read Part 4.



 
Comments
Craig (11 years ago):

It seems like comparisons of the effectiveness of different weight training protocols can become exceedingly complex very quickly due to the existence of many different variations on how exercise programs are done. Not only do you have differing recommendations on single versus multiple sets, you have lots of differences in terms of how these protocols are executed:
– Are the repetitions done explosively, employing a lot of stretch-reflex action, or done in a slow and controlled fashion, to limit momentum and stretch-reflex effects?
– How long do you wait between sets, a measured 60 or 120 seconds,… Read more »

Kevin T. Cleary (11 years ago):

Great work! I have never seen anyone eviscerated so eloquently. Well written.
