A H.I.T.-less At-Bat, Part 4: My Response to Carpinelli’s Review of My Strength Meta-Analysis

This post represents part 4 of a multi-part series in which I respond to Dr. Ralph Carpinelli's critique of my meta-analysis on single versus multiple sets for increasing muscular strength.  In part 1, I gave you some background on Dr. Carpinelli and my meta-analysis.  In part 2, I showed you how Carpinelli misleads the reader by omitting important information about my paper.  In part 3, I showed you how Carpinelli's objections to my inclusion of the Rhea paper were not justified.  In this post, I show you how Dr. Carpinelli's objections to my inclusion of a study by Kemmler et al. are also not justified.

In-Korrect on Kemmler

Dr. Carpinelli lists 5 specific objections to the inclusion of the Kemmler paper; on closer examination, none of these objections is valid.

  • Minimal strength gains (3-5%) after 12 weeks of training.  Dr. Carpinelli fails to state what he feels would be the minimum level of strength gain necessary for inclusion in a meta-analysis, or why that would be the minimum.  Is it 10%?  15%?  20%?  No rationale is given by Carpinelli other than that the gain was "miniscule."  Removing a study simply because the strength gains are small would be inappropriate and could be considered data manipulation.  In a body of resistance training studies that is approximately normally distributed, it is expected that a small number of studies will show either small or large strength gains.  A rough calculation of the study-level effect size for the Kemmler paper (an average of all the effect sizes in the paper) gives an effect size of 0.14.  The average effect size across all studies in my paper was 0.67.  The Kemmler paper falls within 1 standard deviation of the mean overall effect size, so it could not even be considered an outlier; this holds true even when the Kemmler paper is excluded from the calculation of the mean (see the sketch after this list).  Thus, removal of the Kemmler paper simply because its strength gains were small would not be appropriate.
  • Trainees not encouraged to perform any sets with a maximal effort.  Dr. Carpinelli fails to note that one of the inclusion criteria for my analysis was, "Single and multiple sets with other equivalent training variables".  Thus, training to failure was not a requirement, as long as both the single and multiple set groups were equivalent in the number of repetitions per set and the % 1-RM used.  For example, if the single set group did 1 set of 10 with 60% 1-RM not to failure, then the multiple set group needed to do multiple sets of 10 with 60% 1-RM not to failure.  The paper by Kemmler fulfilled this criterion.  Dr. Carpinelli claims that the level of effort may have been different between the groups.  This is false, since the % 1-RM and repetition volume per set were equivalent.  Dr. Carpinelli also claims that the level of motor unit recruitment may have been different due to the different effort, claiming that "intensity of effort is the predominant factor that determines motor unit recruitment [11]."  Dr. Carpinelli references his own critical analysis with this statement rather than original research.  I critically reviewed Dr. Carpinelli's paper in both Alan Aragon's Research Review and Weightology Weekly; Dr. Carpinelli's review was full of flaws, including leaving out a number of papers that conflicted with his conclusions, leaving out important information regarding particular studies that conflicted with his conclusions, and misstating the results of particular studies.  It should also be noted that effort is a subjective quantity; when a subject gives "maximal effort," you have to take the subject's word for it.  There can be no guarantee that a subject is truly giving maximal effort.  Rating of Perceived Exertion (RPE) can be used to estimate a subject's effort level, but it is still not as objective as using % 1-RM to match intensity between groups.  There are also many cases where effort may not track motor unit recruitment; for example, if a subject is ill, perceived effort may be higher than when the subject is healthy, even though motor unit recruitment may be similar.
  • No control for repetition duration during 1 RM testing or training.  This is the same objection that Carpinelli raised against Rhea et al.; my response is also the same.
  • No indication that the trainers or the 1 RM assessors were blinded to the different training protocols.  I am curious how Dr. Carpinelli expects trainers to be blinded to a training protocol when they are the ones administering the training.  Regarding the 1 RM assessors, the lack of blinding was incorporated into the study quality metric; the Kemmler paper received 0 points on that particular characteristic.  It should be noted that other studies in my analysis did not blind the 1 RM assessors, including Hass et al., which did not report significant differences between the single and multiple set groups.  Perhaps Dr. Carpinelli only has an issue with blinding when he doesn't agree with the outcomes of a study.  What is ironic is that Dr. Carpinelli has cited the Hass paper as an example of a study that addresses the fact that many resistance training studies are poorly controlled.  Yet the Hass paper has a number of characteristics that Carpinelli claims are grounds for excluding a study from a meta-analysis.  It seems that Dr. Carpinelli is OK with a study being included in a meta-analysis if he agrees with its conclusions, despite the study having characteristics that he objects to.
  • Multiple potential confounding variables: additional supervised and unsupervised training sessions that involved similar muscles in the exercises that were assessed for 1 RM.  Such variables are only confounding if they differ systematically between the single and multiple set groups.  However, in the Kemmler study, the supervised and unsupervised training sessions were equivalent between the groups.  Thus, Carpinelli's objection is not valid.
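
To make the outlier argument in the first bullet concrete, here is a minimal sketch of the within-1-standard-deviation check.  The list of study-level effect sizes is invented for illustration (only Kemmler's 0.14 and the overall mean of 0.67 come from my paper); the logic of the check is the point, not the placeholder numbers.

```python
# Minimal sketch of the outlier check described above.
# Only Kemmler's effect size (0.14) and the overall mean (0.67) are from
# the paper; the other study-level effect sizes are invented placeholders.
import statistics

def within_one_sd(es, pool):
    """True if `es` lies within 1 standard deviation of the mean of `pool`."""
    return abs(es - statistics.mean(pool)) <= statistics.stdev(pool)

kemmler_es = 0.14
study_es = [kemmler_es, -0.30, 0.10, 0.40, 0.70, 1.00, 1.30, 1.70, 0.66, 1.00]

# Check with Kemmler included in the pool, and with it excluded.
print(within_one_sd(kemmler_es, study_es))       # True
print(within_one_sd(kemmler_es, study_es[1:]))   # True: still not an outlier
```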

Dr. Carpinelli also claims that I "committed several errors in reporting the study by Kremmler and colleagues."  Ironically, Carpinelli himself misspells the name as "Kremmler" throughout his paper; the author's name is actually Kemmler.  Carpinelli then states, "Although these may be arguably minor errors, they question the accuracy of performing a complex statistical procedure such as a meta-analysis."  However, with the exception of one instance, which was a misprint by the journal, what he cites as errors are not errors at all.

First, Carpinelli says that, in Table 1, I show Kemmler comparing 1 set and 2 set training.  He then correctly notes that the participants actually performed 2-4 sets of each exercise in the multiple set protocol during the high intensity phase, and 2 sets during the low intensity phase.  The number of sets reported in Table 1 is a rounded average of the number of sets performed by the multiple set group.  The volume in sets per session can be seen in Figure 1 of the Kemmler paper; the volume ranged from 20-28 sets per session for the session 1 protocol, and the mean was around 25.  There were 11 exercises performed in this protocol; thus, the number of sets per exercise was calculated by dividing 25 by 11, which rounds to 2 sets.  It should also be noted that I used categorical predictors in my statistical model; one predictor was "multiple sets", and another was "2-3 sets".  Thus, the Kemmler paper was appropriately categorized in both instances based on the average number of sets performed per exercise.
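
For the curious, the arithmetic behind that rounded average is simple enough to show in a few lines.  The mean of roughly 25 sets per session and the 11 exercises come from Figure 1 of the Kemmler paper; the rest is just division and rounding.

```python
# Rounded average sets per exercise for the Kemmler multiple-set group.
# From Figure 1 of the Kemmler paper: ~25 sets per session, 11 exercises.
mean_sets_per_session = 25
n_exercises = 11

sets_per_exercise = mean_sets_per_session / n_exercises  # ~2.27
print(round(sets_per_exercise))  # -> 2, the value reported in Table 1
```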

Second, Carpinelli states that I failed to report the training status of the participants.  This was a misprint by the journal that was not caught during the proofing process; the training status of the participants was accurately categorized in the analysis and in the table that was in the original manuscript submitted to the journal.  This misprint had no impact on the outcomes of my paper.

Third, Carpinelli notes that I reported the effect sizes for the leg press, bench press, and rowing exercises, but did not report the effect size for the hip adduction exercise.  This was not an error; in my methodology, I clearly stated that the analysis examined exercises for major muscle groups.  The hip adductors were not considered a major muscle group.  If I had included the results for hip adduction in the paper, it would have only strengthened the superior effect size for multiple sets.

Fourth, Carpinelli notes that I reported the training frequency as one session per week, and then states that the participants actually trained in two supervised sessions per week, with different exercises that involved muscle groups used in the 1 RM evaluations.  However, the training frequency stated in Table 1 of my paper is for the tested exercises.  For example, the subjects were tested on the 1-RM leg press, and they performed the leg press only once per week; this is why the training frequency is listed as once per week in my paper.  Carpinelli goes on to note that, on the second training day, the subjects performed exercises that used muscles involved in the tested exercises.  However, Carpinelli fails to realize that my analysis was on sets per exercise, not sets per muscle group.  Carpinelli also fails to inform the reader that I controlled for multiple exercises affecting target muscles in my statistical analysis, a fact stated very clearly on page 1891 under the Data Abstraction section.  In that section, I stated:

A treatment group was classified as having multiple exercises per target muscles if that group performed exercises that targeted any of the prime movers of the tested exercise.  For example, the prime movers of a bench press were considered to be the pectoralis major, the anterior deltoids, and the triceps.  If a treatment group performed another exercise involving at least one of those muscles as a prime mover (e.g., an overhead shoulder press), then that group was considered to have performed multiple exercises per target muscles.
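
To illustrate how that classification rule works, here is a minimal sketch.  The exercise names and prime-mover lists below are my own simplified examples, not taken from the paper; the rule itself is the one quoted above.

```python
# Sketch of the "multiple exercises per target muscles" rule quoted above.
# Prime-mover lists are simplified examples, not taken from the paper.
PRIME_MOVERS = {
    "bench press": {"pectoralis major", "anterior deltoid", "triceps"},
    "overhead press": {"anterior deltoid", "triceps"},
    "leg press": {"quadriceps", "gluteus maximus"},
}

def has_multiple_exercises(tested, program):
    """True if any other exercise in the program shares at least one
    prime mover with the tested exercise."""
    return any(
        PRIME_MOVERS[ex] & PRIME_MOVERS[tested]
        for ex in program
        if ex != tested
    )

# The bench press example from the quoted passage:
print(has_multiple_exercises("bench press", ["bench press", "overhead press"]))  # True
```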

On Table 1, there is a "Yes" in the "Multiple Exercises" column for the Kemmler paper, indicating that I correctly addressed the fact that the subjects in the Kemmler study did have multiple exercises affecting the muscle groups involved in the 1-RM testing.  I then stated in the statistical analyses section that I controlled for this variable in my statistical model.  If Dr. Carpinelli had taken the time to understand the statistics of my paper (something he admittedly did not do, which I will address in a future post), he would have seen this.

Finally, Dr. Carpinelli questions how I gave Kemmler the second highest quality score in my analysis; again, if he had read the papers describing how the studies were scored, he would have understood why.

Overall, Dr. Carpinelli's objections to the Kemmler paper are not valid.  Also, like the objections to the Rhea paper, they are moot.  The sensitivity analysis, shown in Table 3, demonstrates that removing the Kemmler paper did not affect the outcomes; the difference between single and multiple sets was nearly identical when the Kemmler paper was removed.  Carpinelli does not inform his readers of this fact.
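
For readers unfamiliar with how such a sensitivity analysis works, here is a minimal leave-one-out sketch.  The effect sizes are hypothetical placeholders, and a real meta-analysis would recompute a weighted (e.g., inverse-variance) estimate rather than a simple mean; the point is simply that no single study, Kemmler included, should drive the result.

```python
# Leave-one-out sensitivity analysis: recompute the pooled estimate with
# each study removed to see whether any single study drives the result.
# Effect sizes are hypothetical; a real analysis would use
# inverse-variance weighting rather than a plain mean.
import statistics

study_es = {
    "Kemmler": 0.14, "Study B": -0.30, "Study C": 0.10, "Study D": 0.40,
    "Study E": 0.70, "Study F": 1.00, "Study G": 1.30, "Study H": 1.70,
    "Study I": 0.66, "Study J": 1.00,
}

full_mean = statistics.mean(study_es.values())
for name in study_es:
    loo_mean = statistics.mean(es for s, es in study_es.items() if s != name)
    print(f"without {name}: {loo_mean:.2f} (all studies: {full_mean:.2f})")
```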

Click here to read part 5, where I address Carpinelli's objections to the inclusion of a study by Kraemer.

 

