A H.I.T.-less At-Bat, Part 5: My Response to Carpinelli’s Review of My Strength Meta-Analysis

This post represents part 5 of a multi-part series in which I respond to Dr. Ralph Carpinelli's critique of my meta-analysis on single versus multiple sets for increasing muscular strength.  In part 1, I gave you some background on Dr. Carpinelli and my meta-analysis.  In part 2, I showed you how Carpinelli misleads the reader by omitting important information about my paper.  In parts 3 and 4, I showed you how Carpinelli's objections to my inclusion of the Rhea and Kemmler papers were not justified.  In this section, I show you how Dr. Carpinelli's objections to my inclusion of a paper by Kraemer are also not justified.

Kancel Kraemer?  Not Kw-ite...

Carpinelli lists 8 specific objections to my inclusion of the Kraemer paper.  Upon closer examination, only one of these objections has any merit, and even that one would not have affected the outcome of my paper.

  • Resurrected data from at least 15 years prior to publication.  Carpinelli gives no rationale for why old data should be excluded from a meta-analysis, nor does he offer any criterion for how old data must be before it warrants exclusion.  The fact is, there is no good reason to exclude a study from an analysis simply because of the age of the data.
  • No control group, no control for repetition duration during 1 RM testing or training, and no indication that the trainers or those who assessed the 1 RM were blinded to the different training protocols.  Carpinelli listed these same objections to the Rhea and Kemmler papers; my responses can be found in my blog posts here and here.
  • Unprecedented 3-7 times difference in strength gains between groups in strong, previously trained (~2 years) Division I football players.  Carpinelli gives no rationale as to what the maximum difference in strength gains between groups should be for a study to be eligible for inclusion in a meta-analysis.  In fact, excluding a study simply because he feels the difference is too large could be considered data manipulation that would bias the results.  Carpinelli also provides no evidence that the differences in gains are "unprecedented."  The differences, while large, are not completely out of the ordinary.  For example, Marshall et al. recently reported squat strength gains that were 2 times larger in an 8-set group compared to a 1-set group.  This was in trained subjects after only 6 weeks; the study by Kraemer lasted 4 weeks longer, which would magnify the differences even more.  In fact, in the Marshall paper, the 1-set group showed evidence of a plateau in the last 3 weeks of training, while the 8-set group continued to show an upward trend.  It should also be noted that I addressed outlier studies in my paper using a sensitivity analysis, in which I removed studies one at a time to determine how much each one impacted my results.  Removal of the study by Kraemer had no impact on my outcomes, something Dr. Carpinelli does not mention in his review.
  • Unsubstantiated speculation that the difference in strength gains may have been caused by greater hormonal responses, which were not measured.  This is where Dr. Carpinelli's objections to the inclusion of this study delve into the realm of absurdity.  It is not valid to exclude a study based on the study author's speculations or interpretations of his own data.  In fact, it is quite normal for a researcher, in the discussion section of the paper, to try to explain his results based on observations from other research.  In this case, Dr. Kraemer was attempting to explain his results based on previous hormonal data reported by Gotshalk et al., where greater hormone responses were found with a 3-set protocol as compared to a 1-set protocol.  Perhaps Dr. Carpinelli's lack of experience in publishing original data would explain his odd objection to an author trying to explain his data in a discussion section.
  • The author's claim that the answers to important training questions were determined in his role as a coach before he analyzed the data revealed a strong potential bias for a specific outcome.  Speculation as to whether an author may be biased is not a valid reason to exclude a paper from a meta-analysis.  In fact, it can be an author's biases that lead to the formulation of specific hypotheses.  All scientific researchers can be considered biased in a sense...biased towards what the data suggests to them.  A scientist makes observations, and thinks that the observations tend towards a certain direction...he becomes "biased" because his observations are leading him towards that bias.  He then formulates a hypothesis around that bias and tests it.  Obviously, an honest scientist will change his bias if his hypothesis is ultimately rejected as data accumulates.  For example, I reported on this blog how I changed my mind on the "metabolic advantage" of low carb diets, and no longer agree with the results of my own 2006 scientific publication due to newer, better controlled data.  Regardless, it is simply inappropriate to reject a paper from a meta-analysis because of a personal suspicion of bias on the part of the original author.  Dr. Carpinelli's belief that the Kraemer paper is biased is far more subjective than objective.  In fact, one could argue that Dr. Carpinelli has demonstrated extreme bias in his review of my paper, based on his omissions of important facts, misinterpretations of my data, and errors in his evaluation.
  • Forced repetitions in one group but not in the other group.  Of all of Carpinelli's objections, this is the only one that could be considered valid, as the two training protocols were not 100% identical between groups.  In the Kraemer paper, the single set group performed forced repetitions at the end of a set, while the multiple set group did not.  However, this difference would not have driven the differences between the groups; other research has shown no difference in strength gains between a group that performs forced repetitions and one that does not.  In fact, based on "intensity of effort" as Carpinelli defines it, one might argue that the forced reps in the single set group would have decreased the difference between the single and multiple set groups, not increased it.  In other words, if the forced repetitions had biased the results at all, they would have biased them in favor of single sets, if intensity of effort is of utmost importance as Carpinelli believes.  Also, in the sensitivity analysis (a sketch of that procedure follows this list), I re-analyzed the data without the Kraemer paper, and my results were the same.  Thus, the inclusion of the Kraemer paper had no impact on my outcomes, something Carpinelli does not tell the reader.
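For readers who aren't familiar with how a leave-one-out sensitivity analysis works, here is a minimal sketch of the procedure.  The study names, effect size differences, and standard errors are hypothetical placeholders, not the actual data from my meta-analysis, and the simple inverse-variance pooling is only for illustration; the point is the bookkeeping of dropping one study at a time and checking how much the pooled estimate moves.

    # A minimal, illustrative leave-one-out sensitivity analysis.
    # NOTE: study names, effect size differences, and standard errors are
    # hypothetical placeholders, NOT the actual data from the meta-analysis.
    import numpy as np

    # Per-study effect size difference (multiple sets minus single set)
    # and its standard error.
    studies = {
        "Study A": (0.30, 0.15),
        "Study B": (0.20, 0.12),
        "Study C": (0.45, 0.20),  # pretend this is the "outlier" being questioned
        "Study D": (0.15, 0.10),
    }

    def pooled_difference(data):
        """Inverse-variance weighted mean of the effect size differences."""
        es = np.array([d[0] for d in data.values()])
        se = np.array([d[1] for d in data.values()])
        weights = 1.0 / se**2
        return float(np.sum(weights * es) / np.sum(weights))

    print(f"All studies: pooled difference = {pooled_difference(studies):.2f}")

    # Drop each study in turn and see how much the pooled estimate changes.
    for name in studies:
        subset = {k: v for k, v in studies.items() if k != name}
        print(f"Without {name}: pooled difference = {pooled_difference(subset):.2f}")

If dropping a given study barely moves the pooled estimate, that study is not driving the overall result; that is exactly what happened when I removed the Kraemer paper.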

Carpinelli also makes an error when discussing my reporting on the Kraemer paper.  He says that I "noted that experiment #2 was an unsupervised program."  I did not note that experiment #2 was unsupervised; I said it was unspecified whether the program was supervised or not.  In my analysis, I categorized studies as either "supervised" or "unspecified".  Dr. Carpinelli apparently confused "unspecified" with "unsupervised".  Carpinelli then goes on to describe an anonymous questionnaire that Kraemer gave to 115 football players regarding their compliance with single set protocols (experiment #5).  Eighty-nine percent of the players reported using additional multiple set programs at home or at health clubs because they wanted to supplement their single set protocol.  Dr. Carpinelli then states:

If this were true for any of the trainees in Kraemer's experiment #2, it makes the reported differences between the single set and multiple set groups even more questionable.

However, if the subjects in the single set group in experiment #2 were truly sneaking out and doing multiple sets, this would have biased the results towards the single set group and decreased the difference between the groups.  Again, unsubstantiated speculation is not a valid reason to exclude a paper from a meta-analysis.

Tossing Carpinelli a Bone

Even though nearly all of Dr. Carpinelli's objections to the inclusion of the Rhea, Kemmler, and Kraemer papers are not valid, let's assume for a moment that they are.  What would the results of my analysis be with all 3 of these papers excluded?  Given that I still have my data tables, I can re-run the analysis without those papers.  When I redo the statistical analysis with those papers excluded, the effect size difference between single and multiple set groups only slightly decreases from 0.26 to 0.21, and is still statistically significant.  The mean effect size for single sets slightly increases from 0.54 to 0.57.  The mean effect size for multiple sets slightly decreases from 0.80 to 0.79.  When I look at the effect sizes for the dose-response relationship, 1 set gets an effect size of 0.57, 2-3 sets gets an effect size of 0.76 (only slightly down from 0.79), and 4-6 sets gets an effect size of 0.98 (up from 0.89).  In other words, my results are practically the same even if you remove all 3 papers that Dr. Carpinelli objects to.
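To make that re-analysis concrete, below is a rough sketch of the grouping-and-exclusion step for the dose-response comparison.  The study names, set counts, and effect sizes are hypothetical placeholders, and the plain averaging is purely illustrative rather than the statistical model from my paper; only the 1 set / 2-3 sets / 4-6 sets categories match the ones I actually used.

    # An illustrative sketch of re-running the dose-response comparison with
    # certain papers excluded.  Study names, set counts, and effect sizes are
    # hypothetical placeholders, not the data or the model from the actual paper.
    from statistics import mean

    # (study, sets per exercise, effect size)
    effects = [
        ("Study A", 1, 0.55),
        ("Study B", 1, 0.60),
        ("Study C", 3, 0.75),
        ("Study D", 3, 0.80),
        ("Study E", 5, 0.95),
        ("Study F", 5, 1.00),
    ]

    excluded = {"Study B", "Study D"}  # the papers being dropped from the re-analysis

    def dose_bin(sets):
        """Collapse set counts into the dose-response categories used in the paper."""
        if sets == 1:
            return "1 set"
        elif sets <= 3:
            return "2-3 sets"
        else:
            return "4-6 sets"

    groups = {}
    for study, sets, es in effects:
        if study in excluded:
            continue
        groups.setdefault(dose_bin(sets), []).append(es)

    for label, values in sorted(groups.items()):
        print(f"{label}: mean effect size = {mean(values):.2f} (n = {len(values)})")

The numbers above came from re-running my actual analysis on my data tables; the sketch here just shows the shape of that calculation, and the takeaway is the same: the category means barely move when the three disputed papers are excluded.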

In summary, nearly all of Dr. Carpinelli's objections to my inclusion of the papers by Rhea, Kemmler, and Kraemer do not hold up under scrutiny.  Even if you do remove all 3 papers, my results are practically the same.

Click here to read Part 6, where I address Dr. Carpinelli's discussion of my 1-RM inclusive criterion.

 

