A H.I.T.-less At-Bat, Part 2: My Response to Carpinelli’s Review of My Strength Meta-Analysis

In Part 1 of this series, I introduced you to Dr. Ralph Carpinelli, a professor who has made a career out of writing critical analyses of papers in the field of weight training.  Dr. Carpinelli recently wrote a critique of my meta-analysis on single versus multiple sets for increasing muscular strength.  In this multi-part series, I expose the numerous flaws and misleading aspects of his critique.  This series will be technical and academic at times, so skim the main points if that type of writing bores you.  In this section, I show how Carpinelli misleads the reader by omitting important information.

Already Off On The Wrong Foot

Carpinelli begins his critique with the following statement:

 The statistical process of a meta-analysis implies that theoretical and empirical science should be done by two different sets of people with different disciplinary abilities; that is, empirical research is performed by scientists and clinicians, but the interpretation of this research is performed by statisticians who decide what inferences should be drawn from the evidence [1].

This opening statement by Carpinelli is wrong in two ways.  First, it implies that only statisticians perform meta-analyses, which is not true.  In fact, while I am quite knowledgeable about statistics, I would not consider myself a statistician.  Many authors of meta-analyses are not statisticians.  The second flaw is that Carpinelli has set up a false dichotomy: that meta-analysis somehow implies that the research is done by scientists and the inferences are drawn by statisticians.  This is not true; all scientists draw inferences from their data, and statistics are only a formal tool to help draw those inferences.  The role of the statistician is to help select the most appropriate tool for drawing those inferences, based on the study design, data structure, and other important factors.

Carpinelli continues with the following statement:

The inclusion or exclusion of the studies in a meta-analysis is entirely based on the discrimination, opinions, and potential inherent bias of the statistician conducting the meta-analysis.

The H.I.T.-pocrisy of such a statement is apparent when you replace just a few words in the sentence:

The inclusion or exclusion of the studies in a narrative review is entirely based on the discrimination, opinions, and potential inherent bias of the author conducting the review.

In other words, Dr. Carpinelli's sentence could easily be applied to himself.  The major difference between reviews like Carpinelli's and a meta-analysis is that the author of a meta-analysis employs a systematic, methodological approach to gathering and analyzing data, so that an independent researcher could replicate the analysis if desired.  A narrative reviewer does not do these things.  As stated by Finckh and Tramèr:

Narrative reviews are prone to bias and error, while systematic reviews attempt to minimize bias by using a methodological approach.  In narrative reviews the methods for identifying relevant data are typically not stated, thus study selection can be arbitrary (influenced by reviewer bias).

Misleading The Reader

In the second paragraph of his critical review, Carpinelli misleads the reader into believing that I would not be impartial in my meta-analysis.  He does so by quoting a sentence from a completely different narrative review that I published in 2010 in the Strength and Conditioning Journal.  The sentence read:

Thus, the number of sets can have a strong impact on the morphological and performance-based outcomes of a resistance training program.

Carpinelli then states:

Krieger did not cite any resistance training studies to support that statement, which may have been an early indication that readers were not going to get an impartial analysis of the topic.

This is a misleading statement by Carpinelli for the following reasons:

  • Carpinelli fails to tell the reader that this narrative review was published long after my meta-analysis.  Instead, he implies that I made this statement before I did my meta-analysis, which is not true.
  • Carpinelli fails to inform the reader that this narrative review was filled with references that supported that statement.  Carpinelli took this single sentence out of context.

Making Mountains Out of Anthills

In the next paragraph, Carpinelli takes issue with the fact that I interchanged the terms "meta-regression" and "meta-analysis."  He states that there are subtle differences between the two, and then comments how I never explained these differences.  It seems that Carpinelli is desperately trying to find anything to criticize, no matter how absurd.  A meta-regression is simply a type of meta-analysis, no different from how a regression is a type of statistical analysis.  Carpinelli fails to mention why an explanation of this is necessary, and how such an explanation would have had any impact on the outcomes of my paper.
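To make the distinction concrete: a meta-regression is a meta-analysis in which study effect sizes are regressed on one or more moderator variables, with each study weighted by the precision of its estimate.  The sketch below is purely illustrative: the data are made up, and it uses a simple fixed-effect, inverse-variance weighting scheme rather than the model from my actual paper (which would typically include a between-study variance component).

```python
import numpy as np

# Illustrative (made-up) data: effect sizes from six hypothetical
# studies, their sampling variances, and a moderator (sets per exercise).
effect_sizes = np.array([0.5, 0.7, 0.9, 1.1, 0.6, 1.0])
variances    = np.array([0.04, 0.05, 0.06, 0.05, 0.04, 0.06])
n_sets       = np.array([1, 3, 4, 6, 1, 3])

# Fixed-effect meta-regression: inverse-variance weighted least squares
# of effect size on the moderator.
w = 1.0 / variances                                  # study weights
X = np.column_stack([np.ones(len(n_sets)), n_sets])  # intercept + moderator
W = np.diag(w)
beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ effect_sizes)
intercept, slope = beta
print(f"intercept={intercept:.3f}, slope per set={slope:.3f}")
```

The point of the sketch is simply that "meta-regression" names a regression model fit to study-level effect sizes; it is one member of the family of techniques collectively called meta-analysis, not a separate discipline requiring its own explanation.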

Arbitrary?  No

Carpinelli then goes on to claim that my inclusion criteria were arbitrary.  This is not true.  There was a reason behind every set of criteria:

  • Must involve at least one major muscle group.   I wanted this analysis to be applicable to how the majority of people train.  Most people do not do exercises such as wrist curls or neck flexion.  Most people do exercises that target major muscle groups in some fashion, and even the American College of Sports Medicine recommends 8-10 exercises targeting major muscle groups.
  • Minimum duration of 4 weeks.  The duration of the study needed to be long enough not only to observe strength gains, but also to allow enough time for differences to arise between single- and multiple-set programs, if differences exist.
  • Single and multiple sets with other equivalent training variables.  The reason for this is obvious and is explained in the introduction and discussion portions of the paper, which Carpinelli fails to mention.  If you include studies where there are differences between the two groups other than the number of sets, you cannot be certain whether any observed differences are due to the different number of sets, or due to other differences in the training programs.  For example, there are resistance training studies out there which compare single-set, non-periodized programs to multiple-set, periodized programs.  Such studies were excluded because you cannot be certain whether the differences were due to the differences in sets, or the fact that one program was periodized and the other was not.
  • Pre-training and post-training 1-RM.  1-RM is by far the most common metric used to assess maximal dynamic strength, and is again applicable to how most people train.  Most people do not have access to variable resistance or isokinetic training equipment, which is why such studies that assessed strength using such equipment were excluded.
  • Healthy participants.  Orthopedic and musculoskeletal limitations could impact an individual's progress on a resistance training program, which is why studies that involved subjects with such limitations were excluded.
  • At least 19 years of age.  I wanted the study results to be applicable to adults.
  • Sufficient data to calculate effect sizes.  This is obvious.  If there is not enough data to calculate an effect size, the study cannot be included in the analysis.
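On that last criterion, the arithmetic behind an effect size is straightforward.  The function below sketches one common within-group formulation used in the strength training literature (change in the mean divided by the pre-training standard deviation); the exact formula and any small-sample corrections used in my paper may differ, and the numbers in the example are hypothetical.

```python
def within_group_effect_size(pre_mean, post_mean, pre_sd):
    """Standardized within-group effect size: (post - pre) / pre SD.

    This is one common formulation in resistance training
    meta-analyses; it is a sketch, not necessarily the exact
    formula used in the original paper.
    """
    if pre_sd <= 0:
        raise ValueError("pre-training SD must be positive")
    return (post_mean - pre_mean) / pre_sd

# Hypothetical study: 1-RM rises from 100 kg to 112 kg,
# with a pre-training SD of 15 kg.
es = within_group_effect_size(100.0, 112.0, 15.0)
print(round(es, 2))  # 0.8
```

A study that reports means but no measure of variability leaves the denominator unknown, which is exactly why "sufficient data to calculate effect sizes" has to be an inclusion criterion.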

Carpinelli Misleads Again By Omitting Critical Information

Carpinelli goes on to make his most critical and glaring omission from his review.  He mentions how I used the sum of two 0-10 scale-based scores to rate the quality of the resistance training studies that I included in the analysis.  He fails to reference the two papers from which I obtained those objective scale-based scores; one paper was by Bågenhammar and Hansson, and the other was by Durall and colleagues.  By omitting this information, Carpinelli misleads the reader into believing that I arbitrarily scored the papers in my study.  He then goes on to offer arbitrary criticisms of three studies that happened to receive among the highest quality scores in my paper, questioning how I awarded these studies the scores that they received.  If Dr. Carpinelli had read the references that I provided, he would have understood why these studies received the scores they did.  These two references laid out very specific criteria on how to score a resistance training study, and provided the scores for the studies.  Thus, with the exception of a few studies that were not included in these review papers, I did not score the studies myself; the authors of those review papers did.  For example, here are the criteria set out by Bågenhammar and Hansson, with 1 point assigned to each criterion:

  • Subjects were randomly allocated to groups
  • Allocation was concealed
  • Groups were similar at baseline regarding the most important prognostic indicators
  • There was blinding of all subjects
  • There was blinding of all therapists who administered the therapy
  • There was blinding of all assessors who measured at least one key outcome
  • Measures of at least one key outcome were obtained from more than 85% of the subjects initially allocated to groups
  • All subjects for whom outcome measures were available received the treatment or control condition as allocated or, where this was not the case, data for at least one key outcome was analysed by "intention to treat"
  • The results of between-group statistical comparisons are reported for at least one key outcome
  • The study provides both point measures and measures of variability for at least one key outcome

Dr. Carpinelli misleads the reader into believing that I scored the studies, and questions my objectivity based on his own arbitrary analysis of three studies that I included.  However, the criteria are clearly laid out, and it is easy to determine how the studies received the scores that they did.  For example, the studies by Rhea and Kraemer, which Dr. Carpinelli cites as poor-quality studies that should not have been included in my analysis, each received a 5 out of 10 based on the objective criteria set out by Bågenhammar and Hansson.  It appears that Dr. Carpinelli advocates arbitrary decisions as to how to score the quality of resistance training studies, which is inappropriate when performing a systematic, quantitative review like a meta-analysis.

Carpinelli then goes on to discuss three papers that he thinks should have been excluded from my analysis, and summarizes his arbitrary reasons in a table at the end of the paper.  The papers in question are Rhea et al., Kemmler et al., and Kraemer.  In Part 3, I begin to look at Carpinelli's reasons why each paper should have been excluded; I will show you how his reasons range from invalid to absurd.

Click here to read Part 3.
