H.I.T.-pocrisy, Part 1

It's been quite a while since I've posted a free blog post.  It's certainly not due to a lack of desire; I have a number of blog post ideas on the back burner that, unfortunately, have remained there on "simmer."  If you read my interview with Steve Troutman of Body Improvements a while back, you'll know that I'm spread quite thin.  Even to write this blog post, I've had to do it in small chunks over time.  I have a full-time day job as an analyst for Vivacity, combined with a 30-45 minute commute each way.  I also trade securities part time in the mornings before I go to work (combined with prep time in the evenings before bed), and a new version of my trading website will be launching soon.  Of course, I also need to squeeze in some time to go to the gym, although my training time now has been cut way back from what I used to do.  And then I need to spend time with my wife and family.  My wife is 15 weeks pregnant with our first child (via IVF, which in and of itself was time-intensive), and she has had pretty bad morning sickness, so I've had to pick up some of the slack because of it.  We are also closing on a new house at the end of May and need to prepare our condo to rent out, so that entire process has also been taking a big chunk of time.  I also need to research and write articles for my paying subscribers at Weightology Weekly, and work with my online clients as well.  This is not to mention things like daily chores.  I've been spread so thin that I had to give up writing for the Journal of Pure Power, which was hard to do because I had worked with Dan Wagman for 10 years on that publication as well as on Pure Power magazine (by the way, if you're a strength and power athlete looking for good science-based info, you really should check out JOPP).  My days are so jam-packed that I've only been out kiteboarding once in the past 8 months!  And believe me...if I'm not going out kiting, then you know I'm swamped!

Because I am spread so thin, my paying Weightology Weekly subscribers are my priority and thus, when I do have time to write, I need to write for them first.  The same goes for people who leave comments on my site or email me...paying subscribers get priority regarding my responses.  Thus, I apologize to the numerous people who have left comments on various blog posts, or have emailed me privately, to which I have been unable to respond.  And for people who complain about my lack of responses, or lack of freely available blog posts, I suggest you read this post by Anthony Colpo.

Nevertheless, it's been way too long since I've written a non-members-only blog post, and ever since I came across this review article about a month ago, the words have been sitting in my head.  It's time to get those words on paper...err, I mean on a web server.

Evidence-Based?

The article is titled "Evidence Based Resistance Training Recommendations" and was published in the latter half of last year in an obscure Romanian sports science journal called Medicina Sportiva.  The authors claim that their recommendations are "evidence-based"; however, being evidence-based implies that one is appropriately evaluating ALL available evidence, something it quickly became clear these authors were not doing, at least when it came to resistance training volume and number of sets.

On page 155, the authors discuss training volume under the section Volume of Exercise, Frequency and Periodization.  For an evidence-based review, they spend only 3 paragraphs evaluating the evidence on training volume.  That is not what I would consider a very thorough examination of the evidence.  They rightly criticize a meta-analysis by Dr. Matthew Rhea and colleagues on set volume and strength gains, specifically referring to a critique by Dr. Ralph Carpinelli.  In September of 2009, I wrote an article for Alan Aragon's Research Review (AARR) entitled "H.I.T. or miss?  A critical review of Carpinelli and Otto's critical reviews."  In that article, I stated how I agreed with about 85% of the criticisms that Carpinelli made against the meta-analyses by Rhea and colleagues.  In fact, it was the poor study design of those meta-analyses that led me to perform a meta-analysis on set volume and strength, which was eventually published in the Journal of Strength and Conditioning Research.

While the authors were correct to dismiss the results of the Rhea papers, their arguments begin to fall apart soon thereafter.  They go on to state, "In fact, most research to date suggests that there is no significant difference in strength increases between performing single or multiple set programs."  In support of that statement, they reference mostly review articles by Dr. Carpinelli.  This is not what I would consider a very good evaluation of the evidence, as you are essentially trusting the opinion of one author and assuming that author has been thorough in his review of the literature.  As I pointed out in my AARR article, Carpinelli has been anything but thorough in his critical reviews, failing to reference data that does not support his conclusions.  In fact, in my AARR article, I showed how Carpinelli left numerous studies out of his reviews, failed to mention important details of his referenced studies that conflicted with his conclusions, and misstated the results of certain studies.  Thus, the fact that these authors referenced Carpinelli's review papers as their evidence means their own conclusions are questionable.  Their reliance on review articles becomes even more problematic when you realize that, since Carpinelli's 1998 review on single versus multiple sets (which these authors referenced), there have been at least 10 published studies showing superior strength gains with multiple sets (which I referenced in my AARR article).  In fact, nearly all studies published since that 1998 review have shown multiple sets to be superior.  Yet, these authors failed to mention any of those studies in their "evidence based review."  I have a hard time calling something evidence-based when you are leaving out a big chunk of the evidence!  I also have a hard time believing their statement that "most" research shows no significant differences in strength gains between single and multiple sets, when they leave out 10 studies that do!  It seems that the Gary Taubes-ish tendency of selective citation is alive and well among some H.I.T. authors.

What is also ironic is the H.I.T.-pocrisy of citing Carpinelli's review papers as evidence in an evidence-based review.  Carpinelli himself has criticized prominent strength training researchers in the NSCA and ACSM, such as Dr. William Kraemer, for "circular citation," where individual A references paper B to support a claim, yet paper B itself is not original research but rather a citation of other prominent figures in the industry.  Yet here these authors are doing the exact same thing!

Not So Fast

The authors of this review go on to briefly discuss my meta-analysis on set volume and strength.  They state:

Contrary to this evidence, Krieger [138] published a meta-analysis concluding that "2-3 sets per exercise are associated with 46% greater strength gains than 1 set, in both trained and untrained subjects".  However, Krieger [138] included a study by Kraemer [139] that had previously received heavy criticism by Winett [136] due to methodological inadequacies, as well as articles where groups had not trained to momentary muscular failure [140].  Readers should be wary of meta-analyses that attempt to consider an assortment of differing research and provide a single conclusive statement, as Krieger [138] appears to have done.

That's it.  They completely dismiss my meta-analysis in two sentences.  Oh, but if it were only that easy!  A closer inspection reveals just how little time they put into looking at my meta-analysis.

Data Manipulation Is Not OK

First, they make a big deal out of the fact that I included a study by Dr. Kraemer in my analysis.  The study by Dr. Kraemer was, in reality, a compilation of 6 studies.  These authors do not mention the fact that I only included one of the 6 experiments in the study; the other 5 experiments did not meet my inclusion criteria.  They also reference a criticism of Dr. Kraemer's paper by Dr. Richard Winett; the primary objection to Kraemer's paper is that outcomes for one experiment were 3-14 times greater than those of another, and that the data was unusual.  Since the paper's results could be considered an outlier, Winett believed that the paper should not be included in a meta-analysis.  This is not the first time Winett has advocated removing outliers from a data set; in a letter to Medicine and Science in Sports and Exercise in 2006, Winett argued that Joanne Munn and colleagues should have arbitrarily removed certain outliers from their data set in a study that showed significantly greater strength gains in a 3-set group compared to a 1-set group.  Munn and colleagues fired back with a detailed analysis of why removing such data points would not be appropriate; they stated,

We disagree that the distribution of data threatens the interpretation of our results and contend that removing data points in such an arbitrary manner is inappropriate.  One convention defines data points falling more than three standard deviations from the mean as outliers (z > 3 rule).  In only one case in our data analysis was the standardized residual greater than 3 (standard residual = 3.4), but this case's change in strength score fell within three standard deviations of their training groups mean (standard residual = 2.1).

When the z > 3 definition is used to identify outliers there is a chance of spuriously classifying observations as outliers.  If the data were randomly drawn from a normal distribution the probability of observing an outlying data point from a sample population the size of ours (N=115) would be 27%.  That is, it is quite plausible that the one apparently outlying data point is not drawn from another population.  Removal of data points as suggested by Dr. Winett could produce serious bias.  We would be concerned if Dr. Winett applies this approach to the analysis of his own data.
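To see where that 27% figure comes from, here is a quick sketch of the arithmetic in Python (my own illustration of the z > 3 convention, not code from Munn's response):

```python
from scipy.stats import norm

# Probability that a single observation drawn from a normal distribution
# falls within 3 standard deviations of the mean (roughly 0.9973).
p_within = norm.cdf(3) - norm.cdf(-3)

# With N independent observations, the chance that at least one of them
# gets flagged as an "outlier" by the z > 3 rule purely by chance:
n = 115  # sample size from Munn and colleagues
p_spurious_flag = 1 - p_within ** n

print(f"P(|z| < 3) for one observation: {p_within:.4f}")
print(f"P(at least one |z| > 3) with N = {n}: {p_spurious_flag:.0%}")  # ~27%
```

In other words, with a sample that size, seeing one point beyond 3 standard deviations is unremarkable, which is exactly the point Munn and colleagues were making.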

The bottom line is that you can't just remove an outlier to make your data look nice, and this applies to removing the study by Dr. Kraemer.  Removing data, just because you think it's an outlier, would be considered data manipulation.  As I stated in my AARR article:

One should only remove an outlier from a data set if there is very good reason to do so, such as evidence of an error in measurement or data collection.  You do not just remove an outlier because the data looks messy.

When it comes to a meta-analysis, you define your inclusion and exclusion criteria before you start gathering studies.  You do not arbitrarily decide which studies to exclude after you have gathered them.  There was no good reason to remove the study by Kraemer from my analysis; it fit my inclusion criteria, and there was no evidence that the data was erroneous.  Regarding the quality of the study, the authors of this review failed to mention that I assigned each study an objective quality score in my analysis.   On page 1891 of my study, I stated:

The study quality score was the sum of 2 scores used in previous reviews to rate the quality of resistance training studies: a 0 to 10 scale-based score used by Bågenhammar and Hansson and a 0 to 10 scale-based score used by Durall et al.

The study by Kraemer received a quality score of 13 out of a possible 20 in my analysis.  The scores for all the studies in my analysis ranged from 9 to 15, so the study by Kraemer actually ranked slightly higher in terms of quality compared to some of the others.  Remember, these are objective quality scores, not a subjective interpretation of study quality as these authors seem to advocate.  Study quality was included as a predictor variable in my analysis, but was ultimately dropped from the final model during the model reduction process (where insignificant predictors are removed one at a time from the model until a "tighter" model is established) because it was NOT a predictor of study outcome.  This is the appropriate way to address study quality in a meta-analysis, not the arbitrary fashion these authors seem to favor.
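For those unfamiliar with model reduction, here is a rough Python sketch of the general idea of backward elimination.  The data and the simple OLS model are hypothetical and purely illustrative; my actual analysis used a more involved hierarchical model:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical study-level data, purely for illustration (not the actual
# data set from my meta-analysis).
rng = np.random.default_rng(0)
n = 30
multiple_sets = rng.integers(0, 2, n)      # 0 = single set, 1 = multiple sets
quality_score = rng.integers(9, 16, n)     # 9-15, the range noted above
duration_weeks = rng.integers(6, 25, n)
effect_size = 0.2 + 0.25 * multiple_sets + rng.normal(0, 0.15, n)

df = pd.DataFrame({"effect_size": effect_size,
                   "multiple_sets": multiple_sets,
                   "quality_score": quality_score,
                   "duration_weeks": duration_weeks})

def backward_eliminate(data, outcome, predictors, alpha=0.05):
    """Drop the least significant predictor one at a time until every
    remaining predictor is significant (a simple form of model reduction)."""
    preds = list(predictors)
    while preds:
        X = sm.add_constant(data[preds])
        fit = sm.OLS(data[outcome], X).fit()
        pvals = fit.pvalues.drop("const")
        worst = pvals.idxmax()
        if pvals[worst] < alpha:
            return fit, preds
        preds.remove(worst)  # e.g., quality_score drops if it predicts nothing
    return None, []

fit, kept = backward_eliminate(df, "effect_size",
                               ["multiple_sets", "quality_score", "duration_weeks"])
print("Predictors retained:", kept)
```

The point is that study quality stays in or drops out of the model based on whether it actually predicts the outcome, not because someone decides after the fact that a particular study "looks" low quality.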

The same argument holds true for the Kemmler paper, which they objected to because the subjects did not train to failure.  However, my inclusion criteria did not require that subjects train to failure...they only required that all variables (such as training intensity in terms of % 1-RM) be equivalent between the groups, except for the number of sets.  If one group trained to failure while another did not, the study was excluded.  However, in the Kemmler paper, neither the single-set nor the multiple-set group trained to failure, and both trained with an equivalent intensity regarding % 1-RM.  Thus, the study was eligible for inclusion.  Again, there was no valid reason to exclude the Kemmler paper from my analysis.  My inclusion criteria were clearly laid out in the methods section, and I find it strange that these authors ignored the fact that the Kemmler paper met the criteria.
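To make the logic concrete, here is a toy encoding of that criterion in Python (the field names are my own shorthand for illustration, not wording taken from the paper's methods section):

```python
from dataclasses import dataclass

@dataclass
class Study:
    name: str
    single_set_to_failure: bool
    multiple_set_to_failure: bool
    intensity_matched: bool      # same %1-RM in both groups
    only_sets_differ: bool       # all other training variables equivalent

def meets_inclusion_criteria(s: Study) -> bool:
    """Groups may differ only in the number of sets; if one group trains
    to failure, the other must as well."""
    return (s.intensity_matched
            and s.only_sets_differ
            and s.single_set_to_failure == s.multiple_set_to_failure)

# A Kemmler-like case: neither group trains to failure, intensity is matched.
kemmler_like = Study("Kemmler (illustrative)", False, False, True, True)
print(meets_inclusion_criteria(kemmler_like))  # True -> eligible for inclusion
```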

Did They Read the Sensitivity Analysis?

Their objection to the inclusion of the Kraemer and Kemmler papers is moot, anyway, because I performed a sensitivity analysis on the data, which was shown quite clearly in Table 3 of my paper.  In this sensitivity analysis, I removed each study from the analysis one at a time and then re-analyzed the data.  This was to determine if any one study was dramatically influencing my results.  It turned out that no study dramatically influenced my results, and that includes the Kraemer and Kemmler papers.  The fact is, my results were the same even when the Kraemer or Kemmler paper was removed from the analysis.  It is baffling why these authors would miss or ignore such a fact.
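For readers unfamiliar with the technique, a leave-one-out sensitivity analysis looks something like the following sketch.  The effect sizes here are made up, and the simple inverse-variance pooling stands in for the more involved regression model I actually used:

```python
import numpy as np

# Hypothetical per-study effect sizes and variances (illustrative only).
effect_sizes = np.array([0.25, 0.40, 0.10, 0.55, 0.30, 0.45, 0.20, 0.35])
variances    = np.array([0.04, 0.05, 0.03, 0.06, 0.04, 0.05, 0.03, 0.04])

def pooled_effect(es, var):
    """Fixed-effect (inverse-variance weighted) pooled estimate."""
    weights = 1.0 / var
    return np.sum(weights * es) / np.sum(weights)

print(f"All studies included: {pooled_effect(effect_sizes, variances):.3f}")

# Remove each study in turn and re-pool; if no single removal moves the
# estimate much, no single study is driving the result.
for i in range(len(effect_sizes)):
    mask = np.arange(len(effect_sizes)) != i
    loo = pooled_effect(effect_sizes[mask], variances[mask])
    print(f"Without study {i + 1}: {loo:.3f}")
```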

Be Wary of Incomplete "Evidence-Based" Reviews

The authors went on to state:

 Readers should be wary of meta-analyses that attempt to consider an assortment of differing research and provide a single conclusive statement, as Krieger [138] appears to have done.

The reality is that there is no need to be wary of meta-analyses when those meta-analyses are properly conducted in a well-defined, appropriate fashion, as mine was.  Meta-analyses can provide important information, especially when you have a large body of conflicting, underpowered studies with small sample sizes.  This is true when it comes to the population of studies comparing single to multiple sets; many have inadequate sample sizes to detect differences between groups.  When you fail to statistically detect a difference between two groups when a true difference exists, this is known as a Type II error.  Unfortunately, the authors of this "evidence based review," as well as individuals such as Dr. Carpinelli, completely ignore the issue of statistical power in resistance training studies.  Statistical power is a big problem in many exercise studies, since many studies have very small samples.  When you have a large number of studies with small samples, many of them will fail to show significant differences based on random chance alone, even if a difference exists (the quick simulation after the quote below illustrates just how often that happens).  Meta-analyses, when done in an appropriate manner, can help give us an idea of the overall trend among a body of studies.  As stated by Finckh and Tramèr in their paper on the strengths and weaknesses of meta-analysis:

Adequate meta-analyses, combining data from many studies and thousands of patients, can enhance the precision of treatment effects and reduce the risk of false-negative results.
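Here is the simulation I referred to above: a rough Python sketch of how often small studies would miss a real difference.  The 5-percentage-point true difference and 10-point standard deviation are assumptions for illustration, not values from any particular study:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Assumed true state of the world: multiple sets really do produce a 5
# percentage-point greater strength gain, with a between-subject SD of 10.
true_diff, sd = 5.0, 10.0
n_per_group = 10                 # a typical small sample in this literature
n_sim = 10_000                   # number of simulated studies

significant = 0
for _ in range(n_sim):
    single = rng.normal(20.0, sd, n_per_group)                 # % gain, 1 set
    multiple = rng.normal(20.0 + true_diff, sd, n_per_group)   # % gain, 3 sets
    _, p = stats.ttest_ind(multiple, single)
    significant += p < 0.05

power = significant / n_sim
print(f"Power: {power:.0%}  |  Type II error rate: {1 - power:.0%}")
# With only 10 subjects per group, the large majority of simulated studies
# fail to detect a difference that genuinely exists.
```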

Really, what readers should be truly wary of are non-systematic, narrative reviews where the authors leave out important studies and do not provide all the details and evidence, as these authors have done.  As stated by Finckh and Tramèr:

Narrative reviews are prone to bias and error, while systematic reviews attempt to minimize bias by using a methodological approach.  In narrative reviews the methods for identifying relevant data are typically not stated, thus study selection can be arbitrary (influenced by reviewer bias).

More Non-Evidence Based Comments

The authors go on to state:

The assertion that multiple sets are superior to single sets has therefore been made despite the absence of evidence to support this claim

"Absence of evidence"?  Only when you ignore at least 10 studies published over the past decade, ignore important details of my meta-analysis, fail to consider the issue of statistical power, and ignore research showing superior protein synthesis responses to multiple sets.

The bottom line is that this "evidence-based review" is anything but evidence-based, at least when it comes to set volume and strength gains.  The authors leave out way too much information that conflicts with their most likely pre-formed conclusions.  The fact is, a synthesis of the available evidence indicates that multiple sets will produce superior strength gains, on average, compared to single sets.

Stay Tuned For Part 2

In Part 2, I will discuss more of the past writings of the authors of this review, including a blog post by one of the authors about my meta-analysis on set volume and hypertrophy.  Stay tuned...

 

