When Experts Are Wrong (Guest Post by Jamie Hale)

The following is a guest post by my colleague Jamie Hale.  You can check out his website here.

 

When Experts Are Wrong

by Jamie Hale and Brooke Hale

 

We often consult experts for advice, and their judgments and predictions are frequently accepted without question. After all, they are experts; shouldn't we take their word?

Clinical vs. Statistical Methods

Experts rely on one of two contrasting approaches to decision making: clinical or statistical (actuarial) methods. Research shows that the statistical method is superior (Dawes et al., 1989). Clinical methods rely on personal experience and intuition. When making predictions, those using clinical methods claim to be able to use their personal experience to go beyond the group relationships found in research. Statistical methods rely on group (aggregate) trends derived from statistical records. “A simple actuarial prediction is one that predicts the same outcome for all individuals sharing a certain characteristic” (Stanovich, 2007, p. 176). Predictions become more accurate when more group characteristics are taken into account. Actuarial predictions are common in various fields: economics, human resources, criminology, business, marketing, the medical sciences, the military, sociology, horse racing, psychology, and education.
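The idea of an actuarial rule can be sketched in a few lines of code. The predictors, weights, and cut-off below are purely illustrative assumptions, not values from any real risk-assessment instrument; the point is only that the rule is prespecified and returns the same prediction for every case sharing the same characteristics:

```python
# Minimal sketch of an actuarial (statistical) prediction rule.
# All predictors, weights, and the cut-off are purely illustrative --
# they are NOT taken from any real risk-assessment instrument.

ILLUSTRATIVE_WEIGHTS = {
    "prior_offenses": 0.30,  # per prior offense
    "age_under_25": 0.80,    # 1 if under 25, else 0
    "unemployed": 0.40,      # 1 if unemployed, else 0
}
BASELINE = -1.5  # intercept (illustrative)

def actuarial_score(case: dict) -> float:
    """Combine group characteristics using fixed, prespecified weights.
    Every case with the same characteristics gets exactly the same score."""
    score = BASELINE
    for feature, weight in ILLUSTRATIVE_WEIGHTS.items():
        score += weight * case.get(feature, 0)
    return score

def predict_reoffense(case: dict, threshold: float = 0.0) -> bool:
    """Predict the same outcome for every case above the cut-off."""
    return actuarial_score(case) > threshold

case_a = {"prior_offenses": 3, "age_under_25": 1, "unemployed": 1}
case_b = {"prior_offenses": 0, "age_under_25": 0, "unemployed": 0}
print(predict_reoffense(case_a))  # True  (score is about 0.6)
print(predict_reoffense(case_b))  # False (score is -1.5)
```

Because the weights are fixed in advance and applied mechanically, the rule is "automatic and based on empirically established relations" in exactly the sense Dawes and colleagues describe.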

It is important to note that clinical judgment does not equate to judgments made only by clinicians. Clinical judgment is used in various fields: basically any field where humans make decisions. It is also important to realize that “[a] clinician in psychiatry or medicine may use the clinical or actuarial method. Conversely, the actuarial method should not be equated with automated decision rules alone. For example, computers can automate clinical judgments. The computer can be programmed to yield the description ‘dependency traits,’ just as the clinical judge would, whenever a certain response appears on a psychological test. To be truly actuarial, interpretations must be both automatic (that is, prespecified or routinized) and based on empirically established relations” (Dawes et al., 1989, p. 1668).

Decades of research investigating clinical versus statistical prediction have shown consistent results: statistical prediction is more accurate than clinical prediction (Dawes et al., 1989; Stanovich, 2007; Tetlock, 2005).

While investigating the ability of clinical and statistical variables to predict criminal behavior in 342 sexual offenders, Hall (1988) found that making use of statistical variables was significantly predictive of sexual re-offenses against adults and of nonsexual re-offending. Clinical judgment did not significantly predict re-offenses.

From Predicting Criminal Behavior (Hale, 2011):

Within the field of dangerousness risk assessment (as it applies to violent offenders), it has been recommended that clinical assessments be replaced by actuarial assessments. In a 1999 book from the American Psychological Association, Violent Offenders: Appraising and Managing Risk (Quinsey, Harris, Rice, and Cormier), the authors argued explicitly and strongly for the “complete replacement” of clinical assessments of dangerousness with actuarial methods: “What we are advising is not the addition of actuarial methods to existing practice, but rather the complete replacement of existing practice with actuarial methods” (p. 171).

When considering the accuracy of clinical versus statistical methods for predicting repeat criminal behavior, it is quite clear that statistical predictions are superior to clinical predictions. “The studies show that judgments about who is more likely to repeat are much better on an actuarial basis than a clinical one,” says Robyn Dawes (Dawes, 1996).

In a statistical analysis of 136 studies, Grove and Meehl (1996) found that only 8 of those studies favored clinical prediction over statistical prediction. However, none of those 8 findings were replicated (repeated). In scientific research, studies need to be successfully repeated before they are regarded as sufficient evidence.

Regarding the research showing that actuarial prediction is more accurate than clinical prediction, Paul Meehl (1986) stated, “There is no controversy in social science which shows such a large body of qualitatively diverse studies coming out so uniformly in the same direction as this one.” That is, when considering statistical versus clinical, statistical wins hands down. Yet experts from various domains still claim their “special knowledge” or intuition overrides statistical data derived from research.

The supremacy of statistical prediction

Statistical data consist of cases drawn from the research literature, which is often a larger and more representative sample than is available to any single expert. Experts are subject to a host of biases when observing, interpreting, analyzing, storing, and retrieving events and information. Professionals tend to weight their personal experience heavily, while assigning less weight to the experience of other professionals or to research findings. Consequently, statistical predictions usually weight new data more heavily than clinical predictions do.

The human brain is at a disadvantage in computing and weighting information compared with mechanical computation. Predictions based on statistics are perfectly consistent and reliable, while clinical predictions are not. Experts don’t always agree with each other, or even with themselves, when they review the same case a second time. Even as clinicians acquire experience, the shortcomings of human judgment help explain why the accuracy of their predictions fails to improve (Lilienfeld, Lynn, Ruscio, & Beyerstein, 2010).
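The reliability point can be illustrated with a toy simulation. The weights and the noise level below are arbitrary assumptions invented for the sketch, not estimates from any study; the contrast is simply that a fixed equation returns the identical score every time it sees the same case, while a judge whose weighting drifts slightly between readings does not:

```python
import random

def statistical_rule(x1: float, x2: float) -> float:
    """Fixed equation: identical inputs always yield the identical score."""
    return 0.6 * x1 + 0.4 * x2

def clinical_judge(x1: float, x2: float, rng: random.Random) -> float:
    """Toy model of unreliable judgment: the same evidence gets weighted
    slightly differently each time it is reviewed (the noise level is an
    arbitrary assumption for illustration)."""
    w1 = 0.6 + rng.gauss(0, 0.1)
    w2 = 0.4 + rng.gauss(0, 0.1)
    return w1 * x1 + w2 * x2

rng = random.Random(42)  # fixed seed so the demo is reproducible
case = (7.0, 3.0)
print(statistical_rule(*case) == statistical_rule(*case))    # True
print(clinical_judge(*case, rng) == clinical_judge(*case, rng))  # False
```

The equation's perfect test-retest reliability is not a minor detail; it is one of the mechanisms behind the actuarial advantage the research describes.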

When a clinician is given information about a client and asked to make a prediction, and the same information is quantified and processed by a statistical equation, the statistical equation wins. Even when the clinician has additional information beyond what the equation uses, the statistical equation wins. The statistical equation accurately and consistently integrates information according to an optimal criterion. Optimality and consistency supersede any informational advantage the clinician gains through informal methods (Stanovich, 2007).

Another type of investigation in the clinical-actuarial prediction literature involves giving clinicians the actuarial predictions and then asking them to make any changes they deem necessary based on their personal experience with clients. When clinicians adjust the actuarial judgments, the adjustments decrease the accuracy of the predictions (Dawes, 1994).

A common criticism of the statistical prediction model is that statistics do not apply to single individuals. This line of thinking contradicts basic principles of probability. Consider the following example (Dawes et al., 1989):

“An advocate of this anti-actuarial position would have to maintain, for the sake of logical consistency, that if one is forced to play Russian roulette a single time and is allowed to select a gun with one or five bullets in the chamber, the uniqueness of the event makes the choice arbitrary.” (p.1672)

The erroneous assumption that statistics don’t apply to the single case is often held by compulsive gamblers (Wagenaar, 1988). This faulty sense of prediction often leads them to believe they can accurately predict the next outcome.
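The Russian roulette example is just probability applied once. A tiny calculation (using the standard six-chamber revolver from the example) shows why the single play is anything but arbitrary:

```python
CHAMBERS = 6  # standard six-shot revolver from the example

def survival_probability(bullets: int) -> float:
    """Probability of surviving a single trigger pull when `bullets`
    chambers are loaded and the cylinder is spun at random."""
    return (CHAMBERS - bullets) / CHAMBERS

print(survival_probability(1))  # 0.8333... (five safe chambers in six)
print(survival_probability(5))  # 0.1666... (one safe chamber in six)
# Even for a one-shot, never-repeated event, the one-bullet gun is
# clearly the rational choice -- the single case is not arbitrary.
```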

“Even as clinicians acquire experience, the shortcomings of human judgment help to explain why the accuracy of their predictions doesn’t improve much, if at all, beyond what they achieved during graduate school” (Stanovich, 2007; Dawes, 1994; Garb, 1998).

Application of statistical methods

Research demonstrating the general superiority of statistical approaches should be tempered by recognition of their limitations and the need for quality control. Although they surpass clinical methods, actuarial procedures are not infallible, often achieving only moderate success. A procedure that proves successful in one setting should be periodically reevaluated within that context and shouldn’t be applied to new settings mindlessly (Dawes et al., 1989).

In his classic book, Clinical Versus Statistical Prediction (1996; original work published 1954), Meehl thoroughly analyzed the limitations of actuarial prediction. He illustrated one possible limitation using what became known as the “broken-leg case.” Consider the following:

We have observed that Professor A quite regularly goes to the movies on Tuesday nights. Our actuarial data support the inference “If it’s a Tuesday night, then Pr {Professor A goes to movies} ≈ .9.” However, suppose we learn that Professor A broke his leg Tuesday morning; he’s in a hip cast that won’t fit in a theater seat. Any neurologically intact clinician will not say that Pr {goes to movies} ≈ .9; they’ll predict that he won’t go. This is a “special power of the clinician” that cannot, in principle, be completely duplicated by even the most sophisticated computer program. That’s because there are too many distinct, unanticipated factors affecting Professor A’s behavior; the researcher cannot gather good actuarial data on all of them so the program can take them into account (Grove & Lloyd, 2006).

However, this example does not support the idea that avoiding error in such cases will greatly increase clinicians’ accuracy compared with statistical prediction. For a more detailed discussion of this matter, see Grove and Lloyd (2006).
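The broken-leg case can be made concrete with a short sketch. Only the 0.9 Tuesday figure comes from the example above; the 0.3 non-Tuesday rate is an invented placeholder:

```python
def actuarial_movie_prediction(is_tuesday: bool) -> float:
    """Actuarial rule from the example: Pr(movies | Tuesday) = 0.9.
    The 0.3 non-Tuesday rate is an invented placeholder."""
    return 0.9 if is_tuesday else 0.3

def broken_leg_override(is_tuesday: bool, leg_broken: bool) -> float:
    """Clinical override: a rare, decisive fact that the actuarial
    table never anticipated drives the probability to zero."""
    if leg_broken:
        return 0.0
    return actuarial_movie_prediction(is_tuesday)

print(actuarial_movie_prediction(True))            # 0.9
print(broken_leg_override(True, leg_broken=True))  # 0.0
```

The catch, as the research above suggests, is that genuine broken-leg facts are rare; when clinicians override the table routinely rather than exceptionally, accuracy tends to fall rather than rise.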

From Clinical versus actuarial judgment (Dawes, et al., 1989):

When actuarial methods prove more accurate than clinical judgment the benefits to individuals and society are apparent…Even when actuarial methods merely equal the accuracy of clinical methods, they may save considerable time and expense. For example, each year millions of dollars and many hours of clinicians’ valuable time are spent attempting to predict violent behavior. Actuarial prediction of violence is far less expensive and would free time for more productive activities, such as meeting unfulfilled therapeutic needs.

Actuarial methods are explicit, in contrast to clinical judgment, which rests on mental processes that are often difficult to specify. Explicit procedures facilitate informed criticism and are freely available to other members of the scientific community who might wish to replicate or extend research.

The use of clinical prediction relies on authority whose assessments, precisely because these judgments are claimed to be singular and idiosyncratic, are not subject to public criticism. Thus, clinical predictions cannot be scrutinized and evaluated at the same level as statistical predictions (Stanovich, 2007).

Conclusion

The intent of this article is not to imply that experts are unimportant or have no role in predicting outcomes. Expert advice and information are useful in observation, in gathering data, and sometimes in making predictions (when those predictions are commensurate with the available evidence). However, once relevant variables have been determined and we want to use them to make decisions, “measuring them and using a statistical equation to determine the predictions constitute the best procedure” (Stanovich, 2007, p. 181).

The problem is not so much in experts making decisions (that’s what they are supposed to do), but in experts making decisions that run counter to actuarial predictions.

Decades of research indicate that statistical prediction is superior to clinical prediction. Statistical data should never be overlooked when making decisions (assuming statistical data exist in the area of interest; sometimes they do not).

I will leave you with these words (Meehl, 2003):

If a clinician says “This one is different” or “It’s not like the ones in your table,” “This time I’m surer,” the obvious question is, “Why should we care whether you think this one is different or whether you are surer?” Again, there is only one rational reply to such a question. We have now to study the success frequency of the clinician’s guesses when he asserts that he feels this way. If we have already done so, and found him still behind the hit frequency of the table, we would be well advised to ignore him. Always, we might as well face it, the shadow of the statistician hovers in the background; always the actuary will have the final word (p.138).

References

Dawes, R., Faust, D., & Meehl, P. (1989). Clinical versus actuarial judgment. Science, 243(4899), 1668-1674.

Dawes, R. (1994). House of Cards: Psychology and Psychotherapy Built on Myth. New York: Free Press.

Dawes, R. (1996). House of Cards: Psychology and Psychotherapy Built on Myth. Simon and Schuster.

Garb, H.N. (1998). Studying the Clinician: Judgment Research and Psychological Assessment. Washington, DC: American Psychological Association.

Grove, W.M., & Meehl, P. (1996). Comparative efficiency of informal and formal prediction procedures: The clinical-statistical controversy. Psychology, Public Policy, and Law, 2, 293-323.

Grove, W.M., & Lloyd, M. (2006). Meehl’s contribution to clinical versus statistical prediction. Journal of Abnormal Psychology, 115(2), 192-194.

Hale, B. (2011). Predicting Criminal Behavior. College term paper.

Hall, G.C. Nagayama. (1988). Criminal behavior as a function of clinical and actuarial variables in a sexual offender population. Journal of Consulting and Clinical Psychology, 56(5), 773-775.

Lilienfeld, S., Lynn, S.J., Ruscio, J., & Beyerstein, B.L. (2010). Great Myths of Popular Psychology: Shattering Widespread Misconceptions about Human Behavior. Malden, MA: Wiley-Blackwell.

Meehl, P.E. (1986). Causes and effects of my disturbing little book. Journal of Personality Assessment, 50, 370-375.

Meehl, P.E. (1996). Clinical versus statistical prediction: A theoretical analysis and a review of the evidence. Northvale, NJ: Jason Aronson. (Original work published 1954)

Meehl, P.E. (2003). Clinical versus statistical prediction: A theoretical analysis and a review of the evidence. Copyright 2003 Leslie J. Yonce. (Copyright 1954 University of Minnesota)

Stanovich, K. (2007). How to Think Straight About Psychology. 8th Edition. Boston, MA: Pearson.

Tetlock, P.E. (2005). Expert Political Judgment. Princeton, NJ: Princeton University Press.

Wagenaar, W.A. (1988). Paradoxes of Gambling Behavior. Hove, England: Erlbaum.



 