Meehl was part of an eminent group of University of Minnesota psychologists that included B. F. Skinner (thing 9), William Estes, Kenneth MacCorquodale (with Meehl, a noted opponent of thing 17), and Marian Breland Bailey (of thing 9 and 'The Misbehavior of Organisms' fame). Meehl was extremely successful (APA Distinguished Scientific Contribution Award at age 38, APA president at age 42) as, among other things, a clinical psychologist who was highly critical of the things that psychologists do. To list a few of his criticisms that cognitive psychologists should keep in mind at all times:

Meehl's paradox :: Experiments and analyses should be planned so that collecting more and more data gives us a better and better (more and more specific) understanding of the phenomena of interest. However, psychologists rarely behave this way. Instead, more and more data are paradoxically used to support an increasingly trivial inference - that the data are inconsistent with a null value. Note that this second inference was assured from the start: as long as you keep collecting data, you will eventually demonstrate, for example, that a coin flipping heads 50.000001% of the time is SIGNIFICANTLY DIFFERENT from chance (see the sketch below).
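
To see why that inference is assured, here is a minimal sketch (Python with numpy/scipy - my choice of tooling, nothing from Meehl) of the expected z statistic for testing the coin against chance. The bias stays fixed at the trivial 50.000001% above; only the number of flips grows, and "significance" arrives anyway:

```python
# A fixed, trivially small bias away from p = 0.5; the expected z statistic
# grows like sqrt(n), so "significance" is guaranteed given enough flips.
import numpy as np
from scipy import stats

true_p = 0.50000001  # a coin that flips heads 50.000001% of the time
for n in [10**8, 10**16, 10**18]:
    # expected z against H0: p = 0.5, if the observed rate equals true_p
    z = (true_p - 0.5) / np.sqrt(0.25 / n)
    p_value = 2 * stats.norm.sf(abs(z))  # two-sided p-value
    print(f"n = {n:>25,}  expected z = {z:8.2f}  p ~ {p_value:.2g}")
```

The expected z grows like sqrt(n), so any nonzero bias, however meaningless, is eventually declared significant.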


Inference 'in the head' :: In attempting to predict behavior, formal (objective, principled) methods outperform informal (subjective, 'in the head', professional-opinion) methods. This is really a tautology at heart, since professional judgment is subject to all of the limits and biases enumerated in our 100 things, while formal methods are not (necessarily). Formal != complicated, since even a central tendency is formal; rather, 'formal' connotes that aggregating data (be they raw data, or collections of central tendencies / p values / effect sizes) is a job better suited to those without 100 things and without tenure clocks :) (A toy simulation follows below.)
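
As one illustration, here is a toy simulation (hypothetical data, weights, and noise levels - my assumptions throughout, not Meehl's) in which the formal rule is nothing fancier than an unweighted sum of the cues, while the 'judge' knows the true cue weights but applies them inconsistently from case to case:

```python
# Toy simulation of formal vs. 'in the head' prediction, using only numpy.
# The judge knows the true cue weights but applies them inconsistently;
# the formal rule is a fixed, unweighted sum of the same cues.
import numpy as np

rng = np.random.default_rng(0)
n_cases, n_cues = 2_000, 4
X = rng.normal(size=(n_cases, n_cues))        # standardized predictor cues
w = np.array([0.5, 0.3, 0.2, 0.1])            # true cue weights
y = X @ w + rng.normal(size=n_cases)          # criterion to be predicted

# Formal method: just add up the cues. No fitting, no expertise required.
formal = X.sum(axis=1)

# Informal method: correct weights, applied with case-to-case judgment noise.
judge_w = w + rng.normal(scale=0.5, size=(n_cases, n_cues))
informal = (X * judge_w).sum(axis=1)

print("formal   r =", round(np.corrcoef(formal, y)[0, 1], 3))
print("informal r =", round(np.corrcoef(informal, y)[0, 1], 3))
```

On this setup the consistent, unweighted sum reliably outpredicts the noisy expert even though the expert 'knows' more - which is the heart of the clinical-versus-statistical prediction argument.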


Ad hoc fallacies :: Patching a theory with ad hoc addenda to save it from falsification (e.g., 'the treatment does work - this patient must have been resisting it'). Karl Popper meets the case conference.


Crummy criterion fallacy :: An empirically validated measure is still valid even when it doesn't say what you think it should have said. It's asinine for psychologists to spend their conference time and discussion sections "explaining away" their data by speculating about what went wrong with their [construct-valid, empirically validated] measure of choice. As if any such argument would have been made if the measure *had* said what you thought it should have said...

Meehl, P. E. (1973). Why I do not attend case conferences. In P. E. Meehl, Psychodiagnosis: Selected papers (pp. 225-302). Minneapolis: University of Minnesota Press.
