Topics discussed include:
1. Reverse Causality
2. The Play of Chance and the DICE Miracle
3. Bias: Coffee, Cellphones, and Chocolate
5. Exaggerated Risk

Some eye-opening examples:
Mistaking what came first in the order of causation is a form of protopathic bias. There are numerous examples in the literature. For example, an assumed association between breastfeeding and stunted growth actually reflected the fact that sicker infants were preferentially breastfed for longer periods. Thus, stunted growth led to more breastfeeding, not the other way around...The article (correctly) notes that these issues don't mean that medical knowledge is impossible -- but rather that we must be diligent in looking for sources of potential error.
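The reverse-causality mechanism is easy to reproduce in a toy simulation (all rates below are invented for illustration, not taken from the study): sickness drives both prolonged breastfeeding and stunting, so the two correlate even though breastfeeding has no effect on growth anywhere in the model.

```python
import random

random.seed(3)

N = 5000
rows = []
for _ in range(N):
    sick = random.random() < 0.2
    # Sicker infants are breastfed longer AND are more likely to be stunted;
    # breastfeeding itself has no causal effect on growth in this model.
    breastfed_long = random.random() < (0.8 if sick else 0.3)
    stunted = random.random() < (0.5 if sick else 0.1)
    rows.append((breastfed_long, stunted))

def stunting_rate(flag):
    group = [s for b, s in rows if b == flag]
    return sum(group) / len(group)

rate_bf = stunting_rate(True)      # ~0.26 in expectation
rate_no_bf = stunting_rate(False)  # ~0.13 in expectation
print(rate_bf, rate_no_bf)
```

The naive comparison makes prolonged breastfeeding look roughly twice as risky, even though the only causal arrow in the simulation runs from sickness to both variables.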
One classic example of selection bias occurred in 1981 with a NEJM study showing an association between coffee consumption and pancreatic cancer. The selection bias occurred when the controls were recruited for the study. The control group had a high incidence of peptic ulcer disease, and so as not to worsen their symptoms, they drank little coffee. Thus, the association between coffee and cancer was artificially created because the control group was fundamentally different from the general population in terms of their coffee consumption. When the study was repeated with proper controls, no effect was seen...
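A short simulation can show how a badly chosen control group manufactures an association out of nothing. The rates here are hypothetical: cases and the general population drink coffee at the same rate, but the ulcer-patient controls drink much less, so a naive case-control odds ratio comes out well above 1.

```python
import random

random.seed(0)

N = 5000

def drinks_coffee(p):
    return random.random() < p

# No true effect: cases drink coffee at the (made-up) population rate of 60%.
cases = [drinks_coffee(0.6) for _ in range(N)]
# Biased controls: ulcer patients who avoid coffee (30%).
ulcer_controls = [drinks_coffee(0.3) for _ in range(N)]
# Proper controls drawn from the general population.
population_controls = [drinks_coffee(0.6) for _ in range(N)]

def odds_ratio(case_group, control_group):
    a, b = sum(case_group), len(case_group) - sum(case_group)
    c, d = sum(control_group), len(control_group) - sum(control_group)
    return (a * d) / (b * c)

spurious_or = odds_ratio(cases, ulcer_controls)        # well above 1
corrected_or = odds_ratio(cases, population_controls)  # near 1
print(spurious_or, corrected_or)
```

Swapping in properly matched controls, as the later studies did, drops the odds ratio back to about 1.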
[R]ecall bias occurs when subjects with a disease are more likely to remember the exposure under investigation than controls are. In the INTERPHONE study, which was designed to investigate the association between cell phones and brain tumors, a spot-check of mobile phone records for cases and controls showed that random recall errors were large for both groups, with overestimation among cases for more distant time periods. Such differential recall could induce an association between cell phones and brain tumors even if none actually exists...
A 1996 study sought to compare laparoscopic vs open appendectomy for appendicitis. The study worked well during the day, but at night the presence of the attending surgeon was required for the laparoscopic cases but not the open cases. Consequently, the on-call residents, who didn't like calling in their attendings, adopted a practice of holding the translucent study envelopes up to the light to see if the person was randomly assigned to open or laparoscopic surgery. When they found an envelope that allocated a patient to the open procedure (which would not require calling in the attending and would therefore save time), they opened that envelope and left the remaining laparoscopic envelopes for the following morning. Because cases operated on at night were presumably sicker than those that could wait until morning, the actions of the on-call team biased the results. Sicker cases preferentially got open surgery, making the outcomes of the open procedure look worse than they actually were. So, though randomized trials are often thought of as the solution to confounding, if randomization is not handled properly, confounding can still occur. In this case, an opaque envelope would have solved the problem...
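The envelope story is a failure of allocation concealment, and the resulting confounding can be sketched numerically. In this hypothetical model, the procedure has no effect on outcomes at all; sicker night-time patients are simply steered into the open-surgery arm, which inflates its complication rate.

```python
import random

random.seed(1)

N = 10000
complications = {"open": [], "laparoscopic": []}

for _ in range(N):
    sick_night_case = random.random() < 0.3  # sicker patients tend to arrive at night
    assigned = random.choice(["open", "laparoscopic"])
    # Broken concealment: at night, residents peek and steer patients to open surgery.
    arm = "open" if sick_night_case else assigned
    # True outcome depends only on how sick the patient is, not on the procedure.
    complication = random.random() < (0.30 if sick_night_case else 0.10)
    complications[arm].append(complication)

def rate(outcomes):
    return sum(outcomes) / len(outcomes)

open_rate = rate(complications["open"])         # inflated by the sicker patients
lap_rate = rate(complications["laparoscopic"])  # ~0.10, the true baseline
print(open_rate, lap_rate)
```

Replacing the peekable `arm` line with `arm = assigned` — the effect of an opaque envelope — brings the two rates back together, which is exactly the fix the article suggests.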
(Note: Reading the full text of the article requires free registration.)