For those not familiar with the Bradford Hill Criteria, these nine criteria were published 50 years ago:
1: Strength of Association. The stronger the relationship between the independent variable and the dependent variable, the less likely it is that the relationship is due to an extraneous variable.
2: Temporality. It is logically necessary for a cause to precede an effect in time.
3: Consistency. Multiple observations of an association, with different people under different circumstances and with different measurement instruments, increase the credibility of a finding.
4: Theoretical Plausibility. It is easier to accept an association as causal when there is a rational and theoretical basis for such a conclusion.
5: Coherence. A cause-and-effect interpretation for an association is clearest when it does not conflict with what is known about the variables under study, and when there are no plausible competing theories or rival hypotheses. In other words, the association must be coherent with other knowledge.
6: Specificity in the causes. In the ideal situation, the effect has only one cause. In other words, showing that an outcome is best predicted by one primary factor adds credibility to a causal claim.
7: Dose Response Relationship. There should be a direct relationship between the risk factor (i.e. the independent variable) and people’s status on the disease variable (i.e. the dependent variable).
8: Experimental Evidence. Any additional related research that is based on experiments will make a causal inference more plausible.
9: Analogy. Sometimes a commonly accepted phenomenon in one area can be applied to another area.
Since the time that Sir Austin Bradford Hill (1897–1991) published his extremely influential criteria to offer some guides for separating causation from association, we have accumulated millions of papers and extensive data on observational research that depends on epidemiologic methods and principles. This allows us to re-examine the accumulated empirical evidence for the nine criteria, and to re-approach epidemiology through the lens of exposure-wide approaches.
The [attached article] discusses the evolution of these exposure-wide approaches and tries to use the evidence from meta-epidemiologic assessments to reassess each of the nine criteria and whether they work well as guides for causation. It is argued that, of the nine criteria, experiment remains the most important, and consistency (replication) is also essential. Temporality also makes sense, but it is often difficult to document. Of the other six criteria, strength-of-association mostly does not work and may even have to be inverted: small, and even tiny, effects are more plausible than large effects. When large effects are seen, they are mostly transient and almost always represent biases and errors.
There is little evidence for specificity-in-causation in nature. For biological gradients (especially those seen almost always in multifactorial traits), it is often unclear how they should be modeled, and thus they are difficult to prove. It usually remains unclear how to operationalize coherence. Finally, plausibility, as well as analogy, do not work well in most fields of investigation, and their invocation has been mostly detrimental, although exceptions may exist.
[this paper was brought to my attention by Changchun Xie]