placebos to control groups reflects prior knowledge of the mechanism of the
placebo effect. Indeed, experimenters generally take extreme measures to match
the experimental and control groups in every way that might possibly confound the
results. They test for missing control conditions by asking whether there is some
possible difference between the two groups that could plausibly account for the
observed changes in the result/effect. Background knowledge about possible
mechanisms is often central to that task.
Consider in a bit more detail how such experiments work. A standard
experiment for testing a (c)p-law involves intervening into a putative cause
variable, C, and detecting changes in the putative effect variable, E. Mechanistic details are
often crucial for assessing the appropriateness of one's interventions. As discussed
briefly above, one wants to ensure that one's intervention produces the effect in E
(if any) via C and not via some other mechanism. That is, the intervention should
change C. It should not change E directly. It should not directly change the value of
any variable between C and E. Furthermore, the intervention on C itself should not
be correlated with any other variable that is a cause of E (unless it is causally
intermediate between C and E). In some cases, one wants to ensure that the
intervention severs the causal influences of other variables on C so that one can
attribute any change in E to the intervention alone. All of these assumptions behind
the use of interventions to test (c)p-laws are assumptions about the causal structure,
the mechanisms, involved in the intervention technique and in the system under
study. An adequate philosophy of experimental intervention thus might make
considerable progress by asking how mechanistic knowledge enters into these test
procedures (see Woodward 2003; see summary diagram in Craver 2007, Ch. 3).
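The conditions on an ideal intervention listed above have the character of a checklist over a causal structure, and that structural reading can be made concrete. The following is a minimal sketch, not drawn from the text: the toy graph, variable names, and helper functions are all illustrative assumptions, and the no-correlation condition (which concerns probabilistic dependence, not just directed edges) is deliberately omitted.

```python
# Illustrative sketch: checking some Woodward-style intervention
# conditions on a toy causal graph, represented as a dict mapping
# each variable to the variables it directly causes.
# (The condition that I be uncorrelated with other causes of E is
# not representable with directed edges alone and is omitted here.)

def descendants(graph, node):
    """All variables reachable from `node` via directed edges."""
    seen, stack = set(), [node]
    while stack:
        for child in graph.get(stack.pop(), []):
            if child not in seen:
                seen.add(child)
                stack.append(child)
    return seen

def on_path(graph, node, cause, effect):
    """Is `node` causally intermediate between `cause` and `effect`?"""
    return node in descendants(graph, cause) and effect in descendants(graph, node)

def valid_intervention(graph, i, cause, effect):
    """Check the graph-expressible conditions on an intervention I on C."""
    targets = graph.get(i, [])
    if cause not in targets:
        return False                      # I must change C
    for t in targets:
        if t == cause:
            continue
        if t == effect:
            return False                  # I must not change E directly
        if on_path(graph, t, cause, effect):
            return False                  # I must not change C-E intermediates
    return True

# Toy graph: I -> C -> M -> E, plus a flawed intervention I2 -> {C, E}
graph = {"I": ["C"], "C": ["M"], "M": ["E"], "I2": ["C", "E"]}
print(valid_intervention(graph, "I", "C", "E"))   # True: acts on E only via C
print(valid_intervention(graph, "I2", "C", "E"))  # False: also changes E directly
```

The point of the sketch is only that each condition in the list is a claim about causal structure, so assessing an intervention presupposes some knowledge of the mechanisms linking I, C, and E.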
What about the detection component of a test for a (c)p-law? Allan Franklin
(2009) has generated a useful list of strategies by which scientists confirm that their
techniques are reliable indicators of phenomena such as E . Many of these strategies
rely crucially on facts about the mechanisms at play. One might, for example, argue
that there could be no other cause of the measured value of E besides the fact that E
has that value. One might show that one's technique reliably registers reliable
artifacts known to be produced under aberrant causal conditions. One might rely
on a theoretical understanding of the mechanism by which the detection technique
works. One might check the results of one's technique against another technique
that relies on causally independent mechanisms (see Franklin 2009). In each of
these cases, one relies on knowledge about the mechanisms involved in the system
and in the detection technique to argue that the methods in question provide an
adequate measure of E in these circumstances. In short, even if it is possible to test
(c)p-laws without knowing the mechanisms (and we deny that Leuridan's example
shows as much), one might learn a great deal about how (c)p-laws are tested by
thinking about the mechanisms involved in the test conditions. By casting the
debate as a forced choice between laws and mechanisms, one occludes far more
interesting questions about how mechanistic knowledge contributes to the design
and interpretation of experiments for testing p-laws.
Finally, Leuridan claims that if our ability to test (c)p-laws relies exclusively on
cs-mechanisms, then we face an infinite regress. The regress arises because if