Very revealing response to my piece at The Week. The author fails to understand what science is, and then, in his confusion, blames me for the failing.
The author faults me for not including observational sciences as part of science. Well, sorry, but it’s true. The reason we have a thing called “the Scientific Revolution”, and the reason why it was a revolution, is precisely that it changed the definition of science from the hitherto flawed, observational Aristotelian model to the Baconian, experimental model. As a courtesy, observational sciences are called “sciences”, but my whole point is that this obscures more than it enlightens, precisely because the Scientific Revolution consisted in highlighting the key role of controlled experiment.
With regard to astronomy and evolutionary biology, it should be noted that both actually make a great deal of effort to come up with falsifiable hypotheses that can be tested in a lab. That’s what those multibillion-dollar particle accelerators are for.
(I had a section on this in the original version of my column, but it was cut during the editorial process.)
Is it really that big a deal that we are confused about what science is? We just have to read the rest of the author’s post to find out.
For example, the author calls me “naïve” for claiming that the Montessori Method of education, having been created through scientific experimentation, represents a positive step from the traditional method. He thinks his point is established by linking to a study; sadly, because that study does not have a true control group, it suffers from omitted-variable bias and proves nothing, as, scientifically speaking, uncontrolled experimentation always does. The author exhibits exactly the kind of confusion my article was meant to clear up. Because the study he points to cannot separate the different causes behind what it observes, it cannot establish any cause-and-effect relationship, and to read it as doing so is simply erroneous. As the author notes, this is true of many “areas of social science”—indeed it is, because most social science is conducted unscientifically, which is a shame. A shame which might be rectified if people understood basic epistemology. Which is why I wrote my column.
We have had far too few serious scientific experiments on the outcomes of Montessori. The ones we do have have been extremely encouraging, but I wouldn’t be so bold as to say that they “prove” anything, because when it comes to social science, even a few RFTs don’t prove much—you need broad repetition over many experiments and a long period of time.
The fact that we have so few is a damn shame, because there’s so little we know. But the reason we have so few is because few people understand the difference between a randomized field trial, which takes care of omitted variable bias reasonably well, and uncontrolled experiments, which absolutely do not. Because science is misunderstood, the need for more, and more rigorous, experimentation is not felt, and so we remain in the pre-scientific dark. The author’s post is an excellent demonstration of how the phenomenon works.
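The difference between a randomized trial and an uncontrolled comparison can be made concrete with a small simulation. This is a minimal sketch with entirely hypothetical numbers: suppose an unmeasured trait, here called "engagement", drives both a family's decision to self-select into a program and the child's outcome. The "program", effect size, and population are all invented for illustration; the point is only the mechanism of omitted-variable bias.

```python
import random

random.seed(0)
TRUE_EFFECT = 2.0  # hypothetical treatment effect, in test-score points

def outcome(treated, engagement):
    # Outcome depends on the omitted variable (engagement) and the treatment.
    return 50 + 10 * engagement + TRUE_EFFECT * treated + random.gauss(0, 1)

population = [random.random() for _ in range(100_000)]  # engagement in [0, 1]

# Uncontrolled "experiment": engaged families self-select into treatment,
# so the treated group differs from the control group before treatment.
treated = [outcome(True, e) for e in population if e > 0.5]
control = [outcome(False, e) for e in population if e <= 0.5]
naive_estimate = sum(treated) / len(treated) - sum(control) / len(control)

# Randomized trial: a coin flip breaks the link between engagement
# and treatment, so the groups are comparable on average.
assignments = [(random.random() < 0.5, e) for e in population]
r_treated = [outcome(True, e) for sel, e in assignments if sel]
r_control = [outcome(False, e) for sel, e in assignments if not sel]
rct_estimate = sum(r_treated) / len(r_treated) - sum(r_control) / len(r_control)

print(f"true effect:    {TRUE_EFFECT}")
print(f"naive estimate: {naive_estimate:.2f}")  # inflated by the omitted variable
print(f"rct estimate:   {rct_estimate:.2f}")    # close to the true effect
```

The uncontrolled comparison attributes the whole engagement gap to the treatment and roughly triples the apparent effect, while randomization recovers something close to the true value. That gap is the omitted-variable bias the paragraph above describes.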
Anyhow—I believe we need more experimentation in education. The author intones, “rigorous adherence to any one system will not always work.” Indeed! Which is why I believe we need more experimentation in education.
It is as if I gave a speech calling for total war, and a gentleman stood up to say that I am wrong, because pacifism is wrong.
(Although, technically speaking, the author doesn’t know at all whether “rigorous adherence to any one system will not always work.” Maybe there is a system, perhaps yet to be invented, rigorous adherence to which will always work. Maybe not! Nobody knows. Which is my point to begin with.)
The gentleman writes:

Social policy experiments, like psychology experiments, are bound to be more difficult to replicate than physics or chemistry experiments, and for the same reason—the subject of social policy is more irreducibly complex than the subjects of physics and chemistry.
Social policy experiments are indeed very hard to reproduce, because human life is a lot more causally dense than physics. Where the gentleman got the idea that I believe anything else I can only conjecture.
It is precisely because social policy experiments are very hard to reproduce that we should run lots of them. For example, as Jim Manzi points out, there have been a grand total of 122 controlled experiments in criminology in the United States in the past several decades; by contrast, the bank Capital One ran 60,000 controlled experiments on its customers in the year 2000 alone. Precisely because social life is so causally dense, only multiple repeated RFTs can yield insights. Which is why we need to do lots of them. Which was my point to begin with.
Further, the gentleman intones:
More experimentation by state and local governments is a good idea, but as Stuart Butler and David B. Muhlhausen argue in their recent National Affairs essay, “Can Governments Replicate Success,” “[r]ather than simply try to mimic what worked elsewhere, [policymakers] should strive to adapt successful strategies to their own situations.” The value of experimentation in public policy is not that it provides a scientific foundation for policymaking in the way physics provides a scientific foundation for engineering, but rather that it can allow for incremental improvements and adaption of ideas to new circumstances.
This is fair enough, I guess. I completely fail to see what it has to do with anything I wrote.
The author lectures me that “There is no straightforward formula for deciding how much a scientific claim should be trusted”. That is indeed true if one accepts his expanded, incorrect definition of science. Which is the sign that his definition of science is incorrect. Experimentally-validated scientific claims can be trusted according to a very straightforward formula. For other claims, be they “scientific” or not, there is, indeed, no straightforward formula. That is precisely the difference between science, or “Baconian science” if you will, and other forms of human knowledge. These other forms of knowledge are not to be dismissed, but they are to be taken for what they are.
The gentleman closes on this sententious note:
But just as reductionism in science is wrong, reductionism about science—reducing the complex scientific enterprise to a single method—is bound to distort the way we understand science and its place in our intellectual and political culture.
I mean, sure, whatever? I just hope that next time the author writes on his blog on the internet using electricity he spares a moment of thanks for those darned reductionists who “reduc[ed] the complex scientific enterprise to a single method” and thereby gave us the Scientific Revolution.