What does it mean to “believe in science?” To put it more precisely: what, exactly, is the thing believed in here? What is science, anyway? I’ve explored this question before, though only tangentially, and I think there are basically two ways we use the word “science”:
Firstly, and most precisely, “science” refers to the scientific method: science is a way of trying to discover facts about the world, either particular empirical facts (“what is that thing over there?”) or more general rules and structures that govern how specific things exist and function (“how is it that such a thing ended up in a place like that?”). The scientific method has proven itself to be an amazingly powerful tool not only for human discovery about, but also for human manipulation of, the world we live in.1 Indeed, our society today is largely shaped, from top to bottom, by the technology we have developed through scientific exploration: from the way we cook our food, to the clothes we wear, to the device upon which you are reading this article. Ours is most certainly a technological society, the very existence of which is a profound argument for the efficacy of the scientific method.
The scientific method is powerful, but at its core it’s also pretty simple: it is a marriage of empiricism and rationalism. On the one hand, the scientific method limits itself to the kind of evidence that can be measured and publicly reproduced. This is what qualifies as properly scientific evidence. At the same time, though, the scientific method also assumes that there is a more-or-less rational, rules-based architecture to reality, and so it is designed to search for commonly applicable rules that govern how things come about. Science thus treats particular, concrete phenomena as the thing to be studied, but assumes that these phenomena arise according to common, indeed universal, rules of interaction and relation.
Sensible (as in detectable-by-the-senses) things brought into being by invisible but consistent rules: this is the understanding of the world that the scientific method not only points to, but also assumes, in order to do its work. And, as already stated (and as is obvious to anyone who thinks about it for even a moment), this method is extremely powerful and effective.
But “science” is often also invoked in a different way, not as a method but as an ontology. When people talk about believing in science, they often mean that they believe that the model of reality that scientists currently propose is true, that this model corresponds more-or-less perfectly with how reality actually is. So, for example, since science assumes a materialist method in developing its theories, many people argue that if one believes in science, one must be a materialist—must believe that only material entities are real.
This elision between method and ontology is common, and to many it may seem innocuous or even obvious. But philosophically it is definitely an error, and it likely causes much mischief, especially in philosophy of science and philosophy of mind. And though this conflation of scientific method and scientific ontology is so common that we often don’t even realize we are doing it, it’s not hard to see it if we look for it, and understand the error at its root.2
Throughout the course of the Enlightenment, from around 1650 until about 1800, modern science as we know it developed. By the 19th century, physics had arrived at a basically materialist view of the cosmos as a vast (indeed, presumably infinite) uniform space within which a huge number of eternal, indestructible particles moved and interacted. This view developed over time from the work of early thinkers like Francis Bacon and Isaac Newton, culminating in the work of philosophers and physicists like Pierre-Simon Laplace and James Clerk Maxwell. Even so, Newton’s contributions were so critical that this view is often referred to simply as “Newtonian” physics.3
Newton and his successors were by no means the first to think this way (such theories go back at least as far as ancient Greece and India, roughly two thousand years before Newton), but when combined with sophisticated new mathematical tools (such as calculus, which Newton helped pioneer), the modern, Enlightenment version of this particle materialism was far more robust than previous ones: it offered the possibility of understanding every event in the cosmos as the interaction of tiny particles bumping into each other.
Now, many people would take this Newtonian physics and develop it further. Perhaps most famous among those inspired by Newton’s work was Pierre-Simon Laplace, who was so convinced by the Newtonian materialist-particle picture of the cosmos that he believed that, given perfect information about the position and velocity of every particle in the cosmos at a given time, someone could correctly predict everything that would happen thereafter. This articulation of determinism, though it was wildly ambitious, was nothing more than the faithful working out of the conclusions of the Newtonian ontological picture: if the cosmos was a set of particles that interacted according to fixed and precise mathematical laws, then it surely must be true that someone could make perfect predictions about those particles’ future behavior, so long as they really had all the information required.
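Laplace’s claim can be illustrated in miniature. The toy simulation below (a hypothetical sketch, not anything Laplace himself wrote) evolves a tiny “universe” of point masses under Newton’s law of gravitation: given the same initial positions and velocities, the same future always follows.

```python
import math

# Toy Laplacean "universe": point masses under Newtonian gravity in 2D.
# The state (positions and velocities) plus Newton's laws fix the entire future.
G = 1.0  # gravitational constant, in arbitrary units

def step(positions, velocities, masses, dt):
    """Advance the system one time step (simple semi-implicit Euler)."""
    n = len(masses)
    accelerations = [[0.0, 0.0] for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            dx = positions[j][0] - positions[i][0]
            dy = positions[j][1] - positions[i][1]
            r = math.hypot(dx, dy)
            a = G * masses[j] / (r * r)  # Newton's inverse-square law
            accelerations[i][0] += a * dx / r
            accelerations[i][1] += a * dy / r
    new_velocities = [[v[0] + a[0] * dt, v[1] + a[1] * dt]
                      for v, a in zip(velocities, accelerations)]
    new_positions = [[p[0] + v[0] * dt, p[1] + v[1] * dt]
                     for p, v in zip(positions, new_velocities)]
    return new_positions, new_velocities

def run(steps=1000, dt=0.01):
    # Identical "perfect information" about the initial state every time.
    positions = [[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]]
    velocities = [[0.0, 0.0], [0.0, 1.0], [-1.0, 0.0]]
    masses = [1.0, 0.001, 0.001]
    for _ in range(steps):
        positions, velocities = step(positions, velocities, masses, dt)
    return positions

# Perfect information in, perfect prediction out: two runs agree exactly.
print(run() == run())  # → True
```

The point is not the numerics but the ontological picture: in a purely Newtonian cosmos, rerunning reality from the same state can only ever yield the same result.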
Now, the Newtonian picture of the universe held for more than two centuries, and much scientific work from the 17th through the 19th centuries only seemed to confirm that it was, at least basically, right. And, of course, as more and more experiments seemed to confirm this model of the cosmos, many scientists argued that this itself was very good evidence that the deterministic, basically materialist picture of the universe was in fact true. After all, if the Newtonian ontology was wrong about how things worked, wouldn’t predictions based on the model end up being falsified? If I tell you that I’ve got an angry cat in a bag, and you hear meowing and hissing, and can see a vaguely cat-shaped lump moving around in the bag, and you can even see tiny little scythes poking through the fabric every so often—well, concluding that there is a cat in the bag seems pretty rational; it’s the best explanation for all the data you’ve gathered—even if you haven’t, strictly speaking, seen the cat itself.
More than two centuries of scientific work had yielded more and more data that aligned very well with the Newtonian ontological picture of the cosmos. Experiment after experiment seemed to confirm this view of things.
However, the scientific method is restless; it keeps moving. New experiments are run and new theories are developed to deal with new data, new problems. And sometimes this new data and the new theories that are developed from it challenge the old consensus.
In the early 20th century, the Newtonian model began to fall apart. First off, Einstein’s theories of relativity (both special and general) radically transformed our understanding of space: instead of being a vast field which was a uniform kind of “stage” upon which particles moved and interacted, space itself had a kind of “texture”, and could be said to “warp” or “bend”. Even more troublingly for the Newtonian picture of things, space and time were interrelated in such a way that the same event might appear very differently to different observers, if they were moving at very different speeds. Indeed, two things could actually age at different rates, if one of them was moving very fast. Furthermore, the later development of the theory of the Big Bang was also perceived as a major threat to the Newtonian ontology. Newton had assumed (like Aristotle before him) that the space of the cosmos was eternal and unchanging. The Big Bang suggested a universe that was born, that grew, and this introduced all kinds of new problems and questions for physics.4
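That differential aging is easy to quantify. Here is a quick sketch using the standard special-relativistic time-dilation formula (the specific numbers are illustrative, chosen for round arithmetic):

```python
import math

def proper_time(coordinate_time_years, speed_fraction_of_c):
    """Time elapsed for a traveler moving at the given fraction of light speed,
    per the time-dilation formula t' = t / gamma."""
    gamma = 1.0 / math.sqrt(1.0 - speed_fraction_of_c ** 2)  # Lorentz factor
    return coordinate_time_years / gamma

# A traveler moving at 80% of light speed for 10 years of Earth time
# ages only about 6 years -- the two "clocks" genuinely disagree.
print(round(proper_time(10.0, 0.8), 6))  # → 6.0
```

Nothing in the Newtonian ontology allows for this: on that picture, one universal clock ticks identically for every particle in the cosmos.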
None of this was predicted by the Newtonian system, and so this ontology seemed less secure. But things were only going to get worse. New theories, which would culminate in quantum mechanics, developed especially fast in the 1920s and offered a very different picture of the fundamental stuff of reality than the one Newton and his successors had proposed.
In the 19th century, there had been a long process by which the nature of the fundamental particles that Newtonian thought had proposed was analyzed and understood. First off, a variety of different chemical elements had been discovered: oxygen, nitrogen, carbon, and so on. At first, these “elements” were taken to be themselves atomic particles—that is, they were seen as the indestructible little things which existed absolutely, without any parts.5 But further work in the late 19th and early 20th centuries revealed this wasn’t right. Instead, each of these elements was revealed to be a composite, built out of even smaller particles. In fact, this discovery actually bolstered the Newtonian position in many ways: instead of more than 100 unique elements as the fundamental building blocks of reality, we now had perhaps only three: protons, neutrons, and electrons. This was a much more parsimonious system, and it seemed to confirm the basic Newtonian position. The addition of things like photons was no problem at all; indeed, it allowed more and more physical phenomena to be understood as the interaction of particles, giving Newtonian physics even more reach.
But as hinted at above, quantum mechanics troubled this view of things. First, it began to propose sub-sub-“atomic” particles; it turned out that protons and neutrons were made of even smaller things, quarks. And the list of these new super-small particles only expanded, with things like muons and gluons being proposed in order to make sense of all the experimental data coming in. But what was really troubling for many (very much including Einstein, who was originally as dead-set against quantum mechanics as he was against the Big Bang theory) was that this new view of things argued that particles could also be understood as waves, or indeed even as an area of space in which there was a distribution of the probability that a particle would be found there.6
Now, experimentally, the new quantum theories (and there were, and are, a variety of them, which differ in subtle but important ways) were extremely successful—they could explain phenomena that the previous pre-quantum view of things could not. According to the scientific method, this means these theories should definitely be preferred. But they also presented a position that made the old ontology seem totally untenable. If particles can be described just as well as waves, then it’s hard to see how one could argue that particles themselves are somehow the fundamental building blocks of reality. Instead, particles seem to be an expression of something non-particulate, and indeed, potentially non-material. Quantum mechanics gives a picture of what it means for something to be “physical” that is drastically different from what science had assumed for more than two centuries. Other aspects of quantum mechanics, such as non-local causality, only make the Newtonian picture even harder to square with what we know about reality.7
Now one might conclude at this point that these new findings suggest that the Newtonian picture of reality is wrong, and therefore without any scientific value. One might say that, though we should still hold Newton in high esteem for his historical contributions to the development of science, his theories should no longer be seen as applicable. However, the reality is that most technological calculation and engineering done today operates as if Newton were basically right. After all, the strange effects and implications of quantum mechanics are really only visible at extremely small scales; when looking at the world through human eyes, focused on things that are visible and tangible to us, it turns out we can basically ignore quantum mechanics and get by just fine.
And this is true not only for relatively low-tech everyday activities like assembling IKEA furniture or boiling water. It’s also true for lots of very demanding fields—such as rocket science. When NASA put together the calculations that put human beings on the moon in the 1960s, the mathematical models they used were basically Newtonian. They did not try to calculate the quantum behavior of, say, the aluminum in their rockets. They took a much bigger-picture view, understanding aluminum basically the way a 19th-century physicist would have, and…it worked. Very well!
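To make the point concrete, here is the kind of back-of-the-envelope Newtonian calculation that still underwrites spaceflight: escape velocity from Earth’s surface, derived purely from Newton’s law of gravitation. (A simplified sketch, of course; real mission planning is far more involved, but the underlying physics is the same physics a 19th-century scientist would recognize.)

```python
import math

# Newtonian constants -- no quantum mechanics anywhere in sight.
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_EARTH = 5.972e24   # mass of Earth, kg
R_EARTH = 6.371e6    # mean radius of Earth, m

def escape_velocity(mass_kg, radius_m):
    """Minimum launch speed to escape a body's gravity (ignoring drag),
    from the classical formula v = sqrt(2 * G * M / R)."""
    return math.sqrt(2 * G * mass_kg / radius_m)

v = escape_velocity(M_EARTH, R_EARTH)
print(f"Escape velocity: {v / 1000:.1f} km/s")  # roughly 11.2 km/s
```

A calculation like this treats the Earth and the rocket as classical bodies with mass and position, exactly as the old ontology imagined them, and it works superbly.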
It’s important not to miss what this means: scientists can use a model of reality drawn from an outdated ontology, an ontology that in many respects incorrectly models the world, and yet develop technology that works very well. This fact drives home the gulf between science as a methodology, on the one hand, and science as a fixed ontological model of the world, on the other. The scientific method remains extraordinarily powerful; if we want to understand the likelihood of a given phenomenon arising in the world, it remains the best, arguably the only worthwhile, method for the task. But this method generates ontological pictures of reality, theoretical “snapshots” developed from the most recent experiments and theorizing, which themselves often seem undeniably true at the moment, but which, in time, have often been revealed to be full of errors.
In other words: it turns out that science as a method for predicting future phenomena can operate very well without actually having a correct model of the universe. Many people seem to think this isn’t possible, that science’s progress is dependent on extrapolating facts from a basically true ontology. But the actual history of scientific development—and our own interaction with the scientific method day to day—reveals this to be a pious falsehood.
Now, this conclusion—however unsettling it may be to people who very much want to believe that science reveals an unerring ontological picture of the world—is no threat to actual science—that is, the disciplined application of the scientific method. Again: the overturning of incorrect scientific ontologies is something that the scientific method itself does! In other words, so long as we understand “believing in science” to mean believing that application of the scientific method is the best way to consistently make better predictions about future events, rather than as the act of believing in the most current ontology derived from that method, our faith in science is extremely well-founded. But the minute we decide that the scientific method has given us a perfect model of reality, we are likely placing our faith in a reification, a simulation, a convenient but illusory model—and it very well may be science itself that eventually delivers us from our delusion.
To put this all another way, science is a way of knowing the world which is focused primarily on honing itself, its own tools, rather than on producing a finished model. It is predictive in character, a continuing process of epistemological clarification, proposing, and then demolishing, different ontological pictures as it advances. This, of course, does not mean that the older ontologies were—or indeed, are—without merit. As long as a given scientific ontology captures enough of the truth of things to model the part of reality relevant to the particular task we are trying to complete, then not only are older models perfectly acceptable, they are much preferable to newer models. Not even a theoretical physicist would want to waste time calculating the quantum aspects of pine wood when assembling their dresser. But again, this is a valuing of an ontology as a practical, pragmatic matter: using the ontology as a tool to complete a task, not fixating on that ontology as a perfectly correct model of reality.
The Newtonian model of the cosmos is, strictly speaking, wrong. However, crucially, it turns out that it is right enough for the vast majority of human tasks, and so we keep using it, to great effect. But this only drives home the simple fact: scientific truth is methodological, practical, and predictive, not ontological and dogmatic.
There are many implications to this simple, if often overlooked, fact about genuine science. But next week I will begin to explore the important consequences this truth has for philosophy of mind. Until then, may your methods be disciplined and your ontologies held loosely.
1. This is not to say that the scientific method is the only or even best approach to all questions. Metaphysical, theological, and moral questions, for example, are probably not well-suited to scientific analysis—indeed, there is reason to think that the scientific method is dependent on metaphysics, not the other way around. Even so, most human questions are not strictly metaphysical, and so the scientific method is often the right tool for the job.
2. This confusion of method and ontology is sometimes referred to, pejoratively, as scientism. I won’t be using that term here, since I think it often just raises the temperature of such conversations without adding much light. But if you have read things on that subject that sound familiar to what you’ll read below, that wouldn’t surprise me.
3. Though it is worth noting that Newton was not a materialist tout court; he is noteworthy for his spirituality and religious faith as well as his interest in science.
4. Indeed, even Einstein himself was a major opponent of the Big Bang theory for many years; he saw it as essentially the scientific reification of Jewish and Christian cosmology!
5. The word “atomic” comes from the Greek atomos, meaning “indivisible.”
6. As it turns out, Newton himself had theorized about the possibility of viewing certain phenomena as either waves or as particles, especially light. So this uncertainty wasn’t exactly new to science, but it did overturn a specific interpretation of the findings of the scientific method.
7. As I have touched on before, there is reason to think that contemporary physics suggests that reality is fundamentally neither particle-based nor even really material, as it ultimately reduces to a series of mathematical formulae: an abstract, formal system. But this is a question of the philosophy of science, how to metaphysically interpret scientific findings, rather than a scientific question as such—as I plan to address in the next article in this series.
Great post!
"it turns out that science as a method for predicting future phenomena can operate very well without actually having a correct model of the universe."
Yes! It's amazing to me that so many people who are familiar with the history of science still aren't clear on this yet. Science is not metaphysics!
And many very well informed people nevertheless have strong faith in reductive determinism, despite the fact that science has moved on and we no longer have any reason to think there are fundamental building blocks of reality. What in the world is everything supposed to reduce to if these building blocks are gone? What is supposed to "make up" the emerging phenomena? And how can determinism be true if fundamental reality is not deterministic, but probabilistic?
Three thoughts here.

1. The most fundamental problem with every attempt at physics is that it generally works--at least Einstein's did--with a geometric model of reality. Even those positing more than 3 dimensions cannot exactly mathematically account for these dimensions using the same parameters Einstein did. Thus in a sense each part of science suffers from the incapacity to develop a language internally consistent and capable of addressing all variety of phenomena. This is represented by the Three-Body Problem, where three bodies--say, planets--that are in each other's gravitational fields, and thus orbiting one another, follow orbits that cannot be predicted: one cannot say where one planet will be, or another, or one in reference to another, at any given time in the future. The only way past this is supercomputers and essentially an exponential amount of calculation--but none of this really produces a law by which we can arrive at certain prediction.

2. Science emerged along with the scientific method by tossing aside final and formal causes. Attention was realistically paid only to efficient causes, and thus emerges a sort of billiard-table ontology where it's all a bunch of balls bouncing off of each other--then problematized by Einstein's relativity, which does not ensure they'd move at the same speeds or angles, depending upon gravity, etc.

3. Because of the loss of final and formal causes especially, along with the fact that science is quantitative measurement, it presents its theories with diagrams that have qualities, or is thought of in one's head as having qualities, but it is only pure quantity. When you put these factors together, can we say we have any ontology at all anymore?
The subject is bracketed from the equation, and though scientists can only theorize based on evidence collected using the scientific method, their theorization automatically requires them to take the quantitative data and develop a qualitative picture, which in the end is dismissed as nonexistent or unimportant to the whole process. Nothing is, and nothing is x or has y or aims toward p or becomes z; rather, to make sense of anything at all in this picture one has to isolate a constant and certain variables (from countless possible ones) and pretend this in itself isn't also self-defeating for the integrity of science. Sorry for the rant.