In section 17 of his Monadology, Leibniz puts forward the following argument against materialism:
Moreover, it must be confessed that perception and that which depends upon it are inexplicable on mechanical grounds, that is to say, by means of figures and motions. And supposing there were a machine, so constructed as to think, feel, and have perception, it might be conceived as increased in size, while keeping the same proportions, so that one might go into it as into a mill. That being so, we should, on examining its interior, find only parts which work one upon another, and never anything by which to explain a perception. Thus it is in a simple substance, and not in a compound or in a machine, that perception must be sought for. Further, nothing but this (namely, perceptions and their changes) can be found in a simple substance. It is also in this alone that all the internal activities of simple substances can consist.
Because of the example he alludes to, this argument is sometimes referred to as “Leibniz’s Mill.” How is it supposed to work? Given the emphasis on simplicity, Leibniz’s point is clearly at least in part that a mind cannot be a composite thing, as a mill is composite insofar as it has parts which interact. Rather, a mind has to be simple in the sense of something non-composite or without parts. A useful exposition of the argument so interpreted can be found here. So understood, the argument is akin to Descartes’ “indivisibility argument” and to anti-materialist arguments from the unity of consciousness. (I discuss the indivisibility argument in chapter 2 of Philosophy of Mind and the unity of consciousness in chapter 5.)
Another way to read the argument, though, would be to take it as saying that the mill example shows there to be a gap between material-cum-mechanical facts on the one hand and mental facts on the other. If the brain were made the size of a mill so that we could walk around in it, we would never encounter in it anything that corresponded to thinking, feeling, and perception. All we would encounter would be material parts interacting causally (even if the causal interaction would be more complex than what the mill analogy suggests). The point would seem to be that we could know all the material facts without knowing any mental facts (thus making the argument akin to Frank Jackson’s knowledge argument), or that the entirety of the material facts does not entail any mental facts (thus making it akin to the zombie argument). On this reading, considerations about simplicity versus compositeness arguably would not be essential to the main, anti-materialist point of the argument – though Leibniz himself presumably thought that the gap between material and mental facts should lead us to regard the mind as simple rather than composite.
In his new book Leibniz’s Mill: A Challenge to Materialism, Charles Landesman seems to be reading the argument in this second way. William Hasker reviews the book here, and understandably complains that Landesman overlooks the “simplicity” aspect of Leibniz’s argument. (Hasker, by the way, provides a very useful exposition of the anti-materialist argument from the unity of consciousness in his important book The Emergent Self.) In fairness to Landesman, though, Leibniz’s Mill is (despite the title) not primarily intended as an exposition of Leibniz, who actually plays a relatively small role in the book. And it is in any event worth considering this second reading of the argument, whether or not it corresponds entirely to Leibniz’s own intentions.
Landesman considers the following objection raised by John Searle in his book Intentionality:
An exactly parallel argument to Leibniz’s would be that the behavior of H2O molecules can never explain the liquidity of water, because if we entered into the system of molecules “as into a mill we should only find on visiting it pieces which push one against another, but never anything by which to explain” liquidity. But in both cases we would be looking at the system at the wrong level. The liquidity of water is not to be found at the level of the individual molecule, nor are the visual perception and the thirst to be found at the level of the individual neuron or synapse. (p. 268)
It is ironic that Searle should put forward such an objection, given that he is also a critic of materialism who has himself elsewhere denied that such cases are “exactly parallel.” In particular, he has insisted that whereas liquidity, solidity, and other such properties of material systems have what he calls a “third-person ontology” insofar as they are entirely objective or “public” phenomena equally accessible to every observer, consciousness has by contrast a “first-person ontology” insofar as it is subjective, “private,” or directly accessible only to the subject of a conscious experience. But then it would seem to follow that if we observed a system of water molecules on the large scale – not just an individual molecule or two but the whole system – and noted that they were moving around in such-and-such a way relative to one another, we would (given the standard scientific account of liquidity) just be observing the system’s liquidity. By contrast, if we observed, on the large scale, the system of neurons which makes up the brain, we would not thereby observe the conscious experiences of the person whose brain it is. This is a consequence of Searle’s own distinction between third-person and first-person ontology, and his own insistence that consciousness is unique in having the latter sort of ontology. (See my paper “Why Searle Is a Property Dualist” for references and for further discussion of Searle’s views.)
Landesman makes a related point in response to Searle: in observing either a mill-sized brain or a mill-sized system of water molecules, we would not be limited to observing the individual neuron or molecule, but could imagine instead observing the systems on the large scale. And when we do so, Landesman continues, we would certainly be able to observe the liquidity of the water if by “liquidity” we mean a certain kind of interaction between molecules. On the other hand, we might instead mean by “liquidity” the phenomenal features liquid water presents to us – the way it looks or feels to us, for example – and these, Landesman allows, would not be observable as we walked through a mill-sized system of water molecules. But then, liquidity in this sense would really not be a feature of the water itself in the first place, but only of our experience of it. And in that case it is irrelevant that we would not observe it in observing the system of molecules. (Cf. my discussion of the fallacy Paul Churchland commits when he suggests that the red surface of an apple is really just “a matrix of molecules reflecting photons at certain critical wavelengths.”) By contrast, thought and perception are features of the mind itself, and yet we would not be aware of them in observing the large-scale interactions between neurons in a mill-sized brain. Thus, Landesman concludes (quite correctly, in my view), Searle’s objection fails.
Now one might, as Landesman notes, respond by insisting that in observing the interactions between neurons, it might be that “we are in fact observing thoughts and perceptions, although we fail to recognize them for what they are” (p. 24). But as Michael Lockwood writes in Mind, Brain and the Quantum:
To [Leibniz’s argument] I suppose one could retort by asking Leibniz how he expected to know if he had found something that explained Perception. Except that that is supposed to be his point: one wouldn’t know and hence nothing one encountered could explain Perception. (p. 35)
And as Landesman says, it is hard to see how one could justify the claim that in our stroll through a mill-sized brain we would “in fact” be observing thoughts and perceptions without realizing it, unless one is assuming the truth of some particular theory about the mind/brain relationship that could ground this suggestion. But in that case one would merely be begging the question against Leibniz, whose point is that the mind is not susceptible of explanation in terms of such a theory. And if one instead took the eliminativist view that the mind is illusory and that mentalistic descriptions should simply be replaced by neuroscientific ones, then one would in effect be conceding Leibniz’s point that an inspection of neural processes will never reveal thought or perception.
Still, one might suggest that Leibniz’s argument fails for the following reason: if we were to walk through a computer expanded to the size of a mill, we wouldn’t observe the program it is running, the symbols it is processing, and so forth – yet it is still running the program and processing the symbols for all that. (Lockwood considers such an objection at p. 35 of Mind, Brain and the Quantum.) But this analogy is no good either. The problem is that – as Searle has trenchantly argued – “computation” as usually understood is not an intrinsic feature of the physical world in the first place, but an observer-relative feature. Nothing counts as the processing of symbolic representations, the running of algorithms, etc. except relative to human interests – in particular, those of the designers and users of computers. That is why we wouldn’t observe the distinctively computational features of a mill-sized computer – they aren’t there intrinsically in the first place, but only ever assigned by us. (Similarly, if we examine the physical features of a written word – whether normal-sized or expanded to the size of a mill – we will never observe its meaning or semantic content, but that is because the meaning or semantic content is not intrinsic to the physical properties of a written word in the first place, but rather derived from us as language users.) But mental features would have to be intrinsic to the brain, if materialism were true. So the proposed analogy between computers and brains fails.
But what if Searle is wrong and something like computation really is an intrinsic feature of the material world after all? For example, isn’t DNA rightly said to contain the “program” or “information” for an organism? And yet, if a DNA molecule were made the size of a mill, we wouldn’t observe this “program” or “information.” So, doesn’t this show that Leibniz is mistaken?
I don’t know if using computationalist language is the best way to put the point, but I think such a response would be a promising one. It would not, however, help the materialist in the least. For if we say that there is something intrinsic to a material system in virtue of which it “points to” something beyond itself (as “information” does) or to a certain end-state (as a “program” does), then we are either attributing to matter something like final causality as it is understood in the Aristotelian-Scholastic tradition (as I have noted in earlier posts, such as this one), or we are attributing to it something like intentionality and thus adopting a panpsychist, dual-aspect, or neutral monist account of matter. And in either case, we are thereby abandoning a materialist conception of matter – and thus conceding the main point of Leibniz’s argument.