And now, dear reader, our critical look at Alex Rosenberg’s The Atheist’s Guide to Reality brings us to the pseudoscience du jour. Wittgenstein famously said that “in psychology there are experimental methods and conceptual confusion” (Philosophical Investigations, II, xiv, p. 232). He might as well have been talking about contemporary neuroscience -- or, more precisely, about how neuroscience becomes distorted in the hands of those rich in empirical data but poor in philosophical understanding. Every week seems to bring some new sensationalistic claim to the effect that neuroscience has “shown” this or that -- that free will is an illusion, or that mindreading is possible, or that consciousness plays no role in human action -- supported by arguments notable only for the crudeness of the fallacies they commit.
Tyler Burge has given the label “neurobabble” to this modern intellectual pathology, and Raymond Tallis calls it “neurotrash,” born of “neuromania.” I’ve had reason to comment on it in earlier posts (here and here) and an extreme manifestation of the disease is criticized in the last chapter of The Last Superstition. M. R. Bennett and P. M. S. Hacker subject neurobabble to detailed and devastating criticism in their book Philosophical Foundations of Neuroscience, and Tallis does a bit of housecleaning of his own in Aping Mankind. Neurobabble is a key ingredient in Rosenberg’s scientism. Like so many other contemporary secularists, he has got the brain absolutely on the brain, and maintains that modern neuroscience vindicates some of his more outrageous metaphysical claims. In particular, he thinks that so-called “blindsight” phenomena establish that consciousness is irrelevant to our actions, and that neuroscientist Benjamin Libet’s experiments cast doubt on free will. (Jerry Coyne, in a recent article, has made similar claims about free will. What I’ll say about Rosenberg applies to Coyne as well.)

The big picture
Some general remarks are in order before turning to Rosenberg’s specific claims. Consider that every written token of the English word “soup” is made up of marks which look at least vaguely like “s,” “o,” “u,” and “p.” But of course, it doesn’t follow that the word “soup” is identical to any collection of such marks, or that its properties supervene on the material properties of such marks, or that it can be explained entirely in terms of the material properties of such marks. It would be absurd to suggest that students of language should confine their attention to such material properties, or that any features of language that could not be detected via the study of such properties aren’t real. Everyone who considers the matter knows this. To take another example, borrowed from psychologist Jerome Kagan, “as a viewer slowly approaches Claude Monet's painting of the Seine at dawn there comes a moment when the scene dissolves into tiny patches of color.” But it doesn’t follow that its status and qualities as a painting reduce to, supervene upon, or can be explained entirely in terms of the material properties of the color patches. It would be absurd to suggest that students of art should confine their attention to such properties, or that any features of a painting that could not be detected via a study of such properties aren’t real. Everyone who considers the matter knows this too.
Yet when neuroscientists discover some neural correlate of this or that mental phenomenon, a certain kind of materialist concludes that the mind’s identity with, or supervenience upon, or reducibility to, or complete explanation in terms of neural processes is all but a done deal; and when they fail to discover any such correlate, such a materialist will conclude that the mental event or process in question doesn’t really exist. In fact such conclusions presuppose, rather than establish, neuroscientific reductionism -- just as someone who concluded that sentences and their meanings don’t really exist, but only ink splotches do, or that paintings don’t really exist and only isolated color patches do, would be presupposing rather than establishing reductionism about language or art. It is first assumed by such materialists that all that really exists is what can be put in the language of physiology, neurochemistry, and the like; and then it is “inferred,” in an entirely question-begging fashion, that what we take ourselves to know from introspection is either entirely reducible to what the neuroscience textbooks tell us or doesn’t really exist at all. Circular reasoning of this sort pervades the neurotrash literature.
New Atheist vulgarians like Coyne will no doubt retort that the only alternative to their crass reductionism is a belief in ghosts, ectoplasm, or some other spook stuff of the sort beloved of the more ideological sort of materialist, who only ever wants to attack straw men. Of course, dualists of either the Cartesian or Thomistic stripe are not in fact beholden to such concepts. (See my series of posts on Paul Churchland for an illustration of how badly some materialists caricature dualism.) But the anti-reductionist position does not require a commitment to dualism in any case. The objections of Burge, Tallis, Bennett and Hacker do not presuppose dualism, much less any theological point of view.
Rather, what is necessary is just the ability to see that it is only persons, rather than any of their components, who can intelligibly be said to be conscious, to think, to perceive, to act, freely to choose, and so on (just as it is paintings and words, rather than the paint or ink splotches they are made up of, that can intelligibly be said to represent things, to have syntactic and semantic features, and so forth). Hence, from a failure to locate such activities at the neuronal level, it simply does not follow that the activities do not exist -- again, one must presuppose reductionism to draw that sort of conclusion, so that a failure to locate the activities at the subpersonal level hardly establishes reductionism. Similarly, it makes no sense to attribute the activities in question to the subpersonal level (as some reductionists do) -- to characterize neural processes as “deciding” this or “perceiving” that. Only persons decide, perceive, think, freely choose, etc., if anything does. Hence it is to the level of persons as a whole, and not to their parts, that we must look if we are fully to understand what is happening when we think, perceive, feel, choose, act, etc.
Appeals to the predictive and technological successes of neuroscience no more establish that neuroscience gives us an exhaustive picture of human nature than the predictive and technological successes of physics tell us that physics gives us an exhaustive picture of reality as a whole. I explained in an earlier post why the latter sort of inference is fallacious, and parallel considerations show why the former sort is fallacious. Mathematical models in physics are abstractions from something concrete, something apart from which the mathematics would be entirely inefficacious. The models surely capture something real, but by no means the whole of what is real. To think otherwise is sort of like thinking that what is “really” in a photograph is only what is captured by the outlines one might find in a coloring book. Neuroscientific models are no different. They too are abstractions from concrete reality, a reality that outstrips the model. They no more provide an exhaustive description of a person than a chemical analysis of the ink in a book exhausts the content of the book.
Arguments to the contrary typically not only beg the question, but are inconsistent. For instance, arguments for the untrustworthiness of introspection crucially rely on evidence derived from introspection. Mental properties that are claimed not to exist at the personal level are smuggled in at the subpersonal level. Such question-begging reductionism and inconsistency often take the form of what Bennett and Hacker call the mereological fallacy (and what others have called the homunculus fallacy). Higher-level, personal features of human beings (decision, awareness, intentionality, etc.) are “explained” or explained away by appealing to purported lower-level, subpersonal features of the nervous system, but where the purported lower-level features are really just further instantiations of the higher-level features in question -- in which case they have really just been relocated rather than either explained or eliminated.
Another fallacy often committed by the neuromaniacs involves ignoring the distinction between normal and deviant cases. Dogs naturally have four legs. Everyone knows this, and everyone also knows that it is irrelevant that there are dogs which, as a result of injury or genetic defect, have fewer than four legs. No one would take seriously for a moment the suggestion that the existence of the odd three-legged dog should lead us to conclude that it isn’t really natural after all for a dog to have four legs. Everyone also knows that dogs tend to prefer meat to other kinds of food. Though dogs will eat other things and the occasional dog may even prefer other things, that does not undermine the point that there is a tendency in dogs toward meat-eating. No one would take seriously for a moment the suggestion that the existence of the odd dog who prefers fruit and vegetables shows that dogs are “really” all herbivores.
Yet such common sense goes out the window with neurobabblers, who (as it were) allow the deformed tail to wag the otherwise healthy dog. In particular, the way people behave in artificial experimental conditions (such as Libet’s experiments) is taken to determine how we should interpret what happens in ordinary conditions, rather than the other way around. Unusual behavior on the part of subjects with neurological damage is taken to show what is “really” going on in normal subjects (as in “blindsight” and “split-brain” phenomena).
We will see how some of these general features of the arguments of neuromaniacs manifest themselves in what Rosenberg has to say. Notice first, though, that nowhere in what has been said so far has there been any appeal to “intuition.” Neuromaniacs like to pretend otherwise -- to pretend that their critics have only inchoate hunches on their side while the neuromaniacs have science on theirs -- but this is sheer bluff. (And I for one hate arguments that appeal to intuition.) The appeal has rather been to mundane facts, to the plain evidence of everyday experience -- that is, to empirical evidence of the sort those beholden to scientism pretend to favor. In fact their attitude to empirical evidence is ambivalent. When doing so will enhance their appeal to the mob, those committed to scientism will play up their just-the-facts-ma’am homespun common sense. But once the bait is swallowed, they will switch gears and insist that common sense and ordinary experience actually get much or even everything wrong -- conveniently forgetting that this casts into doubt the very empirical evidence that was supposed to have led to the scientistic picture of the world in the first place. The paradox is as old as Democritus, and Rosenberg is just an extreme case of a general pattern one finds throughout the literature of scientism, materialism, and naturalism.
The neurobabbler, then, is committed to a position that is not only radically at odds with what the actual evidence of experience tells us, but arbitrary and inconsistent in its treatment of that evidence. The burden of proof is on him to show, in a non-question-begging way, that his position is even coherent -- not on us to show that he is wrong.
The blindsighted leading the blind
In “blindsight,” a subject whose primary visual cortex has been damaged to the extent that he is no longer capable of having conscious visual experience in at least certain portions of his visual field is nevertheless able to identify objects presented in those portions of the field, by color, shape and the like (by pointing to or reaching for the objects, say, or by guessing). Though blind, the subject can “see” the objects in front of him in the sense that information about them is somehow getting to him through his eyes even if it is not associated with conscious experiences of the sort that typically accompany vision.
What this tells us, Rosenberg insists, is that “introspection is highly unreliable as a source of knowledge about the way our minds work” (p. 151). Indeed, Rosenberg claims that “science reveals that introspection -- thinking about what is going on in consciousness -- is completely untrustworthy as a source of information about the mind and how it works” (pp. 147-8, emphasis added). In particular, “the idea that to see things you have to be conscious of them” is “completely wrong” (p. 149). But there are three problems with these claims. First, the “blindsight” evidence cited by Rosenberg does not in fact show that introspection is unreliable at all, let alone “highly” or “completely” unreliable. Second, even if it is partially unreliable, it doesn’t follow that to see things you needn’t be conscious of them. Third, the blindsight cases in fact presuppose that introspection is at least partially reliable.
Take the last point first. The blindsight subject tells us that he has no visual experience at all of the objects he is looking at -- that he cannot see their colors or shapes. How does he know this? Via introspection, of course. The description of the phenomenon as “blindsight,” and the argument Rosenberg wants to base on this phenomenon, presupposes that he is right about that much. If he’s wrong about it, then that entails that he really is conscious of the colors, shapes, etc. -- and such consciousness is, of course, precisely what Rosenberg wants to deny is necessary to vision. Moreover, the argument also presupposes that the subject can tell the difference between being blind and having conscious visual experience -- something the subjects in question did have in the past, before suffering the neural damage that gave rise to the blindsight phenomena. Hence, their introspection of that earlier conscious experience must also be at least partially reliable.
So, the subject cannot be completely wrong if the argument is even to get off the ground. But isn’t he at least partially wrong? Well, wrong about what, exactly? Rosenberg says that the example shows that introspection “is highly unreliable as a source of knowledge about the way our minds work,” and he asks rhetorically:
After all, what could have been more introspectively obvious than the notion that you need to have conscious experience of colors to see colors, conscious shape experiences to see shapes, and so on, for all the five senses? (p. 151)
But this is sloppy. Strictly speaking, what we are supposed to know via introspection by itself are only our immediate conscious episodes -- “I am now thinking about an elephant” or “I am now experiencing a headache” or the like. No one maintains that the claim that “You need to have conscious experience of colors to see colors, etc.” is directly knowable via introspection, full stop. The most anyone would maintain is that introspection together with other premises might support such a claim. So, even if the claim turned out to be false, that would not show that introspection itself is unreliable. It could be instead that one of the other premises is false, or that the inference from the premises is fallacious.
Now, blindsight subjects also say that it feels like they are guessing, even though their judgments are more accurate than guesses. Doesn’t this show that introspection is deceiving them? It does not. For what is it that they are supposed to have gotten wrong in saying that it feels to them like they are guessing? Certainly Rosenberg cannot say “It feels to them like they are guessing but in fact they are conscious of the colors and shapes” -- since his whole argument depends on their not being conscious of the colors and shapes. But then, what is it that they are “really” doing rather than guessing? Again, what is it exactly that they are wrong about?
Suppose you hit me in the back with a stone and I say that it felt like a baseball. Did introspection mislead me? Of course not. It wasn’t a baseball, but what introspection told me was not what it was, but what it felt like, and it really did feel like a baseball. The judgment that it was in fact a baseball was not derived from introspection alone, but from introspection together with certain other premises -- premises about what that sort of feeling has been associated with in the past, what objects people tend to throw under circumstances like the current ones, and so forth.
Similarly, when the blindsight subject says that it feels to him like he is guessing, the fact that his answers are better than what one would expect from guesses does not show that introspection is wrong. It still does feel like a guess, even if it turns out that it is more than that. It is the feel of the experience alone that introspection gives him knowledge of, not the entire reality underlying the feeling. The judgment that it is merely a guess is not derived from introspection alone, but from the introspective feel of the experience together with premises about what experiences that feel like this one have involved in the past, assumptions (false, as it turns out) about whether people can process visual information without consciously experiencing it, and so forth. Blindsight cases show only that the inference as a whole is mistaken, not that the introspective component by itself is mistaken.
Rosenberg might respond: “But the blindsight subject doesn’t merely say it felt like he had guessed. He says he did guess. And isn’t that mistaken?” But what is the difference, exactly, between feeling like one is guessing and really guessing? To guess is to propose an answer without thinking that one has sufficient evidence for it. And that is just what the blindsight subject does. True, we have reason to think that information is getting through his visual system in such a way that it causes him to answer as he does. But he has no access to that information, and thus it doesn’t serve as evidence for what he says. The neuroscientific evidence suggests only that his guesses have a certain cause. It does not tell us that they weren’t really guesses after all.
So, Rosenberg hasn’t established from blindsight alone that introspection is even sometimes unreliable, let alone that it always is. But the deeper problem with his argument is that, from the fact that some of the information typically deriving from conscious visual experience can in some cases be received through the visual system without the accompanying experience, it simply does not follow that all such information always is (or even could be) received without conscious experience. Again, the subjects cited by Rosenberg were not always blind; they had seen colors, shapes, and the like in the past and then became either permanently or temporarily unable to have conscious visual experiences. There are no grounds for saying that this past experience is irrelevant to their ability somehow to process visual information “blindsight”-style -- for denying that they can identify colors and shapes now, without visual experience of them, only because they once did have visual experience of them. You might as well say that, since many deaf people can read lips, it follows that perception of sounds isn’t necessary for understanding speech. Obviously, lip-reading is a non-standard way of figuring out what people are saying, and is parasitic on the normal case in which sound perception is crucial. Similarly, Rosenberg has given us no reason whatsoever to doubt that blindsight is parasitic on cases where conscious experience is necessary for color perception. As with the three-legged dog, the deviant case must be interpreted relative to the normal case, not the other way around.
As Bennett and Hacker note, there are also problems with the way the so-called “blindsight” cases are described in the first place. For one thing, the typical cases involve patients with a scotoma -- blindness in a part of the visual field, not all of it -- who exhibit “blindsight” behavior under special experimental conditions. In ordinary contexts their visual experiences are largely normal. For another thing, how to describe the unusual behavior is by no means obvious, precisely because while in some ways it seems to indicate blindness (the subjects report that they cannot see anything in the relevant part of the visual field), in other ways it seems to indicate the presence of experience (precisely because the subject is able to discriminate phenomena in a way that would typically require visual experience). In short, the import of the cases is not obvious; even how one describes them presupposes, rather than establishes, crucial philosophical assumptions. It is quite ludicrous, then, glibly to proclaim that “neuroscience” has established such-and-such a philosophical conclusion. The philosophical claims are read into the neuroscience, not read off from it.
Libet, learn it, love it
In Benjamin Libet’s famous experiments, subjects were asked to push a button whenever they wished, and also to note when they had consciously felt that they had willed to press it. As they did so, their brains were wired so that the activity in the motor cortex responsible for causing their wrists to flex could be detected. The outcome of the experiments was that while an average of 0.2 seconds passed between the conscious sense of willing and the flexing of the wrist, the activity in the motor cortex would begin an average of 0.5 seconds before the wrist flexing. Hence the willing (it is suggested) seems to follow the neural activity which initiates the action, rather than causing that neural activity.
Recent experiments involving brain scans show that when a subject "decides" to push a button on the left or right side of a computer, the choice can be predicted by brain activity at least seven seconds before the subject is consciously aware of having made it.
Writers like Rosenberg and Coyne find all this extremely impressive. According to Rosenberg, the work done by Libet and others “shows conclusively that the conscious decisions to do things never cause the actions we introspectively think they do” and “defenders of free will have been twisting themselves into knots” trying to show otherwise (p. 152). Coyne assures us that:
"Decisions" made like that aren't conscious ones. And if our choices are unconscious, with some determined well before the moment we think we've made them, then we don't have free will in any meaningful sense.
To be sure, Libet himself qualified his claims, allowing that though we don’t initiate movements in the way we think we do, we can at least either inhibit or accede to them once initiated. Even Rosenberg allows that Libet’s experiments by themselves don’t prove that there is no free will. But he insists that they do show that introspection is not a reliable source of knowledge about the will.
What’s really impressive about all of this, though, is how easily impressed otherwise intelligent people can be when in the grip of an ideology. And the fallaciousness of the inferences in question is not too difficult to see. To begin with only the most obvious fallacy, Rosenberg’s and Coyne’s argument presupposes that the neural activity in question is the total cause of the action, which is of course precisely part of what is at issue in the debate between neurobabblers and their critics. And for the critics, both the neural activity and the “feelings” experienced by Libet’s subjects are merely fine-grained, subpersonal aspects of the person -- where it is the person as a whole, and not any of his parts, who is properly said to be the cause of any of his actions. Just as the significance of a word or sentence is crucially determined by the overall communicative situation of which it is a part -- you are not going to know whether “Shut it!” is merely a terse request to close the door, or a quite rude command to keep silent, without knowing the context -- so too the significance of both a neural process and a conscious experience cannot be known apart from the larger neurological-cum-psychological context. Treating the wrist flexing and the neural activity in question in isolation merely assumes reductionism and does nothing to establish reductionism.
After all, neural activity and bodily movements as such do not entail action, free or otherwise. The spasmodic twitch of a muscle involves both neural activity and bodily movement, but it is not an action. So, whether such-and-such a bit of neural activity or bodily movement is associated with a genuine action cannot be read off from the physiological facts alone. In particular, there is nothing in the physiology as such that tells us that the neural activity Libet is interested in counts as a “decision” or an instance of “willing.” And what exactly justifies us in identifying this neural activity as “the” cause of the action in the first place, as opposed to merely a contributing cause? And what do we count as “the action”? Moving one’s hand? Pressing the button? Following the prompts of the feelings the experimenters have told one to watch for? There is no way to answer apart from appeal to the intentions of the subject -- in which case we have to rely on his reports of what he had in mind, rather than the neurological evidence, contrary to Rosenberg’s insistence that introspection is of no value. And as Tallis points out, the intentions of the subject long predate the neural activity Libet fixates upon. Those intentions were formed during conscious episodes that occurred minutes or hours before the experimental situation. (And this is just to note some of the more obvious problems with Libet’s claims. The variety of ways Libet’s evidence can be interpreted has been explored in detail by Alfred Mele.)
As Tallis also points out, arguments of the sort inspired by Libet’s work typically presuppose an extremely crude model of what counts as an action. One would think from the way Rosenberg and Coyne tell it that intentional actions are those preceded by a conscious thought of the form “I will now proceed to do X. Here goes…” But a moment’s reflection shows that that sort of thing is in fact extremely rare. Indeed, that most intentional action is not “conscious” in this way is something common sense knew long before Libet came on the scene. To borrow some examples from Tallis, when you do something as simple as walking to the pub or catching a ball, you carry out an enormous number of actions “without thinking about it.” You do not consciously think “I will now move my right foot, now my left, now my right, now my left, etc.” or “I will now run, I will now jump, I will now flex my fingers, etc.” You just act. Yet your actions are paradigmatically intentional and free -- you are not having a muscle spasm, or sleepwalking, or hypnotized, or under duress, etc. To be sure, that by itself doesn’t show that free will exists. But the point is that Rosenberg, Coyne, and their ilk have not shown that free will does not exist, because free will is not the straw man they are attacking.
Indeed, not only is a conscious feeling of the sort Libet and his admirers describe not necessary for free action, it is not sufficient either. As Bennett and Hacker point out, feeling an urge to sneeze does not make a sneeze voluntary. Since Libet himself is willing to allow that we might at least inhibit actions initiated by unconscious neural processes, even if we don’t initiate any ourselves, Bennett and Hacker observe that:
Strikingly, Libet’s theory would in effect assimilate all human voluntary action to the status of inhibited sneezes or sneezes which one did not choose to inhibit. For, in his view, all human movements are initiated by the brain before any awareness of a desire to move, and all that is left for voluntary control is the inhibiting or permitting of the movement that is already under way. (Philosophical Foundations of Neuroscience, p. 230)
As this shows, the very idea that “free actions,” if they existed, would be those preceded by a certain kind of “feeling” of being moved to do this or that, is wrongheaded. In particular, it is a crude mistake to assimilate willing to the having of an “urge.” As Bennett and Hacker emphasize, being moved by an urge -- such as an urge to sneeze, or to vomit, or to cough -- is the opposite of a voluntary action. And when Libet instructs the subjects of his experiments to note when they have certain “feelings” or “urges,” he not only manifests his own sloppy thinking about the nature of action, but encourages similarly sloppy thinking in his subjects, which casts into doubt the value of the whole experiment. The subjects start looking inwardly for “feelings” and “urges” as evidence of voluntary action -- something no one does in ordinary contexts, because in ordinary contexts voluntary action doesn’t involve feelings and urges in the first place. Of course, one might respond that Libet may not have intended to suggest that a decision to move one’s wrist is exactly like having an urge to sneeze or to vomit. But that only reinforces the point that the relevant conceptual issues bearing on the nature of action have been poorly thought out by those making sensationalistic claims about what the neuroscientific evidence has “shown.”
Losing consciousness
It’s time to bring this long post to an end. But we’re not done with Rosenberg yet. In general, he assures us, “consciousness can’t be trusted to be right about the most basic things” (p. 162). Yet science itself, in whose name Rosenberg makes this bold claim, is grounded in observation and experiment -- which are conscious activities. How exactly are we supposed to resolve this paradox? Rosenberg never tells us, any more than Democritus did (though at least Democritus could see the problem). But even this incoherence is as nothing compared to that entailed by Rosenberg’s denial of the intentionality of thought. It is to that denial -- the crowning lunacy of scientism -- that we will turn in the next post in this series.