
Reading Rosenberg, Part IX

Our long critical look at Alex Rosenberg’s The Atheist’s Guide to Reality now brings us at last to that most radical of Rosenberg’s claims -- the thesis that neither our thoughts nor anything else has any meaning whatsoever.  To the reader unfamiliar with recent philosophy of mind I should emphasize that the claim is not merely that our thoughts, actions, and lives have no ultimate point or purpose, which is hardly a novel idea.  It is far more bizarre than that.  Consider the following two sequences of shapes: “cat” and “^*:”  We would ordinarily say that the first has meaning -- it refers to animals of the feline sort -- while the second is a meaningless set of marks.  And we would ordinarily say that while the meaning of a word like “cat” is conventional, the meaning of our thoughts about cats -- from which the meaning of the word in question derives -- is intrinsic or “built in” to the thought rather than conventional or derived.  What Rosenberg is saying is that in reality, both our thoughts about cats and the sequence of shapes “cat” are as utterly meaningless as the sequence of shapes “^*:”  Neither “cat” nor any of our thoughts is any more about cats or about anything else than the sequence “^*:” is about anything.  Meaning, “aboutness,” or intentionality (to use the technical philosophical term) is an illusion.  In fact, Rosenberg claims, “the brain does everything without thinking about anything at all.”
 
This entails that the marks you are looking at now, as you read this post, and the marks on the printed pages of Rosenberg’s own book, are as completely devoid of meaning as “^*:” is.  You might as well be looking at the splotches on an oil-stained rag.  That “there literally is no such thing as linguistic meaning” is something Rosenberg is more explicit about in his 2009 article “The Disenchanted Naturalist’s Guide to Reality” (a precursor to Atheist’s Guide) than he is in the book itself.  He is also more explicit in the article than in the book that in his view “there literally are no beliefs and desires.”  In the book the emphasis is not on the claim that there are no beliefs, desires, or thoughts of any kind, but rather on the claim that even if in some sense there are thoughts, they have no meaning or “aboutness.”

Nor does Atheist’s Guide make it clear that Rosenberg is defending a version of what academic philosophers call eliminative materialism; and neither does he address all the objections that have been raised against this position (which, it should be noted, is a minority view even among materialists).  It seems that Rosenberg judged that his assertions were, for a book aimed at a general audience, fantastic enough as it is and that it would be asking for trouble either explicitly to draw out all of their bizarre implications, or to address the technical philosophical questions they raise.  And of course, asking prospective book buyers to purchase a volume which is on the surface written in perfectly meaningful prose, but which is in fact filled with what the author himself regards as nothing more than meaningless ink splotches, has its drawbacks as a marketing strategy.

In any event, why would anyone say such a bizarre thing as that meaning or “aboutness” does not exist?    Well, consider again the word “cat,” whether written or spoken.  The meaning, as I have said, is entirely conventional or derivative.  There is nothing in the physical properties of ink splotches, pixels, compression waves, or what have you, that gives or could give the marks or sounds they constitute the meaning of the word “cat,” or any meaning at all.  But the same could be said of neurons.  They too seem as obviously devoid of any intrinsic meaning as ink splotches and compression waves are.  Yet if we are to say that a thought is a kind of neural process, we have to say that when we think about Paris (for example) there is a network of neurons that is somehow about Paris.  But then, the materialist Rosenberg asks (as any dualist might):

The first clump of matter, the bit of wet stuff in my brain, the Paris neurons, is [purportedly] about the second chunk of matter, the much greater quantity of diverse kinds of stuff that make up Paris.  How can the first clump -- the Paris neurons in my brain -- be about, denote, refer to, name, represent, or otherwise point to the second clump -- the agglomeration of Paris…?  A more general version of this question is this: How can one clump of stuff anywhere in the universe be about some other clump of stuff anywhere else in the universe -- right next to it or 100 million light-years away? (The Atheist’s Guide to Reality, pp. 173-74)

Rosenberg considers various answers that might be given to this question, including materialist answers, and finds them all wanting.  The neurons cannot be about Paris in the way a picture is, because unlike a picture they don’t resemble Paris at all.  But neither can they be about Paris in the way that a red octagonal “Stop” sign is about stopping even though it doesn’t resemble that action.  For a red octagon, or the word “Stop” for that matter, means what it does only as a matter of convention, only because we interpret the shapes in question as representing the action of stopping.  And when you think about Paris, no one is assigning a conventional interpretation to such-and-such neurons in your brain so as to make them represent Paris.

To suggest that there is some further brain process that assigns such a meaning to the purported “Paris neurons” is, as Rosenberg points out, merely to commit a homunculus fallacy, and it explains nothing.  For if we say that one clump of neurons assigns meaning to another, we are saying that the one represents the other as having such-and-such a meaning.  That means that we now have to explain how the first possesses the meaning or representational content by virtue of which it does that, which entails that we have not solved the first problem at all but only added a second one to it.  We have “explained” the meaning of one clump of neurons by reference to meaning implicitly present in another clump, and thus merely initiated a vicious explanatory regress.

The only way to break the regress would be to postulate some bit of matter that just has its meaning intrinsically, without deriving it from anything else.  But there can be no such bit of matter, in Rosenberg’s view:

Physics has ruled out the existence of clumps of matter of the required sort.  There are just fermions and bosons and combinations of them.  None of that stuff is just, all by itself, about any other stuff.  There is nothing in the whole universe -- including, of course, all the neurons in your brain -- that just by its nature or composition can do this job of being about some other clump of matter. (p. 179)

Now I would say that there is a sense in which Rosenberg is absolutely right about that much.  For given what most modern philosophers and scientists will allow to count as “physical” or “material,” there can indeed be no such thing as a physical system which has any inherent meaning, “aboutness,” or intentionality.  The reason is that ever since the anti-Aristotelian or “mechanistic” revolution of the early moderns, most philosophers and scientists have stipulated -- and a stipulation is all that it has ever been -- that a physical explanation can make no reference to final causes, to one thing “pointing to” or being “directed toward” some end beyond itself.  As philosopher of science David Hull points out:

Historically, explanations were designated as mechanistic to indicate that they included no reference to final causes or vital forces.  In this weak sense, all present-day scientific explanations are mechanistic. (“Mechanistic explanation,” in The Cambridge Dictionary of Philosophy)

And it is a short step from this mechanistic conception of matter to the conclusion that intentionality of the sort exhibited by our thoughts and words (which is but one instance of “directedness,” “pointing to,” or finality among others) cannot possibly be material.

Now there are several alternative conclusions one could draw from this.  One possibility (the right one, in my view) would be to conclude that the early moderns were wrong and that it is just a mistake to think that “directedness,” “aboutness,” or final causality is not an inherent feature of matter.  This is the Aristotelian-Thomistic (A-T) position.  (That does not entail that human thought is entirely material -- for it has a conceptual structure which in the A-T view cannot in principle be accounted for in material terms -- but the intentionality manifest in the sub-conceptual imaginative and sensory powers of the lower animals would on the A-T view be material.)  A second possibility is to take the Cartesian dualist position that the mechanistic conception of matter is correct and that since intentionality cannot in that case be material, it must reside in an immaterial substance (or, for a property dualist, in immaterial properties).  A third possibility would be the panpsychist position that matter can possess intentionality insofar as all matter is associated with mental properties of some sort.  (This differs from the A-T view insofar as A-T would deny that thought or consciousness of any sort exists below the level of animals.  To be sure, plants and inorganic processes exhibit immanent final causality, but from the A-T point of view it is possible for something to possess inherent finality even if it is devoid of thought or consciousness.)  A fourth possibility would be to take the idealist view that there really is no such thing as matter in the first place, but only mind.

Rosenberg does not accept any of these positions.  (Indeed, he does not even consider them, much less explicitly argue against them.  In general Rosenberg seems to have little knowledge of anything written by philosophers too far outside the naturalist orthodoxy with which he is comfortable.)  But, as I have indicated, he also rejects any materialist account of meaning, “aboutness,” or intentionality.  (And rightly so.  I have criticized such accounts myself in several places, such as in chapter 7 of Philosophy of Mind, in The Last Superstition, and in many earlier posts.  I would argue that all materialist attempts to explain intentionality either fail completely or tend to be disguised versions of dualism or Aristotelianism.)  

Since Rosenberg is committed to scientism, which entails materialism, the only remaining option available to him is the eliminativist move of simply denying that meaning, “aboutness,” or intentionality is real.  To be sure (and as we have seen in an earlier post) Rosenberg has no good arguments for scientism in the first place.  But he is, I think, absolutely correct to hold that if one is going to be consistent in one’s scientism and materialism, then one is going to have to take a radical, eliminative materialist position on intentionality.  Indeed, I made the very same argument in The Last Superstition.  The difference is that whereas I presented eliminativism as a reductio ad absurdum of the naturalistic premises that lead to it, Rosenberg presents it as the sober truth which we ought to embrace, however “difficult to accept,” “counterintuitive,” “bizarre,” and indeed “unwelcome” he acknowledges it to be.  

The problem, though, is not just that denying meaning or “aboutness” is counterintuitive and that Rosenberg’s arguments for denying it are no good.  The problem is that the eliminativist position is incoherent.  It cannot possibly be right.  Now, a common but simplistic way of making this point is to accuse the eliminative materialist of expressing the belief that there are no beliefs, and thereby contradicting himself.  In a reply to critics of his “Disenchanted Naturalist” article, Rosenberg dismisses this objection as “puerile,” and he is right to insist that the eliminativist is not refuted so easily.  For it is not too hard for an eliminativist to avoid using “I believe that…” and similar locutions.  But that is not to the point.  The question is whether the eliminativist can in principle state his position in a way that entirely avoids any implicit commitment to the reality of intentionality.  And many prominent philosophers (Lynne Rudder Baker, Hilary Putnam, William Hasker, and others) have argued that this cannot be done.  Unfortunately, Rosenberg says nothing in response to these more serious critics.  He seems to think that dismissing the “puerile” version of the claim that eliminative materialism is incoherent suffices to dispatch all versions of that claim.  (Here Rosenberg seems guilty of what I have elsewhere called “meta-sophistry.”)

As I have argued in several places (e.g. in chapter 6 of The Last Superstition and here, here, and here), the trouble is that whether or not the eliminativist can avoid using locutions like “I believe that…,” many of the key notions on which his position rests nevertheless crucially presuppose intentionality in one way or another.  For instance, the notion of “illusion” plays a central role in Rosenberg’s book.  It is his main weapon, deployed again and again to deal with all the obvious counterevidence to his bizarre claims.  Yet in what sense can there be illusions, mistakes, or falsehoods of any kind given Rosenberg’s eliminativism?  For “illusion,” “mistake,” “falsehood” and the like are all normative concepts; they presuppose a meaning (whether of a thought, a statement, a model, or whatever) that has failed to represent things correctly, or a purpose that something has failed to realize.  Yet we are repeatedly assured by Rosenberg that there are no purposes or meanings of any sort whatsoever.  So, how can there be illusions and falsehoods?  For that matter, how can there be truth or correctness, including the truth and correctness he would ascribe to science alone?  For these concepts too are normative, presupposing the realization of a purpose, the accuracy of a representation.

Thus, “Water is composed of hydrogen and oxygen” is true, while “Water is composed of silicon” is false; and this is because of the meanings we associate with these sentences.  Had the sentences in question had different meanings, the truth values would not necessarily have been the same.  By contrast, “Trghfhhe bgghajdfsa adsa” is neither true nor false, because it has no meaning at all.  Yet if Rosenberg is right, “Water is composed of hydrogen and oxygen” is as devoid of meaning as “Trghfhhe bgghajdfsa adsa” is -- in which case it is also as devoid of a truth value as the latter is.  Moreover, if Rosenberg is right, every statement in Rosenberg’s book, and every statement in every book of science, is as devoid of meaning as “Trghfhhe bgghajdfsa adsa” is, and thus just as devoid of any truth value.  But then, in what sense does either science or Rosenberg’s own book give us the truth about things?

Logic is also normative insofar as inferences aim at truth and insofar as the logical relationships between beliefs and statements derive from their meanings.  “Socrates is mortal” follows from “All men are mortal” and “Socrates is a man” only because of the specific meanings we associate with these sets of symbols.  If we associated different meanings with them, the one would not necessarily follow from the others.  And if each were as meaningless as “Trghfhhe bgghajdfsa adsa” is, then there would be no logical relationships between them at all -- no such thing as the one set of symbols being entailed by, or rationally justified by, the others.  But then, if Rosenberg is right, every sentence, including all the sentences in his book and every sentence in every book of science, is as meaningless as “Trghfhhe bgghajdfsa adsa” is.  And in that case there are no logical relations between any of the sentences in either his book or any science book, and thus no valid arguments (or indeed any arguments at all) to be found in them.  So in what sense do either science or the assertions made in Rosenberg’s book constitute rational defenses of the claims they put forward?

Notions like “theory,” “evidence,” and “observation” are as suffused with intentionality as the notions of truth and logic are.  Hence if there is no such thing as intentionality, then there is also no such thing as a scientific theory, as evidence for a scientific theory, as an observation which might confirm or disconfirm a theory, etc.  Rosenberg’s scientism makes of all statements and all arguments -- scientific statements and arguments no less than moral or theological ones, and indeed every assertion of or argument for scientism itself -- meaningless strings of ink marks or noises, no more true or false, rational or irrational than bosons and fermions are.  No doubt Rosenberg would dismiss this sort of objection as “puerile” too.  But if he is to give us something more than mere abuse -- if he is to give us a rational defense of his position against the objection at hand -- then he owes us more than just a pledge to avoid using the words “I believe that…”  He owes us an explanation of exactly how notions like illusion, truth, falsity, logic, inference, evidence, observation, theory, and the like can be either reconstructed or replaced in a way that does not presuppose intentionality.  And that is something he does not give us.

In fact Rosenberg’s position is even more incoherent than what has already been said indicates, if that is possible.  A central theme of the last part of his book is that “history is bunk.”  For one thing, history as a discipline does not have the kind of predictive power physical science has, and given Rosenberg’s physics obsession, that suffices, in his view, to show that it cannot be a genuine source of knowledge.  For another thing, historical inquiry typically presupposes that people’s thoughts are “about” things, that people have purposes and plans, that this “aboutness” and those purposes and plans are part of the explanation of why people do the things they do, and so forth.  And all of that is in Rosenberg’s view false, in part, of course, because he regards purpose and intentionality as illusions:  “Science,” he says, “must even deny the basic notion that we ever really think about the past and the future or even that our conscious thoughts ever give any meaning to the actions that express them” (p. 165).  But it is also because he thinks that the correct explanation of anything that appears to be purposive, in the human world no less than in the biological realm more generally, must be a Darwinian explanation.  In particular, human artifacts and institutions must be explained in terms of adaptation, in the Darwinian sense of “adaptation.”  In the case of the products of individual effort, this is a matter of blind variation and selection at the level of neurons.  In the case of large-scale social phenomena it is a matter of variation and selection at the level of customs, political institutions, and the like.  As Rosenberg puts it, “[A]lmost everything significant in human affairs and its history… is or was an adaption” and “only Darwinian processes can produce adaptations, whether biological or social” (p. 253).

Yet if we cannot so much as think about the future, how can we make predictions?  And if we cannot make predictions, how is physics any more predictive a science than history?  If we cannot so much as think about the past, how can we even come up with (much less be confident in the truth of) evolutionary explanations of social and biological phenomena?  If the products of individuals and social institutions are Darwinian adaptations, then what reason do we have to believe that science -- which is the product of individual and social effort -- is more true than the belief systems Rosenberg rejects (religion, morality, common sense, etc.), or indeed true at all?  For Darwinian processes select for fitness, not truth or falsity.  (Moreover, judging by the extremely tenacious hold even Rosenberg admits religion, morality, and common sense have had and still have on most people, these belief systems would seem to be superior to science vis-à-vis fitness!)  If history does not give us any real knowledge, then how can the history of science give us any real knowledge?  In particular, how can we know that science really is the success story that historians (and, indeed, Rosenberg) tell us it has been historically?  Indeed, how can we know that the scientific evidence really did show what we thought it did last year, last month, or last week, let alone decades or centuries ago?  And how can we know that religion has really been as bad historically as Rosenberg and other atheists say it has been?  Not only does Rosenberg not answer these (rather obvious) objections, he doesn’t even consider them.

Rosenberg also makes use of the trendy notion of “theory of mind” to help explain the origin of the “illusion” of intentionality.  But in so doing he merely yet again makes use of the very notions he is supposed to be eliminating in the course of explaining them away.  Hence he characterizes the “theory” of mind as “the ability to predict at least some of the purposeful-looking behavior of other animals” (p. 198, emphasis added), notes that parents, in applying this “theory,” “start treating [their] baby’s thoughts as being about stuff” (p. 202, emphasis added), and that this and further applications of this “theory” are the source of the “illusion” that thoughts have “aboutness.”  But of course, in the ordinary senses of the words, applying a “theory,” “predicting,” taking something to “look” a certain way, “treating” something as having a certain significance, and (as we have already noted) being subject to “illusion,” are all ways of representing things as being a certain way, whether correctly or incorrectly.  And representation presupposes “aboutness” or intentionality.  Hence one can hardly coherently appeal to these notions in the course of trying to show that intentionality is an illusion, unless one explains exactly how each one of them can both be cashed out in non-intentional terms and still do the work the eliminativist needs them to do.  And once again, this is precisely what Rosenberg does not do.

What Rosenberg does do is to offer some analogies in an attempt to make his position seem less implausible.  They all fail miserably.  For example, he suggests that alterations in the neural activity of a sea slug generate only new habits of behavior but nothing with the intentionality we take our thoughts to have.  But the nervous systems of rats, he says, differ from those of sea slugs only in degree, and ours in turn differ from those of rats only by further degrees.  Thus, Rosenberg concludes, there is no more reason to attribute “aboutness” or intentionality to us than there is reason to attribute it to rats or sea slugs.  But the problems with this argument are obvious.  For one thing, human beings, unlike rats and sea slugs, possess language, write books, engage in philosophical and scientific disputes, and carry out other activities that even Rosenberg would acknowledge seem to involve intentionality, and indeed are generally regarded as the paradigms of intentionality.  So it is no good for Rosenberg to insist against his critics that a comparison of human beings to sea slugs and rats shows that the former have no more intentionality than the latter do, unless he has already, independently shown that these apparently intentional human activities do not really involve intentionality after all.  Otherwise the critic can insist that these distinctively human activities show that the analogy is no good.  Yet the comparison with sea slugs and rats was itself supposed to show that human beings lack intentionality.  Hence the comparison simply begs the question.  Furthermore, whether neuroscience gives us the whole story about human thought and behavior is itself part of what is at issue between Rosenberg and his critics.  Hence to claim that the absence of relevant neurological differences between sea slugs, rats, and human beings shows that there is no difference at the level of intentionality is, once again, simply to beg the question.

Rosenberg attempts another analogy, between human beings and computers.  A computer, he says, can do things like give the correct answers to Jeopardy questions even though “its electronic circuits [aren’t] about anything, including about how to play Jeopardy” (p. 188).  So, if the computer can store “information” without its states being about anything, so can our brains.  Here too the problem with this argument should be obvious.  For one thing, it is false to say that the states of a computer aren’t about anything; they do have intentionality, even though it is only derived intentionality, like the intentionality of words.  And that is why what they do counts as storing “information” about the answers to Jeopardy questions and the like: Human beings designed them to do that, imparting this informational content to their internal states just as we impart meaning to words.  And we were able to do that because we have intentionality in an intrinsic or underived way.  Of course, Rosenberg will deny that that is what we have done, and will deny that there really is intentionality of either a derivative sort or an intrinsic sort.  But the point is that the computer analogy was itself supposed to help to show that there is no such thing as intentionality.  Hence for Rosenberg to deny intentionality as a way of salvaging the computer analogy would simply be to argue in a circle.  He would be appealing to the purported absence of intentionality in computers to bolster the claim that there is no intentionality of any sort, even in us -- and then appealing to the general non-existence of intentionality in order to show that computers in particular don’t have it.

(It should also be emphasized that Rosenberg is fooling himself if he thinks he can help himself to notions like “information,” “computation,” and the like so long as he avoids attributing “aboutness” to the states of a computer.  For notions like “computation,” “information,” “software,” “program,” “symbolic processing,” etc. themselves all presuppose intentionality, for reasons made clear by writers like John Searle and Karl Popper and which we surveyed in a recent post.  Computational notions are intentional through and through, not merely where questions about the specific content of a particular computational state are concerned.  This is not the first time Rosenberg has made this mistake, as we saw when examining his book Darwinian Reductionism.)

A final analogy Rosenberg appeals to is one that purportedly holds between thought and motion pictures.  Movies create the illusion of motion; in reality they are but a series of still photographs projected in rapid succession.  Similarly, Rosenberg says, the collection of neural circuits in our brains creates the illusion of “aboutness” or intentionality, whereas in reality “None of them is about anything; each is just an input/output circuit firing or not” (p. 191).  Once again there are several problems with the proposed analogy.  First of all, it is well understood how a sequence of still images can produce the illusion of motion.  But Rosenberg doesn’t explain the mechanism by which the firing of “input/output circuits” generates the “illusion” of intentionality.  He does say that it is because the outputs of the circuits are “appropriate” to their “specific circumstances” that we suppose them to be “about” those circumstances.  But in what respect are they “appropriate”?  It can’t be that they accurately represent those circumstances, since that would entail, not the illusion of intentionality, but the actual existence of intentionality.  

Nor would it do to say that the outputs of the circuits are caused by those circumstances.  For one thing, there are all sorts of causal factors that might enter into the generation of any instance of neural activity, some contemporaneous with one another, others tracing backward in time indefinitely.  What exactly makes the “specific circumstances” in question (whatever they are) stand out among the other causal factors so as to make the neural activity “appropriate” to them in particular?  (The problem the physicalist has in drawing a principled distinction between “causes” and mere “background conditions” in a way that avoids any reference to intentionality is one that Karl Popper and Hilary Putnam have emphasized, and which I discussed in detail in a recent article on Hayek and Popper.)  For another thing, the neural activity in a sea slug is also no doubt “appropriate” to its “specific circumstances,” yet apparently does not in Rosenberg’s view generate even the illusion of intentionality.  So why and how exactly does our neural activity generate this “illusion”?

A further problem with the analogy (one that Rosenberg himself acknowledges) is that in the case of motion pictures, we have real motion to compare the illusion of motion to.  There may not be real motion in the movie itself, but there is real motion elsewhere, and it is precisely by contrast with it that we can see that the motion the movie seems to present is illusory.  But where intentionality is concerned, Rosenberg says there is no such thing anywhere.  So it is hard to see exactly what Rosenberg is comparing the purportedly ersatz intentionality of our thoughts to when he judges it to be merely illusory.  In fact it is hard to see what it would be even to have the notion of intentionality (whether one considers it an illusion or not) without thereby exhibiting intentionality.  (“Having a notion” or “having a concept” are, after all, themselves intentional notions.)  Indeed, as we have already seen, the very notion of “illusion” itself seems to presuppose intentionality, so that whereas it is easy enough to understand what it means to say that some motion is illusory, it is difficult to see what it could mean to say that all intentionality is illusory.  To be sure, Rosenberg admits the analogy is “imperfect.”  But it is far worse than merely imperfect.  For in drawing the analogy, Rosenberg does absolutely nothing to address the problems of coherence that the critic raises for eliminativism, but instead only offers a further illustration of those problems!

Apparently some editor (or perhaps Rosenberg himself) could see that there is a serious difficulty here, at least rhetorically.  For at the end of the main chapter of Atheist’s Guide devoted to defending the eliminativist position on intentionality (chapter 8), I find that there are in the final, published version two paragraphs absent in the advance reading copy I was sent when I reviewed the book for First Things.  In this new material Rosenberg acknowledges that among his readers, there will be “philosophers [who] are muttering” that his position is “worse than self-contradictory” -- that it is “incoherent” insofar as it entails that “every sentence in [his own] book” is not “about anything.”  To this objection Rosenberg replies:

Look, if I am going to get scientism into your skull I have to use the only tools we’ve got for moving information from one head to another: noises, ink-marks, pixels.  Treat the illusion that goes with them like the optical illusions [discussed earlier in the book].  This book isn’t conveying statements.  It’s rearranging neural circuits, removing inaccurate disinformation and replacing it with accurate information.  Treat it as correcting maps instead of erasing sentences. (p. 193)

But it goes without saying that this is no response at all, but just yet another illustration of, rather than an answer to, the problem.  For “illusion,” “information,” “disinformation,” “accurate,” “inaccurate,” “correct,” and “map,” are, like “truth,” “falsity,” “inference,” “entailment,” “theory,” “evidence,” “observation,” etc., all notions that presuppose intentionality.  And if those notions can be reconstructed or replaced in a way that avoids any implicit commitment to intentionality while still doing the work Rosenberg needs them to do, then he needs to show us how this can be done -- and he never does.  All he does is to replace one bit of intentional language with another.  That does absolutely nothing to solve the problem; it just moves it around, like the pea in a shell game.  The trick is to break out of the circle of intentional notions entirely and consistently.

I suppose it is only fair to note that in an email Rosenberg sent me after my First Things review appeared, he complained that in accusing him of incoherence I had unfairly portrayed him as having made a “callow undergrad mistake,” and that I should instead have “[tried] refuting teleosemantics and other nonrepresentationalist accounts of the propositional attitudes.”  There are several things to be said in response to this:

1. Rosenberg does not explain what “callow undergrad mistake” it is that I have falsely accused him of making.  I certainly have never made what he rightly calls the “puerile” charge that he claims to believe that there are no beliefs.  His incoherence isn’t quite that obvious.  Still, that his position is incoherent in a less direct way is something I have now documented at length -- not only in the current series of posts but in my earlier posts on his “Disenchanted Naturalist” article, as well as (more briefly) in my First Things review.  If I have somehow gotten him wrong, it should be easy enough to explain exactly how I have.

2. For several reasons, it is very odd for Rosenberg to complain that in my review I should have tried to refute “teleosemantics and other nonrepresentationalist accounts of the propositional attitudes.”  First of all, teleosemantics -- a naturalistic approach to intentionality associated with philosophers like Ruth Millikan, Fred Dretske, and others -- is generally regarded as a reductionist position, not an eliminativist position.  Nor is it an approach Rosenberg himself actually appeals to in his book in defense of his eliminativism.  Indeed, where Rosenberg does bother to mention Millikan, Dretske, et al. in the book -- in the section at the end giving recommendations for further reading -- he himself characterizes their approach as reductionist rather than eliminativist, and says that “it didn’t work”!  Now, perhaps Rosenberg thinks that there are insights from teleosemantics and related views which can be salvaged in defense of eliminativism.  But if so, he should have made this claim, and defended it, in the book itself, rather than (as he actually did) giving precisely the opposite impression.  (Is a book reviewer expected to do the author’s work for him on pain of being accused of unfairness?)  

3.  As it happens I have in several places explained why teleosemantic and other naturalistic approaches cannot help to salvage Rosenberg’s position (here, here, and here).  I have also criticized Dretske’s approach in an earlier post, Millikan’s approach in Philosophy of Mind and The Last Superstition, and other naturalistic approaches to intentionality in my article "Hayek, Popper, and the Causal Theory of the Mind" and in other earlier posts.  

In her book Saving Belief, Lynne Rudder Baker aptly characterized eliminative materialism as a kind of “cognitive suicide.”  As anyone who has seen the science-fiction movie Scanners knows, the destruction of a brain is not a pretty thing.  (Extreme content warning on that YouTube clip.)  Good thing for Rosenberg, then, that the mind and brain aren’t identical!