Thursday

Unbroken and the problem of evil

I recently finished Laura Hillenbrand’s terrific new book Unbroken, the story of Louis Zamperini, 1936 Olympian and prisoner of war under the Japanese during WWII. I was compelled to buy a copy after reading an absolutely gripping excerpt in Vanity Fair, which described the harrowing 47 days Zamperini and his fellow airman Russell Phillips spent adrift at sea after their plane went down in the Pacific and before they were picked up by the Japanese. You can read it yourself here. After doing so you might think that a human being could endure no greater suffering than Zamperini and Phillips did as castaways. You would be wrong, as the rest of the book makes clear.

Unbroken is the sort of book which might provide a useful real-life “Exhibit A” supplement to the standard philosophical readings in a course on the problem of evil. The unbelievably relentless, concentrated, years-long deprivation and cruelty Zamperini suffered, first at sea, and then in a series of notoriously brutal Japanese prisoner of war camps, give the lie to any facile theodicy. I have argued that the existence of even the worst evils gives us absolutely no reason whatsoever to doubt the existence and goodness of the God of classical theism. In that sense the problem of evil poses no intellectual difficulty for theism. But I have also insisted that evil poses an enormous practical difficulty, because while we can know with certainty that God has a reason for allowing the evil He does, we are very often simply not in a position to know what that reason is in this or that particular case. We can know some of the general ways in which good can be drawn out of evil – our free choices have a significance that they would not have otherwise; we can make of our sufferings an opportunity for penance for the sins we have committed; we are able to develop moral virtues such as patience, gratitude, courage, compassion, and so forth – but we cannot expect always to know why this specific child was allowed to be raped and murdered or that specific village was allowed to be destroyed by an earthquake. Or why men like Zamperini – many of whom did not live to tell their stories – were permitted to endure what, even in light of the general considerations just mentioned, seems sheer “overkill.”

I was a student at Claremont Graduate School at the tail end of John Hick’s time there and the beginning of the late D. Z. Phillips’ tenure. Phillips was critical of Hick’s famous “soul-making” theodicy. I remember his mocking impression of God as a kind of moral personal trainer: “Here you go, a bit of cancer should help toughen you up!” As Phillips’ jokes tended to be, this was both funny and somewhat unfair – Hick is not a man prone in any way to minimize human suffering, and I don't think he would claim that we can identify a “soul-making” function for each and every instance of evil. All the same (and as I’m sure Hick himself would agree), we must not let our attempts to understand God’s reasons for allowing evil lead us to sentimentalize evil, to pretend that “Buck up, old chap, it’s all for the greater good!” should suffice to soothe just anyone’s pain.

At the same time, it is also possible to lapse into sentimentality on the other side. We all know of the sort of embittered atheist who has suffered far less than a Louis Zamperini and yet who goes about his life with a metaphysical chip on his shoulder – “God done me wrong!” or even “Maybe I’ve been lucky, but look what God has let other people suffer through!” Don’t misunderstand: I have known people who have abandoned religion because of the real suffering they endured, and for whom I feel compassion. But I have also known people whose appeal to the problem of evil has seemed to me an exercise in self-righteous rationalization. “What a compassionate person I am for rejecting a God who would allow such evil, and how cold-hearted you religious people are for not doing so!” – that sort of thing. And I have also known people who have suffered enormously – in one case, to a degree that would make for a book worthy of the Laura Hillenbrand treatment – and yet whose faith in God has been their refuge.

Certainly Zamperini is the sort of man who would seem justified in a life of bitter fist-shaking at God, if anyone would. Indeed, as his suffering continued for years after the war – flashbacks, continual nightmares, the end of his athletic career as a result of an injury suffered during his captivity, listlessness, and an unquenchable thirst for revenge – Zamperini’s attitude toward religion was for some time one of hostility. But then he had a religious conversion, after hearing Billy Graham preach in Los Angeles. His flashbacks, nightmares, and listlessness ended. He traveled to Japan, visited his tormenters in prison, forgave them, and his life in the decades since – he is now in his nineties – seems to have been one of real joy.

Why does one man survive and even flourish in the face of suffering, while another is shattered by it? We cannot presume to judge the latter; God alone can do that. But neither can we dismiss the testimony of the former. Louis Zamperini’s story should warn us against sentimentality of either a religious sort or an atheistic sort. For it illustrates both that there is evil the point of which is simply beyond our understanding and that there is no evil that of itself need break a man. We simply cannot know in every case what God is up to. But He knows, and sometimes knowing that He does has to be enough.

Tuesday

Hume, cosmological arguments, and the fallacy of composition

Both critics and defenders of arguments for the existence of God as an Uncaused Cause often assume that such arguments are essentially concerned to explain the universe considered as a whole. That is true of some versions, but not all. For instance, it is not true of Aquinas’s arguments, at least as many Thomists understand them. For the Thomist, you don’t need to start with something grand like the universe in order to show that God exists. Any old thing will do – a stone, a jar of peanut butter, your left shoe, whatever. The existence of any one of these things even for an instant involves the actualization of potencies here and now, which in turn presupposes the activity of a purely actual actualizer here and now. It involves the conjoining of an essence to an act of existence here and now, which presupposes a sustaining cause whose essence and existence are identical. It involves a union of parts in something composite, which presupposes that which is absolutely simple or incomposite. And so forth. (As always, for the details see Aquinas, especially chapter 3.)

Criticisms of First Cause arguments that assume that what is in question is how to explain the universe as a whole are therefore irrelevant to Aquinas’s versions. Still, those versions of the cosmological argument which are concerned with explaining the universe as a whole are also important. One objection often raised against them is that they commit a fallacy of composition. In particular, it is claimed that they fallaciously infer from the premise that the various objects that make up the universe are contingent to the conclusion that the universe as a whole is contingent. What is true of the parts of a whole is not necessarily true of the whole itself: If each brick in a wall of Legos is an inch long, it doesn’t follow that the wall as a whole is an inch long. Similarly, even if each object in the universe is contingent, why suppose that the universe as a whole is?

There are two problems with this objection. First, not every inference from part to whole commits a fallacy of composition; whether an inference does so depends on the subject matter. If each brick in a wall of Legos is red, it does follow that the wall as a whole is red. So, is inferring from the contingency of the parts of the universe to that of the whole universe more like the inference to the length of the Lego wall, or more like the inference to its color? Surely it is more like the latter. If A and B are of the same length, putting them side by side is going to give us a whole with a length different from those of A and B themselves. That just follows from the nature of length. If A and B are of the same color, putting them side by side is not going to give us a whole with a color different from those of A and B themselves. That just follows from the nature of color. If A and B are both contingent, does putting them together give us something that is necessary? It is hard to see how; indeed, anyone willing to concede that Lego blocks, tables, chairs, rocks, trees, and the like are individually contingent is surely going to concede that any arbitrary group of these things is no less contingent. And why should the inference to the contingency of such collections stop when we get to the universe as a whole? It seems a natural extension of the reasoning, and the burden of proof is surely on the critic of such an argument to show that the universe as a whole is somehow non-contingent, given that the parts, and collections of parts smaller than the universe as a whole, are contingent.
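
To make the logical shape of the dispute explicit, here is the contested move in schematic form. The notation is just shorthand of my own, with E(x) read as “x exists” and U standing for the universe; nothing here is drawn from the authors under discussion.

```latex
\[
\underbrace{\text{for each part } x \text{ of } U:\ \Diamond\,\neg E(x)}_{\text{each part is contingent}}
\;\;\stackrel{?}{\Longrightarrow}\;\;
\underbrace{\Diamond\,\neg E(U)}_{\text{the whole is contingent}}
\]
```

The step marked with the question mark is not a theorem of modal logic, which is why the critic calls it a composition fallacy; the reply just given is that contingency behaves like the color of the bricks rather than their length, so that the burden is on the critic to explain why putting contingent parts together should ever yield a non-contingent whole.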

So, that is one problem. Another problem is that it isn’t obvious that the sort of cosmological argument that takes as a premise the contingency of the universe needs to rely on such part-to-whole reasoning in the first place. When we judge that a book, an apple, or a typewriter is contingent, do we do so only after first judging that each page of the book, each seed in the apple, each key of the typewriter, and indeed each particle making up any of these things is contingent? Surely not; we can just consider the book, apple, or typewriter itself, directly and without reference to the contingency of its parts. So why should things be any different for the universe as a whole?

If anything, it is certain critics of the sort of argument in question who seem more plausibly accused of committing a fallacy of composition. Consider this famous passage from Hume’s Dialogues:

Did I show you the particular causes of each individual in a collection of twenty particles of matter, I should think it very unreasonable, should you afterwards ask me, what was the cause of the whole twenty. This is sufficiently explained in explaining the cause of the parts.

(Paul Edwards makes a similar objection – see the “five Eskimos” example in this famous article. We considered some problems with some of Edwards’ other criticisms of the cosmological argument in an earlier post.)

The reasoning couldn’t be plainer: If you explain each part of a collection, you’ve explained the whole. Therefore (so this sort of objection to the kind of cosmological argument in question continues) if we can explain each individual thing or event in the universe as the effect of some previous thing or event in the universe, we’ve explained the whole collection of things or events, and needn’t appeal to anything outside the universe. And yet, as we saw in a previous post, to identify the immediate efficient cause of each thing in a collection simply is not necessarily to explain the collection as a whole. If a certain book exists because it was copied from an earlier book, the earlier book existed because it was copied from a yet earlier book, that book existed because it was copied from a still earlier book, and so on, we will hardly have provided a sufficient explanation of the series of books if we suppose either that it has extended backward into the past to infinity or that, via time travel, it forms a causal loop. So, hasn’t Hume himself committed a fallacy of composition?

A defender of Hume might reply as follows: It is only when each part of a collection has been sufficiently explained that the Humean claims it follows that the whole collection has been explained; and in the counterexamples in question (the book example and others of the sort explored in the previous post) each part clearly hasn’t been sufficiently explained but only partially explained (because, say, the origin of the information contained in the book still needs to be explained). So (the proposed reply continues) the Humean would not be committed to saying, falsely, that the whole collection has been explained in such cases.

This saves the Humean critique from committing the fallacy of composition, but only at the cost of making it question-begging. For a defender of the sort of cosmological argument we’ve been discussing could happily agree that if each part of a collection has been sufficiently explained, then the whole collection has been explained as well. He just thinks that to identify an immediate contingent cause for each contingent thing or event in the universe is not to give a sufficient explanation of it. If the Humean disagrees, then he needs to give some reason why identifying such a cause would be sufficient (again, especially given what was said in the previous post). Merely to assert that it would be sufficient – which is all Hume does, and which is all that is done by those who quote Hume as if he had made some devastating point – simply assumes what is at issue.

Thursday

Putting the Cross back into Christmas

It is difficult to be a human being. Illness, injury, death, bereavement, depression, frustrated hopes, unfulfilled dreams, unrequited love, despair, humiliation, hunger, nakedness, want of every kind – the usual illustrations of the problem of evil provide ample evidence of this. The point applies no less to those relatively untouched by such misfortunes. For they are more prone than their fellows to become complacent, superficial, ungrateful, and selfish – an even graver misfortune, and one that tends to lead us into the lesser ones after all. But every human being has his own distinctive moral weaknesses. It is difficult to be a human being because it is difficult to be a good human being – a human being who flourishes, who fulfills the various ends nature has set for us, whether they be our animal ends or our higher, rational and moral ends.

That is itself, in a sense, our natural lot. Nature determines the good for us and obliges us to pursue it. But she has put us in circumstances that make its fulfillment far from easy. We share the world with bacteria, viruses, and wild animals, with earthquakes and floods, and with other human beings who share our limitations. What we need for our fulfillment is there for the getting, but actually to get it takes fortitude, hard work, hard thinking, and being in the right place at the right time. Being a human being can sometimes seem like being a humble sperm cell – billions upon billions with the same end, and only a tiny fraction ever realizing it. Or it would seem that way if we did not also by nature have immortal souls. The light of reason tells us that there is a God, that He is good, and that the sufferings of this life are not the end of the story for us. Thus does nature give some consolation even in the face of the obstacle course she has set before us. But only some – in part because she gives us no details about the life to come, and in part because even what she does tell us she tells us only under the best of cultural circumstances. Understanding natural theology requires some leisure and philosophical wherewithal. It also helps to live in an age which isn’t as intellectually decadent as ours is.

Original sin involves, in part, the loss of the supernatural assistance that would have removed the various difficulties of our natural state. Nature as God made her was good, if austere in the ways described. Nothing beyond what she gave us was “owed” to us. But God would have given us more anyway, by His grace – would have added to what had already been given us by nature, so as to enable us to get around her obstacles – if not for the Fall. The restoration of this supernatural gift is part of the meaning of the Incarnation, and thus part of the meaning of Christmas. But there is more to it than the restoration itself. As Aquinas says, the Incarnation was not in the strictest sense necessary for remedying the Fall, since God by His infinite power could have accomplished this another way. But it was necessary in a weaker sense, insofar as there was no more fitting way for it to be accomplished. (ST III.1.2) Quoting Augustine, Aquinas gives as one of several reasons it was most fitting the consideration that "Nothing was so necessary for raising our hope as to show us how deeply God loved us. And what could afford us a stronger proof of this than that the Son of God should become a partner with us of human nature?"

The problem of evil poses no intellectual difficulty for classical theism, in part because we have no reason whatsoever to believe that God cannot draw an outweighing good out of even the worst evils we suffer, and every reason to believe that He can and will. But it is an enormous practical difficulty, one that Christian theology remedies in a way mere philosophy cannot. Reason tells us to trust in God, but reason is cold, and falters in the face of a dying child. Yes, we are rational animals. But we are rational animals – creatures of flesh and feeling as well as of thought. And it is simply difficult to be a rational animal, a human being – to bleed, to feel one’s heart break, to suffer. The Son of God in His divine nature is beyond all that. Yet He took on human nature anyway, so that we poor men and women would not suffer alone. In Jesus Christ the God of the philosophers wears a human face. And in the end, “He will wipe away every tear from their eyes” (Rev 21:4). But not before crying some of them Himself, on a cross, and in a manger.

Wednesday

The Long Rain

It’s been raining for days and days here in L.A., and I can’t stop thinking of Ray Bradbury’s classic short story “The Long Rain.” Bradbury no doubt gets the physics, geology, and biology of a world of endless rain quite wrong – I don't think he ever claimed to be a hard SF writer – but it’s a terrific story all the same. It’s been filmed a couple of times, once as a segment of the movie version of The Illustrated Man, and once as an episode of The Ray Bradbury Theater. Both versions are well done, but the only one I can find online is the former. (You’ll have to follow the link to “The Illustrated Man (1969) Part 8” at the end for the conclusion. What you see below starts abruptly, but it’s only a couple of minutes into the segment, which begins at the tail end of “Part 6.”)

Tuesday

Haldane on Hawking

John Haldane responds to The Grand Design by Stephen Hawking and Leonard Mlodinow, in the latest issue of First Things.

Saturday

Heil and Mumford on contemporary academic philosophy

John Heil, from the preface to From an Ontological Point of View (Oxford University Press, 2003):

Philosophy today is often described as a profession. Philosophers have specialized interests and address one another in specialized journals. On the whole, what we do in philosophy is of little interest to anyone without a Ph.D. in the subject. Indeed, subdisciplines within philosophy are often intellectually isolated from one another…

The professionalization of philosophy, together with a depressed academic job market, has led to the interesting idea that success in philosophy should be measured by appropriate professional standards. In practice, this has too often meant that cleverness and technical savvy trump depth. Positions and ideas are dismissed or left unconsidered because they are not comme il faut. Journals are filled with papers exhibiting an impressive level of professional competence, but little in the way of insight, originality, or abiding interest. Non-mainstream, even wildly non-mainstream, conclusions are allowed, even encouraged, provided they come with appropriate technical credentials.

Stephen Mumford, in his contribution to Metaphysics: 5 Questions, edited by Asbjørn Steglich-Petersen (Automatic Press, 2010):

Since philosophy has become professionalized, I think few stones have been left unturned. Rather than subjects being neglected, I think there are more topics that have received too much attention. Most of the journals are filled with material that but a few people will ever read and which I think will not stand the test of time. The problem is that in various ways professional philosophers are obliged to publish, whether they have anything new and substantial to say or not. I would really like to see the journal editors take a lead in this respect and stop publishing papers on the negative basis of them making the fewest errors or fewest controversial claims and start publishing on the positive criterion of them having something important or interesting to say…

I like papers that offer bold new insights but it is all too rare that one finds them. The system of edited, peer-reviewed journals is an inherently conservative one where paradigm-challenging work is very unlikely to be accepted because it threatens the interests of the editor and referees…

I think contemporary philosophy has become too self-congratulatory, with an arrogant self-assurance that the work we are producing is vastly superior to that of the interested amateurs of the past. But has anyone of late produced as fine and appealing a work as Hume’s Treatise or Locke’s Essay? On the contrary, I fear that in future centuries, the current era will be looked upon as a philosophical dark age where very little of interest was authored.

No comment, except to invite comparison with what one might gather about the careerist mentality that prevails in much of “the profession” from Michael Huemer’s sobering advice to aspiring grad students in philosophy. (Here’s your homework assignment: Compare “advancing in the profession,” as that is understood today, and “the love of wisdom,” with reference to the dispute between Socrates and the Sophists.)

Friday

Even I don’t think it’s THAT good…

I see that a dealer at Amazon is selling the (currently out-of-stock) hardback of The Last Superstition for – wait for it – $999.99. Ridiculous, no? Especially given that several other dealers are pricing it in the bargain basement $150 range (!)

Seriously, what’s the deal? I’ve seen weird prices like this before at Amazon, and I assume that second-hand dealers have some automatic, computerized system for jacking up the price on out-of-stock books. But who’s going to buy a copy of any recent book for a thousand bucks, let alone my little tome? What’s the point of leaving a book listed online at such a ridiculous price? Anyone out there know how this works?

Just to play it safe, though, you might want to have that hardback copy of TLS CGC graded, stick it in a Mylar bag, and store it in a humidity controlled safe deposit box between your copies of Vault of Horror #12 and Amazing Fantasy #15. Meanwhile, the paperback is available for a sane $12.92.

Thursday

Causal loops, infinite regresses, and information

On a reader’s recommendation, the wife and I took in the 2007 Spanish science fiction movie Timecrimes last weekend. Great flick. It’s a time travel story similar in structure to Heinlein’s “By His Bootstraps” (which I discussed in a recent post), though the plot is very different. Hector, the movie’s protagonist, spies through his binoculars a woman removing her shirt in the woods beyond his house. After going out to investigate, he comes upon the woman lying naked and motionless, and is then suddenly stabbed in the arm by an attacker whose head is wrapped in bloody bandages. (Since this is a family blog of sorts, I suppose I should alert the unwary viewer lest he be temporarily blinded by the rather bright pair of headlights that appears onscreen a couple of times, and I don’t mean the ones on Hector’s car. Good thing my wife was there to shield my eyes!)

If you haven’t seen the movie and don’t want it (partially) spoiled for you, don’t read any further. Fleeing through the woods in a panic, Hector stumbles upon a laboratory complex and makes his way inside one of the buildings to hide from his attacker. After bandaging his arm, he finds a walkie-talkie and makes contact with a scientist in another building on the grounds, who tells Hector that he can see his attacker approaching the building Hector is in and urges him to exit at the other side and make his way to where the scientist is. When he meets up with the scientist, the latter convinces Hector to hide inside a strange machine, which turns out to be a time machine. Hector is transported an hour or so into the past and – you guessed it – through a complex series of events it is revealed that the bandaged man who attacked him earlier was none other than Hector’s future self. And that’s just the beginning of the story’s Heinlein-style twists.

Now, one traditional objection to the possibility even in principle of time travel is that it would seem to entail that one might travel back in time and kill one’s younger self. This seems obviously impossible, for if one killed one’s younger self, then one would no longer be around at the point when one was supposed to be entering the time machine in order to go back and carry out the killing. In response, it is sometimes suggested that such paradoxes can be avoided by appealing to the “many-worlds” interpretation of quantum mechanics and construing the scenario in such a way that your killing of your younger self results, not in the annihilation of the version of you who goes back and does the killing, but rather in the branching off of an alternate universe in which another version of you does not live past the time in which your future self kills you (though of course you do live past that time in the universe from which this new one branched off, which is why you can travel back to do the killing).

This assumes, though, that we should be thinking of time travel as a way in which one might change the past. And of course, many time travel stories do make this assumption – Ray Bradbury’s short story “A Sound of Thunder” is one example, and the Back to the Future series of movies is another. But other time travel stories work on the assumption that the past is fixed, and that even if one travels back in time and tries to change the past, one will find that one’s circumstances do not allow one to do so – indeed, the time traveler will himself turn out to be the cause of certain past events, perhaps even the events he hoped to change. The Heinlein stories cited in my previous post on this subject work on this assumption, as do movies like 12 Monkeys and Timecrimes. In Timecrimes, Hector takes pains to ensure that certain past events happen exactly as he remembered them, and while trying to prevent certain other past events from occurring, he finds that he has inadvertently caused them to happen precisely in the attempt to change history. In 12 Monkeys, Bruce Willis’s time-traveling future self turns out to be the man he had, as a young boy, witnessed getting shot in an airport. The sex-changing time traveler in Heinlein’s “—All You Zombies—” is his/her own father and mother. In “By His Bootstraps,” Heinlein’s protagonist is manipulated by his future self into becoming that future self.

Now, paradoxes of the sort represented by the suicidal time traveler are avoided on this sort of scenario. But other oddities replace them. For example, Hector makes his way into the woods, thereby setting out on his time-traveling adventure, only because he saw a woman there removing her shirt. But it turns out that she was removing it only because Hector’s future self forced her to, precisely so as to guarantee that his earlier self would see her. Jane in “—All You Zombies—“ is born because his earlier, female self was impregnated by his later, male self. But those later selves only existed in the first place because Jane was born. In “By His Bootstraps,” Bob Wilson learns a heretofore unknown language from a notebook that he takes from the future and later recopies after it wears out. But it turns out that the notebook he had taken was none other than a future, still unworn stage of the very copy he had made of it.

None of these cases involves the kind of contradiction seemingly entailed by the suicidal time traveler example. But they do involve a kind of circularity. Is it vicious? Some theorists of time travel seem to think not, or at least allow that such scenarios are in principle possible in a way the suicidal time travel scenario is not (short of the “many-worlds” interpretation, that is – though even then, it seems it is not really himself that the time traveler kills, but only an alternate version of himself). And yet there is obviously something very fishy about these scenarios. It is perhaps easiest to see what is wrong in the notebook example from “By His Bootstraps”: The earlier stage of the notebook came from the later stage, and the later from the earlier. But what about the information embodied in the notebook? Why did the notebook have just the content it did, rather than some other content or no content at all?

But something similar can be said about the other cases. Hector’s later self knew from memory that the woman would be visible to his earlier self from precisely such-and-such a spot in the woods, and therefore took steps to make sure that she would stand there. But his earlier self was able to acquire the information about her location by observation, and thereby to form those memories, only because his later self had already had the information in question. So why was it precisely that information about her location that got transmitted from the past to the future and then back to the past? Jane’s genetic information came from his/her “mother” and “father,” but they are really just later versions of her. So why was it exactly that genetic information that gave Jane his/her distinctive biological features? Why did Jane have that specific height and hair color, those specific behavioral predilections, and so forth – indeed, why was he/she a human being at all rather than a dog, or a blade of grass, or something inorganic? It is because we need an account of the informational content of these temporally looped events that merely noting that each event was generated by another is insufficiently explanatory.

Now, notice that exactly the same point applies, even if perhaps less obviously, when we consider an infinite regress of events into the past rather than a temporal loop. In “On the Ultimate Origination of Things,” Leibniz notes that if we were told that a certain geometry textbook had been copied from an earlier copy, that one from an earlier one still, that one from a yet earlier copy, and so on infinitely into the past, we would hardly have a sufficient explanation of the book we started out with. Moreover, why does the series of books as a whole exist at all, and why does it have precisely the content it has rather than some other content? Tracing the series of causes backward forever into the past seems to leave the most important fact about the phenomenon to be explained untouched, no less than the time travel causal loop scenarios do.

Such time travel scenarios are philosophically significant, then, insofar as they illustrate how fatuous is Hume’s suggestion (made in the course of criticizing cosmological arguments) that identifying the immediate cause of each thing or event in some series of things or events suffices to explain the series itself. This is obviously insufficient in the time travel case, and there is no reason whatsoever to think it sufficient in the infinite temporal regress case. Even if time-travel-generated causal loops were possible in principle, we would still need to appeal to something outside them in order to account for the specific information content that is passed from link to link in the loop. And even if an infinite regress of causes into the past were possible in principle, we would still need to appeal to something outside of it in order to explain the specific information content that is passed from one link to another throughout the infinite series.

But is the example of Leibniz’s book misleading? Do series of causes in the natural order, where books and other artifacts are not in question, really involve transfers of information? Indeed they do. As the Aristotelian-Thomistic (A-T) metaphysician holds, each efficient cause inherently points beyond itself to its typical effect or range of effects, the generating of which is its final cause; and when this efficient causation involves the generation of a new substance, that substance will have causal powers of its own, and it will have inherited them from its causal ancestors. The “information” concerning the outcomes to which such causal powers point exists at each stage of the lifespan of each entity which has the powers, and it is transferred to each new substance which has the same powers. But you don’t need to be an A-T metaphysician or to use such Scholastic language to see that there are such transfers of information. It is evident also from the work of contemporary anti-Humean metaphysicians and philosophers of science like C. B. Martin, John Heil, George Molnar, Brian Ellis, and others, who advocate a metaphysics of causal powers which are “directed at” their manifestations (which is just what the Scholastics meant by final causality, whether all of these contemporary writers are aware of the parallels to Scholasticism or not). And it is evident from the “information” talk that contemporary physicists, biologists, and “naturalistic” philosophers are constantly helping themselves to – even though such talk is intelligible only if the natural world is not after all devoid of inherent “directedness” or finality, as the early moderns who overthrew Scholasticism are widely but falsely thought to have shown that it is.

(As my longtime readers know, this is not to deny that the early moderns made genuine scientific advances. The point is rather that their philosophical novelties were disastrously wrong, and the physicists, biologists, and anti-Humean metaphysicians and philosophers of science just alluded to are in effect reinventing the metaphysical wheel that Aristotelians and Scholastics had already perfected centuries ago. That more people don’t see this owes to (a) a failure to distinguish Aristotelian physics, which was refuted by the moderns, from Aristotelian metaphysics, which was not, (b) an uncritical acceptance of various anti-Scholastic clichés and straw men, and a consequent failure to understand what Aristotelians and Scholastics really mean by “substantial form,” “final cause,” and the like, and (c) a vested interest in upholding the historical myth that Scholastic metaphysics – and, more to the point, the theological system it upheld – were somehow undermined by modern science. But that is a story I’ve told at length elsewhere, most notably in The Last Superstition.)

Of course, ID theorists have tried to make hay out of the idea that there is information in the natural order, but they muddy the metaphysical waters by talking about “probabilities,” “complexity,” “specified information,” and the like. The point has nothing to do with such hoo-hah, nor with either rejecting or accepting Darwinian accounts of this or that biological phenomenon. It is at once much simpler, much deeper, and more conclusive than all that. Nor is it some lame anthropomorphic “designer” or other that an explanation of the information embedded in nature requires. In fact it requires nothing less than the God of classical theism. That, at any rate, is what the Thomistic “argument from finality” represented by Aquinas’s Fifth Way aims to show. (The details can be found in Aquinas.) That argument holds that, apart from the sustaining action of God, there could be no finality in nature, no directedness of causal powers to their manifestations, no information content at all, not even for an instant – whether or not each instant is part of an infinite regress of instants, or for that matter part of a temporal loop of instants of the science-fiction sort illustrated by Timecrimes, Heinlein stories, and Stephen Hawking pop science books.

The latest on ID and Thomism

Frank Beckwith kindly reviews my book Aquinas in a lengthy essay in the latest Philosophia Christi. He focuses on the dispute between Thomism and Intelligent Design theory (though those who haven’t read the book should know that it deals with this subject only briefly). In other recent discussion, over at The Huffington Post, John Farrell comments on the conflict between ID theory and Thomism, kindly linking to yours truly. Over at Touchstone, Logan Paul Gage takes issue with the claim that there is any conflict between ID and Thomism, politely disagreeing with yours truly. Over at EvolutionBlog, Jason Rosenhouse, though critical of ID, seems completely baffled by yours truly. I’ve got zero interest in getting into another ID vs. Thomism blog war at the moment, so I’ll refrain from commenting. For now.

Saturday

Kaczor on abortion

Christopher Kaczor’s The Ethics of Abortion is just out from Routledge. David Boonin, author of A Defense of Abortion, calls it “one of the very best book-length defenses of the claim that abortion is morally impermissible.” Natural law theorist J. Budziszewski says that the book “replies to the most difficult objections to the pro-life position, many of which have not been adequately addressed by previous authors.” Notre Dame Philosophical Reviews calls it “the most complete, the most penetrating and the most up-to-date set of critiques of the arguments for abortion choice presently available.” Don Marquis, author of the widely anthologized article “Why Abortion is Immoral,” calls it “essential reading.” Check it out.

Thursday

A is A

The Advaita Vedānta school within Hindu philosophy holds that the self is identical with God. A student of mine recently lamented that too many Westerners who claim to follow this doctrine draw precisely the wrong lesson from it. Instead of freeing themselves from the limitations of their selfish egos and looking at the world from the divine point of view, they deify their selfishness. They bring God down to their level rather than rising up to His level.

Well, that is annoying. The trouble is that startling identity claims have a way of boomeranging. The Vedantist says “You are God!” hoping to shock his listener out of his egotism. The shallow listener thinks “Wow, I am God!” and his egotism is only reinforced. He puts the accent on the “I” rather than on “God.” And why not, if he and God really are identical?

Something similar can be said of the claim that the mind is identical to the brain. This is usually interpreted as an assertion of materialist reductionism. But why not interpret it instead as an assertion of idealist reductionism, a claim to the effect that a certain purportedly material object, the brain, is really mental? Indeed, the later Bertrand Russell held something like this view; or rather, he held that it is the sense data we encounter in introspection (“qualia” we’d say today) that are the fundamental reality, and that minds and material objects are constructs out of this purportedly “neutral” stuff. Russell took sense data to be “neutral” – that is to say, of themselves neither mental nor material – rather than mental (as they are usually regarded), because he thought they could intelligibly be held to exist unsensed by any mind. Some contemporary followers of Russell (such as Michael Lockwood) follow this line. But others who have toyed with Russellian views (such as Galen Strawson and David Chalmers) concede that sense data really are inherently mental rather than neutral, so that the version of Russell’s views that they have developed is not a kind of “neutral monism” (as Russell sometimes called his view) but rather a variant of idealism or panpsychism. “Matter,” on their view, is really mental in its intrinsic nature. And if you are going to take seriously an identification of mind and matter in the first place, why not “reduce” in that direction?

Functionalism claims that the mind is not identical to the brain per se, but rather that it is to be identified with a certain kind of causal structure, which might be realized in the brain but could also in principle be realized in material systems other than the brain (in an android, say, or in an extraterrestrial with a material composition radically different from ours). Under the influence of Russell’s views, my younger self toyed with the idea of reversing this functionalist identification, of doing to functionalism what Russell did to the mind-brain identity theory. Instead of reducing mind to a certain kind of causal structure, I proposed that the reduction should go the other way. It is not that the mental phenomena we know from introspection are really “nothing but” a certain kind of causal structure of the sort we observe in the external world; it is rather (so the view went) that the kind of causal structure in question turns out to be inherently mental, that introspection reveals to us something about the intrinsic nature of causation that perception of the external world does not. I called the view “Russellian functionalism,” or, alternatively, “Hayekian functionalism,” since the view occurred to me as I was first studying F. A. Hayek’s neglected book The Sensory Order, which combines ideas similar to Russell’s with a kind of functionalism (though Hayek himself did not state his position in quite the way I did). I defended the view in print in my 2001 article “Qualia: Irreducibly Subjective But Not Intrinsic.”

I haven’t held this view for a long time. I now think it is bizarre – indeed, I realized that it was bizarre even at the time (I just thought we had to accept it anyway), and others with whom I then discussed it certainly thought, quite rightly, that it was not only bizarre but difficult even to understand. But I also think that it is no more bizarre or difficult to understand than garden-variety functionalism is (just as idealism is no more bizarre or difficult to understand than materialism is – if you think otherwise, I submit that that reflects your historical circumstances more than it does anything philosophically substantial). To assert that to be in pain is nothing more than to be in a state specified by the machine table of a Turing machine (or however one spells out one’s functionalism) should really strike us as quite unintelligible, because it is unintelligible. You might as well say that to be in pain is to be divisible by 3, or that to be in pain is to be a list of the ingredients for a jelly donut. Like all reductionist claims, it is really only intelligible as a roundabout way of asserting a kind of eliminativism: “’Pain’ as common sense understands it doesn’t exist at all; all that is really going on is what is described by the machine table.” In that case, though, we have a claim that is manifestly false. And, when the eliminativist line is taken toward intentionality – that is to say, when it is claimed that there is no such thing as a belief, or an assertion, or as something being “about” or “directed at” something beyond itself in any way at all – we have a claim that is incoherent. (Keith Yandell has a useful discussion of Advaita Vedānta as implying a kind of eliminativism.)
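
For readers unfamiliar with the jargon, a “machine table” is just a finite lookup table specifying, for each internal state and input, a next state and an output. The following toy sketch is my own hypothetical illustration, with made-up state names and transitions, and is not any particular functionalist’s actual proposal; it is meant only to show the sort of thing the identification trades in.

```python
# A toy "machine table": a lookup from (current state, input) to (next state, output).
# State names and transitions here are hypothetical illustrations, not anyone's theory.

machine_table = {
    ("S0", "tissue damage"): ("S1", "wince"),        # S1 plays the putative "pain" role
    ("S1", "tissue damage"): ("S1", "say 'ouch'"),
    ("S1", "damage ceases"): ("S0", "relax"),
}

def step(state, stimulus):
    """Return (next_state, output) for a given state and input, per the table."""
    return machine_table[(state, stimulus)]

print(step("S0", "tissue damage"))   # ('S1', 'wince')
```

The functionalist claim at issue would then be that to be in pain just is to be in whatever state occupies the S1 role, however that role happens to be physically realized; the complaint above is that this identification is no more intelligible for being made precise.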

As Aristotle or Mr. A could tell you, A is A. Or as Joseph Butler famously put it, “Every thing is what it is and not another thing.” Very few people ever deny the law of identity outright. But it is violated implicitly all the time. Mind is not matter and matter is not mind. God is not me and I am not God. A stone is a stone, a tree is a tree, a dog is a dog, and a man is a man, and not a single one of them is “nothing but” a collection of particles (even if being composed of particles is a part of the correct story about their nature, as of course it is). To say all of that is not only to state the obvious but also to state (what is from an Aristotelian-Thomistic point of view) the metaphysically unavoidable. To deny these things is to lose one’s metaphysical moorings, and no one who does so should be surprised if he ends up somewhere other than where he hoped to go.

Saturday

The dreaded causa sui

There is no case known (neither is it, indeed, possible) in which a thing is found to be the efficient cause of itself; for so it would be prior to itself, which is impossible.

Summa Theologiae I.2.3

If, then, something were its own cause of being, it would be understood to be before it had being – which is impossible…

Summa Contra Gentiles I.22.6

Was Aquinas mistaken? Could something be its own cause? Stephen Hawking and Leonard Mlodinow seem to think so. In their recent book The Grand Design, they tell us that “we create [the universe’s] history by our observation, rather than history creating us” and that since we are part of the universe, it follows that “the universe… create[d] itself from nothing.”

I examine their position (and the many things that are wrong with it) in my review of the book for National Review. What is of interest for present purposes is their suggestion that future events can bring about past ones. Could this be a way of making plausible “the dreaded causa sui” (as I seem to recall John Searle once referring to the idea in a lecture)? That is to say, might a thing A possibly cause itself as long as it does so indirectly, by causing some other thing B to exist or occur in the past which in turn causes A?

To be sure, Hawking and Mlodinow provide only the murkiest account of how their self-causation scenario is supposed to work, and do not even acknowledge, much less attempt to answer, the obvious objections one might raise against it. But one can imagine ways in which such a scenario might be developed. Suppose for the sake of argument that the doctrine of temporal parts is true. And suppose we consider various examples from science fiction of one temporal part or stage of an individual playing a role in bringing about earlier parts or stages of the same individual.

In his 1941 short story “By His Bootstraps,” Robert Heinlein presents a tightly worked out scenario in which his protagonist Bob Wilson is manipulated by time-traveling future versions of himself into carrying out actions that put him into a series of situations in which he has to manipulate his past self in just the way he remembers having been manipulated. That is to say, temporal stage Z of Wilson causes temporal stage A of Wilson to initiate a transition through various intermediate Wilson stages which eventually loop back around to Z. In the 1952 E.C. Comics story “Why Papa Left Home” (from Weird Science #11), a time-traveling scientist stranded several decades in the past settles down to marry (and later impregnate) a girl who reminds him of the single mother who raised him, only to discover, after his abrupt and unexpected return to the present and to his horror, that she actually was his mother and that he is his own father. Doubling down on this Oedipal theme in what is probably the mother of all time travel paradoxes, Heinlein’s ingenious 1959 short story “– All You Zombies – ” features a sex-changing time-traveler (“Jane”) who turns out to be his own father and his own mother. (Don’t ask, just read it.)

Now, if we think of each of these characters as a series of discrete temporal parts – again labeled A through Z for simplicity’s sake – then we might say that each part has a kind of independent existence. A, B, C, D, and on through Z are like the wires making up a cable, in which each wire can be individuated without reference to the others even though they also all make up the whole. The difference would be that while the wires are arranged spatially so as to make up the cable, the stages in question are arranged temporally so as to make up a person. And what we have in the science-fiction scenarios in question is just the unusual sort of case wherein some of the stages loop back on the others, just as some of the wires in a cable might loop back and be wound around the others.

Mind you, I do not in fact think any of this is right. I do not accept the doctrine of temporal parts, and I do not think that such time travel scenarios really are possible even in principle given a sound metaphysics. (I’ll have reason to address these issues in detail in forthcoming writing projects, so stay tuned.) But as I say, we’re just granting all this for the sake of argument. And if we do, it might seem that we are describing a kind of self-causation.

In fact we are not, at least not in the sense of “self-causation” that Aquinas is ruling out as impossible in principle. For notice that in order to make sense of the scenarios in question, we have had to treat each of the stages of the persons involved as distinct, independent existences. For instance, in “– All You Zombies –“ it is, strictly speaking, not that Jane causes herself/himself to exist so much as that the later stages of Jane cause earlier stages of Jane to exist. And since each stage is distinct from the others, we don’t really have a case of self-causation in the strict sense. For none of the stages causes itself – each is caused by other stages. The situation is analogous to the “self-motion” of animals, which Aristotle and Aquinas point out is not really inconsistent with their principle that whatever is moved is moved by another, since such “self-motion” really involves one part of an animal moving another part.

We might also compare these scenarios to the kinds of causal series ordered per accidens that Aquinas is happy to allow might in principle regress to infinity. The stock example is a father who begets a son who in turn begets another. Each has a causal power to beget further sons that is independent of the continued activity or inactivity of any previous begetter. Contrast a causal series ordered per se, the stock example of which is a hand moving a stone with a stick. Here the stick’s power to move the stone derives from the hand, and would disappear if the hand were to stop moving. In the strictest sense, it is not the stick which moves the stone, but the hand which moves it, by means of the stick. By contrast, if Al begets Bob and Bob begets Chuck, it is Bob who begets Chuck, and in no sense Al who does it. The reason the former, per accidens sort of causal series might in principle regress to infinity, then, is that the activity of any member does not of necessity trace to the activity of an earlier member which uses it as an instrument. But things are different with a per se causal series, in which no member other than the first could operate at all were the first not working through it. (I had reason to say more about the difference between these sorts of causal series, and about what is meant by “first” in the expression “first cause,” in this recent post.)
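
The structural difference between the two kinds of series can also be put in programming terms. What follows is only an illustrative sketch with hypothetical class names of my own, not anything drawn from Aquinas or his commentators: in a per se series an instrument can act only while its principal cause is acting through it, whereas in a per accidens series each member acts by a power it now possesses in its own right.

```python
# Illustrative sketch (hypothetical names): per se vs. per accidens causal series.

class FirstMover:
    """The principal cause, e.g. the hand that moves the stick."""
    def __init__(self, acting: bool):
        self.acting = acting

    def can_act(self) -> bool:
        return self.acting


class PerSeMember:
    """A purely instrumental member, e.g. the stick, or the stone it moves."""
    def __init__(self, source):
        self.source = source  # the member on whose present activity it depends

    def can_act(self) -> bool:
        # An instrument acts only while its principal cause is acting through it.
        return self.source.can_act()


class PerAccidensMember:
    """E.g. a begetter: once it exists, its power does not depend on its own cause still acting."""
    def can_act(self) -> bool:
        return True


# Per se series: hand -> stick -> stone. If the hand stops acting, nothing downstream acts.
hand = FirstMover(acting=False)
stick = PerSeMember(hand)
stone = PerSeMember(stick)
print(stone.can_act())   # False: the derived power vanishes the moment the first cause stops

# Per accidens series: Al begets Bob, Bob begets Chuck. Bob's power to beget does not
# depend on whether Al is still doing anything at all.
bob = PerAccidensMember()
print(bob.can_act())     # True
```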

Aquinas allows for the sake of argument that the universe might have had no beginning, given that the series of causes extending backward in time is ordered per accidens. When he argues for God as first cause of the world, then, he does not mean “first” in a temporal sense. His argument is rather that the universe could exist here and now, and at any particular moment, only if God is conserving it in existence, for anything less than that which is Pure Act or Being Itself could not in his view persist for an instant unless it were caused to do so by that which is Pure Act or Being Itself, to which it is related in a per se rather than per accidens way. In particular, anything which is in any way a compound of act and potency (as all compounds of form and matter are, and, more generally, as all compounds of existence and essence are) must be continually actualized by that which need not itself be actualized insofar as it is “already” Pure Actuality. (See Aquinas for the details.)

Now every temporal part of the characters in our hypothetical science-fiction examples is relevantly like the particular moments in the history of the universe. Even if the universe had no beginning but regressed back in time to infinity, it would still have to be sustained in being at any particular moment by God. It could not at any particular moment be causing itself. And even if the temporal parts of the characters in question looped around back on themselves, they would still at any particular moment have to be sustained in being by God. They too could not at any particular moment be causing themselves. In short, the theoretical possibility of a circular temporal series would be as irrelevant to Aquinas’s point as the theoretical possibility of an infinite temporal series is. When Aquinas denies that anything can cause itself given the absurdity of a cause preceding itself, what he is most concerned to deny is, not that a cause can be prior to itself temporally (though he would deny that too), but that it can be prior to itself ontologically, that it could be more fundamental than itself in the order of what exists at any given moment, as it would have to be if it were sustaining itself in being. (And again, in any event no cause strictly exists prior to itself even temporally in the scenarios we’ve been describing; for each temporal part of the characters in question is caused by a distinct temporal part, not by itself.)

Hence, even if the universe were (as it is not) as Robert Heinlein or Stephen Hawking describes it, it would require at any particular instant a cause distinct from it in order for it to exist at that instant. (The same would be true if we consider the universe as a single four-dimensional object. It would still be a composite of form and matter and essence and existence, and thus of act and potency, and could therefore not in principle exist were it not caused by that which is not composite in any of these ways but just is Pure Act and Being Itself.) When we carefully unpack what the scenarios would have to involve, we can see that they do not entail any sort of causa sui, nor anything that could in principle exist apart from a divine first cause.

Wednesday

Plantinga’s ontological argument

Alvin Plantinga famously defends a version of the ontological argument that makes use of the notion of possible worlds. As is typically done, we might think of a “possible world” as a complete way that things might have been. In the actual world I am writing up this blog post, but I could have decided instead to go pour myself a Scotch. (Since it’s still morning, I won’t – I can wait an hour.) So, we might say that there is a possible world more or less like the actual world – Obama is still president, I still teach and write philosophy, and so forth – except that instead of writing up this blog post at this particular moment, I am pouring myself a Scotch. (Naturally there will be some other differences that follow from this one.) We can imagine possible worlds that are even more different or less different in various ways – a possible world where the Allies lost World War II, a possible world in which human beings never existed, a possible world exactly like the actual one except that the book next to me sits a millimeter farther to the right than it actually does, and so forth. Not everything is a possible world, though. There is no possible world where 2 + 2 = 5 or in which squares are round.

Philosophers make use of the notion of possible worlds in all sorts of ways. For example, it is sometimes suggested that we can analyze the essence of a thing in terms of possible worlds: What is essential to X is what X has in every possible world, what is non-essential is what X has in some worlds but not others. It is sometimes suggested that modality in general can be analyzed in terms of possible worlds: A necessary truth is one that is true in every possible world, a possible truth one that is true in at least one possible world, a contingent truth one that is true in some worlds but not others, an impossible proposition one that is true in no possible world. Plantinga, again, makes use of the notion in order to reformulate the ontological argument famously invented by Anselm. We might summarize his version (presented in The Nature of Necessity and elsewhere) as follows:

1. There is a possible world W in which there exists a being with maximal greatness.

2. Maximal greatness entails having maximal excellence in every possible world.

3. Maximal excellence entails omniscience, omnipotence, and moral perfection in every possible world.

4. So in W there exists a being which is omniscient, omnipotent, and morally perfect in every possible world.

5. So in W the proposition “There is no omniscient, omnipotent, and morally perfect being” is impossible.

6. But what is impossible in one possible world is impossible in every possible world.

7. So the proposition “There is no omniscient, omnipotent, and morally perfect being” is impossible in the actual world.

8. So there is in the actual world an omniscient, omnipotent, and morally perfect being.
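
For readers who like to see the modal machinery laid bare, the core of steps 1 through 8 can be set out in the modal logic S5. The following is a schematic gloss of my own, not Plantinga’s notation: I abbreviate “there exists a maximally excellent being” (an omniscient, omnipotent, morally perfect being) as E, so that maximal greatness is exemplified just in case E holds necessarily.

\documentclass{article}
\usepackage{amsmath}
\usepackage{amssymb}
\begin{document}
% A schematic gloss of the modal core of Plantinga's argument (my notation, not his).
% E abbreviates "there exists a maximally excellent being"; maximal greatness is
% exemplified just in case $\Box E$ holds.
\begin{align*}
&(a)\quad \Diamond\Box E
  && \text{the key premise: maximal greatness is possibly exemplified}\\
&(b)\quad \Diamond\Box E \rightarrow \Box E
  && \text{a theorem of S5: what is possibly necessary is necessary}\\
&(c)\quad \Box E
  && \text{from (a) and (b)}\\
&(d)\quad E
  && \text{from (c) and the T axiom } \Box E \rightarrow E
\end{align*}
\end{document}

All the real work is done by line (a) and by the S5 principle at line (b); the remaining lines are bookkeeping.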

Plantinga famously concedes that a rational person need not accept this argument, and claims only that a rational person could accept it. The reason is that while he thinks a rational person could accept its first and key premise, another rational person could doubt it. One reason it might be doubted, Plantinga tells us, is that a rational person could believe that there is a possible world in which the property of “no-maximality” – that is, the property of being such that there is no maximally great being – is exemplified. And if this is possible, then the first and key premise of Plantinga’s argument is false. In short, Plantinga allows that while a reasonable person could accept his ontological argument, another reasonable person could accept instead the following rival argument:

1. No-maximality is possibly exemplified.

2. If no-maximality is possibly exemplified, then maximal greatness is impossible.

3. So maximal greatness is impossible.
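
In the same notation (again, a gloss of my own, and on the same S5 assumptions), the rival argument can be put as follows. The point to notice is that, given S5, the possible exemplification of no-maximality collapses into the outright impossibility of maximal greatness, which is why the two key premises flatly contradict one another.

% The rival ``no-maximality'' argument in the same notation (drop into the document above).
% No-maximality is exemplified in a world just in case no maximally great being exists,
% i.e., just in case $\neg\Box E$ holds at that world.
\begin{align*}
&(e)\quad \Diamond\neg\Box E
  && \text{no-maximality is possibly exemplified}\\
&(f)\quad \Diamond\neg\Box E \rightarrow \neg\Box\Box E
  && \text{duality of } \Diamond \text{ and } \Box\\
&(g)\quad \neg\Box\Box E \rightarrow \neg\Box E
  && \text{contrapositive of the S4/S5 axiom } \Box E \rightarrow \Box\Box E\\
&(h)\quad \neg\Box E
  && \text{from (e), (f), and (g)}\\
&(i)\quad \neg\Diamond\Box E
  && \text{from (h) and the S5 theorem } \Diamond\Box E \rightarrow \Box E\text{; maximal greatness is impossible}
\end{align*}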

In The Miracle of Theism, atheist J. L. Mackie argues that even this concession of Plantinga’s overstates the value of his ontological argument. For it is not at all clear, Mackie says, that a rational person can treat the question of whether to accept either Plantinga’s argument or its “no-maximality” rival as a toss-up, as if we would be within our epistemic rights to choose whichever one strikes our fancy. Why wouldn’t suspension of judgment in the face of such a deadlock, a refusal to endorse either argument, be the more rational option? Indeed, if anything it is the “no-maximality” argument that would be the more rational choice, Mackie suggests, in light of Ockham’s razor.

But though I do not myself endorse Plantinga’s argument, I think these objections from Mackie have no force, and that even Plantinga sells himself short. For it is simply implausible to suppose that, other things being equal, the key premises of Plantinga’s argument and its “no-maximality” rival are on an epistemic par. To see why, consider the following parallel claims:

U: There is a possible world containing unicorns.

NU: “No-unicornality,” the property of there being no unicorns in any possible world, is possibly exemplified.

Are U and NU on an epistemic par? Surely not. NU is really nothing more than a denial of U. But U is extremely plausible, at least if we accept the whole “possible worlds” way of talking about these things in the first place. It essentially amounts to the uncontroversial claim that there is no contradiction entailed by our concept of a unicorn. And the burden of proof is surely on someone who denies this to show that there is a contradiction. It would be no good for him to say “Well, even after carefully analyzing the concept of a unicorn I can’t point to any contradiction, but for all we know there might be one anyway, so NU is just as plausible a claim as U.” It is obviously not just as plausible, for a failed attempt to discover a contradiction in some concept itself provides at least some actual evidence to think the concept describes a real possibility, while to make the mere assertion that there might nevertheless be a contradiction is not to provide evidence of anything. The mere suggestion that NU might be true thus in no way stalemates the defender of U. All other things being equal, we should accept U and reject NU, until such time as the defender of NU gives us actual reason to believe it.
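
To make the parallel explicit in the notation used above (my gloss again, still assuming S5): let u abbreviate “unicorns exist.” Then U is simply the claim that u is possible, while NU, on the strict reading that builds in “in any possible world,” collapses into the denial of U.

% The unicorn parallel (same preamble as above); u abbreviates ``unicorns exist.''
\begin{align*}
&\text{U:}\quad \Diamond u
  && \text{there is a possible world containing unicorns}\\
&\text{NU:}\quad \Diamond\Box\neg u
  && \text{``no-unicornality'' is possibly exemplified}\\
&\quad \Diamond\Box\neg u \rightarrow \Box\neg u
  && \text{S5 theorem, as before}\\
&\quad \Box\neg u \leftrightarrow \neg\Diamond u
  && \text{duality; so NU amounts to nothing more than the denial of U}
\end{align*}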

But the “no-maximality” premise of the rival to Plantinga’s ontological argument seems in no relevant way different from NU. It is really just the assertion that a maximally great being is not possible, and thus merely an assertion to the effect that Plantinga’s first and key premise is false. And while Plantinga’s concept of a maximally great being is obviously more complicated and harder to evaluate with confidence than the concept of a unicorn, it seems no less true in this case that merely to suggest that a maximally great being is not possible in no way puts us in any kind of deadlock. Unless someone has actually given evidence to think that Plantinga’s concept of a maximally great being entails a contradiction or is otherwise incoherent, the rational position (again, at least if we buy the whole “possible worlds” framework in the first place) would be to accept his key premise rather than the key premise of the “no-maximality” argument, and rather than suspending judgment.

(Mackie’s assumption that Ockham’s razor is relevant here – he speaks of not multiplying entities beyond necessity – also seems very odd to me. Appealing to Ockham’s razor is clearly in order when you are dealing with alternative explanations each of which is already known to be at least in principle possible, and are trying to weigh probabilities in light of empirical evidence. But questions about semantics, logical relationships, conceptual and metaphysical possibilities, and the like – the sorts of issues we are considering when trying to decide whether Plantinga’s key premise or its rival is correct – are not like that. The whole idea of applying Ockham’s razor to such issues seems to be a category mistake. But I won’t pursue the thought further here.)

Other objections to Plantinga are also oversold. There is, for example, the tired “parody objection” that critics have been trotting out against ontological arguments since Gaunilo, and which I suggested in my previous post has no force, at least against the most plausible versions of such arguments. Thus John Hick suggests (in his An Interpretation of Religion) that Plantinga’s reasoning could equally well be used to argue for the existence of a maximally evil being, one that is omnipotent, omniscient, and morally depraved in every possible world. The problem with this objection is that it assumes that good and evil are on a metaphysical par, and as I have had reason to note before, that is by no means an uncontroversial (or in my view correct) assumption. On the classical view, evil is not a positive reality standing alongside good but a privation of good, in which case the very notion of a maximally evil being is incoherent.

But defending the idea that evil is a privation would require a defense of the more general, classical metaphysics on which it rests. And there lies the rub. For Plantinga is not a classical (i.e. Platonic, Aristotelian, or Scholastic) metaphysician. That is reflected not only in the way he conceives of God’s omnipotence, omniscience, and “moral perfection” – we’ve noted before that Plantinga is a “theistic personalist” rather than a classical theist – but also in the more general metaphysical apparatus he deploys in presenting his ontological argument. From a classical metaphysical point of view, and certainly from an Aristotelian-Thomistic (A-T) point of view, the “possible worlds” approach is simply misguided from the start (for reasons we’ve also had occasion to discuss before). Many no doubt think that Plantinga’s argument is at least an improvement on Anselm’s. I think it is quite the opposite. In no way do I intend that as a slight against Plantinga; on the contrary, The Nature of Necessity is, as no one familiar with it needs me to point out, a testament to his brilliance. But it is also, like the best of the work of the moderns in general, a brilliant mistake. A sound natural theology must be grounded in a sound metaphysics, which means a classical (and preferably A-T) metaphysics. Within the context of a classical metaphysics, Anselm developed as deep and plausible an ontological argument as anyone ever has. But (so we A-T types think) even he couldn’t pull it off.