Tuesday, August 10, 2010

Just War and the Bomb

I recently engaged in a debate over at Edward Feser's site regarding the use of the atomic bombs in WWII. Dr. Feser's post also references this article by James Akin. In this post I would like to engage in a lengthier meditation on the use of atomic weapons to end WWII, expanding on some points I made in the comments section of Dr. Feser's blog.

The first point is to reconsider the distinction between "soldiers" and "civilians", and the "innocent", in a world of total war. Just war theory was created back when Augustine was trying to buck up the morale of Romans defending themselves against barbarians. The idea was that the Romans could justly engage in war to defend themselves, including killing barbarian invaders. But this justification didn't extend to non-combatants; say, the barbarian women and children. At that time, there was a pretty clear distinction between combatants and non-combatants. The guys with the swords were combatants; the women carrying children weren't. Moreover, the women carrying children were more a hindrance than a help to the invading barbarians. Armies back then lived off the land they invaded, and carrying along women and children only brought more mouths to feed. So the noncombatants in those days added no combat value, and were truly innocent.

This state of affairs continued up until about the 18th century. Until then, the horizon of the average peasant was the end of his fields, and whether he would get a decent crop in that year. Wars between Kings didn't concern him overmuch, and he likely learned the news of war only through an army (his King's or the enemy's) trampling through his fields. These wars were a matter of intermittent battles, between which things were pretty much indistinguishable from peace. The soldiers were armed with swords, pikes and arrows, none of which required a supply train or massive support from the home front. In a war like this, soldier and noncombatant have clear meanings.

Starting sometime in the 19th century - our Civil War is a good marker - war began to change. It stopped being the occasional violent contest between armed minorities, and started becoming an enduring economic contest between nations. Soldiers were now armed with rifles and cannon that required extensive supply support in terms of ammunition and repair. A medieval army was good to go if everyone had a sword and some chain mail. Lee's Army of Northern Virginia needed everyone to carry a rifle, and also required wagons carrying millions of rounds of ammunition to be effective. It would need millions more after a few days of battle. Compared with a sword or a longbow, the Civil War rifle was an intricate piece of machinery that needed constant maintenance, was relatively easily destroyed, and was not so easy to replace. Furthermore, the soldiers and their support were transported on a network of ships and railroads, requiring maintenance and even expansion. The quartermaster and the logistics officer, heretofore minor players at best in war, now became decisively important individuals.

What also changed was the introduction of the mass conscription army. Wars were no longer fought between standing classes of professional soldiers (e.g. the Roman Army - the "combatants" for Augustine), but instead between huge numbers of young men forcibly conscripted from civilian life for the purpose. The point of all these young men was to be the delivery point for all that destructive energy manufactured by the nation. Thus the Civil War battle was largely a matter of rows of young conscripts facing each other, repeatedly executing a series of mechanical motions - just like a factory worker - load, aim, shoot, load, aim, shoot, load, aim, shoot - until one of the rows of young men was destroyed. Or both. It wasn't Augustine's kind of war anymore, and the distinction between "combatants" and "innocent noncombatants" was disappearing. For in what way was the factory worker innocent that the poor Georgia boy taking a minie ball in the face wasn't?

And this was something that William T. Sherman understood. His March to the Sea (see Terrible Innocence: General Sherman at War for a perceptive account of this, or Victor Davis Hanson's The Soul of Battle: From Ancient Times to the Present Day, How Three Great Liberators Vanquished Tyranny) evidenced a brilliant understanding of what modern war was about, and also revealed a moral clarity missing from a lot of the armchair generals questioning the wisdom of Harry Truman. Rather than continue the practice of standing up rows of young men to mow each other down, Sherman marched through the South and destroyed the material foundation that kept the Confederacy in existence. His march caused a lot of suffering, yes. But the cost in human life was paltry compared to what was going on in Virginia in the attritional war between Lee and Grant.

Sherman understood that Southern civilians, especially the plantation owners, were in no way "innocent noncombatants." They were the ones who started the war, kept the war going, and insisted that the young men stay in their trenches at Petersburg and suffer. Here is Hanson quoting some of Sherman's soldiers addressing Southern women:

You in wild enthusiasm, urge young men to the battlefield where men are being killed by the thousands, while you stay at home and sing "Bonnie Blue Flag"; but you set up a howl when you see the Yankees down here getting your chickens. Many of your young men have told us that they are tired of war and would quit, but you women would shame them and drive them back.

Sherman did not restrict himself to destroying purely military targets. In total war, everything in the nation is put in the service of the war. A cornfield is just as necessary to the war effort as a cannon factory. So the cornfield was burnt.

And we come to what is missing in the analysis of James Akin and Dr. Feser. Akin writes of "dogs that didn't bark", but the real missing dog is the missing dogface - the 17-year-old farmhand from Georgia, conscripted into the U.S. Army, and about to be sent into Japanese machine gun fire. This young Johnny Reb nowhere makes an appearance in the moral analysis of Akin/Feser. But he figured significantly in the mind of Harry Truman, and thank God for that.

The question facing Harry Truman was not the pristine academic one of killing or not killing the innocent. The tragedy of modern war is that the decision often boils down to which innocent lives will be taken. Will it be the Japanese civilians in Hiroshima, or the farm boys from Nebraska and Georgia who will be killed? Why is it a "more morally pure intention" to drag the kid off the farm, put a gun in his hands, and send him onto the exploding beaches of Kyushu, rather than nuke Japanese civilians? To raise this question is to answer it, which is why Johnny from Georgia is missing in action from the Akin/Feser argument. While Akin spends time making fine but pointless distinctions among Japanese targets (only those involving "war resources" are legitimate, when everything is a war resource in a modern total war), he has no time for a moral analysis of the American boys his thinking would inevitably send to their deaths.

Harry Truman was Commander-in-Chief of the U.S. Armed Forces. The unstated agreement between the C-in-C and the soldier is that young men (and now women) will put their lives in mortal danger under the President's orders, and that the President will not spend their lives unnecessarily. Truman would have violated his duty to every American serviceman if, having a way to end the war, he had instead ordered his soldiers into battle in the name of a morally pure intention. Unfortunately, Truman did not have Sherman's option of destroying property rather than lives. Instead he ordered the nuking of Japanese civilians for the sake of saving his men; men who, in the modern fashion, were really just civilians temporarily in uniform. Yes, Truman ordered the deaths of innocent people; in doing so, he avoided ordering the deaths of innocent young American men. There is no way to stay clean in modern war. Just how would the armchair Presidents have stayed morally pure at the end of the war? This is another dog that never barks in Akin's argument.

I'm glad I served under Presidents Reagan and Bush Sr., and not Presidents Akin and Feser. I wouldn't want to serve under any President who would send me into machine gun fire for the sake of his moral purity.

And if this puts me out of line with the Catechism.... so be it. But I suspect Akin's interpretation of the CCC passages in question is not the only one.

Friday, July 30, 2010

Irony Proof

In his book The Rational Optimist: How Prosperity Evolves, Matt Ridley has this to say with reference to Plato:

The endless modern laments about how texting and emails are shortening attention span go back to Plato, who deplored writing as a destroyer of memorizing.

And Plato did it in writing. I wonder what that means?

Kierkegaard didn't think there was any point in trying to directly argue people out of the modern philosophical point of view.  Modern philosophy is irony-free because it is not subjective; to become subjective means to understand the meaning of irony. But whatever is said ironically can also be taken in its direct sense; we can, if we choose, interpret Plato as simply meaning directly what he wrote, as Ridley does. There is no way to prove, in any way acceptable to modern philosophical demands, that there is any more to Plato than this.

But, thank God, there is...

Wednesday, July 28, 2010

Massachusetts About to Do It Again

With all the universities in this state (sorry, "Commonwealth"), it's amazing how many folks can't fathom simple logic.

Massachusetts is about to enact a law such that it must cast its Presidential Electoral Votes for the national popular vote winner. Now there are four possibilities concerning the popular vote:

1) The Massachusetts popular vote goes for the Democrat, and the national vote goes for the Democrat.

2) The Massachusetts popular vote goes for the Republican, and the national vote goes for the Republican.

3) The Massachusetts popular vote goes for the Democrat, and the national vote goes for the Republican.

4) The Massachusetts popular vote goes for the Republican, and the national vote goes for the Democrat.

The proposed law makes no difference with respect to possibilities 1 and 2. Possibility 4 is a practical impossibility. So the only practical opportunity for the law to take effect is possibility 3. In other words, the effect this law will have will be to elect a Republican in the peculiar case that the Republican wins the popular vote but would lose the electoral vote. Massachusetts to the rescue! As Jeff Jacoby has pointed out, this law would have forced Massachusetts to vote for Richard Nixon rather than George McGovern in 1972.
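The case analysis above can be sketched in a few lines of Python (my own illustration, not part of the original post; the function and party labels are hypothetical):

```python
# Sketch of the four popular-vote scenarios. Under the proposed law,
# Massachusetts casts its electors for the national popular-vote winner;
# otherwise it casts them for the winner of the state's popular vote.

def ma_electors(ma_winner: str, national_winner: str, law_in_effect: bool) -> str:
    """Return the party that receives Massachusetts's electoral votes."""
    return national_winner if law_in_effect else ma_winner

scenarios = [
    ("D", "D"),  # 1) MA and nation both go Democratic
    ("R", "R"),  # 2) MA and nation both go Republican
    ("D", "R"),  # 3) MA Democratic, nation Republican
    ("R", "D"),  # 4) MA Republican, nation Democratic (practically impossible)
]

for i, (ma, nat) in enumerate(scenarios, start=1):
    without_law = ma_electors(ma, nat, law_in_effect=False)
    with_law = ma_electors(ma, nat, law_in_effect=True)
    note = "law changes outcome" if without_law != with_law else "no difference"
    print(f"{i}) MA={ma}, national={nat}: {without_law} -> {with_law} ({note})")
```

The enumeration confirms that the law matters only in cases 3 and 4; rule out case 4 as practically impossible, and case 3 - Massachusetts's electors going to the Republican - is the only scenario in which the law has any effect.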

When Scott Brown was elected to the Senate after Massachusetts changed its laws in 2004 to prevent Mitt Romney from appointing a Republican to fill the vacant seat of "President" John Kerry, I thought there might be a God. If Massachusetts manages to put Romney (or even more delicious, Sarah Palin) into the White House in 2012, despite losing the MA popular vote, I'll know there is a God.

Monday, July 26, 2010

On Philosophy at Second Hand, with Specific Reference to Kant

This is the first post in what I hope is at least a two-part series, discussing the benefits of reading the great philosophers directly rather than at second-hand.


Arthur Schopenhauer, in the Preface to the Second Edition of The World As Will and Representation, In Two Volumes: Vol. I, has this to say about reading the great philosophers:

In consequence of his originality, it is true of him in the highest degree, as indeed of all genuine philosophers, that only from their own works does one come to know them, not from the accounts of others. For the thoughts of those extraordinary minds cannot stand filtration through an ordinary head.

The reason for this is not necessarily that the philosopher's thought is too sophisticated for the ordinary head to conceive. Just the opposite is more likely the problem: it is in his simplicity that the great philosopher is most likely to be missed. For philosophy is about "first ideas", the bedrock of our rational approach to the world. What distinguishes the great philosopher is his ability to reveal and analyze these first ideas. But just because they are first, they are very easy to miss; we naturally, even habitually, look past them.

And we do so for very good reasons. We don't need to think about first ideas to get on with the ordinary business of life. We take them for granted and deal only with the secondary questions that confront us: Can I afford a new house? What school should my kids attend? How do I fix the car? Our lives would grind to a halt if we constantly had "first questions" in mind - which is the basis for the perennial indictment of the philosopher as "useless", and why Aristotle described philosophy as the most noble but least necessary of endeavors. We operate more efficiently the more we can take these basic questions for granted, and so we develop habits of mind that put, and keep, these ideas in the realm of assumed background and, perhaps, even actively discourage the mind from uncovering them.

The great philosopher, then, is doing something that, in a sense, does not come naturally and even "goes against the grain." He uncovers the background that the mind wishes would stay there so it can get on with the "real" business of thought. So when we read a philosopher, the drift of our mind is to find a place for his thought within the categories with which our mind is already comfortable (I discuss this phenomenon in relation to materialists and St. Thomas in this post.) Of course, it may be that the philosopher's primary goal is to challenge those very categories.

So when we read a great philosopher at second hand, there is a danger that what we will read is the philosopher's thought as recast into the comfortable categories of the interpreter. This happens with Kant when he is introduced in the following common way: We human beings have (at least) five senses. We know and encounter the world through them. But we see that other animals have different ways of appreciating the world through their senses, and even have different senses altogether. Bats, for example, detect objects through echolocation. Some species of fish (sharks, I believe) sense the electromagnetic field of their prey. What must the world look like to a shark? Can we even conceive of what the experience of a shark is like? (See the famous paper of Thomas Nagel on this topic, although he focuses on bats and not sharks.) We come to see that the world is not given to us directly in its own terms, but comes to us recast in the terms dictated by our cognitive apparatus. Thus arises the Kantian distinction between phenomena and noumena, or "things as they appear to us" and "things as they are in themselves."

Now this is very close to what Kant is getting at (in my opinion, of course - I am well aware that my mind is subject to the same propensity to think in familiar channels as everyone else, so if anyone really wants exposure to Kant, he should be read directly rather than through me. Put your irony back in its holster). But "close" can be disastrous when interpreting philosophers, precisely because "close" may miss just the jump out of familiar channels that makes the philosopher significant. Absent this jump, everything that follows takes on a different meaning and you will end up in a very different place than the philosopher intended; just as Routes 1 and 93 start in very close parallel out of Boston, but if you travel on Rt. 1 rather than the intended 93, you will end up very far from where you hoped.

The introduction to Kant given above is vulnerable to a straightforward objection. If we know things only as they appear to us, rather than things as they are in themselves, then the question of what it is like to be a shark or a bat changes meaning; in fact it loses meaning. "Shark" and "bat" are just constructions our cognitive apparatus puts on experience; asking "what it is like to be a shark" is then just asking what it is like to be this particular kind of cognitive construction. The object of the question has changed; it is no longer some thing-in-itself outside ourselves (about which we can know nothing at all on the Kantian view), and instead has become a subjective question concerning the nature of inner experience. And it makes no sense to ask "What is it like to be a cognitive construction", because cognitive constructions have no inner lives; they are aspects of our inner lives. It is like asking what it is like to be the color red or to be a dream.

What, then, becomes of the initial case for the plausibility of Kantian philosophy? That case only had plausibility because we assumed, "naively" we later discover, that when we think about "bats" and "sharks", we are in contact with real things out there about which it makes sense for us to discuss their inner lives. But this is only possible if we can know something about the thing-in-itself, verboten knowledge according to Kant. So the Kantian philosophy destroys the ground of its own plausibility.

Someone to whom this objection occurs, and who is familiar with Kant only through the common introduction given above, may have nothing further to do with Kant after concluding that it is Kant, and not himself, who is being naive. And this would be a tragedy, because while there are good reasons to reject Kant's philosophy, this isn't one of them; even philosophers who are wrong have things to teach us, especially great philosophers like Kant. But a man is unlikely to give Kant further time if he has concluded that he was so obtuse as to not anticipate the objection given above. (In fact, it's a good clue that a critic has not really understood a great philosopher if he thinks he has a devastating, and obvious, refutation of the philosopher's basic idea. To borrow from Hume's argument against miracles: Is it more likely that the critic hasn't understood the philosopher, or that all the bright minds who have studied the philosopher over many years simply missed the obvious retort?)

Kant is not subject to the objection because he does not base the plausibility of his philosophy on meditations concerning the inner lives of other animals. He bases it on the only possible thing he can: The data of our own consciousness. In the Transcendental Aesthetic, Kant proposes to the reader that space and time are not things we empirically discover; they are in fact forms of empirical discovery. We do not first experience the tree over there and myself over here, and then discover space as the thing separating us. No, the very distinction that makes possible the experience of the tree as something distinct from myself is the distinction of space. Space is prior to the experience of trees in the sense of being constitutive of it; and the only candidate for the agent of constitution is our own consciousness. So the experience of space is really an experience of the demands of our own cognitive apparatus on reality; and everything experienced in space is an experience of whatever is out there only insofar as it has been reconstituted in terms of space through our consciousness. A similar argument is adduced for time.

Whether or not the reader finds the argument compelling (and I reiterate the point that this is my interpretation of Kant, and Kant was a much greater philosopher than I am, so it is better to read him directly for the argument), the point is that Kant has not stolen any bases by implicitly referring to a knowledge of things-in-themselves that he will later claim is impossible. This may seem an obscure point, but it is what distinguishes the genuine Kantian philosophy from the bastardized, self-contradictory, pseudo-Kantian philosophy that has become part of our "default" intellectual furniture. Repeating a point I have perhaps made in too many posts, much of the contemporary philosophy of mind, I believe, takes a pseudo-Kantianism for granted. Any time you hear someone talking about how the brain constructs experience or "models" the world, you are listening to someone on the Kant Express; but they very likely have not taken Kant seriously enough.

Returning to my earlier point that our minds tend to want to run in familiar grooves, our minds have an almost overwhelming impulse to talk about things as they really are. (Of course, I think we have this impulse because we really can talk about how things really are, but that's another story.) Kant recognized this facet of our nature in saying that metaphysics, while an illusion, is an inevitable illusion. The pseudo-Kantians of today don't have Kant's discipline; they want to talk about how the mind (or rather, "the brain") is essentially a modeler of the world or a constructor of experience from sensation, and innocently suppose that they are talking directly about a real-world, thing-in-itself object called "the brain" when they do so. If we have trusted Kant enough to read him directly, then we can see the self-defeating nature of the project; it is the same self-defeating feature found in the typical introduction to Kant. 

The penalty for being a pseudo-Kantian is the same as the penalty for all philosophical confusion: A lack of self-understanding. This lack of self-understanding is why so much of the contemporary philosophy of mind has the character of a circular firing squad. ("The most striking feature is how much of mainstream philosophy of mind of the past fifty years seems obviously false." John Searle, The Rediscovery of the Mind, p. 3). It seems so obviously false because it is: Everyone is trying to square a circle. They are trying to show how the brain, through purely material operations, is the causal foundation of consciousness and thought. But since "the brain" is itself a construction of consciousness, the project is really about explaining consciousness in terms of itself, or rather consciousness in the terms of whatever a particular philosopher decides to take seriously about consciousness. In any case, it is circular, and no one seems like they will run out of ammunition any time soon.

Coming soon, I hope: On Philosophy at Second Hand, with Specific Reference to Plato.

Saturday, July 24, 2010

David Brooks on the Moral Sense

David Brooks of the New York Times has a piece here on the origin of what he calls the "moral sense." The article starts this way:

Where does our sense of right and wrong come from? Most people think it is a gift from God, who revealed His laws and elevates us with His love. A smaller number think that we figure the rules out for ourselves, using our capacity to reason and choosing a philosophical system to live by.
Moral naturalists, on the other hand, believe that we have moral sentiments that have emerged from a long history of relationships. To learn about morality, you don’t rely upon revelation or metaphysics; you observe people as they live.

Brooks goes on to describe the naturalist case for the evolutionary development of the "moral sense." Right off the bat, however, Brooks has posed what I can only call a false alternative, a phrase I now have a visceral reaction against since Barack Obama so often abuses it. ("There are those who pose the false alternative between spending trillions of dollars you don't have and fiscal sanity...") Anyway, God gives us the "rules" in a number of ways. One way is through direct revelation, another way is through the natural law:

When Gentiles who have not the law do by nature what the law requires, they are a law to themselves, even though they do not have the law. They show that what the law requires is written on their hearts, while their conscience also bears witness and their conflicting thoughts accuse or perhaps excuse them... Rom 2:14-15.

There is no conflict between the natural law known by reason and the divine law known through revelation; both have their source in God. This would even include Brooks's evolution-based morality since God, if He is, would not have His Purposes stymied by evolution. Evolution would then be just another way God could reveal His Will to us. In other words, God created the kind of world in which we live, knowing that we would evolve the right sort of moral rules.

But we've got to dismiss the evolutionary basis for morality, not because it is exclusive of a Divinely Revealed morality, but simply because it is incapable of serving as a basis for morality in any case. Moral rules concern the relationship between the possible and the actual; they criticize what we are doing in terms of what we should be doing but are not. But if your moral rules are entirely based on "observing people as they live", then your rules will necessarily be nothing more than an affirmation of already-existing arrangements. And no one needs rules to tell them to keep on doing what they are already doing anyway.

Brooks quotes a professor who compares the moral sense to our sense of taste:

By the time humans came around, evolution had forged a pretty firm foundation for a moral sense. Jonathan Haidt of the University of Virginia argues that this moral sense is like our sense of taste. We have natural receptors that help us pick up sweetness and saltiness. In the same way, we have natural receptors that help us recognize fairness and cruelty. Just as a few universal tastes can grow into many different cuisines, a few moral senses can grow into many different moral cultures.

There is, however, no gainsaying taste. Some people like sweet foods, others like salty foods. Some people act fairly and others with cruelty. We haven't gotten to morality yet until we can say that it is better to act fairly than with cruelty, and that can only happen when we acknowledge that the possible (how people should act) has authority over the actual (how people in fact do act). I believe it was Kierkegaard who wrote that the poet is higher than the historian, because the poet criticizes the actual in terms of the possible. The evolutionist is an historian.

There was a time when slavery was a universally accepted human institution. At such a time, basing morality simply on how people live, we would have to conclude that slavery is a morally acceptable institution. There was a phrase popular back in the sixties that went "if it feels good, do it." The evolutionary morality version of this is, "if you are already doing it, keep on doing it." But who needs to be told that? No more than they need to be told to keep on doing what feels good.

Now the supporter of evolutionary morality might object this way: Our studies show that evolution has endowed children with an inborn sense of justice:

This illustrates, Bloom says, that people have a rudimentary sense of justice from a very early age. This doesn’t make people naturally good. If you give a 3-year-old two pieces of candy and ask him if he wants to share one of them, he will almost certainly say no. It’s not until age 7 or 8 that even half the children are willing to share. But it does mean that social norms fall upon prepared ground. We come equipped to learn fairness and other virtues.


Slavery, the supporter of evolutionary morality will say, clearly conflicts with this inborn sense of justice. Therefore slavery is wrong. It just took people a while to figure it out, but when they did, it was because they realized slavery conflicted with their evolutionarily developed sense of justice.


This doesn't work because if, for centuries, people had no problem approving of slavery despite the rudimentary sense of justice they were born with, then clearly slavery did not conflict with that sense of justice. The evolutionist is just reading his preferred moral results back into his rudimentary sense of justice. In other words, he's slipping the possible in by the back door. If our principle is to "observe people as they live", and if they live in happy accord with a slave-based society, then we have no possible basis on which to condemn that society. And historically, that is not how slavery ended. The slave trade ended in the 19th century because the British Navy decided that a world without slavery was preferable to a world with slavery (the actual one), and further decided to bring this preferable world about at the end of a cannon.


The only way to get to morality is through the notion of a final cause for man; in other words, to acknowledge that man has a rationally appreciable point to his existence that he is free to bring about (or not bring about) through his actions. The final cause serves for him as an ideal, as the possible which he has not yet brought into existence, but should. But the primary reason Darwin offered his theory of evolution was to banish final causes from the world; in doing so he banished any rational basis for ethics as well. This isn't to say that people can't still behave morally in the era of Darwin; it only means that any attempt to make sense of their behavior in Darwinian terms must fail.

Tuesday, July 20, 2010

Cana and Being a Spiritual Superhero

That's Tintoretto's Wedding at Cana that's now the banner of my blog. The miracle at Cana is perhaps my favorite that Christ performed. It's got a self-verifying quality to it that some of the other miracles lack. That Christ would miraculously cure the sick is something we might expect when God visits Earth; it's the kind of serious thing we imagine God would do, and therefore we can imagine someone imagining he did it. But who would imagine that the first miracle God would perform would be... to turn pots of water into wine so that a party could continue? And who would further imagine that God would perform this miracle because his mother asked him to? The miracle has a frivolous quality to it that is everlastingly shocking, as though the miracle really belongs in the Gospel According to John Blutarsky.


We find it difficult to accept one of the obvious implications of Cana: Christ expects us to have a good time. Maybe not with Animal House level excess, but the man who thinks he's too busy being holy to have an occasional beer with the lads is probably missing something important concerning what Christ is about (this post is inspired by a recent exchange I had in the comment box at the Maverick Philosopher blog on this subject. As usual, I was an utter failure at getting anyone to see my point.) Indeed, we tend to think that being seriously religious must involve being seriously miserable. So serious, in fact, that the necessary misery involved is reason enough to dismiss the claims of Christ altogether. Perhaps Christ performed the miracle at Cana, and spent so much time at parties, just to remove the excuse of those who avoid religion with the claim that they are not cut out to be spiritual superheroes.

But whereunto shall I esteem this generation to be like? It is like to children sitting in the market place. Who crying to their companions say: We have piped to you, and you have not danced: we have lamented, and you have not mourned. For John came neither eating nor drinking; and they say: He has a devil. The Son of man came eating and drinking, and they say: Behold a man that is a glutton and a wine drinker, a friend of publicans and sinners. And wisdom is justified by her children. Matt 11:16-19.

Like most other reasons for dismissing Christ, the refusal to entertain the idea that Christ doesn't expect, in fact doesn't even want, us to try to become spiritual superheroes comes down to the sin of pride. The implication is that Christ is satisfied with spiritual mediocrities. Who wants to be mediocre? But there it is. Peter, James and John were not spiritual superheroes - especially Peter - yet he was chosen to be the primus inter pares, the better to show forth the glory of God, who is content to work with mediocrities. Nor are the saints spiritual superheroes; they are just mediocre enough to give up doing it themselves and allow God to take over.

Friday, July 16, 2010

Thinking and Doing

The Maverick Philosopher has an aphorism here, which I will quote:

The thinker, because he is a thinker, cannot naively live his life of thought, but must be tormented by doubts regarding it.  The doer, because he is not a thinker, can naively live his life of action.

And which is the philosopher? The doer or the thinker? The philosopher is neither; the philosopher is the man who unites thought and deed; the one who "understands the abstract concretely." (Kierkegaard) At least he was once understood thus.

The ancient philosophers were not tormented by doubts about their lives, because they had not yet separated thought and deed in the modern fashion. For the ancient philosopher, thought was a deed, which was why the Socratic cross-examination was a fruitful method of philosophical investigation. To force a man into a contradiction was to force a change in his life, because men lived immediately in their thought. Today, we are not bothered by contradictions, since our thought bears no necessary connection to our lives. The intellectual, the man who manages to live serenely while advocating an array of bizarre and self-contradictory doctrines, is a peculiarly modern phenomenon.

Philosophy is held in such ill-repute today because, once the separation between thought and life is made, the penalty of contradiction disappears. The critics are then quite justified in dismissing philosophy as a gassy exchange of opinions from which nothing decisive can emerge. If philosophy is to be renewed, it will only be by thought and life being reunited.