Tuesday, November 9, 2010
(No) Miracle on 34th Street
Santa Claus doesn't make the cut for Edward Feser, as he explains in this post.
His post includes a type of argument I've always found perplexing, which we might call the argument from artificial distance:
I would urge them to stop. A child is completely dependent on his parents’ word for his knowledge of the world, of right and wrong, and of God and religious matters generally. He looks up to them as the closest thing he knows to an infallible authority. What must it do to a child’s spirit when he finds out that something his parents insisted was true – something not only important to him but integrally tied to his religion insofar as it is related to Christmas and his observance of it – was a lie? Especially if the parents repeated the lie over the course of several years, took pains to make it convincing (eating the cookies left out for “Santa” etc.), and (as some parents do) reassured the child of its truth after he first expressed doubts? How important, how comforting, it is for a child to be able to believe: Whatever other people do, Mom and Dad will never lie to me. How heartbreaking for him to find out he was wrong!
All of us, or virtually all of us, grew up believing in Santa Claus as small children. Yet Feser writes as though the experience of discovering the truth about Santa Claus is something about which we can only speculate - what must it do to a child's spirit? The artificial distance allows him to imply that all sorts of horrible things must happen, which aren't specifically spelled out, but are darkly hinted at. But if we remember that we ourselves believed in Santa Claus, and if we remember that time with fondness, and with gratitude to our parents for making the experience possible, then perhaps we will be forgiven for thinking that Feser's diabolical Santa Claus legend is more mythical than anything we believed as children.
There is a reason that the Santa Claus tradition has carried on and grown over the generations. It isn't because, despite being traumatized with it themselves as children, parents felt duty bound to inflict it on their children. It's because parents remember the whimsy and joy of their early years, of which Santa Claus was an integral part, and wish their children to share in a similar experience. Early childhood is a world of magic, innocence, whimsy and wonder; a time when cows jump over moons, boys climb beanstalks into the clouds, and fairy Godmothers turn pumpkins into carriages. The fairies even occasionally drop in on an ordinary child's life, as when they substitute a quarter for a tooth under your pillow.
In what sense is Santa Claus "false"? The practicalities involved with Santa Claus are so preposterous that any child, as soon as he approaches the age of reason, cannot but see the impossibilities. But then Santa Claus is not a creature of the age of reason; he is a creature of the age of imagination and wonder. When a child starts to leave the world of early childhood and reason begins to dawn in him, he will say goodbye to Santa Claus as an old friend whom he has outgrown; but one who will be remembered for communicating truths that can be learned in no other way. We love films like Miracle on 34th Street because they reintroduce us to our old friend, and to ourselves when we were innocent enough to believe in such things.
In one sense there certainly is a Santa Claus. Somebody is putting all those presents under a tree. It turns out that Santa Claus doesn't live at the North Pole, but in the room just down the hall. I don't remember being shocked or heartbroken when the truth about Santa Claus began to dawn on me; what I remember is it beginning to occur to me how unselfish my parents were. They had given me lavish gifts for years, but had gone out of their way to make sure they got no credit for it. Mom and Dad weren't lying; it was more like they were telling a long, wonderful practical joke, one they knew I would figure out eventually... and be forever grateful they played it.
Sunday, October 31, 2010
Feser On Kant
Excellent post by Edward Feser on the influence of Kant.
The following statements by Feser:
Idolatry is in fact the defining sin of modernity, and it is all the worse for being directed at man. The ancient pagan at least knew enough to worship something higher than himself
allow me to publish one of my favorite quotes from Chesterton:
Let Jones worship the sun or moon, anything rather than the Inner Light; let Jones worship cats or crocodiles, if he can find any in his street, but not the god within. Christianity came into the world firstly in order to assert with violence that a man had not only to look inwards, but to look outwards, to behold with astonishment and enthusiasm a divine company and a divine captain. The only fun of being a Christian was that a man was not left alone with the Inner Light, but definitely recognized an outer light, fair as the sun, clear as the moon, terrible as an army with banners. - GKC, Orthodoxy, Ch. V.
Kant is absolutely critical to understanding oneself in the modern world. We are all Kantians by default; it is in the air we breathe. Only by a conscious effort at self-education is it possible to see our Kantian assumptions for what they are and, possibly, overcome them.
Monday, September 20, 2010
The Approach to Saints
What's most interesting about Andrew Stuttaford's comments concerning St. Thomas More and the Pope is what they reveal about the difference between the atheist and Catholic approaches to the saints. Stuttaford seems most concerned to arrive at a measured evaluation of the person Thomas More. To this end, he calls on the British biographer Peter Ackroyd to provide balance to the Pope's comments. But he seems to have missed entirely what the Pope is talking about.
Benedict is not concerned to burnish the reputation of St. Thomas as a "fighter for freedom of conscience", as though the saint's importance can be found in laying the groundwork for the First Amendment. In fact, the worldly reputation of More is of no concern to Benedict, as it was of no concern to More. Indeed, More was not fighting for any worldly goal, freedom of conscience or otherwise. His great witness was to the fact that there are some things that transcend worldly goals, and are not negotiable in their terms.
The secularist cannot see that the appreciation of a saint is not about a careful weighing of the plusses and minuses in his life. It is about the window into the transcendent that the saint reveals; sometimes in the broad manner of her life, as in St. Therese of Lisieux; but sometimes also in a moment of dramatic crisis, as in the life of St. Thomas. Thomas was a man immersed in the cares, problems and compromises of his time; the secularist wants to judge him in terms of his decisions on these worldly matters. But the important thing about Thomas is that, despite his immersion in the world, he never became of the world, which the secularist is by definition. This is why St. Thomas is important to Benedict, and why he should be important to us.
Wednesday, September 1, 2010
Rieff on Crowds
David Rieff has an article on the relationship of crowds to morality over at Big Questions online.
The Gospels seem to contain an implicit commentary on the psychology/morality of crowds. The bad things that happen to Christ generally seem to happen in the context of crowds; the good things happen when Christ is dealing with people one on one. The archetypical case of the former, of course, is the mob urging Pilate to condemn Christ. Then there is Peter's rejection of Christ three times in the context of the implicit mob hanging around Christ's trial before the Sanhedrin. But there is also the rejection of Christ in Luke 4:16-30 and his frequent encounters with groups of Pharisees. On the other hand, when people respond to Christ, they generally do so as Kierkegaard's Individual, separated from the crowd, e.g. the woman caught in adultery in John 8, or the centurion.
I'm sure someone somewhere has done a thesis on this.
Sunday, August 15, 2010
Moral Reason vs. Moral Emotion
Here is an article in today's Boston Globe concerning "the surprising moral force of disgust." The article starts in this manner:
“Two things fill my mind with ever renewed wonder and awe the more often and deeper I dwell on them,” wrote Immanuel Kant, “the starry skies above me, and the moral law within me.”
Where does moral law come from? What lies behind our sense of right and wrong? For millennia, there have been two available answers. To the devoutly religious, morality is the word of God, handed down to holy men in groves or on mountaintops. To moral philosophers like Kant, it is a set of rules to be worked out by reason, chin on fist like Rodin’s thinker.
But what if neither is correct? What if our moral judgments are driven instead by more visceral human considerations? And what if one of those is not divine commandment or inductive reasoning, but simply whether a situation, in some small way, makes us feel like throwing up?
This is the argument that some behavioral scientists have begun to make: That a significant slice of morality can be explained by our innate feelings of disgust.
We have here the usual modern confusion between moral reason and moral behavior. Moral reason pertains to the distinction between right and wrong behavior, and the possible rational foundation of moral decision. Moral behavior pertains to the actual causes that lead people to do what they do. This was a distinction well known to ancient philosophers (and to modern philosophers like Kant as well). For Aristotle, what distinguishes the truly virtuous man is the pleasure he takes in doing good; the vicious man finds acting well to be painful. These pains and pleasures are the "efficient causes" that (in large part) explain everyday behavior.
The point of moral education, Aristotle thought, was to train the emotions to reflect moral truth. The student must learn to take pleasure in the truly good and to feel pain at the truly evil. In the terms of the Boston Globe article, the student must learn to feel disgust at the truly disgusting, and to not feel disgust at that which truly is not. Then the causes of his everyday moral decision-making will be rightly ordered and he will tend to act well.
Of course, this moral education is only possible if there is a knowledge of good and evil that is not itself simply a reflection of emotions. That there is such knowledge was acknowledged by Aristotle, and by Kant as well (although they disagree on its foundation). Modern science has done nothing to undermine the reasons for recognizing it. The evolutionary explanation offered by the Globe, for instance, doesn't even begin to do the job. Such an explanation may possibly explain how feelings of disgust arose; but even if it does, it hasn't begun to explain the origin of our notions of good and evil, or the virtuous and the base. Good and evil are much broader concepts than the disgusting; it follows that any association between the disgusting and evil could occur only in light of an already existing concept of evil.
I wonder how much time and money have been wasted by scientists catching up to where Aristotle was 2,500 years ago.
Tuesday, August 10, 2010
Just War and the Bomb
I recently engaged in a debate over at Edward Feser's site regarding the use of the atomic bombs in WWII. Dr. Feser's post also references this article by James Akin. In this post I would like to engage in a lengthier meditation on the use of atomic weapons to end WWII, expanding on some points I made in the comments section of Dr. Feser's blog.
The first point is to reconsider the distinction between "soldiers" and "civilians", and the "innocent", in a world of total war. Just war theory was created back when Augustine was trying to buck up the morale of Romans defending themselves against barbarians. The idea was that the Romans could justly engage in war to defend themselves, including killing barbarian invaders. But this justification didn't extend to non-combatants; say, the barbarian women and children. At that time, there was a pretty clear distinction between combatants and non-combatants. The guys with the swords were combatants; the women carrying children weren't. Moreover, the women carrying children were more a hindrance than a help to the invading barbarians. Armies back then lived off the land they invaded, and carrying along women and children only brought more mouths to feed. So the noncombatants back in those days added no combat value, and were truly innocent.
This state of affairs continued up until about the 18th century. Until then, the horizon of the average peasant was the end of his fields, and whether he would get a decent crop in that year. Wars between Kings didn't concern him overmuch, and he likely learned the news of war only through an army (his King's or the enemy's) trampling through his fields. These wars were a matter of intermittent battles, between which things were pretty much indistinguishable from peace. The soldiers were armed with swords, pikes and arrows, none of which required a supply train or massive support from the home front. In a war like this, soldier and noncombatant have clear meanings.
Starting sometime in the 19th century - our Civil War is a good marker - war began to change. It stopped being the occasional violent contest between armed minorities, and started becoming an enduring economic contest between nations. Soldiers were now armed with rifles and cannon that required extensive supply support in terms of ammunition and repair. A medieval army was good to go if everyone had a sword and some chain mail. Lee's Army of Northern Virginia needed everyone to carry a rifle, and also required wagons carrying millions of rounds of ammunition to be effective. It would need millions more after a few days of battle. Compared with a sword or a longbow, the Civil War rifle was an intricate piece of machinery that needed constant maintenance, was relatively easily destroyed, and was not so easy to replace. Furthermore, the soldiers and their support were transported on a network of ships and railroads, requiring maintenance and even expansion. The quartermaster and the logistics officer, heretofore minor players at best in war, now became decisively important individuals.
What also changed was the introduction of the mass conscription army. Wars were no longer fought between standing classes of professional soldiers (e.g. the Roman Army - the "combatants" for Augustine), but instead between huge numbers of young men forcibly conscripted from civilian life for the purpose. These young men were to be the delivery point for all that destructive energy manufactured by the nation. Thus the Civil War battle was largely a matter of rows of young conscripts facing each other, repeatedly executing a series of mechanical motions - just like a factory worker - load, aim, shoot, load, aim, shoot, load, aim, shoot - until one of the rows of young men was destroyed. Or both. It wasn't Augustine's kind of war anymore, and the distinction between "combatants" and "innocent noncombatants" was disappearing. For in what way was the factory worker innocent that the poor Georgia boy taking a minie ball in the face wasn't?
And this was something that William T. Sherman understood. His March to the Sea (see Terrible Innocence: General Sherman at War for a perceptive account of this, or Victor Davis Hanson's The Soul of Battle: From Ancient Times to the Present Day, How Three Great Liberators Vanquished Tyranny) evidenced a brilliant understanding of what modern war was about, and also revealed a moral clarity missing from a lot of the armchair generals questioning the wisdom of Harry Truman. Rather than continue the practice of standing up rows of young men to mow each other down, Sherman marched through the South and destroyed the material foundation that kept the Confederacy in existence. His march caused a lot of suffering, yes. But the cost in human life was paltry compared to what was going on in Virginia in the attritional war between Lee and Grant.
Sherman understood that Southern civilians, especially the plantation owners, were in no way "innocent noncombatants." They were the ones who started the war, kept the war going, and insisted that the young men stay in their trenches at Petersburg and suffer. Here is Hanson quoting some of Sherman's soldiers addressing Southern women:
You in wild enthusiasm urge young men to the battlefield, where men are being killed by the thousands, while you stay at home and sing "Bonnie Blue Flag"; but you set up a howl when you see the Yankees down here getting your chickens. Many of your young men have told us that they are tired of war and would quit, but you women would shame them and drive them back.
Sherman did not restrict himself to destroying purely military targets. In total war, everything in the nation is put in the service of the war. A cornfield is just as necessary to the war effort as a cannon factory. So the cornfield was burnt.
And we come to what is missing in the analysis of James Akin and Dr. Feser. Akin writes of "dogs that didn't bark", but the real missing dog is the missing dogface - the 17-year-old farmhand from Georgia, conscripted into the U.S. Army, and about to be sent into Japanese machine gun fire. This young Johnny Reb nowhere makes an appearance in the moral analysis of Akin/Feser. But he figured significantly in the mind of Harry Truman, and thank God for that.
The question facing Harry Truman was not the pristine academic one of killing or not killing the innocent. The tragedy of modern war is that the decision often boils down to which innocent lives will be taken. Will it be the Japanese civilians in Hiroshima, or the farm boys from Nebraska and Georgia who will be killed? Why is it a "more morally pure intention" to drag the kid off the farm, put a gun in his hands, and send him onto the exploding beaches of Kyushu, rather than nuke Japanese civilians? To raise this question is to answer it, which is why Johnny from Georgia is missing in action from the Akin/Feser argument. While Akin spends time making fine but pointless distinctions among Japanese targets (only those involving "war resources" are legitimate, when everything is a war resource in a modern total war), he has no time for a moral analysis of the American boys his thinking would inevitably send to their deaths.
Harry Truman was Commander-in-Chief of the U.S. Armed Forces. The unstated agreement between the C-in-C and the soldier is that young men (and now women) will put their lives in mortal danger under the President's orders; and that the President will not spend their lives unnecessarily. Truman would have violated his duty to every American serviceman if, having a way to end the war, he had instead ordered his soldiers into battle in the name of a morally pure intention. Unfortunately, Truman did not have Sherman's option of destroying property rather than lives. Instead he ordered the nuking of Japanese civilians for the sake of saving his men; men who, in the modern fashion, were really just civilians temporarily in uniform. Yes, Truman ordered the deaths of innocent people; in doing so, he avoided ordering the deaths of innocent young American men. There is no way to stay clean in modern war. Just how would the armchair Presidents have stayed morally pure at the end of the war? This is another dog that never barks in Akin's argument.
I'm glad I served under Presidents Reagan and Bush Sr., and not Presidents Akin and Feser. I wouldn't want to serve under any President who would send me into machine gun fire for the sake of his moral purity.
And if this puts me out of line with the Catechism... so be it. But I suspect Akin's interpretation of the CCC passages in question is not the only one.
Friday, July 30, 2010
Irony Proof
In his book The Rational Optimist: How Prosperity Evolves, Matt Ridley has this to say with reference to Plato:
The endless modern laments about how texting and emails are shortening attention span go back to Plato, who deplored writing as a destroyer of memorizing.
And Plato did it in writing. I wonder what that means?
Kierkegaard didn't think there was any point in trying to directly argue people out of the modern philosophical point of view. Modern philosophy is irony-free because it is not subjective; to become subjective means to understand the meaning of irony. But whatever is said ironically can also be taken in its direct sense; we can, if we choose, interpret Plato as simply meaning directly what he wrote, as Ridley does. There is no way to prove, in any way acceptable to modern philosophical demands, that there is any more to Plato than this.
But, thank God, there is...
Wednesday, July 28, 2010
Massachusetts About to Do It Again
With all the universities in this state (sorry, "Commonwealth"), it's amazing how many folks can't fathom simple logic.
Massachusetts is about to enact a law requiring it to cast its Presidential Electoral Votes for the winner of the national popular vote. Now there are four possibilities concerning the popular vote:
1) The Massachusetts popular vote goes for the Democrat, and the national vote goes for the Democrat.
2) The Massachusetts popular vote goes for the Republican, and the national vote goes for the Republican.
3) The Massachusetts popular vote goes for the Democrat, and the national vote goes for the Republican.
4) The Massachusetts popular vote goes for the Republican, and the national vote goes for the Democrat.
The proposed law makes no difference with respect to possibilities 1 and 2. Possibility 4 is a practical impossibility. So the only practical occasion for the law to take effect is possibility 3. In other words, the only effect this law will ever have is to elect a Republican, in the peculiar case that the Republican wins the popular vote but would lose the electoral vote. Massachusetts to the rescue! As Jeff Jacoby has pointed out, this law would have forced Massachusetts to vote for Richard Nixon rather than George McGovern in 1972.
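For anyone who finds the case analysis too quick, here is a minimal sketch in Python (my own illustration; the scenario labels are assumptions, not anything from the bill or from Jacoby) that enumerates the four possibilities and checks when the proposed rule would actually change how the Commonwealth casts its electors:

```python
# Hypothetical sketch: enumerate the four popular-vote scenarios and see
# when the proposed law changes Massachusetts' electoral votes.

scenarios = [
    ("D", "D"),  # 1) MA and nation both go Democrat
    ("R", "R"),  # 2) MA and nation both go Republican
    ("D", "R"),  # 3) MA goes Democrat, nation goes Republican
    ("R", "D"),  # 4) MA goes Republican, nation goes Democrat (practically impossible)
]

for n, (ma_vote, national_vote) in enumerate(scenarios, start=1):
    current_rule = ma_vote         # today: electors follow the state's own vote
    proposed_rule = national_vote  # proposed: electors follow the national vote
    effect = "law changes outcome" if current_rule != proposed_rule else "no effect"
    print(f"{n}) MA={ma_vote}, nation={national_vote}: "
          f"electors {current_rule} -> {proposed_rule} ({effect})")
```

Running it confirms that the rule bites only in scenarios 3 and 4; since 4 is a practical impossibility, scenario 3 - Massachusetts casting its electors for the Republican - is the law's only realistic effect.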
When Scott Brown was elected to the Senate after Massachusetts changed its laws in 2004 to prevent Mitt Romney from appointing a Republican to fill the vacant seat of "President" John Kerry, I thought there might be a God. If Massachusetts manages to put Romney (or even more delicious, Sarah Palin) into the White House in 2012, despite losing the MA popular vote, I'll know there is a God.
Monday, July 26, 2010
On Philosophy at Second Hand, with Specific Reference to Kant
This is the first post in what I hope is at least a two-part series, discussing the benefits of reading the great philosophers directly rather than at second-hand.
Arthur Schopenhauer, in the Preface to the Second Edition of The World As Will and Representation, In Two Volumes: Vol. I, has this to say about reading the great philosophers:
In consequence of his originality, it is true of him in the highest degree, as indeed of all genuine philosophers, that only from their own works does one come to know them, not from the accounts of others. For the thoughts of those extraordinary minds cannot stand filtration through an ordinary head.
The reason for this is not necessarily that the philosopher's thought is too sophisticated for the ordinary head to conceive. Just the opposite is more likely the problem: it is precisely in his simplicity that the great philosopher is most likely to be missed. For philosophy is about "first ideas", the bedrock of our rational approach to the world. What distinguishes the great philosopher is his ability to reveal and analyze these first ideas. But just because they are first, they are very easy to miss; we naturally, even habitually, look past them.
And we do so for very good reasons. We don't need to think about first ideas to get on with the ordinary business of life. We take them for granted and deal only with the secondary questions that confront us: whether I can afford a new house, what school my kids should attend, how to fix a car, and so on. Our lives would grind to a halt if we constantly had "first questions" in mind - which is the basis for the perennial indictment of the philosopher as "useless", and why Aristotle described philosophy as the most noble but least necessary of endeavors. We operate more efficiently the more we can take these basic questions for granted, and so we develop habits of mind that put, and keep, these ideas in the realm of assumed background and, perhaps, even actively discourage the mind from uncovering them.
The great philosopher, then, is doing something that, in a sense, does not come naturally and even "goes against the grain." He uncovers the background that the mind wishes would stay there so it can get on with the "real" business of thought. So when we read a philosopher, the drift of our mind is to find a place for his thought within the categories with which our mind is already comfortable (I discuss this phenomenon in relation to materialists and St. Thomas in this post.) Of course, it may be that the philosopher's primary goal is to challenge those very categories.
So when we read a great philosopher at second hand, there is a danger that what we will read is the philosopher's thought as recast into the comfortable categories of the interpreter. This happens with Kant when he is introduced in the following common way: We human beings have (at least) five senses. We know and encounter the world through them. But we see that other animals have different ways of appreciating the world through their senses, and even have different senses altogether. Bats, for example, detect objects through echolocation. Some species of fish (sharks, I believe) sense the electromagnetic field of their prey. What must the world look like to a shark? Can we even conceive of what the experience of a shark is like? (See the famous paper of Thomas Nagel on this topic, although he focuses on bats and not sharks.) We come to see that the world is not given to us directly in its own terms, but comes to us recast in the terms dictated by our cognitive apparatus. Thus arises the Kantian distinction between phenomena and noumena, or "things as they appear to us" and "things as they are in themselves."
Now this is very close to what Kant is getting at (in my opinion, of course - I am well aware that my mind is subject to the same propensity to think in familiar channels as everyone else, so if anyone really wants exposure to Kant, he should be read directly rather than through me. Put your irony back in its holster). But "close" can be disastrous when interpreting philosophers, precisely because "close" may miss just the jump out of familiar channels that makes the philosopher significant. Absent this jump, everything that follows takes on a different meaning and you will end up in a very different place than the philosopher intended; just as Routes 1 and 93 start in very close parallel out of Boston, but if you travel on Rt. 1 rather than the intended 93, you will end up very far from where you hoped.
The introduction to Kant given above is vulnerable to a straightforward objection. If we know things only as they appear to us, rather than things as they are in themselves, then the question of what it is like to be a shark or a bat changes meaning; in fact it loses meaning. "Shark" and "bat" are just constructions our cognitive apparatus puts on experience; asking "what it is like to be a shark" is then just asking what it is like to be this particular kind of cognitive construction. The object of the question has changed; it is no longer some thing-in-itself outside ourselves (about which we can know nothing at all on the Kantian view), and instead has become a subjective question concerning the nature of inner experience. And it makes no sense to ask "What is it like to be a cognitive construction", because cognitive constructions have no inner lives; they are aspects of our inner lives. It is like asking what it is like to be the color red or to be a dream.
What, then, becomes of the initial case for the plausibility of Kantian philosophy? That case only had plausibility because we assumed, "naively" we later discover, that when we think about "bats" and "sharks", we are in contact with real things out there about which it makes sense for us to discuss their inner lives. But this is only possible if we can know something about the thing-in-itself, verboten knowledge according to Kant. So the Kantian philosophy destroys the ground of its own plausibility.
Someone to whom this objection occurs, and who is familiar with Kant only through the common introduction given above, may have nothing further to do with Kant after concluding that it is Kant, and not himself, who is being naive. And this would be a tragedy, because while there are good reasons to reject Kant's philosophy, this isn't one of them, and, even philosophers who are wrong have things to teach us, especially great philosophers like Kant. But a man is unlikely to give Kant further time if he has concluded that he was so obtuse as to not anticipate the objection given above. (In fact, it's a good clue that a critic has not really understood a great philosopher if he thinks he has a devastating, and obvious, refutation of the philosopher's basic idea. To borrow from Hume's argument against miracles: Is it more likely that the critic hasn't understood the philosopher, or that all the bright minds who have studied the philosopher over many years simply missed the obvious retort?)
Kant is not subject to the objection because he does not base the plausibility of his philosophy on meditations concerning the inner lives of other animals. He bases it on the only possible thing he can: The data of our own consciousness. In the Transcendental Aesthetic, Kant proposes to the reader that space and time are not things we empirically discover; they are in fact forms of empirical discovery. We do not first experience the tree over there and myself over here, and then discover space as the thing separating us. No, the very distinction that makes possible the experience of the tree as something distinct from myself is the distinction of space. Space is prior to the experience of trees in the sense of being constitutive of it; and the only candidate for the agent of constitution is our own consciousness. So the experience of space is really an experience of the demands of our own cognitive apparatus on reality; and everything experienced in space is an experience of whatever is out there only insofar as it has been reconstituted in terms of space through our consciousness. A similar argument is adduced for time.
Whether or not the reader finds the argument compelling (and I reiterate that this is my interpretation of Kant, and Kant was a much greater philosopher than I am, so it is better to read him directly for the argument), the point is that Kant has not stolen any bases by implicitly referring to a knowledge of things-in-themselves that he will later claim is impossible. This may seem an obscure point, but it is what distinguishes the genuine Kantian philosophy from the bastardized, self-contradictory, pseudo-Kantian philosophy that has become part of our "default" intellectual furniture. Repeating a point I have perhaps made in too many posts, much of the contemporary philosophy of mind, I believe, takes a pseudo-Kantianism for granted. Any time you hear someone talking about how the brain constructs experience or "models" the world, you are listening to someone on the Kant Express; but they very likely have not taken Kant seriously enough.
Returning to my earlier point that our minds tend to want to run in familiar grooves, our minds have an almost overwhelming impulse to talk about things as they really are. (Of course, I think we have this impulse because we really can talk about how things really are, but that's another story.) Kant recognized this facet of our nature in saying that metaphysics, while an illusion, is an inevitable illusion. The pseudo-Kantians of today don't have Kant's discipline; they want to talk about how the mind (or rather, "the brain") is essentially a modeler of the world or a constructor of experience from sensation, and innocently suppose that they are talking directly about a real-world, thing-in-itself object called "the brain" when they do so. If we have trusted Kant enough to read him directly, then we can see the self-defeating nature of the project; it is the same self-defeating feature found in the typical introduction to Kant.
The penalty for being a pseudo-Kantian is the same as the penalty for all philosophical confusion: A lack of self-understanding. This lack of self-understanding is why so much of the contemporary philosophy of mind has the character of a circular firing squad. ("The most striking feature is how much of mainstream philosophy of mind of the past fifty years seems obviously false." John Searle, The Rediscovery of the Mind, p. 3). It seems so obviously false because it is: Everyone is trying to square a circle. They are trying to show how the brain, through purely material operations, is the causal foundation of consciousness and thought. But since "the brain" is itself a construction of consciousness, the project is really about explaining consciousness in terms of itself, or rather consciousness in the terms of whatever a particular philosopher decides to take seriously about consciousness. In any case, it is circular, and no one seems like they will run out of ammunition any time soon.
Coming soon, I hope: On Philosophy at Second Hand, with Specific Reference to Plato.
Saturday, July 24, 2010
David Brooks on the Moral Sense
David Brooks of the New York Times has a piece here on the origin of what he calls the "moral sense." The article starts this way:
Where does our sense of right and wrong come from? Most people think it is a gift from God, who revealed His laws and elevates us with His love. A smaller number think that we figure the rules out for ourselves, using our capacity to reason and choosing a philosophical system to live by.
Moral naturalists, on the other hand, believe that we have moral sentiments that have emerged from a long history of relationships. To learn about morality, you don’t rely upon revelation or metaphysics; you observe people as they live.
Brooks goes on to describe the naturalist case for the evolutionary development of the "moral sense." Right off the bat, however, Brooks has posed what I can only call a false alternative, a phrase I now have a visceral reaction against since Barack Obama so often abuses it. ("There are those who pose the false alternative between spending trillions of dollars you don't have and fiscal sanity...") Anyway, God gives us the "rules" in a number of ways. One way is through direct revelation, another way is through the natural law:
When Gentiles who have not the law do by nature what the law requires, they are a law to themselves, even though they do not have the law. They show that what the law requires is written on their hearts, while their conscience also bears witness and their conflicting thoughts accuse or perhaps excuse them... - Rom 2:14-15.
There is no conflict between the natural law known by reason and the divine law known through revelation; both have their source in God. This would even include Brooks's evolution-based morality since God, if He is, would not have His Purposes stymied by evolution. Evolution would then be just another way God could reveal His Will to us. In other words, God created the kind of world in which we live, knowing that we would evolve the right sort of moral rules.
But we've got to dismiss the evolutionary basis for morality, not because it is exclusive of a Divinely Revealed morality, but simply because it is incapable of serving as a basis for morality in any case. Moral rules concern the relationship between the possible and the actual; they criticize what we are doing in terms of what we should be doing but are not. But if your moral rules are entirely based on "observing people as they live", then your rules will necessarily be nothing more than an affirmation of already-existing arrangements. And no one needs rules to tell them to keep on doing what they are already doing anyway.
Brooks quotes a professor who compares the moral sense to our sense of taste:
By the time humans came around, evolution had forged a pretty firm foundation for a moral sense. Jonathan Haidt of the University of Virginia argues that this moral sense is like our sense of taste. We have natural receptors that help us pick up sweetness and saltiness. In the same way, we have natural receptors that help us recognize fairness and cruelty. Just as a few universal tastes can grow into many different cuisines, a few moral senses can grow into many different moral cultures.
There is, however, no gainsaying taste. Some people like sweet foods, others like salty foods. Some people act fairly and others with cruelty. We haven't gotten to morality until we can say that it is better to act fairly than with cruelty, and that can only happen when we acknowledge that the possible (how people should act) has authority over the actual (how people in fact do act). I believe it was Kierkegaard who wrote that the poet is higher than the historian, because the poet criticizes the actual in terms of the possible. The evolutionist is an historian.
There was a time when slavery was a universally accepted human institution. At such a time, basing morality simply on how people live, we would have to conclude that slavery is a morally acceptable institution. There was a phrase popular back in the sixties that went "if it feels good, do it." The evolutionary morality version of this is "if you are already doing it, keep on doing it." But who needs to be told that? No more than they need to be told to keep on doing what feels good.
Now the supporter of evolutionary morality might object this way: Our studies show that evolution has endowed children with an inborn sense of justice:
This illustrates, Bloom says, that people have a rudimentary sense of justice from a very early age. This doesn’t make people naturally good. If you give a 3-year-old two pieces of candy and ask him if he wants to share one of them, he will almost certainly say no. It’s not until age 7 or 8 that even half the children are willing to share. But it does mean that social norms fall upon prepared ground. We come equipped to learn fairness and other virtues.
Slavery, the supporter of evolutionary morality will say, clearly conflicts with this inborn sense of justice. Therefore slavery is wrong. It just took people a while to figure it out, but when they did, it was because they realized slavery conflicted with their evolutionarily developed sense of justice.
This doesn't work because if, for centuries, people had no problem approving of slavery despite the rudimentary sense of justice they were born with, then clearly slavery did not conflict with this sense of justice. The evolutionist is just reading his preferred moral results back into this rudimentary sense of justice. In other words, he's slipping the possible in by the back door. If our principle is to "observe people as they live", and if they live in happy accord with a slave-based society, then we have no possible basis on which to condemn that society. And historically, that is not how slavery ended. The slave trade ended in the 19th century because the British Navy decided that a world without slavery was preferable to a world with slavery (the actual one), and further decided to bring this preferable world about at the end of a cannon.
The only way to get to morality is through the notion of a final cause for man; in other words, to acknowledge that man has a rationally appreciable point to his existence that he is free to bring about (or not bring about) through his actions. The final cause serves for him as an ideal, as the possible which he has not yet brought into existence, but should. But the primary reason Darwin offered his theory of evolution was to banish final causes from the world; in doing so he banished any rational basis for ethics as well. This isn't to say that people can't still behave morally in the era of Darwin; it only means that any attempt to make sense of their behavior in Darwinian terms must fail.
Tuesday, July 20, 2010
Cana and Being a Spiritual Superhero
That's Tintoretto's Wedding at Cana that's now the banner of my blog. The miracle at Cana is perhaps my favorite that Christ performed. It's got a self-verifying quality to it that some of the other miracles lack. That Christ would miraculously cure the sick is something we might expect when God visits Earth; it's the kind of serious thing we imagine God would do, and therefore we can imagine someone imagining he did it. But who would imagine that the first miracle God would perform would be... to refill pots of wine so that a party could continue? And who would further imagine that God would perform this miracle because his mother asked him to? The miracle has a frivolous quality to it that is everlastingly shocking, as though the miracle really belongs in the Gospel According to John Blutarsky.
We find it difficult to accept one of the obvious implications of Cana: Christ expects us to have a good time. Maybe not with Animal House level excess, but the man who thinks he's too busy being holy to have an occasional beer with the lads is probably missing something important concerning what Christ is about (this post is inspired by a recent exchange I had in the comment box at the Maverick Philosopher blog on this subject. As usual, I was an utter failure at getting anyone to see my point.) Indeed, we tend to think that being seriously religious must involve being seriously miserable. So serious, in fact, that the necessary misery involved is reason enough to dismiss the claims of Christ altogether. Perhaps Christ performed the miracle at Cana, and spent so much time at parties, just to remove the excuse of those who avoid religion with the claim that they are not cut out to be spiritual superheroes.
But whereunto shall I esteem this generation to be like? It is like to children sitting in the market place. Who crying to their companions say: We have piped to you, and you have not danced: we have lamented, and you have not mourned. For John came neither eating nor drinking; and they say: He has a devil. The Son of man came eating and drinking, and they say: Behold a man that is a glutton and a wine drinker, a friend of publicans and sinners. And wisdom is justified by her children. Matt 11:16-19.
Like most other reasons for dismissing Christ, the refusal to entertain the idea that Christ doesn't expect, in fact doesn't even want, us to try to become spiritual superheroes comes down to the sin of pride. The implication is that Christ is satisfied with spiritual mediocrities. Who wants to be mediocre? But there it is. Peter, James and John were not spiritual superheroes - especially Peter - yet he was chosen to be the primus inter pares, the better to show forth the glory of God, who is content to work with mediocrities. Nor are the saints spiritual superheroes; they are just mediocre enough to give up doing it themselves and allow God to take over.
Friday, July 16, 2010
Thinking and Doing
The Maverick Philosopher has an aphorism here that I will quote:
The thinker, because he is a thinker, cannot naively live his life of thought, but must be tormented by doubts regarding it. The doer, because he is not a thinker, can naively live his life of action.
And which is the philosopher? The doer or the thinker? The philosopher is neither; the philosopher is the man who unites thought and deed; the one who "understands the abstract concretely." (Kierkegaard) At least he was once understood thus.
The ancient philosophers were not tormented by doubts about their lives, because they had not yet separated thought and deed in the modern fashion. For the ancient philosopher, thought was a deed, which was why the Socratic cross-examination was a fruitful method of philosophical investigation. To force a man into a contradiction was to force a change in his life, because men lived immediately in their thought. Today, we are not bothered by contradictions, since our thought bears no necessary connection to our lives. The intellectual, the man who manages to live serenely while advocating an array of bizarre and self-contradictory doctrines, is a peculiarly modern phenomenon.
Philosophy is held in such ill-repute today because, once the separation between thought and life is made, the penalty of contradiction disappears. The critics are then quite justified in dismissing philosophy as a gassy exchange of opinions from which nothing decisive can emerge. If philosophy is to be renewed, it will only be by thought and life being reunited.
Saturday, July 10, 2010
Douthat on Shrek
Ross Douthat has a review of the latest Shrek film in the June 21 National Review that reassures me that I'm not just a lone, crazy voice in the wilderness when it comes to this series, which I've hated from the get-go. He nails it exactly right:
What Sex and the City did for the love story, Shrek has done for the fairy tale: It's taken a classic genre and purged it of any trace of innocence, substituting raunch, cynicism, and a self-congratulatory knowingness instead, and then tying up the jaded narrative with a happily-ever-after bow.
Our culture robs children of their innocence as early as it can; and it is only in that innocence that the real meaning of fairy tales can be perceived. I believe this is one of the primary truths we learn from G.K. Chesterton. When we are older, we cannot but assume a critical distance from what we read. The child is still in the process of forming his self; what he reads (or is read to him) becomes a part of him in a way it never can again. For Chesterton, every truth worth knowing he learned in the nursery.
It is bad enough our children are exposed to things that destroy their innocence early on, and make the appreciation of fairy tales more difficult. Now, in the Shrek series, the fairy tale tradition itself is subverted. This constitutes a kind of inoculation against the power of fairy tales. Douthat is as depressed about this as I am:
I have a horrible feeling that the Shrek franchise offers millions of kids their first exposure - and worse, their last - to the Brothers Grimm and Charles Perrault.
The result is the sort of impertinent, self-satisfied young adults whom I encounter among my children's peers. They are not exactly insolent; but they are already jaded at age 17 and unselfconscious in their conviction that the world offers nothing before which they should bow. The notion that there might be something out there that might be more grand, significant and awesome than themselves is something that can't occur to them; they've been inoculated against it as they might have been inoculated against smallpox. That such youths are somewhat unpleasant is not the major point. It is that they have been robbed of the virtue of humility that is the prerequisite for eros, the deep and mysterious longing in the soul for it knows not what. To draw on Chesterton one more time, we can perceive the gigantic only to the extent that we are small. This is one of the primary lessons of fairy tales, a lesson our children can no longer learn... at least as long as Shrek and its ilk are available to them.
Labels: Chesterton, children's stories, Fairy Tales, film
Friday, July 9, 2010
Making Money from Tuesday's Child
Here's a way to make money from the Tuesday's Child problem, assuming you can convince people that John Derbyshire is right and the probability in question is 13/27 (or 0.48, nearly even odds).
"I have two children. At least one is a boy. He was born on Wednesday. Two dollars will get you three if I have two boys."
First, we need to convince seven of our neighbors of Derb's analysis, to wit, that the probability in question is 13/27. Then we need to find a large number of fathers with two children, at least one of whom is a boy. It doesn't matter on what days they were born. Have them bring the birth certificates. I hope you will agree with me that the probability that any father in this group has two boys is 1/3 (or 0.333, not 0.48).
Then we segregate the fathers into seven groups according to what day of the week the boy was born on. If the father has two boys born on different days of the week, have him flip a coin and join son #1's group if heads, son #2's group if tails.
Have the group with a boy born on Tuesday line up, and then repeatedly knock on neighbor #1's door and say:
"I have two children. At least one is a boy. He was born on Tuesday. Two dollars will get you three if I have two boys."
Hopefully neighbor #1 will take the bet, having been convinced that he's getting good odds. He's getting a 3:2 payout on a bet that he thinks is approximately 1:1.
Have the group with a boy born on Wednesday line up, and then repeatedly knock on neighbor #2's door and say:
"I have two children. At least one is a boy. He was born on Wednesday. Two dollars will get you three if I have two boys."
Do this with the groups for the remaining days of the week and remaining neighbors. As far as we are concerned, we are paying out at 3:2 on a bet whose true odds are 2:1! (A fair payout would be four dollars for every two dollars bet. By paying three, we are paying out as though the probability were 2/5, or 0.40, when it is really 1/3, or 0.33 - a margin of about 7 percentage points in our favor, better than the house advantage in a typical casino game.)
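If you would rather check the arithmetic than knock on doors, here is a minimal sketch in Java (hypothetical standalone code, not the applet described below; the class name and trial count are my own). It simulates two-child families with at least one boy and settles the neighbor's bet at 3:2. Grouping by day of the week is left out, since on the analysis above it changes nothing:
import java.util.Random;

// Simulate the door-to-door bet: the neighbor wins $3 when the father
// has two boys, and loses his $2 stake otherwise.
public class TuesdaysChildBet {
    public static void main(String[] args) {
        Random rng = new Random();
        int trials = 1_000_000;
        long bets = 0, neighborNet = 0; // neighbor's winnings in dollars
        for (int i = 0; i < trials; i++) {
            boolean firstIsBoy = rng.nextBoolean();
            boolean secondIsBoy = rng.nextBoolean();
            if (!firstIsBoy && !secondIsBoy) continue; // need at least one boy
            bets++;
            if (firstIsBoy && secondIsBoy) neighborNet += 3; // neighbor wins $3
            else neighborNet -= 2;                           // neighbor loses $2
        }
        System.out.printf("bets: %d, neighbor net: $%d (%.3f per bet)%n",
                bets, neighborNet, (double) neighborNet / bets);
    }
}
The neighbor loses about a third of a dollar per two-dollar bet, which is exactly what a 3:2 payout on a 1-in-3 chance guarantees.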
As far as any single neighbor is concerned, he's simply seen a repetition of the Tuesday Child problem. Everybody who knocks on his door says the same day of the week. Once you've got all their money, they might get suspicious, but you've got the birth certificates to back it up... and the (bogus) mathematical analysis. Some people are just lucky, you will tell them.
The Tuesday's Child Game
In this post I discussed the so-called Tuesday's Child problem in probability theory. The theory (with which I disagree) is that the probability that the speaker has two boys is 13/27. I think the probability is actually 1/3. I've made a crude Java applet, the Tuesday's Child Game, to demonstrate the point.
Since the theory claims the odds are approximately even that the speaker has two boys, if I give you anything better than even odds, you are probabilistically ahead of the game. The applet pays off at 3:2 so, according to the theory, you should win a lot of money.
If you are still interested in this problem, and can get $100 before going broke, let me know.
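For a feel of how that challenge is likely to go, here is a rough sketch of a session (hypothetical code, not the applet itself; the $20 starting bankroll is my own assumption):
import java.util.Random;

// Bet $2 at a 3:2 payoff on "two boys, given at least one boy" until
// the bankroll reaches $100 or runs out.
public class TuesdaysChildGame {
    public static void main(String[] args) {
        Random rng = new Random();
        int sessions = 100_000, reached = 0;
        for (int s = 0; s < sessions; s++) {
            int bankroll = 20; // assumed starting stake
            while (bankroll >= 2 && bankroll < 100) {
                boolean b1 = rng.nextBoolean(), b2 = rng.nextBoolean();
                if (!b1 && !b2) continue; // the speaker always has at least one boy
                if (b1 && b2) bankroll += 3; // two boys: win $3
                else bankroll -= 2;          // otherwise: lose the $2 stake
            }
            if (bankroll >= 100) reached++;
        }
        System.out.printf("reached $100 in %.4f%% of sessions%n",
                100.0 * reached / sessions);
    }
}
At a third of a dollar of expected loss per bet, almost no session climbs from $20 to $100; ruin comes first nearly every time.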
Wednesday, July 7, 2010
Tuesday's Child
Here is a post at the Corner concerning a probability question. It leads to a number of other posts and to Derbyshire's analysis here. If you follow the chain of posts back, it leads to other sites and considerable debate over the interpretation of this problem.
The analysis is tricky only if the problem is interpreted in something other than its straightforward, plain meaning. The statement of the problem is:
"I have two children. One is a boy born on a Tuesday. What is the probability I have two boys?"
Now the problems all come from that "born on a Tuesday" clause in the middle sentence. Take that out, and everyone agrees on the answer:
"I have two children. One is a boy. What is the probability I have two boys?"
This is the classic coin-flip enumeration problem. Having children is like flipping a coin, with heads = boys and tails = girls. The possible outcomes for two consecutive coin flips are:
Heads - Heads
Heads - Tails
Tails - Heads
Tails - Tails
or, in boy-girl terms:
Boy - Boy
Boy - Girl
Girl - Boy
Girl - Girl
Since we know that at least one of the children is a boy, the last case is ruled out and the probability that the speaker has two boys is one in three.
Returning to the original problem, the analysts all seem to think the phrase "born on a Tuesday" is very significant, but they can't agree on its significance. I don't think it adds anything to the problem at all. In the straightforward, obvious interpretation, it is only an after-the-fact statement about the day of birth. It's like saying the boy was 8 lbs. at birth, or was born with blue eyes. It doesn't say anything about the prior possibilities of weight or birthdays; it is only a statement about what in fact occurred. It doesn't say that one or both boys couldn't have been born on a Wednesday. If that had happened, the problem would simply say:
"I have two children. One is a boy born on a Wednesday. What is the probability I have two boys?"
The answer to this question is the same as the answer to the Tuesday question and to the simpler question that does not refer to a day at all: 1/3.
Derbyshire calculates the probability as 13/27. He can only get there by interpreting the "Tuesday clause" as affecting the prior probabilities of birth. In other words, the case of two boys born on Wednesday need not be included in our enumeration of cases because it wasn't possible for both boys to be born on Wednesday, since we know one was born on Tuesday! I hope everyone can see the post facto fallacy in this reasoning. Anyone who would buy this line of reasoning is playing the role of the father in the following comic scenario:
One day you get a letter from the town correcting your son's birth certificate. He was born two seconds after midnight, so he was actually born on a Wednesday rather than a Tuesday. With a heavy heart, you sit your son down and tell him the unfortunate news: "I'm sorry to tell you this, son, but I'm not your real father. My son could only have been born on a Tuesday, and I've just learned you were born on a Wednesday."
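For readers who want to check the two interpretations side by side, here is a minimal sketch (hypothetical code, not Derbyshire's own calculation). Reading A treats "born on a Tuesday" as an after-the-fact report: the father simply names the day one of his boys happened to be born. Reading B treats it as a filter: keep only fathers with at least one boy born on a Tuesday. The plain reading defended above gives 1/3; only the filter reading produces 13/27:
import java.util.Random;

// Day 0 stands for Tuesday. Compare the conditional probability of
// two boys under the two readings of the puzzle.
public class TwoReadings {
    public static void main(String[] args) {
        Random rng = new Random();
        long aKept = 0, aTwoBoys = 0, bKept = 0, bTwoBoys = 0;
        for (int i = 0; i < 2_000_000; i++) {
            boolean boy1 = rng.nextBoolean(), boy2 = rng.nextBoolean();
            int day1 = rng.nextInt(7), day2 = rng.nextInt(7);
            if (!boy1 && !boy2) continue; // at least one boy, as stated
            // Reading A: the day is an after-the-fact report about one boy.
            int reported = (boy1 && boy2) ? (rng.nextBoolean() ? day1 : day2)
                                          : (boy1 ? day1 : day2);
            if (reported == 0) {
                aKept++;
                if (boy1 && boy2) aTwoBoys++;
            }
            // Reading B: "born on a Tuesday" used as a selection filter.
            if ((boy1 && day1 == 0) || (boy2 && day2 == 0)) {
                bKept++;
                if (boy1 && boy2) bTwoBoys++;
            }
        }
        System.out.printf("Reading A (after-the-fact report): %.4f (1/3 is 0.3333)%n",
                (double) aTwoBoys / aKept);
        System.out.printf("Reading B (Tuesday filter):        %.4f (13/27 is 0.4815)%n",
                (double) bTwoBoys / bKept);
    }
}
In other words, 13/27 is the answer to a different question - a question about how families were selected - not to the question the sentence plainly asks.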
By the way, this problem is not comparable to the Monty Hall problem. The Monty Hall problem is a genuinely counter-intuitive probability result. The Tuesday's Child problem is more like a riddle or joke that depends on deceptive or ambiguous language.
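By contrast, a few lines suffice to verify the Monty Hall result (again a hypothetical sketch): since the host always opens a goat door, switching wins exactly when the contestant's first pick was wrong, which happens two times in three:
import java.util.Random;

// Simulate the Monty Hall game: stay wins when the first pick hit the
// car; switching wins in every other case.
public class MontyHall {
    public static void main(String[] args) {
        Random rng = new Random();
        int trials = 1_000_000, stayWins = 0, switchWins = 0;
        for (int i = 0; i < trials; i++) {
            int car = rng.nextInt(3);  // door hiding the car
            int pick = rng.nextInt(3); // contestant's first choice
            if (pick == car) stayWins++;
            else switchWins++; // after the host reveals a goat, switching wins
        }
        System.out.printf("stay: %.3f, switch: %.3f%n",
                (double) stayWins / trials, (double) switchWins / trials);
    }
}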
Is Free Will a Contingent Possibility?
With respect to this post at the Secular Right, Kierkegaard cannot be bettered:
Freedom is never possible. It is either actual or it is not at all.
Put another way... anyone who wonders if he is free (or even could possibly wonder if he is free) is already free.
Tuesday, July 6, 2010
Toy Story and Religion
I just saw the wonderful Toy Story 3 with my wife and daughter, and it put me in mind of an argument I read years ago at the Internet Infidels. I haven't been able to find the article (it was in the "Agora", which they no longer seem to have), but the gist of it was straightforward. Toy Story, the argument goes, is a parable of atheism. It is the story of Buzz Lightyear, a man living in a false world of imaginary Space Rangers and Evil Emperors, finally brought back to reality when his illusions are punctured. Buzz hangs on to his illusions as long as he can but, finally summoning the courage to find out the truth one way or the other, puts them to empirical test. One of his "special powers" is supposed to be an ability to fly, so he jumps off a second floor bannister in an attempt to prove it. Naturally, he falls to the floor, and is broken both physically and spiritually. But the story has a happy ending as Buzz is not only physically repaired, but learns to accept the non-dramatic and mundane truth that he is but a child's toy. Would that the Buzz Lightyears attending Mass every weekend could follow his example.
The argument is a good example of how atheist arguments can be perfectly sound but miss the target. The Christian can accept the argument in its entirety, and even applaud, with the atheist, Buzz's breakthrough to a true understanding of his nature. For it is not in his dreamworld as a Space Ranger battling Emperor Zurg that Buzz has found religion (or, at least, religion in the sense of a metaphysical religion like Christianity), but rather when he recognizes the true cause and source of his being; and that cause is a Creator who made him in light of a final cause: to be of service to a child by providing him joy in the form of a toy. And it is only when Buzz comes to terms with his destiny (a destiny created for him) that he can be truly happy.
Buzz Lightyear is no product of an atheist universe. If Toy Story were an atheist parable, then Buzz and the other toys would be the accidental result of a brute physical process. In those terms, their destiny as a child's plaything would have as much purchase as any other destiny; which is to say, none. Indeed, it would have no more purchase than Buzz's Space Ranger worldview. We can reimagine Toy Story in atheist terms in the following way: Finally tiring of Woody's attempts to "enlighten" him out of his Space Ranger fantasy, Buzz pulls Woody aside and lets him in on something. Of course, Buzz says, I know there is not an Emperor Zurg in the sense you think I think there is, and that I can't defy gravity. So what? Your insistence that I am "meant" to be a child's plaything is as much a fantasy as my Space Ranger worldview. The difference between us is that I know whatever purpose I give my life is purely of my own fantastic creation, while you are under the illusion that you "know" the "true meaning" of every toy's existence. You are, in a word, naive.
Why isn't the atheist version of Toy Story produced? It certainly isn't because Hollywood is afraid of offending religious believers. It's just because few people would want to see it. The story is boring. It's a story that can be told only once, and it was told long ago. It's the story of the discovery that, in the end, there isn't really anything worth discovering; a discovery that, if it puts an end to anything, it puts an end to storytelling.
Saturday, June 26, 2010
Philosophy, Results, Kierkegaard and Socrates
In his recent article in the Fortnightly Review, "Philosophy as a personal journey," Anthony O'Hear reflects on what it means for philosophy that it has been unable to produce "results" with respect to its most fundamental questions:
But, and here is a second worry, given that, notoriously, most of the big disputes in philosophy remain unresolved – and have been unresolved since the time of the ancient Greeks who first raised them in systematic form – what can we actually learn from philosophy?
It is in light of the failure of philosophy to produce results that O'Hear develops his interpretation of philosophy as a personal journey. This is his last paragraph:
Of course, some of the people who write and practice philosophy in these ways will see their tightly focused work as contributing to a larger vision, but it seems to me that the overall direction is false to the true nature of the subject. And although we can all agree that our endeavours are directed to the truth, and guided by reasons and arguments that bear on the truth of what each of us believes, we each have to face the fact that we will not achieve complete rational convergence on premisses, because it is not there to be achieved. Nor will we come to a set of truths which will be so evident that they will command the assent of all who embark on the journey and pursue it in a rational and reasonable manner, aiming as best they can to seek the truth. It is just this picture which our earlier considerations on the nature and history of philosophical disagreement seem to undermine. In the beginning and at the end, philosophy is a personal journey, crucial to the examined life Socrates thought so integral to human flourishing.
I am afraid that, as edifying as I find O'Hear's article, I cannot agree with this paragraph. In fact, I think the paragraph clearly contradicts itself. On the one hand, O'Hear tells us that we cannot arrive at a set of truths that will command the assent of the rational and the reasonable. On the other hand, he proposes just this truth as one that every reasonable man should accept as the basis of philosophy. In other words, the proposal that "philosophy cannot arrive at a set of truths that will command the assent of the reasonable" is itself a purported reasonable truth that the proposal denies.
O'Hear's proposal is of the type that formed the original basis of Enlightenment philosophy, and was exposed by Kierkegaard (and, through him, Socrates) as failing to respect the true nature of subjectivity. Enlightenment philosophers concluded, like O'Hear, that the long history of philosophy proved the futility of the classical philosophical approach. Rather than continuing the fruitless dialog, they imagined various ways to found philosophy anew, from the rationalism of Descartes to the empiricism of Hume. But what all the Enlightenment philosophers failed to recognize is that if knowledge as it had been traditionally conceived was not possible, then their knowledge of the futility of philosophy was also not possible. Remember, it was their alleged conclusion to the futility of philosophy that justified their breaking with tradition and creating a new foundation to philosophy. Their knowledge of the futility of philosophy was therefore both logically and temporally prior to the "new" knowledge they arrived at through their new methods. To take a specific case, Descartes in the beginning of the Discourse on Method discusses his reasons for abandoning classical philosophy and inventing the Method; reasons that, naturally, refer to the futility of philosophy. But as soon as the Method of universal doubt is proposed, then Descartes' doubt of classical philosophy should also be subject to doubt. But it never is; like O'Hear, the futility of philosophy is the one non-futile result that Descartes knows in the old-fashioned way.
Socrates has not been improved on in his understanding of ignorance. If I am ignorant - and surely I am if philosophy is futile - then I am ignorant. Whether anyone else is ignorant, or whether everyone throughout history was necessarily ignorant (as Enlightenment-inspired philosophers suppose), must be one of the things of which I am ignorant. This is the authentic Socratic way in which subjectivity enters philosophy and true philosophy is born; and it is the way philosophy is born in all philosophers following the Socratic tradition. When Aristotle says that philosophy begins in wonder, he doesn't mean only or even primarily that the historical origin of philosophy happened when certain men of leisure began to wonder. He means that philosophy is only alive to the extent that it is born in wonder in each individual soul. It is one thing to speculate about the nature or reality of final causes as an abstract problem that has no necessary relation to my life; quite another to recognize that the question of final causes, if truly asked, must primarily involve the question of the final cause of my own being. If I have a final cause, which means a purpose that informs my existence whether I recognize it or not, then this cause judges every moment of my existence - including the moments when I speculate about final causes.
This, I believe, is the primary lesson of Plato's Crito. Socrates is in prison and is told by Crito that all the arrangements have been made for Socrates to escape prison and repair to another city, where he would be welcomed and could continue to philosophize. The guards are sympathetic and the populace generally recognizes the injustice of his conviction. But Socrates will have none of it. He recognizes the truth of his subjectivity with respect to the laws of Athens. It doesn't matter whether, in an abstract sense, the jury decided his case correctly. Justice for Socrates means that he must respect the decision of the jury whatever it is. Were he to avail himself of the opportunity to escape, and continue to "philosophize" in another city, his philosophy would be reduced to a language game, and himself to a comic figure, spending his time in apparently serious conversations about justice, when he has made it clear that whatever he thinks about justice, it doesn't include justice for Socrates.
In what sense, then, can philosophy have "results"? Not in the sense that it can produce answers that must be recognized by any rational person, if by "rational" we mean the existentially indifferent "objective" reason characteristic of modernity. But it can produce answers that are true for everyone and for all time, if we acknowledge that those answers will be recognized only by those who have first absorbed the subjective truth necessary to philosophize; or, in Kierkegaard's words, if they have become subjective thinkers.
Labels: Aristotle, general philosophy, Kierkegaard, Socrates
Thursday, June 24, 2010
Dirk Pitt is Fictional, and so are Grand Conspiracies
If there is one thing that the oil disaster in the Gulf has demonstrated, it's that there is no real-life character corresponding to Clive Cussler's Dirk Pitt. If you don't know, Dirk Pitt is Cussler's recurring hero, a sort of combination of James Bond and Jacques Cousteau. Pitt works for the fictional NUMA (National Underwater and Marine Agency) of the Federal Government, and regularly finds himself involved whenever any underwater derring-do is called for. Well, the Gulf disaster is just the sort of crisis Pitt and his trusty sidekick Al Giordino would solve in thrilling fashion, just barely escaping with their lives and Pitt (never Al) landing the requisite hot babe. Unfortunately for us and the Gulf coast, this disaster has shown there is no one like Pitt on the vast Government reservation.
It also shows the silliness of conspiracy theories like the 9/11 "truth" movement - the idea that G.W. Bush somehow orchestrated the 9/11 attacks without leaving a trace of evidence. If the government can't stop an oil leak in a couple of months, how could it possibly pull off a massive conspiracy like 9/11? They just aren't that good.
Conservatism in a Nutshell
From the Front Porch Republic:
To “conserve,” however, is a fairly simple thing. While “liberals” and “progressives” keep changing what lovely things they see in the future, “conserving” means knowing what’s important and trying to save it.
Included in that definition is the reason why philosophy as classically conceived is necessary to conservatism (as opposed to the sort of scientistic materialism/determinism that is popular at the Secular Right.) Conservatism is only possible if we know what is important, but the secularist typically denies that such transcendent knowledge is possible. The classical conservative fights to preserve his family, his nation, his system of justice and the rule of law because he knows such things are worth preserving, not merely because he is subject to certain genetically determined "affinities" with respect to them.
What the secularist denies is the possibility of the education of the sentiments. Yes, we have tender feelings towards those we know and who are like us, and we may feel nothing at all towards strangers. But, through reason and revelation, we may know the truth about justice and judge our sentiments according to it. We may cross to the other side of the road when we see the man lying in a ditch, but can we not learn something from the Samaritan who stops to assist him? And is what we learn from him worth preserving, and worth establishing as a basis of education for future generations? Even if we feel nothing for the man in the ditch now, we may educate our sentiments to feel shame when we ignore him. And we may educate our children to the same. This is the essence of conservatism.
The secularist, denying the possibility of the transcendent knowledge of justice, denies the possibility of this sort of education. And without such education, we are left following whatever "affinities" nature, or nature's manipulators, happen to endow us with. This is not the freedom the secularist hoped for when he abandoned classical philosophy and religion, but slavery.
Tuesday, June 22, 2010
Goldberg on Determinism, and Derb's Conservatism
Jonah Goldberg has an excellent post over at the Corner that deftly eviscerates John Derbyshire's genetic determinism. It makes me wonder in what sense Derbyshire is a conservative. In fact, I wonder if Derbyshire is not actually a post-modernist in conservative drag.
Take this article at Taki's Magazine linked to from the Corner. At first blush, it looks like a strong statement of the "rational right" position on Israel. But look a little closer at Derb's reasons for supporting Israel. He writes of our attachments rippling "out in overlapping chains of diminishing concentric circles: family, extended family, town, state, religion, ethny, nation." The Israelis are closer to us in these concentric rings than, say, the Congo, because we share a tradition with them as well as beliefs in things like democracy and the rule of law. Israel is organized on principles that Derb "agrees with", and is "inhabited by people I could live at ease with."
Derb is such a gifted and smooth writer that it is easy to overlook the precision with which he writes. But it's what Derb has successfully avoided saying that is significant. He hasn't said that the traditions and principles that we share with Israel are objectively true; or reflect a transcendent order that judges not only the USA and Israel, but all nations, including Israel's Arab enemies. No, his point is entirely subjective, and is made in terms of our experienced affinities, severed from any rational foundation (a foundation that, given Derb's genetic determinism, I suspect he does not think exists.) There is a crucial difference between supporting Israel because we "agree" on certain principles that have no further significance than our agreement, and supporting Israel because we recognize that transcendent principles of justice and duty demand that we do.
Really, Derb's support of Israel is post-modern in character. Academic post-modernists "see through" all traditions, deny any rationally knowable transcendent order, and so undermine any reason we might have to prefer our own civilization to another (or even to barbarism.) But if we no longer have reasons, we still have affinities. If there is no reason to prefer one culture to another, then my pre-rational inclinations are elevated to decisive significance. We should support Israel because the Israelis are sort of like us and therefore we have tender feelings for them (or more tender than we do, say, for the Congo.) Derb has simply taken the post-modernist position more seriously than the post-modernists, without the sentimentality.
But it is in no sense "conservative", if by that term we include the notion that there is some good worth preserving; a good that endures across time, space and opinion... in other words, a transcendent good, which is just what the post-modernist denies. The post-modernist can't be a conservative because he allows nothing that might be conserved.
Thursday, June 17, 2010
Into the Wild and Worthwhile Risk
The story of Chris McCandless (Into the Wild), on which I've posted a number of times, continues to fascinate me. I think what holds me is the search for what was missing in his story. McCandless was a young man of obvious virtue and passion, yet his life ended in seemingly pointless tragedy in an abandoned bus in the wilderness of Alaska. How did he end up there?
I found a clue tonight reading an old copy of Peter Benchley's The Deep. (I've always liked the film version and decided to read the book, which is a quick read and turned out to be better than I expected. One of the better adventure stories I've read in some time, in fact.) At one point, the salty old diver Treece (played by Robert Shaw in the film) offers some advice to the younger David Sanders, who killed a shark with a knife underwater after he thought the shark was about to attack his wife. It turns out that this was a foolish move, because the shark was not really a threat and Sander's attack only attracted many more sharks, forcing the divers to surface. Treece engages in some perceptive analysis:
"It's natural enough, Treece said. "A lot people want to prove something to themselves, and when they do something they think's impressive, then they're impressed themselves. The mistake is, what you do isn't the same as what you are. You like to do things just to see if you can. Right?"
Though there was no reproach in Treece's voice Sanders was embarrassed. "Sometimes, I guess..."
"What I'm getting at..." Treece paused. "The feeling's a lot richer when you do something right, when you know something has to be done and you know what you're doing, and then you do something hairy. Life's full of chances to hurt yourself or someone else." Treece took a drink. "In the next few days, you'll have more chances to hurt yourself than most men get in a lifetime. It's learning things and doing things right that make it worthwhile, make a man easy with himself. When I was young, nobody could tell me anything. I knew it all. It took a lot of mistakes to teach me that I didn't know goose shit from tapioca... That's the only hitch in learning: it's humbling... Anyway, that's a long way around saying that it's crazy to do things just to prove you can do 'em. The more you learn, the more you'll find yourself doing things you never thought you could do in a million years."
Treece is teaching nothing other than Aristotle's distinction between the truly courageous and the merely reckless. The difference is that courage is conditioned by the virtue of prudence, whereas recklessness is dangerous action not ordered by right reason. Treece puts it succinctly: true courage is displayed only in actions that are dangerous but must be done, and done by a man who knows what he is doing.
But to know that something must be done implies a knowledge of the good, i.e., an end that is desirable in itself. The man who displays virtue in pursuit of the good has acted nobly. But the noble is just one of those ancient concepts that modern thought has "debunked", only to discover that, debunked or not, it is necessary. It is necessary to order passionate souls like Chris McCandless into constructive paths. This is where the contemporary university failed Chris McCandless so comprehensively. His university education should have educated his soul into a true appreciation of the good and the noble; instead, it "educated" him into the modern conceit that there isn't any true good or nobility that we can really know. In effect, he was educated into anti-prudence. Yet the passion in his soul didn't go away merely because its object was denied; it was only fitted with a prophylactic. So the rest of his tragic life was spent in the pursuit of extreme adventures that would, somehow, allow him to "break through" to the other side, whatever that might be. And when the denial of prudence is itself mistaken for a virtue, the pursuit of pointless dangers becomes a substitute for the noble.
This accounts for the curious combination of thorough technical preparation in the service of foolish ends that characterized McCandless's adventures. He didn't die in the bus for lack of preparation; he extensively researched Alaskan flora and fauna, knew what he could and couldn't eat (almost - it appears he died from eating the wrong seeds), and survived for some time on his own. In fact, he would have succeeded (at what? - that's the problem) but for one slip-up. But his prudence was truncated; it extended to the preparation and conduct of his adventures, but had nothing to say about their ends. That is the difference between a life that might have ended nobly and heroically and one that ended foolishly and tragically. I see Chris's tragic end as a consequence of the peculiarly modern suffocation of the soul.
Monday, June 14, 2010
Is the "priest shortage" a blessing?
I sometimes wonder if the so-called "priest shortage" in the American Church is not a blessing. We have a presumptuous attitude to the Eucharist in this country: Everyone takes Communion and hardly anyone goes to Confession. But then it is a very dangerous thing to eat of the Body and Blood of the Lord unworthily: See 1 Corinthians 11:23-27. Perhaps the declining number of priests is God's merciful way of helping us avoid such a serious sin...
That, by the way, reveals something about the nature of the priest shortage. There is no shortage of priests when I (not frequently enough) go to Confession. Usually there is no line and, sometimes, the priest is startled to see someone show up. So as far as the Sacrament of Confession goes, there is no shortage. In fact, we've got more priests than we need. Now we should not be taking the other Sacraments, like Marriage or the Eucharist, unless we are first taking the Sacrament of Confession. So, really, there is no shortage for those Sacraments either, for only as many Catholics as go to Confession should be going to the Eucharist. Again, I wonder if the "priest shortage" is God's way of guiding us into a right approach to the Sacraments, his way of putting an end to their abuse.
Thursday, June 10, 2010
Mark Steyn misses the boat
It's not like the great Mark Steyn to miss the obvious. But he does just that in his book America Alone: The End of the World As We Know It. On page 143, he discusses the famous "Christmas Truce" that spontaneously occurred on the Western Front in 1914:
One of the most enduring vignettes of the Great War comes from its first Christmas: December 1914. The Germans and British, separated by a few yards of mud on the western front, put up banners to wish each other season's greetings, sang "Silent Night" in the dark in both languages, and eventually scrambled up from their opposing trenches to play a Christmas Day football match in No Man's Land and share some German beer and English plum jam. After this Yuletide interlude, they went back to killing each other.
The many films, books, and plays inspired by that No Man's Land truce all take for granted the story's central truth: that our common humanity transcends the temporary hell of war. When the politicians and generals have done with us, those who are left will live in peace, playing footie (i.e. soccer), singing songs, as they did for a moment in the midst of carnage.
Steyn mentions the carols and the day, but misses their obvious significance. The truce happened not because of common humanity, but because of common religion. If the truce had happened merely because of common humanity, it might have occurred on any day... but it happened on Christmas Day. And the men might have sung any old songs, but they sang Christmas carols.
If "common humanity" had anything to do with fostering peace, then men would not make war in the first place. Common humanity, in fact, is the primary cause - maybe the only true cause - of war; cf. Cain and Abel.