Wonderful use of Aristotle to understand the meaning of Christmas. Through Front Porch Republic:
Aristotle on Christmas.
The riddles of God are more satisfying than the solutions of man. - G.K. Chesterton
Friday, December 25, 2015
On Twice A Year Catholics
"Judge not, that ye be not judged."
But does that mean I cannot think? I find it impossible not to think of twice a year Catholics when I am at Christmas Mass, and it is obvious that many of the congregants are unfamiliar with the Mass; and that many of them obviously have no respect for the Mass. Standing with their hands in their pockets, surreptitiously checking their iPhones, chatting with each other like they are at a pub. And of course everyone goes to Communion, during which it is best to keep one's head down in prayer so as at least to avoid seeing how they take Communion.
Do not judge. I take that to mean not that I must pretend to approve of such behavior, but that it is not my place to condemn anyone for it. Condemnation is the prerogative of God.
We are all sinners. Discovering the reality and nature of our own particular sins is a necessary process on the way to becoming closer to God. Although we are not to condemn others for their sins, it is generally easier to see sins in others rather than ourselves. But in seeing those sins, perhaps we can recognize the same sins in ourselves.
Consider a man, a father, who is divorced and sees his daughter at Christmas. At that time he gives her gifts, talks with her, plays with her, hugs and kisses her. He tells her how much he loves her. But after Christmas and into the New Year, the daughter calls and emails her father but gets no response. In fact this continues throughout the rest of the year; she regularly calls, leaves messages and gets no answer. Then at Christmastime the next year, her father again shows up with gifts, talks with her, plays with her, hugs and kisses her and tells her he loves her. He says he is sorry he didn't return her messages but he was very busy. But he is here now. Surely she understands. And this goes on year after year.
What is the daughter to make of this? Might she think her father is simply a liar and is using and cheating her, showing up once a year to get good feelings about pretending to be the father he is not? Might she not demand that he at least show her enough respect to be honest about their relationship? Instead he forces her to be complicit in the lies he tells himself. This is worse than indifference, for were he indifferent they would at least understand each other in their lack of a relationship. Her dignity would not suffer annual humiliation at his contrived intimacy.
What is Communion but a particular and deep form of intimacy that God has granted us? To take Communion indifferently, by rote, or merely as another part of the Christmas season is to hug your daughter once a year at Christmas. Traditionally the Church has demanded of us that we make ourselves worthy of the Sacrament of the Mass through prayer and the Sacrament of Reconciliation. Like all the Church's rules, this is for our own benefit, so that we don't find ourselves taking hugs from God without the prior respect for God that makes such intimacy true rather than a lie.
So then, whoever eats the bread or drinks the cup of the Lord in an unworthy manner will be guilty of sinning against the body and blood of the Lord. (1 Cor. 11:27)

It is not for me to condemn once or twice a year Catholics. But I can learn from them the danger of taking Communion lightly, and renew my resolve to prepare myself properly for Mass.
Saturday, December 19, 2015
Chesterton, the Internet and Community
What would G.K. Chesterton have thought of the Internet?
We can get an idea from his essay On Certain Modern Writers, originally published in Heretics and recently republished in the excellent collection In Defense of Sanity. Chesterton discusses Christianity, the family and community in this essay, and makes the point that small communities like the family force different types of people to know and get along with each other. Ironically, it is the small community that is truly more broad than the large one:
In a large community we can choose our companions. In a small community our companions are chosen for us. Thus in all extensive and highly civilized societies groups come into existence founded upon what is called sympathy; and shut out the real world more sharply than the gates of a monastery. There is nothing really narrow about the clan; the thing which is really narrow is the clique. The men of the clan live together because they all wear the same tartan or are all descended from the same sacred cow; but in their souls, by the divine luck of things, there will always be more colors than in any tartan. But the men of the clique live together because they have the same kind of soul, and their narrowness is a narrowness of spiritual coherence and contentment, like that which exists in hell.

That last sentence contains a typical Chesterton surprise. We might tend to think of "spiritual coherence and contentment" as something good, or at least not positively bad; and Chesterton plays on that conceit, leading us along in the sentence until shocking us at the end with his view that it is actually hellacious. What is hell like? It is a place where everyone has the same kind of soul; and far from being a place of discord and conflict, it in fact has "coherence" and "contentment." It is a place where everyone is content in his sins. It is not a happy place, however, and so we can conclude that Chesterton sees a distinction between contentment and happiness. I picture Chesterton's hell as one of drabness and dullness, where everyone is "content" with his situation only because he doesn't have the energy to do anything about it. All the souls are the same because they are all worn down to the nub. Heaven, then, must be a place of glorious diversity (in the real sense, not the PC sense, of that word), where the souls are very different and express the energy of their difference yet in the unity of God.
Back to the theme of this post, it seems clear Chesterton would have some problems with the Internet - the chief being that it is an ideal clique-forming ground, beyond anything Chesterton might have imagined. Now one doesn't need physical proximity to spend all his time with the like-minded. A few clicks of the mouse and you can find somewhere online where everyone thinks exactly like you do. The everyday encounters that bring us into contact with a variety of people - shopping, taking the bus, going to the library - may also be minimized as such errands move online. "Sociability, like all good things, is full of discomforts, dangers, and renunciations," Chesterton tells us. On the Internet, as soon as you experience any discomfort or any degree of renunciation, there is always another more comfortable page to repair to.
There is no neighbor on the Internet in any real sense, and thus no place for the second great commandment. "We make our friends; we make our enemies; but God makes our next-door neighbour." God commands us to love the man who is given to us in our circumstances; but what if our circumstances (e.g. online existence) are such that no one is given to us?
We have, in Chesterton's words, "a society for the prevention of Christian knowledge."
Of course I can't leave off here without discussing the irony of making this point on the Internet itself. The Internet itself is neither good nor evil - Chesterton would certainly agree with that - it is our use of it that makes it turn for good or ill. And so it is when internet "communities" begin to displace real communities that we begin to have a problem, and especially when people begin to think of online "communities" as real communities.
Friday, December 11, 2015
Original Sin, Paradise and Irish Music
In a comment to my recent post concerning Chesterton and Original Sin, M asked the pertinent question: If this is our true home but we don't know how to live here, how do we learn?
The short answer is we can't, at least on our own. That's the problem with the Fall - we fell in our entire nature right down to our core, so there is no place we can fall back on from which to pull ourselves up. Any attempt we make is doomed to fail because the attempt can only come from fallen nature, and so is already affected by the problem it is trying to cure. That's why our attempts to find a way to live always have a ring of artificiality to them. They must, because we are trying to construct a way to live from degraded blueprints and with degraded carpenters.
The only answer is for someone to save us - which, of course, God has accomplished in the Incarnation. Christ shows us what it really means to live naturally, in our home, and gives us the grace to do it, if we will but accept it. Just how far we have fallen is indicated by the shock with which we apprehend the crucifix.
Christ is the perfectly natural man, but the way He lives is not something that comes naturally to us (anymore). And it never quite will, as long as redemption is not complete. The best we can do is imitate him, ask for His grace, and hope we can through Him learn to live again in a truly natural manner. In the meantime, we can console ourselves with the knowledge that the strangeness we feel, the feeling of never quite fitting in or knowing quite what to do, is a consequence of the Fall, and will be with us to some degree for the rest of this life - but it is not the end of the story, and we can look forward to truly being home when history is finished.
And we can even in this life get a taste of what paradise - another word for living in our true home - is like. We know that in paradise we will live in the presence of God and no longer feel the longing that we do in this life, the sense that something isn't there that should be, though we can't quite say what. God will fill us and we will rest satisfied in Him. It's difficult for us to imagine how this would be possible, how we could rest in God without becoming bored (another indication of our fallen nature). For me, I imagine paradise as a "dynamic restfulness," active yet not going anywhere or feeling the need to go anywhere. One way I get an idea of this is playing Irish music: when you get the rhythm right in a reel, it feels effortless, as though you could ride the rhythm all day without trying and without getting bored. It's that "dynamic restfulness" I strive for in my playing, and when I approach it, I feel I am getting a little taste of heaven. This is the Irish reel Lucky In Love.
Sunday, December 6, 2015
On Living Together
I am old-fashioned enough to still be surprised at the matter-of-fact way couples let it be known that they are living together without benefit of clergy. It now seems to be conventional wisdom that couples live together for months or even years before getting married, if they ever do. A young man at work has been living with his girlfriend for four years; he has even gone on cruises with her and his parents, who apparently see nothing amiss in the relationship. All of this is related matter-of-factly over the lunch table.
The idea seems to be that you should get to know each other in a living-together arrangement before getting married. That way, the thinking goes, there won't be surprises when (if) you eventually do get married. Supposedly this will put the marriage on a firmer basis. The statistics say otherwise.
So does common sense and, frankly, simple decency. I thank God that I had the sense not to go down this path when I was 23 and foolish in many ways - but not that way. Instead I married the woman I loved - without ever having lived with her - and have stayed married for 29 years.
Jan. 3, 1987
I instinctively sensed at the time that to ask her to live with me would be disrespectful. It was to ask her to upset the basic arrangements of her life - where she lived and how, the independence of her own apartment - and restructure her life according to mine, presumably for some extended period of time. It meant a raft of simple things like letting everyone know the new telephone number at which you can be reached, and changing your mailing address. There was an "overhead" investment that would act to discourage her from ending the arrangements should she so desire; not to mention the embarrassment of admitting failure after, say, two years of living with someone.
Yet with no commitment from me that this fundamental restructuring would lead anywhere. This is to put the woman you supposedly love at a disadvantage. It is to take her out for a test drive like she is a used car. To really love someone is to wish the best for her, and to presume to take several of the best years of someone's life, years when she is young and single and looking for the right man, as exclusively your own yet with the explicit proviso that you may discard her at any time - how can a man do this to the woman he loves?
It doesn't matter if she "agrees" with it. Simply because someone shows no respect for himself or herself does not give one license to disrespect him or her as well. At bottom, such a relationship is one that mimics the appearance of genuine self-giving marriage, but is at heart really a relationship of two people using each other rather than giving themselves to each other. That is the whole point of avoiding marriage, isn't it? I'll see how you work for me for a time and decide then if it's been worth it.
And then, if such a couple finally does get married, the character of their relationship has already been formed. They have been living together for all appearances as man and wife. Now that they really are man and wife, will their relationship suddenly change from the one of mutual use it has been, to the mutual self-giving of genuine marriage? I doubt it very much. In fact, I suspect they would have difficulty even conceiving the self-giving involved in genuine marriage. Instead, while the formality of marriage would add more "overhead" to the relationship in the sense of making it more difficult to break up, it wouldn't change the fundamental possibility of that breakup, which has been foundational in their relationship since the beginning.
Consider also that everyone shows the best sides of themselves when getting to know someone. From the first instance of meeting, we try to put our best face forward and hide our less attractive aspects. As we get to know someone, we gradually reveal more of ourselves, including those less attractive elements, doing so to the degree that we believe we can trust the one to whom we are revealing it. Now the whole point of living with someone without marriage is to hold open the option of leaving them at any time; in other words, it puts a lack of trust at the center of the relationship. In such circumstances, people will hide those unattractive elements. And I'm sure they can do so for years at a time.
In other words, you can live with someone for a long time without truly knowing her, should she choose not to reveal herself. The supposed point of the living-together arrangement, however, is a sort of truth in advertising: I insist on knowing exactly what I'm buying before I commit to it in marriage. Imagine a man's perplexity when, after five years of living together, a year of marriage reveals sides of his wife's personality he never dreamed were there. She thinks, of course, that now that they are married she finally has the level of trust necessary to reveal herself completely. For his part, he may feel he's been taken advantage of: I was supposed to find out all this beforehand, and she held it back from me; she's gone back on our arrangement.
Of course, demanding that someone reveal her deepest self to you in an arrangement constructed so that you can examine that self and decide if you like it or not, and then decide whether or not to discard her, is deeply disrespectful. Again, it doesn't matter if both parties are doing it to each other. Mutual disrespect is a very poor form of equality and certainly no basis for marriage.
The fact is that genuine marriage involves tremendous risk; that is one of the things that makes it so exciting. Real marriage is an adventure that involves much deeper risk than rock climbing or skydiving. You don't really know your marriage partner until you have been married for a time and they have fully revealed themselves. And both of you know this going in; to some degree, you are marrying a stranger.
What sense, then, does the marriage vow make? How can you promise yourself to a person you don't really know, and won't really know perhaps for years? The French philosopher Gabriel Marcel addressed this question in his book Creative Fidelity. The title neatly summarizes his answer: In marriage, the partners create the conditions under which they remain faithful. What they are vowing themselves to is not just a person, but a mutual journey of discovery and self-creation, where the partners discover each other and themselves, changing and growing in the process. I am not the man I was when I married at age 23; and I am not the man I would have been had I not married or even married someone other than Tricia. She has been a dynamic element of my self-creation over the last 29 years, and I of hers.
That sounds very abstract, but it is extremely concrete in practice. It means being able to confront and discuss aspects of your partner's personality that you find difficult and, perhaps, even impossible to live with over the long term. Will they do what is necessary to develop that aspect of themselves for the sake of the marriage? And of course it runs the other way as well: I discover things about myself through her that I had not noticed, but are unpleasant for her. Am I willing to work on those things for the sake of her happiness, or will I demand that she take me as I am? Not all things can be changed. The ongoing negotiation and development, in the context of love, is the substance of marriage.
This dynamic process of growth is stunted if the partners have gone into marriage after a trial period of living together; for they have already sent each other the message that they reserve the right to bail out if they find they don't like what they see, instead of sending the message that they are committed to the process of change and growth no matter what.
The result is not a more secure marriage, but a marriage in which the trust necessary for the deepest communication will be difficult to find.
Saturday, December 5, 2015
Chesterton and Original Sin
From the introduction to The Defendant in the collection of essays In Defense of Sanity:
This is the great fall, the fall by which the fish forgets the sea, the ox forgets the meadow, the clerk forgets the city, every man forgets his environment and, in the fullest and most literal sense, forgets himself. This is the real fall of Adam, and it is a spiritual fall.

The rest of the animal kingdom has an advantage over us: They are what they are and can be nothing else. A bear cannot fail to be bearlike, or a worm wormlike. But man can fail to be human. Chesterton's wonderful description of Original Sin, imagining what it might be like should an animal suffer it, illuminates what it means for us. Imagine a fish that forgets the sea; meaning, I think, a fish that forgets how to live in the sea as a fish. Such a fish is never home, for the only home it could possibly know, the sea, is foreign to it. It must live its entire existence as a stranger in its own home.
Even better is the ox who forgets the meadow. Unlike the fish, for whom the entire sea is all home to it, or should be, the meadow is peculiarly the home for an ox. An ox in the city or on a mountain is not home. But the ox who forgets the meadow is still not home in the city or on the mountain; like the unfallen ox that finds itself in the city, it would search for home. But while the unfallen ox would recognize the meadow as home should it find it, the fallen ox may find the meadow but would, tragically, not recognize it as home... it would wander right through home and continue to pine for the home it already found.
The great fall for man means that he has lost the knowledge of how to live as man in the world; he feels that he is not at home, or that he should be home but somehow isn't. So what does he do? What can he do? A man at home lives naturally; he doesn't have to figure out how to live. Since we are not at home - or at least we have forgotten how to live at home - we must construct ways of living. And these ways are at some level false simply because they are constructed - they can never replace the natural way of living of unfallen man.
Rousseau noticed this artificiality but rejected Original Sin; for him, the social constructions of man are the fall rather than a consequence of the fall. This has the convenient consequence that the fall lies outside us rather than in us, and in dealing with it we don't have to change.
But the truth is that there is no state of nature that is our true home, and in which we would be at peace could we find it. We are already in our true home. We just don't know how to live here.
Saturday, November 28, 2015
Matthew 11:28-30
Come unto me all ye that labor and are heavy laden, and I will give you rest. Take my yoke upon you, and learn of me; for I am meek and lowly of heart: and ye shall find rest unto your souls. For my yoke is easy, and my burden is light.
What is it to be "heavy laden"? For me, it has been the effort to make my life meaningful by filling it up. You get one shot at life, I thought, and I didn't want to waste it. So I've got to be something and do things, and I've got to start right now, because life is passing by even as I speak. This was how I thought as a young man.
But be what and do what? That's not so easy a question to answer. For there is the problem of opportunity cost. To do something or to be something is implicitly to choose not to do or be all the other things you could have done or been. To become an engineer is not to become an English teacher, an historian, or a philosopher. Suppose you choose the wrong thing? You will have invested years of time and effort to become something you should never have been. And there is no "do-over." You can't get back the years you invested in becoming something you were not.
I always envied the nerds who knew who and what they were: engineers. No anxious struggle about life's direction for them. I, on the other hand, could be interested in just about anything but not overwhelmingly passionate about anything. So I would flit from thing to thing, hoping to land on one that I would somehow know was "me", like finding the girl you "just knew" was the one for you.
In his Either/Or, Kierkegaard discusses something he calls the "rotation method": This is the way someone bored with life keeps himself from going crazy. He pursues an interest for a while until it is exhausted and he is bored with it. He then moves on to another interest, going from one to another until eventually, after enough time, he comes back to the first, which has become interesting again through neglect. This was essentially what I did. My best friend used to ask me what "kick" I was currently on.
There is an alternative. And that is, instead of trying to fill up your life with either things you are becoming or things you are doing, to recognize the futility of that approach, and instead empty your life. But isn't that just giving up on life itself? Yes, it is and would be, and is why the great philosophers like Aristotle did not recommend it. But the fact of Christ changes everything.
For by emptying yourself and taking on the burden offered by Christ, you open yourself to the possibility that Christ Himself will fill you, and satisfy you in a way not possible for anything on Earth. As Kierkegaard would say, it is the difference between filling yourself with the eternal versus the merely temporary.
That sounds all well and good, but how do I know such satisfaction is an actual reality rather than, say, merely a pious hope? For if it is merely a pious hope, then the apparent death that would happen if I empty myself is an actual death. The rotation method may be unsatisfying and ultimately lead to despair, but at least it is something, and I get at least the satisfaction that I am trying.
This is where the matter of faith comes in. Faith in this context does not mean a blind belief in something you know to be false or have no reason to believe is true. It means to take a chance and act on trust. Is the Gospel true? Did Jesus Christ really rise from the dead and show that a life of self-emptying is really a life of true fulfillment rather than a living death? I cannot prove that in any absolute sense. But then I don't think that is necessary. At least it wasn't for me.
It was enough for me to establish that the Gospel was at least plausible. Furthermore, I was and am firmly convinced that something highly unusual happened in Palestine in the first century. For the events that launched the Christian religion form a hinge point in history, one that turned the world from an eternal cycle of civilizational births and deaths, with one epoch not so different from any prior one, to a world launched into history, one condemned to development and change, and charging through time to some denouement that will happen when no one knows. (See Chesterton's The Everlasting Man for the classic development of this theme.)
The conviction that something transcendent happened at the origin of the Christian religion, and my own recognition of the futility of trying to make life significant by filling it up, was enough to allow me to make the act of faith in renouncing the life I had been following and instead attempt to empty it and follow Christ. Yes, there was a bit of Pascal's Wager going on here.
What does it mean to embrace a self-emptying life in the name of Christ? It means to sacrifice all the things you might have become or done for the sake of following Christ, and that means living for others rather than yourself. For me, it meant that instead of pursuing various hobbies obsessively I would spend that time coaching youth soccer or playing games with my children. It meant accepting a professional career that I might not have been passionate about, but was competent enough at to be successful enough to support a family. And it means accepting that, as the years go on, working at a job that is just a job, getting older and slower, missing the experiences I might have had, I am not slowly dying but rather accumulating treasure in Heaven, which is Christ Himself.
There are consolations. The vanity of earthly pursuits becomes more obvious as one grows older. And we find that there are earthly rewards as well: Matthew 6:33. But these rewards also constitute a temptation, for they renew the possibility of life as self-fulfillment: I have filled my life with family rather than experiences or personal development. If we are following Christ, we devote ourselves to our family for His sake, not our own. If we give in to the temptation to devote ourselves to family for our own sake, then we are open to grasping after our family (e.g., helicopter parenting, or forcing our children to take their freedom when they are older rather than giving it to them as free equals).
And it's not like flipping a switch. It is more like a slow process in which one gradually weans oneself from the temptation to grasp at life rather than renounce it for Christ. And I am constantly tempted to grasp, especially in retrospect. The last few years I have taken up long distance running as a way to avoid getting old faster than necessary. Running a weekly 5k fun run here in town, I find myself envying the younger men (in their 30's and 40's) who did not wait until they were 50 to take the sport seriously. I wonder what I could have done had I taken running more seriously back then. But at that time I was changing diapers, or coaching youth soccer teams, or going to little league games, or playing board games with my daughter. I imagine an alternative history in which I filled my life with such pursuits and am happy, a history that I know is a lie, and I thank God that He gave me the grace to see the futility of that life before I had misspent it.
Saturday, October 24, 2015
Brute Facts
A typical argument for atheism goes like this (in simplified form): Both the atheist and the theist start with a "brute fact", i.e. something that "just is." The theist argues from the existence of the universe ("What caused the universe?") to God (something that "just is.") The atheist responds that if we must accept something that "just is", why not say it is the universe rather than hypothesizing something beyond it like God? That merely, à la Ockham, multiplies hypothetical entities unnecessarily. The universe "just is" and there is no need for God.
It isn't true that the cosmological arguments for God put forward by the great classical philosophers like Aquinas considered God as something that "just is." Indeed, the whole point of the arguments is to establish the existence of something that is much more than something that "just is."
But that is beside the point of the present post, which is to explore the notion of brute facts or things that "just are." My conclusion is that brute facts are intellectually dangerous things, and destroy far more than their deployers suppose. They want to aim the cannon of brute facts at God, but the consequent explosion blows up not just God but our understanding of the universe itself.
Consider what it is to be a "brute" fact. Something that is "brute" is something unintelligible; that is why animals are called "brutes", because they do not possess reason. A "brute" fact is a fact that is unintelligible beyond the bare fact that it is. Clearly, if a fact is brute, there is no point in asking anything more about it, since there is nothing more about it that we can know.
Here is the rub. How do we know a brute fact for what it is when we encounter it? What distinguishes brute facts from intelligible facts? Intelligible facts are facts for which we can find an explanation, you say. But there is nothing to say that brute facts can't appear to have an explanation when they really don't. That, in fact, is the whole point of the atheist's brute fact argument against the theist: His argument is not that God doesn't really explain the universe should He exist, but that the universe in fact does not stand in need of an explanation in the first place because it is brute.
Newton's theory of gravitation appears to explain why the moon orbits the earth and planets orbit the sun. Perhaps, however, those celestial movements are really only brute facts; then Newton's theory only appears to explain the solar system. You scoff because it is clear that Newton's theory does in fact explain the solar system; it is ridiculous to suppose that it is just by chance that all the planets and their moons happen to orbit in accordance with Newton's theory.
And I would agree, but only because I do not accept the notion of brute facts. For smuggled into your reply is the assumption that you have some idea of the nature of brute facts: Brute facts wouldn't appear to happen in such a way that they conform with some intelligible law. In saying so, however, you have implicitly denied the notion of brute facts, for brute facts are facts about which you can say nothing at all further than the fact that they are (or might be). We can't say what they are like or what they are unlike, or how they might appear or how it is impossible for them to appear. Any supposition along any of these lines contradicts the brute nature of the supposed brute fact: It is to concede that the fact is in some measure intelligible; if we can say how brute facts cannot appear to us, then we have conceded that brute facts are in some measure knowable beyond the fact that they are, and therefore are not brute.
One of the virtues of David Hume was that he took the notion of brute facts seriously. And he saw that if we allow the notion of brute facts through the door, then we have destroyed the intelligibility of causality altogether, and not just for the universe or God. For we never see causality itself, says Hume, only one event following another. And if we don't presuppose that the universe is intelligible, that is, if we take it that brute facts might be lurking around every corner, then the fact that one type of event tends to follow another might just be one of those brute facts waiting to tempt us into false conclusions about causality. We might mistake our becoming accustomed to breaking glass following the flight of a brick for insight into a causal relationship between flying bricks and broken glass, when in fact their relationship might just be a brute fact.
Kant, of course, noticed that Hume's position not only undermined the traditional arguments for God but also any possibility of an actual understanding of the universe, including that of modern science. Kant furthered the Humean project by offering an explanation as to why we tend to (falsely) read causality into the universe. Kant reflects on the fact of experience, and claims that the only way we can have connected experience is for our cognitive faculties to organize it out of the blooming, buzzing confusion around us. In other words, our minds are constructed so as to read into nature notions like causality and substance so that we can deal with it. A very clever advance on Hume, which saved science from Hume's skepticism, but at the price of recasting the subject of science from nature itself to merely how nature appears to us given our cognitive apparatus.
The point here is to be wary when an atheist deploys the brute fact artillery. For those who start firing with brute facts typically do not understand that their shells will land on them as much as anyone else. In particular, they don't realize that the brute facts they deploy to destroy God will destroy the science they love so much as well.
Friday, October 23, 2015
Dalrymple on Original Sin
Theodore Dalrymple is always worth reading. Besides the grace of his prose style, he is remarkably Chestertonian in his ability to throw off phrases that capture succinctly profound insights. For instance, in his recent work Admirable Evasions: How Psychology Undermines Morality he writes this:
... for Man is not so much a problem-solving animal as a problem-creating one.

(Another reason for loving Dalrymple and his style is his refusal to bow to politically correct grammar. Thank God he didn't write "... for human beings are not so much problem-solving animals...")
What a wonderful encapsulation of the Doctrine of Original Sin! Dalrymple does not call it that and is not talking specifically about sin, but that doesn't matter. For what is Original Sin but the creation of problems where none needed to be created - in the Garden of Eden for instance? And how many of our problems - political, economic, cultural and personal - are not problems that descended upon us but instead are self-created?
It is an essentially conservative insight. If we are problem-creating animals, then we must constantly be on guard that in attempting to solve problems we merely create more.
Saturday, September 26, 2015
Chesterton and Kierkegaard on the Difference of Christ
What difference does Christ make?
This question has many answers in many different contexts. Two of my favorite writers, G.K. Chesterton and Soren Kierkegaard, focus on the difference Christ makes in terms of human possibility.
Man is different from other animals insofar as he lives self-reflected in a world. Beavers and dogs don't worry about how they relate to the world; they just exist as they are unselfconsciously in the world. They are the world. But man knows himself as who he is in relation to the world. Kierkegaard describes this difference in The Sickness Unto Death in terms of the self as "a relation which relates itself to itself." The fact that man by nature relates himself to the world means his existence, unlike that of non-rational animals, is a dialectic of possibility and necessity. I understand who I am (or think I understand), and I also understand the world and my place in it, and in terms of that relationship life presents a present reality of necessity and a horizon of possibility. I exist as a relationship to the world, but I can know that relationship and (perhaps) change it - I can relate myself to the relationship which constitutes my self in the world.
But I can do that only in terms of the possibilities available to me, and those are constituted by my philosophy. What sort of possibilities are available to the natural but pre-Christian man, that is, the pagan man? Chesterton in Orthodoxy describes the pagan world as a world of pink. The great pagan virtue is moderation; a little of everything but not too much of anything. Red and white mixed together, not too much of each. This is a natural and sensible policy, and in the pagan world it produced great men like Aristotle and Marcus Aurelius. The ideal gentleman is a little bit of a warrior and a bit of a scholar as well. He drinks wine but not too much; he loves others but not too much of that either. For love is a form of madness and madness is unbalanced. Above all he maintains self-control, for he knows that the world contains good things as well as evil things, and that it ends in death. He keeps these facts before him and holds himself well so that he is neither carried away by good fortune, nor destroyed by misfortune, for life inevitably involves both. There is no better wisdom in a world without Christ, especially in a world that cannot imagine Christ. The life of balanced moderation is the best life that the best pagan mind could imagine; it defines the horizon of pagan possibility.
What has changed with Christ? The Gospel of John tells us that His first miracle occurred at Cana, and involved the replenishment of wine at a wedding feast that had run dry. We can assume that the host of the feast had on hand an appropriate amount of wine for the celebrations. It would seem, then, that any additional wine would violate the principle of moderation; we've gone from having a sensible good time to getting drunk in excess. But this is why it is a miracle, for a miracle is more than merely the suspension of ordinary physical expectations; it is a sign and revelation of a new order of existence, an order that breaks through the old pagan compromises and proposes a way of life that answers to the transcendent meaning of Christ. The exhaustion of the wine at Cana symbolizes the exhaustion of pagan virtue and the existential hopes it offered. The party is over; it is expected to be over and the celebrants are prepared to go home; no one can imagine the party continuing, or at least continuing with any propriety. But Christ can imagine it, and through His grace he turns water into wine, that the party may continue, theoretically indefinitely. From that moment forward the horizon of pagan hope has been forever shattered, for the possibility that it is not the final limit, that there is a way of life that is not bound by pagan compromises, has been permanently introduced into the human imagination.
Chesterton describes the difference as a world of pink becoming a world of bold reds and whites; reds for the warriors and whites for the monks. There were warriors in the ancient world, of course, and pacifists as well. But the pure warrior, like the pure pacifist, could not express an ideal human type because he violated the principle of moderation or balance. More significantly, the warrior and the pacifist had nothing to do with each other. Each might despise the other and, if they didn't, by the nature of things they at least expressed different philosophies of life. But in Christendom the martial Knight was as much an expression of the authentic Christian life as was the peaceful Monk. Far from expressing opposite philosophies of life, they both expressed different ways of performing the same mission: Redeeming the world in the name of Christ. Chesterton states the difference this way: In the ancient world the balance of existential possibilities was expressed in the single individual of the moderate, virtuous gentleman. In Christendom, the balance of possibilities occurred in the Church as a whole rather than individuals:
This was the big fact about Christian ethics; the discovery of the new balance. Paganism had been like a pillar of marble, upright because proportioned with symmetry. Christianity was like a huge and ragged and romantic rock, which, though it sways on its pedestal at a touch, yet, because its exaggerated excrescences exactly balance each other, is enthroned there for a thousand years. In a Gothic cathedral the columns were all different, but they were all necessary. Every support seemed an accidental and fantastic support; every buttress was a flying buttress. So in Christendom apparent accidents balanced. Becket wore a hair shirt under his gold and crimson, and there is much to be said for the combination; for Becket got the benefit of the hair shirt while the people in the street got the benefit of the crimson and gold. It is at least better than the manner of the modern millionaire, who has the black and the drab outwardly for others, and the gold next his heart. But the balance was not always in one man's body as in Becket's; the balance was often distributed over the whole body of Christendom. Because a man prayed and fasted on the Northern snows, flowers could be flung at his festival in the Southern cities; and because fanatics drank water on the sands of Syria, men could still drink cider in the orchards of England. This is what makes Christendom at once so much more perplexing and so much more interesting than the Pagan empire; just as Amiens Cathedral is not better but more interesting than the Parthenon. - Orthodoxy, Ch. 6

For both Chesterton and SK, the advent of Christ permanently changed the nature of existence and of the world - and that whether you believe in Christ or not. The key point they share in this regard is that Christ revealed possibilities that were unimagined prior to the Incarnation. After the Incarnation, those possibilities cannot be eradicated from the human spirit, even if Christ Himself is later denied.
The price of denying Christ cannot be a simple return to the pre-Christian world, for the possibilities He revealed will remain in the human imagination - it is only their fulfillment that will become impossible, since that fulfillment is only possible with the grace of God. The result is that post-Christian life can never be a simple return to paganism; it will instead be one of melancholy and despair.
Saturday, September 5, 2015
The Linda Problem and the Conjunction Fallacy
Over at his Neurologica blog, Dr. Steven Novella has an interesting post concerning probability and the "conjunction fallacy". The conjunction fallacy arises from not realizing that the conjunction of two propositions can never be more likely than each proposition taken separately, i.e. "A and B is true" can't be more likely to be true than "A is true." I was hoping to comment on his blog about it, but Wordpress won't let me register, giving me "internal server error" messages every time I try. So I'll just post my commentary here.
The specific case taken in Novella's blog post involves a study that posed the following problem:
Participants are given information about a hypothetical woman named Linda:
- (E) Linda is 31 years old, single, outspoken, and very bright. She majored in philosophy. As a student, she was deeply concerned with issues of discrimination and social justice, and also participated in antinuclear demonstrations.
After reading the description of the target E, the participants were asked to estimate the probability of a number of statements referring to E. Three of the statements are as follows:
- (T) Linda is a bank teller.
- (F) Linda is active in the feminist movement.
- (T ∧ F) Linda is a bank teller and is active in the feminist movement.
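The conjunction rule itself is easy to verify mechanically. The sketch below (Python, with an invented population whose proportions are purely illustrative assumptions) counts members satisfying T, F, and T ∧ F and checks that the conjunction can never be more frequent than either conjunct:

```python
import random

random.seed(0)

# Hypothetical population: each person is an (is_teller, is_feminist) pair.
# The base rates (5% tellers, 30% feminists) are made up for illustration.
population = [(random.random() < 0.05, random.random() < 0.30)
              for _ in range(100_000)]

n = len(population)
p_t = sum(t for t, f in population) / n         # P(T)
p_f = sum(f for t, f in population) / n         # P(F)
p_tf = sum(t and f for t, f in population) / n  # P(T and F)

# The conjunction rule: P(T and F) can exceed neither P(T) nor P(F),
# because the conjunction picks out a subset of each conjunct's set.
assert p_tf <= p_t and p_tf <= p_f
print(f"P(T)={p_t:.3f}  P(F)={p_f:.3f}  P(T and F)={p_tf:.3f}")
```

Whatever base rates are plugged in, the assertion holds, since every person counted for T ∧ F is also counted for T alone.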
It's unfortunate that the scientists did not follow up with interviews investigating the thought processes of the participants (at least I couldn't find where they had). They just immediately conclude that the participants are guilty of the conjunction fallacy and blame it on a reliance on "intuition". But I suspect that if the participants were asked straightforwardly about the conjunction fallacy - "Is both A and B being true more probable than A by itself being true?" - most everyone would come up with the right answer. So there is likely more going on here than a simple failure to understand the conjunction fallacy.
What's going on, I think, is that the Linda Problem is actually a poorly formed question in probability. Likelihoods have meaning only in the context of an implied probabilistic experiment. We understand what "there is a 50% chance it will rain tomorrow" means because we supply for ourselves the implied probabilistic context: "Given days with meteorological conditions like today, half the time the next day is rainy and half the time it is not." The question about tomorrow's weather is really a continuation of the experiment and we estimate the probability based on prior outcomes.
But it's the case that once a probabilistic experiment has occurred, the probability of the outcome for that experiment goes to 1 and all other outcomes go to 0. Once the dice are rolled and come up 7, the likelihood that the outcome of that experiment was 7 is 1 and that it was anything else is 0.
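The collapse can be stated as conditioning on the observed outcome. A minimal sketch with two dice, comparing the probability of a 7 before the roll with the degenerate probabilities after a particular (hypothetical) roll has occurred:

```python
from itertools import product
from fractions import Fraction

# Before the roll: probability over the 36 equally likely outcomes.
outcomes = list(product(range(1, 7), repeat=2))
p_seven_before = Fraction(sum(a + b == 7 for a, b in outcomes), len(outcomes))
print(p_seven_before)  # 1/6

# After a particular roll has occurred, the experiment is over:
# the observed outcome has probability 1 and every other outcome 0.
observed = (3, 4)  # suppose this roll happened
p_seven_after = 1 if sum(observed) == 7 else 0
p_eleven_after = 1 if sum(observed) == 11 else 0
print(p_seven_after, p_eleven_after)  # 1 0
```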
Now consider the statement T, "Linda is a bank teller." There is no probabilistic context here. Linda is what she is and is nothing else, like a dice roll that has already happened. So the probability that Linda is a bank teller is either 1 or 0 depending on whether she actually is a bank teller. Same with her being a feminist, and same with the conjunction of her being both a bank teller and a feminist. They are all either 1 or 0.
Of course, if Linda is not a bank teller, then the probability of T is 0 and T ∧ F is 0. But if she is a bank teller but not a feminist then the probability of T is 1, F is 0, and T ∧ F is 0. So in that sense it is strictly true that the third statement can never be more probable than the first.
But this is a degenerate use of "likelihood." Likelihood adds nothing to an analysis that is strictly logical. And in fact the participants are not "failed" for a failure to estimate probabilities correctly, but for the alleged failure to perceive the logical necessity of the conditional "If T ∧ F then T". So there is a bit of a bait and switch going on.
People generally approach test questions in good faith. They assume the questions are well-formed, and when they aren't, they provide their own context in an attempt to interpret the question as well-formed. In this case, being explicitly told that the question is probabilistic - and intuiting that any probabilistic question requires a probabilistic context within which the concept of likelihood makes sense - they supply the probabilistic context that is not provided by the question. Really they should say that the problem is degenerate and the likelihood of each statement is either 1 or 0, but we can't say which.
To come up with any other numbers requires a probabilistic background against which to generate a non-degenerate likelihood. This is where the much maligned "intuition" comes in. Forced to generate their own background, the participants likely tell themselves something like "If I was trying to find Linda, would I be more likely to find her starting with the general population of bank tellers, or focusing on the population that is both bank tellers and feminists?" And they reasonably conclude the latter is the better choice. Formally what they are doing is saying "Given a random draw on the populations of bank tellers, or the population that is both bank tellers and feminists, I'm more likely to come up with Linda making a draw on the latter." And they are right about that.
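That "random draw" reading can be made concrete. In the hypothetical simulation below, the base rates and the assumption that feminists are far more likely to fit Linda's description are all invented for illustration; under those assumptions, a draw from the teller-and-feminist subpopulation yields a Linda-like person more often than a draw from all tellers, even though the subpopulation is strictly smaller:

```python
import random

random.seed(1)

# All numbers here are invented assumptions, not data.
def person():
    teller = random.random() < 0.05
    feminist = random.random() < 0.30
    # Assumption: a feminist is far more likely to match Linda's profile.
    linda_like = random.random() < (0.40 if feminist else 0.02)
    return teller, feminist, linda_like

population = [person() for _ in range(200_000)]

tellers = [p for p in population if p[0]]
teller_feminists = [p for p in tellers if p[1]]

# Chance that a random draw from each group is "Linda-like".
hit_tellers = sum(p[2] for p in tellers) / len(tellers)
hit_tf = sum(p[2] for p in teller_feminists) / len(teller_feminists)

print(f"Linda-like | teller: {hit_tellers:.3f}")
print(f"Linda-like | teller and feminist: {hit_tf:.3f}")

# The narrower group finds Linda-like people more often, even though
# it is a subset of the tellers (so the conjunction rule still holds).
assert hit_tf > hit_tellers
assert len(teller_feminists) <= len(tellers)
```

Both things are true at once: the conjunction is less probable over the whole population, yet conditioning on it makes finding Linda more likely - which is exactly the question the participants appear to be answering.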
An intelligent participant would be understandably irritated if told later that he got the question wrong because he doesn't understand that if Linda is a bank teller and a feminist, then she is a bank teller. What the experimenters did was bait (or force) the participants into treating the question as a probabilistic one (requiring a probabilistic context), then graded them as though they had asked them a logical question about conjunction.
Thursday, September 3, 2015
The Cult of Suffering and Assisted Suicide
Andrew Stuttaford at the Secular Right has a post on what he calls the Cult of Suffering and assisted suicide.
I was struck by Stuttaford's objection to a certain Sister Constance Veit:
That last paragraph is, I have to say, disgusting. Sister Veit's argument that those wrestling with the later stages of a cruel disease are on a "mission" on behalf of the rest of us, a mission that they never asked to be on, is an expression of fanaticism, terrifying in its absence of empathy for her fellow man.The "a mission that they never asked to be on" reminds of Chesterton's discussion of this point in the chapter "The Flag of the World" in Orthodoxy:
A man belongs to this world before he begins to ask if it is nice to belong to it. He has fought for the flag, and often won heroic victories for the flag, long before he has ever enlisted. To put shortly what seems the essential matter, he has a loyalty long before he has any admiration.In other words, we are born on a mission, and have accepted that mission, long before we ever have the chance to "ask" whether we want to be on it. GKC calls this the "primary loyalty" to life and, like all primary principles, it can be difficult to defend because it is generally what one argues from rather than what one argues to. Historically this primary loyalty was taken for granted as obvious and commonsensical, like patriotism and loyalty to one's country - in this case, "cosmic patriotism."
Life begins in suffering - birth is a traumatic experience - and involves suffering of some sort until death. Until very recently, regular and persistent pain was a fact of life. Imagine having a toothache before novocaine or a kidney stone before modern surgery. My grandfather's generation would pull their own teeth with a pair of pliers. And I remember reading about an instrument people once inserted in themselves all the way up to their kidneys in order to crush kidney stones so they could later be passed in excruciating pain.
And yet, historically, persistent suffering of a physical variety was not what generally drove people to suicide. Those reasons were typically emotional - Romeo and Juliet or stockbrokers jumping off buildings after the 1929 crash - or matters of honor: Roman (or, recently, Japanese) generals doing themselves in after a defeat, or pederasts caught in the act (King George V: "Good grief! I thought chaps like that shot themselves.") If persistent suffering were something that could only be answered with death, everyone would have killed himself 200 years ago. So much for the human race.
The problem with suffering is that it is a fact of life that doesn't go away whatever your philosophy. (Well, that is not quite true: Death makes it go away.) Mr. Stuttaford speaks of "empathy for your fellow man" but I wonder what his "empathy" actually means in practice. The Little Sisters of the Poor minister to the dying who are beyond hope of recovery. Whatever Stuttaford thinks of their empathy, they at least make sure the dying do not die alone or friendless. And they offer them the hope that their suffering is not meaningless. Does Stuttaford spend any time with the dying, or does his "empathy" extend only so far as the abstract position that they should be offered a lethal syringe? I find such "empathy" far more horrifying than anything Sister Veit says - and in fact is not empathy at all but merely an embrace of the Cult of Death. To that I prefer the Cult of Suffering.
Saturday, August 29, 2015
The Quotable Chesterton
"A great classic means a man whom one can praise without having read." - G.K. Chesterton
It's virtually a cliche to point out that Chesterton is among the most quotable of authors. But it's easy to misunderstand the Chesterton quote taken out of context. For instance, take the quote above, from his essay "Tom Jones and Morality" in All Things Considered. Our first reaction to it may be to think that GKC is being ironic and taking a swipe at people who talk up a classic without having read it. But in context it is clear that GKC means no such thing and intends just what he says.
Chesterton's point is ultimately conservative in the best sense of the word. A great classic becomes so based on the developed opinion of mankind over many decades or centuries. We can praise a classic without having read it based on trust in that common, longstanding opinion. I can, Chesterton says, talk of "great poets" like Pindar without ever having read Pindar because "a man has got as much right to employ in his speech the established and traditional facts of human history as he has to employ any other piece of common human information." And the status of great classics is one of those "established and traditional facts."
While GKC defends the right of men to praise a classic without having read it, he disputes a right to condemn a classic without having read it. The reason should be obvious. Praising a classic is submitting to the historically developed consensus concerning a work; condemning one is contradicting that tradition and, so, going it on your own. If you are going to contradict the received opinion, you've got to have some reasons for doing so, and it is hard to see how you could have good ones without having read the work in question.
GKC never wrote pithy quotes for the sake of being quoted. His wit is always a spur to more considered reflection - a reason for us to be careful of a GKC quote absent context.
Friday, August 21, 2015
Science Discovers Socrates
When I first began to seriously read philosophy, and by that I mean reading Plato, Aristotle and Aquinas directly and not through summaries or interpretations of them, perhaps the most thrilling discovery I made was the extent to which they anticipated just about every important philosophical position that might be taken. What I had thought were modern views that the ancients were too ignorant or naive to conceive had in fact been explored by them, and were often treated more intelligently than they were by their supposed modern betters. Nowhere is this more true than in Plato.
For instance, the objection that philosophy is just a verbal game that never really proves anything seems like a modern objection based on a review of the long history of philosophy. But we find that this is actually an ancient objection, and in fact was at the heart of the charges against Socrates at his trial. Socrates, it was claimed, just played verbal games making the weaker argument appear the stronger, misleading his young followers. Or the objection that there is no objective morality, and that "right" and "wrong" are in fact defined by whomever is the strongest and able to impose his views. We like to think that it was the naive ancients who believed in things like ghosts and objective morality, whereas we moderns, wiser through science and cultural experience, no longer fall for such things. But the idea that "right" and "wrong" have no objective foundation is a very ancient opinion and is the subject of the Platonic dialog Gorgias, in which Socrates has a spirited argument with a defender of such a view.
I recently had, once again, the experience of reading an intelligent modern author (and scientist) elaborate what he thought was a novel insight but was one which, naturally, had been explored by Plato thousands of years ago. I refer you to Dr. Steven Novella's Neurologica blog, in which he wrote a post discussing Expertise and the Illusion of Knowledge. The post begins with:
In general people think they know more than they do. This is arguably worse than mere ignorance - having the illusion of knowledge.
Anyone familiar with Plato will immediately see that Dr. Novella is practically quoting Socrates in the Apology. But he does not seem to be familiar with Plato, and he goes on to describe the scientific investigation that backs up the assertion of the illusion of knowledge, as though the possibility of the illusion of knowledge had not already been decisively established for Western culture twenty-five hundred years ago in Athens.
Most of his blog post is concerned with the scientific investigation of the illusion of knowledge, and it is only at the end of the post, and almost in passing, that Dr. Novella approaches but never actually raises the truly decisive question:
As always, I encourage my readers to apply these lessons not only to others but to themselves. The Dunning-Kruger effect and the illusion of knowledge apply to everyone, not just to others.
The horrifying thing about the illusion of knowledge is that when you have it, you don't know you do. That is why it is an illusion. And the question of questions is: How do I know when I truly know something as distinct from when I only think I know it?
It's not enough to merely mention the Dunning-Kruger effect and move on, as though simple awareness of the effect is sufficient to inoculate one against it. The scientists used made-up terms and fake concepts (like "annualized credit") to measure the extent to which subjects claimed knowledge they could not possibly have (since there was nothing to know), and perhaps it would be a good start to make sure we ourselves are not trading in deliberately bogus concepts. But that's not really the problem that faces us. The problem for us is that, even trading in legitimate concepts, we can end up believing we know things to be true that we don't.
Before discussing Plato's answer to the question of how we know when we truly know, let's consider modern approaches to the question. Descartes could be said to have launched the modern era by proposing universal doubt as the true way to found epistemology (or, the science of how we know what we know). Doubt all that you know, and what can survive that doubt can be confidently embraced as truly known. Famously, Descartes concluded the one thing that survived universal doubt was the fact of his own thinking - cogito ergo sum. From that nugget, Descartes reconstructed the world of common sense, including the existence of God.
Unfortunately, it turned out that Descartes's procedure wasn't the pure doubt he thought it was. Why, for instance, is thinking the crucial existential act? I dance therefore I am, I pray therefore I am, I eat therefore I am all work as well. In fact, as Kierkegaard tells us, the simple I is sufficient to establish existence. I anything therefore I am works because it is really I am that comes first and anything else comes later. What Descartes's approach does is falsely privilege thought over existence, as though existence were held in suspense until thought ratified it. Instead, the truth of our own existence is immediately known to us, and the conclusion we should draw is not that existence is the one thing thought can safely conclude, but that it was foolish for thought to ever doubt existence in the first place.
This may sound like one of those philosophical points that really has no bearing on anything anyone is really interested in but it is far from that. For the Cartesian move can be summed up in the principle that doubt is its own justification or, in other words, that we are justified in doubting something by the simple fact that it can be doubted. This Cartesian attitude has become deeply embedded in the modern consciousness, not just in philosophers, but in the common man as well. And it has terrible effects because it is false to human nature.
Human nature is incarnate - we exist as embodied beings in time and space. Time starts running for us as soon as we are born and does not stop for us until we die, and every moment of that time existence makes demands on us, whether we doubt those demands or not. As children, we must be fed, kept warm and educated. A child cannot doubt and, in any event, should not doubt what he is presented with. A baby who somehow was able to doubt the value of the food he was given and refused to eat until the nature and necessity of food was established for him would soon die; a child who doubts his parents' admonitions not to wander off with strangers may very likely find himself in an unanticipated but dreadful situation. So by the time a child has grown old enough to learn of Descartes and considers flirting with the process of universal doubt, he has already spent many years not doubting and, in fact, could only have arrived at the position of being able to doubt through that non-doubt (which I will give the name faith for purposes of brevity). Will he then embrace doubt, including doubt of the very life story that brought him to the place at which he could doubt? This isn't a bold move into sure knowledge, but the deliberate forgetting of that which made us who and what we are; the consequence of which is the tendency of modern man to wander through life not knowing what he is doing.
Really the situation is this: To get through life, we must believe many things, simply to get on with our day. Universal doubt is an existential impossibility and is the arbitrary decision to take one side of the analysis of error Kierkegaard poses at the beginning of Works of Love: One can go wrong by believing that which is false, but one can also go wrong by failing to believe that which is true. The modern man following Descartes assumes the downside is all in falling into the former error. But falling into the latter error is arguably worse, Kierkegaard tells us, because through it we close ourselves off to the best things in life, which can only be had through faith.
One way to think of the Cartesian approach is as an attempt to find an absolute starting point for philosophy; a point which can be embraced by any man, anywhere, as the start of his thought. Descartes, the mathematician, is naturally thinking of things like geometry, which has an absolute starting point in Euclid's postulates. Anyone, anywhere, at any time who wishes to take up geometry must do it, if he is to do it legitimately at all, with these same postulates. Can the same be said of thought in general? If so, then we could get past the endless dialog of opinion that was characteristic of philosophy and so distressed the founders of modern thought.
The problem, as we've seen, is that we have already embraced many things, things that have made us who we are, by the time we arrive at a place where we could undertake Cartesian doubt. Geometry can start anytime we want, but life has already started and conditioned us by the time we become philosophically aware. And it continues to condition us even as we ponder it. What this means is that, unlike geometry, there can be no absolute starting point to philosophy. The ancient dialog of opinion that characterizes classical philosophy is not a peculiar feature of that philosophy, but is reflective of the substance of philosophy, which is human existence.
When we arrive at the point at which philosophical consciousness is possible, we have already been conditioned by our upbringing and education. We already have a set of beliefs about the world and ourselves, about what the nature of the world is and who we are, about what is good and evil, about what is important and not important. Philosophical awareness, whether of the Socratic or Cartesian variety, can begin to happen when we realize that we do not in fact know all that we think we know. The Socratic approach to this realization, unlike the Cartesian approach, is not therefore to throw everything we believe overboard. It is, rather, to understand that human nature is such that we must accept as true many things that have not as yet survived our critical scrutiny. It is to continue to live and commit ourselves in light of those beliefs, and to gradually but methodically subject those beliefs to philosophical scrutiny.
Notice how subjective this process is. By subjective I merely mean that every individual will have had a different experience and be equipped with a different set of opinions by the time he comes to philosophical consciousness. And philosophy for him can only mean working through the set of opinions that are peculiar to him. Thus there is no absolute starting point to philosophy because there is no absolute starting point to life. The only universal starting point was established by Plato and, brilliantly, explored in his writing. By writing his philosophy in dialog form, following the examination of opinion by Socrates, Plato communicates the truth that philosophy can only mean working through the opinions particular to a man - you - and not some abstract set of opinions or truths falsely claimed to be a priori universal.
Some of the conundrums that puzzle us moderns show how far we have strayed from the Socratic viewpoint. For instance, one often hears the assertion: If you had grown up a Jew you would be a Jew now, or if you had grown up a Muslim, you would be a Muslim now. The only reason you are a Christian is that you were born into a Christian family. The implication, Cartesian in spirit, is that we can only really find the truth by abstracting ourselves out of the existential commitments into which we are born. But this is simply false. Socrates did not discard the social and cultural obligations into which he was born as an Athenian. In fact, his last words poignantly show that his obligations were on his mind right to the end: "Crito, we owe a cock to Asclepius." What he did do was philosophically investigate those commitments as he lived them, as shown, for instance, in the dialog Euthyphro. That I might be a Muslim today were I born in a Muslim country simply shows that - hopefully - I would not think human obligation goes away just because I doubt it, even if I were a Muslim. It is true that I might not have experienced the philosophical freedom I do now were I born Muslim, but this does nothing to undermine the philosophical freedom I do have having been born here. In other words, the fact that I might have been born a Muslim and never really challenged it philosophically does nothing to show that there is anything philosophically suspect in being born a Catholic and staying a Catholic. It might just be - and I think it is - that Catholicism is the one religion that really can withstand philosophical scrutiny.
So what is Plato's answer to the question of how we distinguish what we know from what we only think we know? The answer is that we know something to the extent that we can answer for it; that is, that it can withstand philosophical scrutiny in the form of Socratic cross-examination. This answer is both subjective and not absolute; I know something to the extent that I can provide reasons for it that can withstand scrutiny. And it is not absolute because cross-examination never has an absolute end. Our views can always be subject to further challenge. Put another way: I know something when I can provide a good answer to the question - How do you know that?
Returning to Dr. Novella and his post, his last sentence admonishing his readers to take account of the Dunning-Kruger effect (historically known as the Socratic insight) in their own thinking constitutes a Socratic moment. If we stop and ponder the implications of the realization of our own ignorance, we may find ourselves open to a truly philosophical adventure - one in which there is no absolute starting point but which has an absolute end in the truth. One way to short-circuit this adventure is by positing an absolute starting point to thought - be it Cartesian universal doubt, or, as seems to be the case with Dr. Novella, the value of science. But the value of science, and indeed what constitutes science vs. the pseudoscience Novella battles in his blog, are not themselves scientific but meta-scientific (i.e. philosophical) questions. And as such they can only be resolved through the dialog of opinion.
So embrace the Dunning-Kruger effect, but turn to Plato to discover what it truly means.
Saturday, July 18, 2015
Relating Ourselves to Indirect Knowledge, Pt. 2
In part 1 of this series, I began a discussion of how we can use reason to relate ourselves to indirect knowledge. Indirect knowledge is, briefly, knowledge whose immediate grounds for truth we do not ourselves know. Instead, someone else knows the reasons, and we are related to that knowledge through their mediation. Examples include complicated mathematical proofs (like the one recently demonstrated for Fermat's Last Theorem). We might not be able to follow the logic, but the mathematicians can, and we can appreciate what the mathematical geniuses have done. Or scientific claims like global warming, where we cannot possibly conduct or review the science ourselves, but instead must trust what the relevant experts say about it.
Relating ourselves to indirect knowledge is very different from relating ourselves directly to knowledge. The latter involves a consideration of truth immediately in terms of the fundamental reasons for something's being true or not. There is no mediator. In the former, the crucial question is how we judge the mediator, since we must take his word respecting the fundamental reasons for the truth or falsity of something. In my earlier post, I pointed to Socrates as an example of how to evaluate mediators, and used his example in the Apology: We must test a mediator to discover whether he himself is able to separate his knowledge from his opinions, and so give us only his expert knowledge and not also, in addition, his non-expert and perhaps poorly founded opinions masquerading as expert knowledge. I gave Carl Sagan as a classic example of the expert who fails Socratic examination. In such cases, an expert can still be useful, but we must be very careful to separate what he genuinely knows through his expertise (the wheat) from the mass of non-expert opinion he gives along with it (the chaff).
We may also consider that indirect knowledge can never contradict direct knowledge. There is only one truth and it is the same for us as it is for everyone else. Thus we know 2+2=4 directly, and any purportedly expert theory that ends up contradicting that truth (implicitly as well as explicitly) must be suspect; for whatever the expert knows, he can't know that 2+2 equals something other than 4. That's an obvious and trivial example, and better examples are not hard to find. Let's look at what Jerry Coyne tells us about truth, fact and knowledge on pages 186 and 195 of Faith vs Fact:
(Begin quote)
Relating ourselves to indirect knowledge is very different from relating ourselves directly to knowledge. The latter involves a consideration of truth immediately in terms of the fundamental reasons for something's being true or not. There is no mediator. In the former, the crucial question is how we judge the mediator, since we must take his word respecting the fundamental reasons for the truth or falsity of something. In my earlier post, I pointed to Socrates as an example of how to evaluate mediators, and used his example in the Apology: We must test a mediator to discover whether he himself is able to separate his knowledge from his opinions, and so give us only his expert knowledge and not also, in addition, his non-expert and perhaps poorly founded opinions masquerading as expert knowledge. I gave Carl Sagan as a classic example of the expert who fails Socratic examination. In such cases, an expert can still be useful, but we must be very careful to separate what he genuinely knows through his expertise (the wheat) from the mass of non-expert opinion he gives along with it (the chaff).
We may also consider that indirect knowledge can never contradict direct knowledge. There is only one truth and it is the same for us as it is for everyone else. Thus we know 2+2=4 directly, and any purportedly expert theory that ends up contradicting that truth (implicitly as well as explicitly) must be suspect; for whatever the expert knows, he can't know that 2+2 equals something other than 4. That's an obvious and trivial example, and better examples are not hard to find. Let's look at what Jerry Coyne tells us about truth, fact and knowledge on pages 186 and 195 of Faith vs Fact:
(Begin quote)
For consistency, I'll again use the Oxford English Dictionary's definitions, which correspond roughly to most people's vernacular use. "Truth" is "conformity with fact; agreement with reality; accuracy, correctness, verity (of statement or thought.)" Because we're discussing facts about the universe, I'll use "fact" as Stephen Jay Gould defined "scientific facts": those "confirmed to such a degree that it would be perverse to withhold provisional assent." Note that these definitions imply the use of independent confirmation - a necessary ingredient for determining what's real - and consensus, that is, the ability of any reasonable person familiar with the method of study to agree on what it confirms... Finally, "knowledge" is simply the public acceptance of facts; as the Dictionary puts it, "The apprehension of fact or truth with the mind; clear and certain perception of fact or truth; the state or condition of knowing fact or truth." What is true may exist without being recognized, but once it is it becomes knowledge. Similarly, knowledge isn't knowledge unless it is factual, so "private knowledge" that comes through revelation or intuition isn't really knowledge, for it's missing the crucial ingredient of verification and consensus...
"I'm hungry," my friend tells me, and that too is seen as extrascientific knowledge. And indeed, any feeling that you have, any notion or revelation, can be seen as subjective truth or knowledge. What that means is that it's true that you feel that way. What that doesn't mean is that the epistemic content of your feeling is true. That requires independent verification by others. Often someone claiming hunger actually eats very little, giving rise to the bromide "Your eyes are bigger than your stomach."
(Emphases in original and end quote).
Socrates once put forward the observation that flute-playing implies a flute player. Similarly, knowledge implies a knower. There is no knowledge without someone knowing that knowledge or, in other words, knowledge is the substance of the act of knowing. What this means is that, contra Coyne, all knowledge is subjective, meaning that all knowledge is knowledge only because it is known by someone, somewhere, at some time. The fact that all knowledge is subjective is a piece of primary knowledge - it is something we can know directly for ourselves simply by reflection on the nature of things. Thinkers like Coyne like to speak of the abstraction "science", as though it is a disembodied process generating results all on its own, but we should remember that science is but the activity of scientists, and to the extent that anything is known by science, it is known by individual scientists here and there.
The "independent confirmation" of which Coyne writes is a useful and wonderful thing, but he fails to realize that it is dependent on the "subjective truth or knowledge" that he disparages. "I'm hungry" is certainly one thing we can say; another is "I hear or have read your experience in confirming my scientific experiment." The latter is as subjective as the former. Coyne claims that the former needs independent verification of its epistemic content (that content apparently being "I need food.") Well what about the latter? The epistemic content of the latter is that "it is a fact that you have confirmed my scientific experiment." This would seem to need independent verification as well. How will I get it? By listening to something else you say or write, or what someone else has said or written? Then those subjective experiences - which as experiences are also of the form "I am hearing you say that..." - are themselves subject to the same requirement of independent verification. We have an infinite regress here, and for a very good reason. Any contact I have with reality will be subjective, simply because I am me, and science can escape that truth only on pain of indulging in magical thinking. Introducing a radical divide between our subjective experience and its epistemic content destroys not only Coyne's intended target of religious belief, but the very possibility of knowledge.
"I'm hungry" does not always mean that I need food. But in the normal course of events it does; that is why nature gave us the feeling. "I hear you saying that you have confirmed my experiment" doesn't always mean I have heard you say that - I could be dreaming, hallucinating or simply have misheard you - let alone that you have in fact confirmed my experiment. But in the normal course of events it does, and in the normal course of events I might reasonably take for granted that you have in fact confirmed my experiment. Subjective experience is not indubitable; the attempt to make it indubitable (as in the thinking of Descartes) only leads to yet more fundamental and dangerous misunderstandings. But it is literally all we have got.
The only basis from which to critique our subjective experience is through yet more subjective experience. Doesn't this just involve us in yet another infinite regress? No, because this involves us in the philosophical process of dialectic. Subjective experience does not go on to infinity, but turns back on itself. We criticize subjective experience A in terms of experience B and B in terms of A, deciding what makes the most sense based on how our theories make sense of experience comprehensively.
For instance, consider the ancient philosophical question of the difference between sleeping and waking. How do I know I'm not sleeping right now? I notice that in certain cognitive states the question of whether I am sleeping or waking never occurs to me, and seems like it could not occur. These states, of course, are when I am sleeping, and in fact when the question occurs to me as to whether I am sleeping or waking, I know I am in the process of waking up. So the difference between sleeping and waking seems to be that waking is aware of both itself and the state of sleeping, while sleeping is not aware of either itself or the waking state. Now since I am aware of the distinction between the two states, I must be awake. Thus we have the subjective experience of sleeping (experience A) being critiqued from subjective experience B (waking), with both experiences shedding light on the other (from the perspective of B), leading to a comprehensive insight into both experiences.
Or consider the process of science itself. While flute-playing implies a flute-player, and knowledge implies a knower, science implies a scientist. That is, all science occurs in the context of the subjective experience of a scientist. This is a very valuable piece of direct knowledge that is surprisingly often overlooked. Scientists, being people like you and me, can and must take the everyday world of common sense for granted; not just in their everyday life, but in their scientific endeavors as well. If the microbiologist starts wondering whether he's really looking through his microscope, or the physicist that he's really discussing his results with other physicists and not merely a Matrix-like simulation meant to deceive him, then his science will never get started. There is therefore a dialectic between ordinary experience and the specialized experience of the scientist in the lab.
Coyne seems to be in the grip of a mythical belief that the scientific method allows one, in the moment of science, to transcend human nature itself and reach the otherwise unattainable realm of the "objective." Like all truth myths, it isn't recognized as such but serves as an unarticulated background assumption.
And the cure for it is philosophical reflection on direct experience.
Sunday, June 28, 2015
Recent Supreme Court Actions
While reading Plato's Laws in researching my recent post, I came across a phrase that seemed appropriate to the recent actions of our Supreme Court:
"No human being is competent to wield an irresponsible control over mankind without becoming swollen with pride and unrighteousness." - Laws, Book IV
"No human being is competent to wield an irresponsible control over mankind without becoming swollen with pride and unrighteousness." - Laws, Book IV
Relating Ourselves to Indirect Knowledge, pt. 1
In my last post I brought out the distinction between direct and indirect knowledge, and made the point that we can only evaluate indirect knowledge in light of direct knowledge; here I would like to explore that theme further.
Indirect knowledge is knowledge that we are unable to evaluate in the terms by which it is directly known. For example, it is only the cosmologist who has the time, resources and education to draw scientific conclusions about the physical history of the universe on a cosmic scale. The rest of us, to the extent that we can be related to that knowledge at all, are only related to it through the cosmologist and to the extent that we believe what he tells us about cosmology. The key characteristic of indirect knowledge is, therefore, that it is mediated by another.
Naturally we want to believe only things that are true and avoid believing things that are false. In the case of indirect knowledge, then, this must involve an evaluation of the mediator through which we are related to the knowledge. In direct knowledge, we evaluate the evidential and logical basis for the knowledge ourselves; in indirect knowledge, we evaluate the reliability of the mediator who is, presumably, himself directly related to the evidential and logical basis for the knowledge. (It should be remembered that there might be a chain of mediators; the significant point is that the chain must eventually end in someone directly related to the knowledge. For the purposes of this post, however, there is no significant difference between a chain of one or many, so the chain will be taken to have only one link for the sake of clarity.)
It would be defeating the purpose, of course, if we tried to evaluate the mediator in terms of a direct relationship to the knowledge itself. For instance, there is no point in me trying to evaluate whether a cosmologist is reliable by reviewing his work in light of an application of cosmological science itself. For if I could do that, I could relate myself directly to cosmological knowledge and wouldn't need the cosmologist in the first place. We only avail ourselves of indirect knowledge when direct knowledge is unavailable to us.
While we can't evaluate a mediator directly in terms of the science he mediates, we can evaluate him in terms of his general human nature as a knower. For the canonical example of how to do this, we turn to Socrates in Plato's Apology. It will be recalled that Socrates was told by the oracle at Delphi that he was the wisest of men. Incredulous at this, Socrates attempted to prove the oracle wrong by finding a man wiser than himself, which he thought would not be difficult to do. Among the individuals he interviewed in this quest were the skilled craftsmen. This is the result:
Last of all, I turned to the skilled craftsmen. I knew quite well that I had practically no technical qualifications myself, and I was sure that I should find them full of impressive knowledge. In this I was not disappointed. They understood things which I did not, and to that extent they were wiser than I was. But, gentlemen, these professional experts seemed to share the same failing which I had noticed in the poets. I mean that on the strength of their technical proficiency they claimed a perfect understanding of every other subject, however important, and I felt that this error more than outweighed their positive wisdom. So I made myself spokesman for the oracle, and asked myself whether I would rather be as I was - neither wise with their wisdom nor stupid with their stupidity - or possess both qualities as they did. I replied through myself to the oracle that it was best for me to be as I was. (from the Apology, in the Collected Dialogues of Plato edited by Hamilton and Cairns)

What Socrates has noticed is that being an expert in one thing does not make one an expert in everything, which is of course common sense. But he has noticed something else which is more significant, and even paradoxical, which is that being an expert in a field has a tendency to make people think they have a competence in other areas that is undeserved. I say it is paradoxical because one would think that through the process of becoming an expert in one field a man would realize how difficult it is to become an expert in any field, and so would tend to a natural humility concerning knowledge outside his own specialized field. Yet the opposite seems to happen; becoming an expert in one field tends to make one think he is an expert everywhere.
This observation is even more relevant today than it was in Socrates's time. For as I pointed out in the original post, science becomes more specialized the further it advances. That is, to become a scientific expert today means spending an increasing amount of time on an increasingly narrow domain. Scientists are subject to opportunity cost as much as anyone else; a scientist can only become an expert today on early universe cosmology by spending his time studying that and not other things - for instance genetics, chemistry, electrical engineering or botany, not to mention law, economics, history or philosophy. But, just as in ancient Greece, the expert of today will pretend to a competence outside his narrow area of expertise.
How can we use this principle in our evaluation of indirect knowledge? We should not take for granted that an expert is able to distinguish that which he knows through his expertise and that which holds merely by his opinion or ordinary reason. In other words, he may have no clear self-understanding of what he knows and what he doesn't know and why. Thus what we get from him may be a mix of his expert opinion on the subject on which he is competent - what we want - and his opinions on other subjects on which he has no particular competence better than our own - what we don't want. It is up to us to sort out the one from the other. But beyond that, we should be more likely to rely on the expert testimony of an expert who has the self-awareness to distinguish his expert opinion from his merely ordinary opinion. Such self-awareness indicates that the expert is aware of what it means to know, and we can have more confidence that what he is giving us is in fact only that which is justified by his expert opinion.
The classic example of an expert who is the modern equivalent of the craftsmen Socrates encountered in Athens is Carl Sagan. Sagan, an expert in planetary science, wrote a number of popular books on science (e.g. Cosmos) that explored well beyond his particular competence in astronomy. One of his most popular books, The Demon Haunted World: Science as a Candle in the Dark, attempts to trace science as a singular beacon of knowledge in a world haunted by superstition, religion and pseudoscience. The book is of interest here because of a passage on pp. 256-257 where Sagan issues some "mea culpas" on instances where he went wrong. The instances include the following: estimating the atmospheric pressure of Venus incorrectly; incorrectly estimating the water content of Venusian clouds; thinking there might be plate tectonics on Mars when in fact there weren't; attributing the wrong cause to the high temperatures on Titan; overestimating the effect of burning Persian Gulf oil wells on agriculture in South Asia.
What do these instances all have in common? They are all cases of Sagan admitting error in his particular area of expertise - planetary science. Yet Sagan offered opinions on subjects far beyond planetary science; in The Demon Haunted World itself he makes assertions about history, religion, philosophy, politics and economics, among others. He gives no instances when he was wrong about politics or philosophy. Is this because, bizarrely, he's always right in areas where he's not an expert and only wrong in areas where he is an expert? More likely, Sagan, in his area of expertise, knows when he is right and when he is wrong, but in areas outside of his expertise, he doesn't really know when he is right and when he is wrong.
And it is not hard to find instances when he is wrong in The Demon Haunted World. He claims on p. 155 that Plato "assigned a high role to demons" and quotes the following in evidence:
We do not appoint oxen to be the lords of oxen, or goats of goats, but we ourselves are a superior race and rule over them. In like manner God, in his love of mankind, placed over us the demons, who are a superior race, and they with great ease and pleasure to themselves, and no less to us, taking care of us and giving us peace and reverence and order and justice never failing, make the tribes of men happy and united.

Sagan gives no attribution for this quote, but a little research shows that it is from Book IV of Plato's Laws. The context of the quote makes clear that Plato is not speaking in his own voice, but is recounting the received tradition concerning how mankind was originally ruled in the ancient, golden age of Cronus. And the continuation of the passage shows that it means pretty much the opposite of what Sagan thinks it means:
So the story teaches us today, and teaches us truly, that when a community is ruled not by God but by man, its members have no refuge from evil and misery. We should do our utmost - this is the moral - to reproduce the life of the age of Cronus, and therefore should order our private households and our public societies alike in obedience to the immortal element within us, giving the name of law to the appointment of understanding.

(My translation in Hamilton and Cairns is slightly different from Sagan's, wherever he got it from.) So while the age of Cronus may have been ruled by benevolent demons, ours is not, but we can imitate that golden age by ruling ourselves through the immortal element within us - which for Plato is the soul and in particular the intellectual element of the soul - the "appointment of understanding." Plato, far from giving demons a "high role", is giving them no role at all and instead is urging us to order our affairs through reason. More deeply, Plato is wisely using the tradition of mythology to support the rule of reason; rather than making a Sagan-like move and dismissing any regard for mythology as foolish, Plato acknowledges the wisdom in mythology but turns that respect for tradition to his own purposes. In the present age, Plato argues, respect for tradition cannot take the form it once did - since the present age is manifestly not a golden age, we obviously are not being ruled by benevolent demons even if we once were - and can only take the form of ruling ourselves by the divine element within us, our reason. Sagan, rather than dismissing Plato, could probably have taken some lessons from him in how to influence people.
The point for present purposes, however, is that Sagan was clearly wrong about Plato, and in a way that a simple reading of the passage in context would have revealed to any intelligent reader. Furthermore, Sagan doesn't know he is wrong, the way he knows he was wrong about atmospheric pressure on Venus. The lesson to take away is to trust what Carl Sagan says about strictly scientific issues in planetary science, and to take anything else he says with a truckload of salt.
The conclusion for now is that the first principle in evaluating indirect knowledge is to consider the mediator in terms of his character as a knower in the general sense: Is he able to distinguish what he knows from what he doesn't know? Does he know the limits of his own expertise - what he really knows through it and what he doesn't? A mediator for whom positive answers can be given is more trustworthy, both in his area of expertise and in his being less likely to pass off as expert knowledge that which is not. In any case, it is important to sift through for ourselves what an expert tells us, sorting out what his expertise really justifies and what it does not.
Friday, June 26, 2015
Science, Philosophy, Direct and Indirect Knowledge
For many, including Jerry Coyne, the significant distinction in knowledge is between scientific knowledge and all other kinds of knowledge (if there are any; in his Faith vs. Fact, Coyne can barely bring himself to acknowledge anything other than science.)
But the more important distinction for us is between direct and indirect knowledge. Direct knowledge is knowledge that is known immediately by us and on our own authority. Indirect knowledge is knowledge that we are related to only through someone else; it is mediated by those others and therefore always involves the issue of authority, for it is on the basis of authority that we determine whom to listen to or not.
Examples of direct knowledge include things like the fact that you can't be in two places at the same time, that you are younger than your parents, and that dogs are produced by nature but automobiles are only products of human artifice. Some (but not all) of mathematics is direct knowledge. You don't need an authority to tell you that 2+2=4. And if you can follow Euclid's proof that there are an infinite number of primes, then the fact that there are an infinite number of primes is direct knowledge for you.
Suppose you can't follow the proof. Then you can still be related to that fact as knowledge, but only indirectly through the authority of someone else who can follow the proof. A consequence of this is that the same piece of knowledge can be known directly by some and indirectly by others. Everyone knows 2+2=4 on his own authority; but very few people know that Fermat's Last Theorem is true on his own authority, for its proof is so sophisticated that only the most educated mathematicians can follow it.
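Euclid's argument mentioned above can even be run as a computation: given any finite list of primes, their product plus one must have a prime factor missing from the list, since dividing it by any listed prime leaves remainder 1. A minimal Python sketch (the function name is my own):

```python
def next_new_prime(primes):
    """Euclid's construction: (product of the given primes) + 1
    has a prime factor that cannot appear in the given list,
    because dividing by any listed prime leaves remainder 1."""
    n = 1
    for p in primes:
        n *= p
    n += 1
    # Find the smallest prime factor of n by trial division.
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d
        d += 1
    return n  # n itself is prime


# Grow a list of primes indefinitely, starting from [2].
primes = [2]
for _ in range(4):
    primes.append(next_new_prime(primes))
print(primes)  # → [2, 3, 7, 43, 13]
```

The loop can be continued as long as you like: each pass is guaranteed to produce a prime not already in the list, which is exactly what the proof asserts.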
It can be seen that indirect knowledge depends on direct knowledge. If I'm taking something on the authority of another, it is not unreasonable for him to be taking it on the authority of another as well, but somewhere the chain has to end with someone who simply knows it directly. Otherwise we have a train with nothing but freight cars and no engine. (An example is a child who believes in the Big Bang on the authority of his teacher, who in turn believes it on the authority of cosmologists. But the cosmologists know it directly because they have gone through and understand the scientific case for the Big Bang.)
What about science? Jerry Coyne tells us on page 187 of Faith vs Fact that "I see science as a method not a profession... Any discipline that studies the universe using the methods of 'broad' science is capable in principle of finding truth and producing knowledge. If it doesn't, no knowledge is possible." So to have "science" in the strict sense we must produce it through the method that defines science. Unfortunately, very few of us - actually no one - has the time or resources to develop his entire base of knowledge through the application of scientific methods. We must, to a great degree, rely on the application of the scientific method that others have performed and take their results as a given; or, rather, we can only be related to their scientific knowledge indirectly through appeal to their authority as scientists.
The irony of the advance of science is that the more it advances, the less it becomes directly available to any individual man. Back in the early days of modern science, an intelligent amateur could keep abreast of, and perhaps reproduce, most of the crucial scientific results. It's not hard for him to reproduce Galileo's experiments with rolling balls and, if he can get his hands on a telescope, to verify the existence and movements of Jupiter's satellites for himself. And he can easily reproduce Franklin's experiments with electricity or Pascal's with atmospheric pressure. But as science advances, it requires increasingly expensive and elaborate apparatus to construct experiments; and those experiments themselves require a much larger base of knowledge to understand. A high school student can be brought to an understanding of Galileo's experiments in acceleration in the course of one day's class. He'll need another four or so years of intensive education, at least, to understand how and why recent experiments have demonstrated the existence of the Higgs boson, assuming he is capable of mastering the relevant material at all. And that student, while mastering physics, will not be spending his time mastering biology and genetic science, so that, however much he might end up directly related to knowledge in physics, he will still be indirectly related to all that genetic science produces, and all that the other sciences produce. So the more science advances, the more all of us are indirectly related to scientific knowledge, including scientists themselves.
It thus becomes crucial for us to understand the distinction between direct and indirect knowledge, how they are related, and how to handle each type of knowledge appropriately. I've already discussed the distinction between the two types of knowledge. How are they related? As pointed out above, indirect knowledge is dependent on direct knowledge, since indirect knowledge is really just direct knowledge removed some number of times from the original source.
But indirect knowledge is dependent on direct knowledge in another way, and that is subjectively. By that I mean the only means we have available to evaluate indirect knowledge is through direct knowledge. When a scientist says that the Big Bang is true, how do I know whether to believe him or not? I could appeal to some other instance of indirect knowledge, for instance that other scientists agree with him, but this only pushes the problem back a step, since I now have to think about how to evaluate that piece of indirect knowledge. Again, at some point I must have recourse to something I simply know directly, through which I can evaluate competing claims of indirect knowledge.
The process of analyzing and appropriating direct knowledge is philosophy. The crucial distinction with direct knowledge is that it is not mediated; that is, it must ultimately be known without reliance on anyone else. Kierkegaard discusses this in his analysis of Socrates in Philosophical Fragments. A true teacher - that is, in my terms, a teacher of direct knowledge - is only the occasion by which someone comes to know, and the process is only complete when the teacher has become dispensable. It is for this reason that philosophy does not "progress" or produce "results" - one of the perennial charges against it. A "result" is knowledge that can be appropriated without reproducing the process by which it came to be known - for example, when an engineer uses the facts about electronic devices to design a system without first proving all those facts scientifically for himself. "Results" are therefore by nature indirect knowledge. Philosophy cannot produce "results" without falsifying itself; and everyone who would make progress in philosophy must reproduce for himself the process by which philosophers have come to know - and in the process, make those philosophers dispensable. There are no "results" that can be handed on from Plato's Republic. But someone who reads it may come to know things for himself that he might otherwise not know.
The fact that the teacher becomes dispensable is one characteristic of philosophy; another is that it appeals to direct experience as its evidential basis, on the eminently reasonable principle that it is the only possible basis. For my own, immediate experience is the only direct contact I have with reality (if in fact I have contact with reality at all); anything else is mediated and therefore a subject of indirect knowledge. This too, like the fact that philosophy doesn't produce "results", sometimes puts people off philosophy, for it makes philosophy seem a matter of purely "subjective" preference. And it is subjective, in the sense that it is only I that have access to my own experience. This is true, necessary and unavoidable, nearly tautological, yet is frequently overlooked. From p. 195 of Faith vs Fact:
"I'm hungry," my friend tells me, and that too is seen as extrascientific knowledge. And indeed, any feeling that you have, any notion or revelation, can be seen as subjective truth or knowledge. What that means is that it's true that you feel that way. What that doesn't mean is that the epistemic content of your feeling is true. That requires independent verification by others. Often someone claiming hunger actually eats very little, giving rise to the bromide "Your eyes are bigger than your stomach."
The fact that you feel hungry is a fact concerning reality as much as any other. Whether you really need to eat or not is irrelevant to the truth that you in fact have the feeling. Ultimately, science itself depends on subjective knowledge, because scientists must read meters and look through microscopes - "I am seeing an amoeba through this lens" or "The voltmeter says 5 volts." There is really no way to escape the subjective nature of these experiences. Trying to "independently verify" them, as Coyne suggests - for instance, by asking someone else whether he sees 5 volts as well - may be a reasonable procedure, but it only works because we take our subjective experience of what someone else tells us - "I am hearing Joe say the voltmeter reads 5 volts" - as itself not in need of independent verification. Otherwise, we are back to the familiar infinite regress that comes up so often in this context.
The philosopher faces the fact that all our knowledge - direct, indirect or otherwise - can ultimately be evaluated only in light of our own personal experience. The philosopher serves as an ultimately dispensable aid in analyzing and discovering the significance and meaning of that experience. The scientist simply takes the meaning of personal experience for granted so he can get on with his science. And he is perfectly justified in this, but he is in danger, like Coyne, of misunderstanding the real relationship between science and philosophy - which is really a misunderstanding of the basic human condition.
Coming next: How direct knowledge is used to evaluate indirect knowledge. Hint: Read Plato's Apology.
Labels: Coyne, general philosophy, Kierkegaard, Plato, science