Showing posts with label general philosophy. Show all posts

Sunday, October 14, 2018

Wisdom 7:7-11 and Philosophy

 "Therefore I prayed, and understanding was given me; I called upon God, and the spirit of wisdom came to me. 
I preferred her to scepters and thrones, and I accounted wealth as nothing in comparison with her. 
Neither did I liken to her any priceless gem, because all gold is but a little sand in her sight, and silver will be accounted as clay before her. 
I loved her more than health and beauty, and I chose to have her rather than light, because her radiance never ceases. 
All good things came to me along with her, and in her hands uncounted wealth."

Our modern understanding of morality sees it as a matter of figuring out abstract rules of behavior. A philosophy class in ethics might devote its time to pondering artificial circumstances as a way to tease out such rules: What should you do, for example, if you happen to be standing next to a railway switch with a train bearing down that will run over five people, when you could divert it onto a side track where it will run over only one? Do you do nothing and allow the five to die? Or do you switch the track and condemn to death the one, who would have survived without your intervention?

This approach is quite foreign to the ancient approach to morality, which did not concern itself so much with rules as with describing and creating the character of a man who would show good judgment in any circumstance. Good judgment was known to the Greeks as the virtue of phronesis; we understand it under the name of prudence or wisdom, although our understanding of prudence is a more timid version of what the Greeks meant by phronesis. Our prudent man is the sort of man who avoids taking chances, whereas the Greek wise man was fully prepared to take chances if the situation truly called for it.

The quote from Wisdom that starts this post summarizes the ancient view of wisdom nicely. Wisdom is more valuable than gold, silver, scepters or thrones, because the man who is not wise (i.e. the fool) will not use such wealth in a manner that is truly to his advantage. On the other hand, the wise man who is not rich will nonetheless possess the character virtues that will allow him to become rich; or, better yet, acquire those things that truly make a fulfilled life (which might be other than thrones or riches). So wisdom is a far more valuable thing to have than any earthly possession or title.

The author of the Book of Wisdom would not be surprised by modern studies showing that the lives of winners of large lottery prizes are, after a few years, indistinguishable from what they were before they got lucky. They may now have a rusting snowmobile in their backyard, drive a ten-year-old Mercedes, and have traveled to Vegas a few times, but as the years go by they end up pretty much where they would have otherwise. The reason, according to ancient wisdom, is that their original circumstances had more to do with their character than luck; and without a change of character, a little luck will not make a lasting difference. Instead of saving and investing the money they've won, they buy a boat and a trip to the Caymans. Instead of spending the money on more education, they spend it on buying Cristal for their friends at the bar. In a few years, the money is gone and they are back where they were.

There is an assumption lurking behind the modern understanding of morality that is not always acknowledged. The assumption is that the difficult part of morality is finding out exactly what the moral rules are; once they are known, it is assumed, following them is not such a difficult thing. The ancients had the opposite view: The basics of morality are not difficult to know - don't lie, cheat, or steal; don't kill your neighbor or covet his wife. The hard part was following morality once it was known.

Even more difficult from the ancient view was how, beyond merely avoiding doing evil, to construct your life so that it is as fulfilling as possible. This is something that requires much more than just rule following. It means discovering what human life is really about, and acquiring the virtues necessary to attain it.

Modern morality has nothing to say about this. It views "freedom" as the greatest good, and is indifferent to what one does with that freedom, as long as those abstract moral rules are followed. The great modern moral crusades, then, concern themselves with defending the rights of individuals to be or to do what they want with their freedom - change from a boy to a girl, for instance, or call themselves a girl when they look exactly like a boy.

For the average person, who is basically of good character and is wondering what to do with his life, the modern answer of "whatever you want" is disappointingly empty. For those so disappointed, the ancients stand ready to listen and answer.

Monday, July 4, 2016

On the Need for Socrates

That's what I do. I drink and I know things.
- Tyrion Lannister

There is always a need for Socrates. But at some times he is needed more than others.

Now is one of those times.

How can you tell? Because there is very little of a true philosophical spirit about.

The philosopher is a lover of wisdom. As Socrates teaches us, this doesn't mean the philosopher is a wise man. The philosopher is a pursuer of wisdom, and you don't pursue what you already have. So the philosopher is a man who is not wise but is driven in the attempt to become wise.

The man who is already wise is not a philosopher because he is not driven to pursue wisdom - he's got it already. The man not interested in wisdom is also not a philosopher - he is not wise but doesn't care to become so, and so he does not pursue it.

Both of these latter types are prevalent today. And both hate the philosopher, whom they (ironically) condemn as arrogant and useless.

It is part of received "wisdom" today that the great philosophical questions cannot be definitively answered. Does God exist? If so, what is His nature? What is the nature and content of true morality? What is justice? Is there life after death? Is man really free or just a slave of nature and its laws? What is the best way to organize society? And many others. The futility of philosophy.

The philosopher, allegedly, is the man who thinks he has answers to some or all of these questions. And if he has those answers, then those who disagree with him are wrong. And that is the substance of the charge of arrogance. How can he be so sure he's right and everyone else is wrong? What makes him so special? Shouldn't he be a little more humble? The arrogance of the philosopher.

And while he is out pretending to know what others don't, he could be doing something useful to actually contribute to society. Instead he whiles away his time contemplating questions that can never be really answered, and never producing anything of value. The uselessness of philosophy.

Anyone concerned that these charges might be leveled at him may be consoled that they were the same charges leveled at Socrates. They are the perennial charges against philosophers, and will be leveled against them as long as man persists. And yet philosophers persist.

The philosophical spirit never quite dies out. For there is always someone, when the received wisdom concerning the futility of philosophy is proclaimed, who asks the question - how do you know that? How do you know that the great questions cannot be answered? Isn't the dogma that they cannot be answered itself a Great Answer, an arrogant assertion that unjustifiably claims to know that every great thinker throughout history failed? Isn't it possible that someone, somewhere along the way, found at least some answers? How can I dismiss a great philosopher, a Plato, Aristotle, or Aquinas, without ever understanding anything of what he thought?

It's rather the philosopher who is humble, isn't it? For he proclaims himself to be ignorant, but doesn't have the gall to assert that everyone else - including everyone throughout history - must have been ignorant as well. How can one possibly come to this latter conclusion?

The one feeble argument made on its behalf is that philosophers still argue about the same questions they always have, and haven't produced any "results" the way science has, or any definitive answers settled once and for all. This is often thought to be a distinctively modern argument, but of course it was made in Socrates's day against him as well. One might call it the Argument from Disagreement, and it has a peculiar nature.

For one thing, it is self-fulfilling. Merely by disagreeing with a philosophical result, for whatever reason good or bad, I create disagreement and therefore evidence against the result. That certain philosophical results are still debated may only mean that some people are incapable of understanding them or unwilling to accept them. And that incapacity and/or unwillingness surely can't prove itself merely by existing. It's not enough merely to note disagreement; it is necessary to show that any particular disagreement has a reasonable basis, and that means doing the work of actually understanding the arguments. But then the whole point of the Argument from Disagreement is to dismiss philosophers without having to go through the work of actually understanding them.

For another thing, there hasn't always been disagreement among philosophers, and there are answers that have received general and enduring agreement. For instance, that harm to another may only be done in self-defense or through civil processes (i.e. a trial) is not something seriously questioned anymore (whereas one of the questions Socrates debated was whether morality consists in doing good to one's friends and evil to one's enemies - a live question at the time. It doesn't, Socrates answered, and his answers form the basis of much of what we take for granted with respect to morality, whether we know it or not).

Instead of the manifestly unsupportable conclusion that everyone in history must have been ignorant concerning the great questions, the philosopher only knows that he himself is ignorant. Whether others are ignorant as well is an open question, and he eagerly learns all he can from the greatest thinkers in the hope that maybe they actually did know something. (Spoiler: They did.)

Something Aristotle taught is that the truth is generally found between two extreme and opposing errors. And when the truth is lost, both the opposing errors become manifest. One of the errors, it seems, is thinking that the truth cannot ever really be found (and if we think about it, we could never reasonably believe this, because then it would constitute the truth we said we couldn't find). The other extreme is thinking that the truth is found easily and without effort.

These extremes seem opposed, and they are, but they circle around and meet each other. For if we think the truth can never really be found, then all particular attempts to do so are necessarily futile, and we arrive at modern cultural relativism. I don't need to understand Confucius or Lao Tzu, Avicenna or the Bhagavad Gita, because they must ultimately be as futile as Socrates and Aristotle. Justice and peace result from an acknowledgement of the relativity of culture, which masquerades as respect for all cultures but is really a universal disrespect. If everyone would acknowledge that they can't know the truth, and that their way of knowing it is not and cannot be any more successful or legitimate than others, then the source of conflict would disappear. This degenerate form of humanitarian universalism is now culturally dominant, and it's easy to see its appeal: It's a ready excuse to get out of the hard work of learning. The old Socratic way offered nothing but a lifetime of learning with no promise of result; the new degenerate universalism lets you do what you want without a guilty conscience.

But not really. Ultimately, that guilty conscience is why the philosopher is hated and why he is necessary. For man is a rational animal, meaning his nature is to know. The philosopher, merely by existing, reminds man of that basic fact of his nature and embarrasses him. The philosopher would not embarrass men if they did not already know, in a deep and hidden place, that they are meant to know yet they do not know. And he is hated because he exposes the easy answers that men have constructed to console themselves rather than face the truly terrifying fact that they don't have any idea who they are or what they are doing.

The modern existential philosopher might leave it at that, but he's a degenerate form. The best philosophers - starting with Socrates - offer hope that you might come to know what you are doing.

Let us, then, in the first place, he [Socrates] said, be careful of admitting
into our souls the notion that there is no truth or health or soundness
in any arguments at all; but let us rather say that there is as yet
no health in us, and that we must quit ourselves like men and do our
best to gain health - you and all other men with a view to the whole
of your future life, and I myself with a view to death.
 - Plato, Phaedo


Saturday, October 24, 2015

Brute Facts

A typical argument for atheism goes like this (in simplified form): Both the atheist and the theist start with a "brute fact", i.e. something that "just is." The theist argues from the existence of the universe ("What caused the universe?") to God (something that "just is.") The atheist responds that if we must accept something that "just is", why not say it is the universe rather than hypothesizing something beyond it like God? That merely, à la Ockham, multiplies hypothetical entities unnecessarily. The universe "just is" and there is no need for God.

It isn't true that the cosmological arguments for God put forward by the great classical philosophers like Aquinas treated God as something that "just is." Indeed, the whole point of the arguments is to establish the existence of something that is much more than something that "just is."

But that is beside the point of the present post, which is to explore the notion of brute facts or things that "just are." My conclusion is that brute facts are intellectually dangerous things, and destroy far more than their deployers suppose. They want to aim the cannon of brute facts at God, but the consequent explosion blows up not just God but our understanding of the universe itself.

Consider what it is to be a "brute" fact. Something that is "brute" is something unintelligible; that is why animals are called "brutes", because they do not possess reason. A "brute" fact is a fact that is unintelligible beyond the bare fact that it is. Clearly, if a fact is brute, there is no point in asking anything more about it, since there is nothing more about it that we can know.

Here is the rub. How do we know a brute fact for what it is when we encounter it? What distinguishes brute facts from intelligible facts? Intelligible facts are facts for which we can find an explanation, you say. But there is nothing to say that brute facts can't appear to have an explanation when they really don't.  That, in fact, is the whole point of the atheist's brute fact argument against the theist: His argument is not that God doesn't really explain the universe should He exist, but that the universe in fact does not stand in need of an explanation in the first place because it is brute.

Newton's theory of gravitation appears to explain why the moon orbits the earth and planets orbit the sun. Perhaps, however, those celestial movements are really only brute facts; then Newton's theory only appears to explain the solar system. You scoff because it is clear that Newton's theory does in fact explain the solar system; it is ridiculous to suppose that it is just by chance that all the planets and their moons happen to orbit in accordance with Newton's theory.

And I would agree, but only because I do not accept the notion of brute facts. For smuggled into your reply is the assumption that you have some idea of the nature of brute facts: Brute facts wouldn't appear to happen in such a way that they conform with some intelligible law. In making that assumption, however, you have implicitly denied the notion of brute facts, for brute facts are facts about which you can say nothing at all beyond the fact that they are (or might be). We can't say what they are like or what they are unlike, or how they might appear or how it is impossible for them to appear. Any supposition along these lines contradicts the brute nature of the supposed brute fact: It is to concede that the fact is in some measure intelligible. If we can say how brute facts cannot appear to us, then we have conceded that brute facts are in some measure knowable beyond the fact that they are, and therefore are not brute.

One of the virtues of David Hume was that he took the notion of brute facts seriously. And he saw that if we allow the notion of brute facts through the door, then we have destroyed the intelligibility of causality altogether - not just for the universe or God. For we never see causality itself, says Hume, only one event following another. And if we don't presuppose that the universe is intelligible - that is, if we take it that brute facts might be lurking around every corner - then the fact that one type of event tends to follow another might just be one of those brute facts waiting to tempt us into false conclusions about causality. We might mistake our becoming accustomed to breaking glass following the flight of a brick for insight into a causal relationship between flying bricks and broken glass, when in fact their relationship might just be a brute fact.

Kant, of course, noticed that Hume's position not only undermined the traditional arguments for God but also any possibility of an actual understanding of the universe, including that of modern science. Kant furthered the Humean project by offering an explanation as to why we tend (falsely) to read causality into the universe. Kant reflects on the fact of experience, and claims that the only way we can have connected experience is for our cognitive faculties to organize it out of the blooming, buzzing confusion around us. In other words, our minds are constructed so as to read into nature notions like causality and substance so that we can deal with it. A very clever advance on Hume, which saved science from Hume's skepticism, but at the price of recasting the subject of science from nature itself to merely how nature appears to us given our cognitive apparatus.

The point here is to be wary when an atheist deploys the brute fact artillery. For those who start firing with brute facts typically do not understand that their shells will land on them as much as anyone else. In particular, they don't realize that the brute facts they deploy to destroy God will destroy the science they love so much as well.

Friday, August 21, 2015

Science Discovers Socrates

When I first began to seriously read philosophy - and by that I mean reading Plato, Aristotle and Aquinas directly, not through summaries or interpretations of them - perhaps the most thrilling discovery I made was the extent to which they anticipated just about every important philosophical position that might be taken. What I had thought were modern views, ones the ancients were too ignorant or naive to conceive, had in fact been explored by them, and were often treated more intelligently than they were by their supposed modern betters. Nowhere is this more true than in Plato.

For instance, the objection that philosophy is just a verbal game that never really proves anything seems like a modern objection based on a review of the long history of philosophy. But we find that this is actually an ancient objection, and in fact was at the heart of the charges against Socrates at his trial. Socrates, it was claimed, just played verbal games making the weaker argument appear the stronger, misleading his young followers. Or take the objection that there is no objective morality, and that "right" and "wrong" are in fact defined by whoever is strongest and able to impose his views. We like to think that it was the naive ancients who believed in things like ghosts and objective morality, whereas we moderns, wiser through science and cultural experience, no longer fall for such things. But the idea that "right" and "wrong" have no objective foundation is a very ancient opinion and is the subject of the Platonic dialog Gorgias, in which Socrates has a spirited argument with a defender of such a view.

I recently had, once again, the experience of reading an intelligent modern author (and scientist) elaborate what he thought was a novel insight but was one which, naturally, had been explored by Plato thousands of years ago. I refer you to Dr. Steven Novella's Neurologica blog, in which he wrote a post discussing Expertise and the Illusion of Knowledge. The post begins with:
In general people think they know more than they do. This is arguably worse than mere ignorance - having the illusion of knowledge.
Anyone familiar with Plato will immediately see that Dr. Novella is practically quoting Socrates in the Apology. But he does not seem to be familiar with Plato, and he goes on to describe the scientific investigation that backs up the assertion of the illusion of knowledge, as though the possibility of the illusion of knowledge had not already been decisively established for Western culture twenty-five hundred years ago in Athens.

Most of his blog post is concerned with the scientific investigation of the illusion of knowledge, and it is only at the end of the post, and almost in passing, that Dr. Novella approaches but never actually raises the truly decisive question:
As always, I encourage my readers to apply these lessons not only to others but to themselves. The Dunning-Kruger effect and the illusion of knowledge apply to everyone, not just to others.
The horrifying thing about the illusion of knowledge is that when you have it, you don't know you do. That is why it is an illusion. And the question of questions is: How do I know when I truly know something as distinct from when I only think I know it?

It's not enough to merely mention the Dunning-Kruger effect and move on, as though simple awareness of the effect is sufficient to inoculate one from it. The scientists used made-up terms and fake concepts (like "annualized credit") to measure the extent to which subjects claimed knowledge they could not possibly have (since there was nothing to know), and perhaps it would be a good start to make sure we ourselves are not trading in deliberately bogus concepts. But that's not really the problem that faces us. The problem for us is that, even trading in legitimate concepts, we can end up believing we know things to be true that we don't.

Before discussing Plato's answer to the question of how we know when we truly know, let's consider modern approaches to the question. Descartes could be said to have launched the modern era by proposing universal doubt as the true way to found epistemology (or, the science of how we know what we know). Doubt all that you know, and what can survive that doubt can be confidently embraced as truly known. Famously, Descartes concluded the one thing that survived universal doubt was the fact of his own thinking - cogito ergo sum. From that nugget, Descartes reconstructed the world of common sense, including the existence of God.

Unfortunately, it turned out that Descartes's procedure wasn't the pure doubt he thought it was. Why, for instance, is thinking the crucial existential act? I dance therefore I am, I pray therefore I am, I eat therefore I am all work as well. In fact, as Kierkegaard tells us, the simple I is sufficient to establish existence. I anything therefore I am works because it is really I am that comes first and anything else comes later. What Descartes's approach does is falsely privilege thought over existence, as though existence were held in suspense until thought ratified it. Instead, the truth of our own existence is immediately known to us, and the conclusion we should draw is not that existence is the one thing thought can safely conclude, but that it was foolish for thought to ever doubt existence in the first place.

This may sound like one of those philosophical points that really has no bearing on anything anyone is really interested in but it is far from that. For the Cartesian move can be summed up in the principle that doubt is its own justification or, in other words, that we are justified in doubting something by the simple fact that it can be doubted. This Cartesian attitude has become deeply embedded in the modern consciousness, not just in philosophers, but in the common man as well. And it has terrible effects because it is false to human nature.

Human nature is incarnate - we exist as embodied beings in time and space. Time starts running for us as soon as we are born and does not stop until we die, and every moment of that time existence makes demands on us, whether we doubt those demands or not. As children, we must be fed, kept warm and educated. A child cannot doubt and, in any event, should not doubt what he is presented with. A baby who somehow was able to doubt the value of the food he was given, and refused to eat until the nature and necessity of food was established for him, would soon die; a child who doubts his parents' admonishments not to wander off with strangers may very likely find himself in an unanticipated but dreadful situation. So by the time a child has grown old enough to learn of Descartes and considers flirting with the process of universal doubt, he has already spent many years not doubting and, in fact, could only have arrived at the position of being able to doubt through that non-doubt (which I will give the name faith for purposes of brevity). Will he then embrace doubt, including doubt of the very life story that brought him to the place at which he could doubt? This isn't a bold move into sure knowledge, but the deliberate forgetting of that which made us who and what we are; the consequence of which is the tendency of modern man to wander through life not knowing what he is doing.

Really the situation is this: To get through life, we must believe many things, simply to get on with our day. Universal doubt is an existential impossibility, and is the arbitrary decision to take one side of the analysis of error Kierkegaard poses at the beginning of Works of Love: One can go wrong by believing that which is false, but one can also go wrong by failing to believe that which is true. The modern man following Descartes assumes the downside is all in falling into the former error. But falling into the latter error is arguably worse, Kierkegaard tells us, because through it we close ourselves off to the best things in life, which can only be had through faith.

One way to think of the Cartesian approach is as an attempt to find an absolute starting point for philosophy; a point which can be embraced by any man, anywhere, as the start of his thought. Descartes, the mathematician, is naturally thinking of things like geometry, which has an absolute starting point in Euclid's postulates. Anyone, anywhere, at any time who wishes to take up geometry must do it, if he is to do it legitimately at all, with these same postulates. Can the same be said of thought in general? If so, then we could get past the endless dialog of opinion that was characteristic of philosophy and so distressed the founders of modern thought.

The problem, as we've seen, is that we have already embraced many things, things that have made us who we are, by the time we arrive at a place where we could undertake Cartesian doubt. Geometry can start anytime we want, but life has already started and conditioned us by the time we become philosophically aware. And it continues to condition us even as we ponder it. What this means is that, unlike geometry, there can be no absolute starting point to philosophy. The ancient dialog of opinion that characterizes classical philosophy is not a peculiar feature of that philosophy, but is reflective of the substance of philosophy, which is human existence.

When we arrive at the point at which philosophical consciousness is possible, we have already been conditioned by our upbringing and education. We already have a set of beliefs about the world and ourselves, about what the nature of the world is and who we are, about what is good and evil, about what is important and not important. Philosophical awareness, whether of the Socratic or Cartesian variety, can begin to happen when we realize not all that we think we know we do in fact know. The Socratic approach to this realization, unlike the Cartesian approach, is not therefore to throw everything we believe overboard. It is, rather, to understand that human nature is such that we must accept as true many things that have not as yet survived our critical scrutiny. It is to continue to live and commit ourselves in light of those beliefs, and to gradually but methodically subject those beliefs to philosophical scrutiny.

Notice how subjective this process is. By subjective I merely mean that every individual will have had a different experience and be equipped with a different set of opinions by the time he comes to philosophical consciousness. And philosophy for him can only mean working through the set of opinions that are peculiar to him. Thus there is no absolute starting point to philosophy because there is no absolute starting point to life. The only universal starting point was established by Plato and, brilliantly, explored in his writing. By writing his philosophy in dialog form, following the examination of opinion by Socrates, Plato communicates the truth that philosophy can only mean working through the opinions particular to a man - you - and not some abstract set of opinions or truths falsely claimed to be a priori universal.

Some of the conundrums that puzzle us moderns show how far we have strayed from the Socratic viewpoint. For instance, one often hears the assertion: If you had grown up a Jew you would be a Jew now, or if you had grown up a Muslim, you would be a Muslim now. The only reason you are a Christian is because you were born into a Christian family. The implication, Cartesian in spirit, is that we can only really find the truth by abstracting ourselves out of the existential commitments into which we are born. But this is simply false. Socrates did not discard the social and cultural obligations into which he was born as an Athenian. In fact, his last words poignantly show that his obligations were on his mind right to the end: "Crito, we owe a cock to Asclepius." What he did do was philosophically investigate those commitments as he lived them, as shown, for instance, in the dialog Euthyphro. That I might be a Muslim today were I born in a Muslim country simply shows that - hopefully - I would not think human obligation goes away just because I doubt it, even were I a Muslim. It is true that I might not have experienced the philosophical freedom I do now were I born Muslim, but this does nothing to undermine the philosophical freedom I do have, having been born here. In other words, the fact that I might have been born a Muslim and never really challenged it philosophically does nothing to show that there is anything philosophically suspect in being born a Catholic and staying a Catholic. It might just be - and I think it is - that Catholicism is the one religion that really can withstand philosophical scrutiny.

So what is Plato's answer to the question of how we distinguish what we know from what we only think we know? The answer is that we know something to the extent that we can answer for it; that is, that it can withstand philosophical scrutiny in the form of Socratic cross-examination. This answer is both subjective and not absolute; I know something to the extent that I can provide reasons for it that can withstand scrutiny. And it is not absolute because cross-examination never has an absolute end. Our views can always be subject to further challenge. Put another way: I know something when I can provide a good answer to the question - How do you know that?

Returning to Dr. Novella and his post, his last sentence admonishing his readers to take account of the Dunning-Kruger effect (historically known as the Socratic insight) in their own thinking constitutes a Socratic moment. If we stop and ponder the implications of the realization of our own ignorance, we may find ourselves open to a truly philosophical adventure - one in which there is no absolute starting point but which has an absolute end in the truth. One way to short-circuit this adventure is by positing an absolute starting point to thought - be it Cartesian universal doubt or, as seems to be the case with Dr. Novella, the value of science. But the value of science, and indeed what constitutes science vs. the pseudoscience Novella battles in his blog, are not themselves scientific but meta-scientific (i.e. philosophical) questions. And as such they can only be resolved through the dialog of opinion.

So embrace the Dunning-Kruger effect, but turn to Plato to discover what it truly means.

Saturday, July 18, 2015

Relating Ourselves to Indirect Knowledge, Pt. 2

In part 1 of this series, I began a discussion of how we can use reason to relate ourselves to indirect knowledge. Indirect knowledge is, briefly, knowledge for which we do not ourselves know the immediate reasons for its truth. Instead, someone else knows the reasons, and we are related to that knowledge through their mediation. Examples include complicated mathematical proofs (like the one recently demonstrated for Fermat's Last Theorem). We might not be able to follow the logic, but the mathematicians can, and we can appreciate what the mathematical geniuses have done. Or scientific claims like global warming, for which we cannot possibly conduct or review the science ourselves, but instead must trust what the relevant experts say about it.

Relating ourselves to indirect knowledge is very different from relating ourselves directly to knowledge. The latter involves a consideration of truth immediately in terms of the fundamental reasons for something's being true or not; there is no mediator. In the former, the crucial question is how we judge the mediator, since we must take his word respecting the fundamental reasons for the truth or falsity of something. In my earlier post, I pointed to Socrates as an example of how to evaluate mediators, and used his example in the Apology: We must test a mediator to discover whether he himself is able to separate his knowledge from his opinions, and so give us only his expert knowledge and not, in addition, his non-expert and perhaps poorly founded opinions masquerading as expert knowledge. I gave Carl Sagan as a classic example of the expert who fails Socratic examination. In such cases, an expert can still be useful, but we must be very careful to separate what he genuinely knows through his expertise (the wheat) from the mass of non-expert opinion he gives along with it (the chaff).

We may also consider that indirect knowledge can never contradict direct knowledge. There is only one truth and it is the same for us as it is for everyone else. Thus we know 2+2=4 directly, and any purportedly expert theory that ends up contradicting that truth (implicitly as well as explicitly) must be suspect; for whatever the expert knows, he can't know that 2+2 equals something other than 4. That's an obvious and trivial example, and better examples are not hard to find. Let's look at what Jerry Coyne tells us about truth, fact and knowledge on pages 186 and 195 of Faith vs Fact:

(Begin quote)
For consistency, I'll again use the Oxford English Dictionary's definitions, which correspond roughly to most people's vernacular use. "Truth" is "conformity with fact; agreement with reality; accuracy, correctness, verity (of statement or thought.)" Because we're discussing facts about the universe, I'll use "fact" as Stephen Jay Gould defined "scientific facts": those "confirmed to such a degree that it would be perverse to withhold provisional assent." Note that these definitions imply the use of independent confirmation - a necessary ingredient for determining what's real - and consensus, that is, the ability of any reasonable person familiar with the method of study to agree on what it confirms... Finally, "knowledge" is simply the public acceptance of facts; as the Dictionary puts it, "The apprehension of fact or truth with the mind; clear and certain perception of fact or truth; the state or condition of knowing fact or truth." What is true may exist without being recognized, but once it is it becomes knowledge. Similarly, knowledge isn't knowledge unless it is factual, so "private knowledge" that comes through revelation or intuition isn't really knowledge, for it's missing the crucial ingredient of verification and consensus...

"I'm hungry," my friend tells me, and that too is seen as extrascientific knowledge. And indeed, any feeling that you have, any notion or revelation, can be seen as subjective truth or knowledge. What that means is that it's true that you feel that way. What that doesn't mean is that the epistemic content of your feeling is true. That requires independent verification by others. Often someone claiming hunger actually eats very little, giving rise to the bromide "Your eyes are bigger than your stomach."
(Emphases in original and end quote).

Socrates once put forward the observation that flute-playing implies a flute player. Similarly, knowledge implies a knower. There is no knowledge without someone knowing that knowledge or, in other words, knowledge is the substance of the act of knowing. What this means is that, contra Coyne, all knowledge is subjective, meaning that all knowledge is knowledge only because it is known by someone, somewhere, at some time. The fact that all knowledge is subjective is a piece of primary knowledge - it is something we can know directly for ourselves simply by reflection on the nature of things. Thinkers like Coyne like to speak of the abstraction "science", as though it is a disembodied process generating results all on its own, but we should remember that science is but the activity of scientists, and to the extent that anything is known by science, it is known by individual scientists here and there.

The "independent confirmation" of which Coyne writes is a useful and wonderful thing, but he fails to realize that it is dependent on the "subjective truth or knowledge" that he disparages. "I'm hungry" is certainly one thing we can say; another is "I hear or have read your experience in confirming my scientific experiment." The latter is as subjective as the former. Coyne claims that the former needs independent verification of its epistemic content (that content apparently being "I need food"). Well, what about the latter? The epistemic content of the latter is that "it is a fact that you have confirmed my scientific experiment." This would seem to need independent verification as well. How will I get it? By listening to something else you say or write, or to what someone else has said or written? Then those subjective experiences - which as experiences are also of the form "I am hearing you say that..." - are themselves subject to the same requirement of independent verification. We have an infinite regress here, and for a very good reason. Any contact I have with reality will be subjective, simply because I am me, and science can escape that truth only on pain of indulging in magical thinking. Introducing a radical divide between our subjective experience and its epistemic content destroys not only Coyne's intended target of religious belief, but the very possibility of knowledge.

"I'm hungry" does not always mean that I need food. But in the normal course of events it does; that is why nature gave us the feeling. "I hear you saying that you have confirmed my experiment" doesn't always mean I have heard you say that - I could be dreaming, hallucinating or simply have misheard you - let alone that you have in fact confirmed my experiment. But in the normal course of events it does, and in the normal course of events I might reasonably take for granted that you have in fact confirmed my experiment. Subjective experience is not indubitable; the attempt to make it indubitable (as in the thinking of Descartes) only leads to yet more fundamental and dangerous misunderstandings. But it is literally all we have got.

The only basis from which to critique our subjective experience is yet more subjective experience. Doesn't this just involve us in another infinite regress? No, because it involves us in the philosophical process of dialectic. Subjective experience does not go on to infinity, but turns back on itself. We criticize subjective experience A in terms of experience B, and B in terms of A, deciding what makes the most sense based on how our theories make sense of experience comprehensively.

For instance, consider the ancient philosophical question of the difference between sleeping and waking. How do I know I'm not sleeping right now? I notice that in certain cognitive states the question of whether I am sleeping or waking never occurs to me, and seems like it could not occur. These states, of course, are when I am sleeping; in fact, when the question occurs to me as to whether I am sleeping or waking, I know I am in the process of waking up. So the difference between sleeping and waking seems to be that waking is aware of both itself and the state of sleeping, while sleeping is aware of neither itself nor the waking state. Now since I am aware of the distinction between the two states, I must be awake. Thus we have the subjective experience of sleeping (experience A) being critiqued from the subjective experience of waking (experience B), with each shedding light on the other (from the perspective of B), leading to a comprehensive insight into both experiences.

Or consider the process of science itself. Just as flute-playing implies a flute-player and knowledge implies a knower, science implies a scientist. That is, all science occurs in the context of the subjective experience of a scientist. This is a very valuable piece of direct knowledge that is surprisingly often overlooked. Scientists, being people like you and me, can and must take the everyday world of common sense for granted; not just in their everyday life, but in their scientific endeavors as well. If the microbiologist starts wondering whether he's really looking through his microscope, or the physicist whether he's really discussing his results with other physicists and not merely with a Matrix-like simulation meant to deceive him, then his science will never get started. There is therefore a dialectic between ordinary experience and the specialized experience of the scientist in the lab.

Coyne seems to be in the grip of a mythical belief that the scientific method allows one, in the moment of science, to transcend human nature itself and reach the otherwise unattainable realm of the "objective." Like all such myths, it isn't recognized as such but serves as an unarticulated background assumption.

And the cure for it is philosophical reflection on direct experience.

Sunday, June 28, 2015

Relating Ourselves to Indirect Knowledge, pt. 1

In my last post I brought out the distinction between direct and indirect knowledge, and made the point that we can only evaluate indirect knowledge in light of direct knowledge; here I would like to explore that theme further.

Indirect knowledge is knowledge that we are unable to evaluate in the terms by which it is directly known. For example, it is only the cosmologist who has the time, resources and education to draw scientific conclusions about the physical history of the universe on a cosmic scale. The rest of us, to the extent that we can be related to that knowledge at all, are only related to it through the cosmologist and to the extent that we believe what he tells us about cosmology. The key characteristic of indirect knowledge is, therefore, that it is mediated by another.

Naturally we want to believe only things that are true and to avoid believing things that are false. In the case of indirect knowledge, then, this must involve an evaluation of the mediator through whom we are related to the knowledge. In direct knowledge, we evaluate the evidential and logical basis for the knowledge ourselves; in indirect knowledge, we evaluate the reliability of the mediator who is, presumably, himself directly related to the evidential and logical basis for the knowledge. (It should be remembered that there might be a chain of mediators; the significant point is that the chain must eventually end in someone directly related to the knowledge. For the purposes of this post, however, there is no significant difference between a chain of one link or many, so the chain will be taken to have only one link for the sake of clarity.)

It would be defeating the purpose, of course, if we tried to evaluate the mediator in terms of a direct relationship to the knowledge itself. For instance, there is no point in me trying to evaluate whether a cosmologist is reliable by reviewing his work in light of an application of cosmological science itself. For if I could do that, I could relate myself directly to cosmological knowledge and wouldn't need the cosmologist in the first place. We only avail ourselves of indirect knowledge when direct knowledge is unavailable to us.

While we can't evaluate a mediator directly in terms of the science he mediates, we can evaluate him in terms of his general human nature as a knower. For the canonical example of how to do this, we turn to Socrates in Plato's Apology. It will be recalled that Socrates was told by the oracle at Delphi that he was the wisest of men. Incredulous at this, Socrates attempted to prove the oracle wrong by finding a man wiser than himself, which he thought would not be difficult to do. Among the individuals he interviewed in this quest were the skilled craftsmen. This is the result:

Last of all, I turned to the skilled craftsmen. I knew quite well that I had practically no technical qualifications myself, and I was sure that I should find them full of impressive knowledge. In this I was not disappointed. They understood things which I did not, and to that extent they were wiser than I was. But, gentlemen, these professional experts seemed to share the same failing which I had noticed in the poets. I mean that on the strength of their technical proficiency they claimed a perfect understanding of every other subject, however important, and I felt that this error more than outweighed their positive wisdom. So I made myself spokesman for the oracle, and asked myself whether I would rather be as I was - neither wise with their wisdom nor stupid with their stupidity - or possess both qualities as they did. I replied through myself to the oracle that it was best for me to be as I was. (from the Apology, in the Collected Dialogues of Plato edited by Hamilton and Cairns)
What Socrates has noticed is that being an expert in one thing does not make one an expert in everything, which is of course common sense. But he has noticed something else more significant, and even paradoxical: being an expert in a field has a tendency to make people think they have a competence in other areas that is undeserved. I say it is paradoxical because one would think that through the process of becoming an expert in one field a man would realize how difficult it is to become an expert in any field, and so would tend to a natural humility concerning knowledge outside his own specialized field. Yet the opposite seems to happen; becoming an expert in one field tends to make one think he is an expert everywhere.

This observation is even more relevant today than it was in Socrates's time. For as I pointed out in the original post, science becomes more specialized the further it advances. That is, to become a scientific expert today means spending an increasing amount of time on an increasingly narrow domain. Scientists are subject to opportunity cost as much as anyone else; a scientist can only become an expert today on early universe cosmology by spending his time studying that and not other things - for instance genetics, chemistry, electrical engineering or botany, not to mention law, economics, history or philosophy. But, just as in ancient Greece, the expert of today will pretend to a competence outside his narrow area of expertise.

How can we use this principle in our evaluation of indirect knowledge? We should not take for granted that an expert is able to distinguish that which he knows through his expertise from that which he holds merely as opinion or by ordinary reason. In other words, he may have no clear self-understanding of what he knows, what he doesn't know, and why. Thus what we get from him may be a mix of his expert opinion on the subject on which he is competent - what we want - and his opinions on other subjects on which he has no competence beyond our own - what we don't want. It is up to us to sort out the one from the other. But beyond that, we should be more willing to rely on the testimony of an expert who has the self-awareness to distinguish his expert opinion from his merely ordinary opinion. Such self-awareness indicates that the expert is aware of what it means to know, and we can have more confidence that what he is giving us is in fact only that which is justified by his expertise.

The classic example of an expert who is the modern equivalent of the craftsmen Socrates encountered in Athens is Carl Sagan. Sagan, an expert in planetary science, wrote a number of popular books on science (e.g. Cosmos) that explored well beyond his particular competence in astronomy. One of his most popular books, The Demon-Haunted World: Science as a Candle in the Dark, presents science as a singular beacon of knowledge in a world haunted by superstition, religion and pseudoscience. The book is of interest here because of a passage on pp. 256-257 where Sagan issues some "mea culpas" on instances where he went wrong. The instances include the following: estimating the atmospheric pressure of Venus incorrectly; incorrectly estimating the water content of Venusian clouds; thinking there might be plate tectonics on Mars when in fact there are not; attributing the wrong cause to the high temperatures on Titan; and overestimating the effect of burning Persian Gulf oil wells on agriculture in South Asia.

What do these instances all have in common? They are all cases of Sagan admitting error in his particular area of expertise - planetary science. Yet Sagan offered opinions on subjects far beyond planetary science; in The Demon-Haunted World itself he makes assertions about history, religion, philosophy, politics and economics, among others. He gives no instances where he was wrong about politics or philosophy. Is this because, bizarrely, he's always right in areas where he's not an expert and only wrong in areas where he is? More likely, Sagan, in his area of expertise, knows when he is right and when he is wrong, but in areas outside his expertise, he doesn't really know when he is right and when he is wrong.

And it is not hard to find instances where he is wrong in The Demon-Haunted World. He claims on p. 155 that Plato "assigned a high role to demons" and quotes the following in evidence:
We do not appoint oxen to be the lords of oxen, or goats of goats, but we ourselves are a superior race and rule over them. In like manner God, in his love of mankind, placed over us the demons, who are a superior race, and they with great ease and pleasure to themselves, and no less to us, taking care of us and giving us peace and reverence and order and justice never failing, make the tribes of men happy and united.
Sagan gives no attribution for this quote, but a little research shows that it is from Book IV of Plato's Laws. The context of the quote makes clear that Plato is not speaking in his own voice, but is recounting the received tradition concerning how mankind was originally ruled in the ancient, golden age of Cronus. And the continuation of the passage shows that it means pretty much the opposite of what Sagan thinks it means:
So the story teaches us today, and teaches us truly, that when a community is ruled not by God but by man, its members have no refuge from evil and misery. We should do our utmost - this is the moral - to reproduce the life of the age of Cronus, and therefore should order our private households and our public societies alike in obedience to the immortal element within us, giving the name of law to the appointment of understanding.
(My translation in Hamilton and Cairns is slightly different from Sagan's, wherever he got it from.) So while the age of Cronus may have been ruled by benevolent demons, ours is not, but we can imitate that golden age by ruling ourselves through the immortal element within us - which for Plato is the soul, and in particular the intellectual element of the soul - the "appointment of understanding." Plato, far from giving demons a "high role", is giving them no role at all and instead is urging us to order our affairs through reason. More deeply, Plato is wisely using the tradition of mythology to support the rule of reason; rather than doing a Sagan-like move and dismissing any regard for mythology as foolish, Plato acknowledges the wisdom in mythology but turns that respect for tradition to his own purposes. In the present age, Plato argues, respect for tradition cannot take the form it once did - since the present age is manifestly not a golden age, we obviously are not being ruled by benevolent demons even if we once were - and can only take the form of ruling ourselves by the divine element within us, our reason. Sagan, rather than dismissing Plato, could probably have taken some lessons from him in how to influence people.

The point for present purposes, however, is that Sagan was clearly wrong about Plato, and in a way that a simple reading of the passage in context would have revealed to any intelligent reader. Furthermore, Sagan doesn't know he is wrong, the way he knows he was wrong about the atmospheric pressure on Venus. The lesson to take away is to trust what Carl Sagan says about strictly scientific issues concerning planetary science, and to take anything else he says with a truckload of salt.

The conclusion for now is that the first principle in evaluating indirect knowledge is to consider the mediator in terms of his character as a knower in the general sense: Is he able to distinguish what he knows from what he doesn't know? Does he know the limits of his own expertise - what he really knows through it and what he doesn't? A mediator for whom positive answers can be given is more trustworthy, both in his area of expertise and in being less likely to pass off as expert knowledge that which is not. In any case, it is important to sift through for ourselves what an expert tells us, sorting out what his expertise really justifies and what it does not.

Friday, June 26, 2015

Science, Philosophy, Direct and Indirect Knowledge

For many, including Jerry Coyne, the significant distinction in knowledge is between scientific knowledge and all other kinds of knowledge (if there are any; in his Faith vs. Fact, Coyne can barely bring himself to acknowledge anything other than science.)

But the more important distinction for us is between direct and indirect knowledge. Direct knowledge is knowledge that is known immediately by us and on our own authority. Indirect knowledge is knowledge that we are related to only through someone else; it is mediated by those others and therefore always involves the issue of authority, for it is on the basis of authority that we determine whom to listen to or not.

Examples of direct knowledge include things like the fact that you can't be in two places at the same time, that you are younger than your parents, and that dogs are produced by nature while automobiles are only products of human artifice. Some (but not all) of mathematics is direct knowledge. You don't need an authority to tell you that 2+2=4. And if you can follow Euclid's proof that there are an infinite number of primes, then the fact that there are an infinite number of primes is direct knowledge for you.
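The proof is in fact short enough that many readers can make it direct knowledge for themselves. Here is a sketch in the standard modern reconstruction of Euclid's argument (not his original wording):

```latex
\textbf{Claim.} There are infinitely many primes.

\textbf{Sketch.} Suppose $p_1, p_2, \ldots, p_n$ were a complete list of the
primes, and consider
\[
  N = p_1 p_2 \cdots p_n + 1 .
\]
Dividing $N$ by any $p_i$ leaves remainder $1$, so no $p_i$ divides $N$. But
every integer greater than $1$ has at least one prime divisor, so $N$ has a
prime divisor missing from the list. No finite list of primes can therefore
be complete. $\blacksquare$
```

If you can answer for each step on your own authority, the infinitude of the primes is, in the terms of this post, direct knowledge for you; if not, you hold it only on the authority of those who can.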

Suppose you can't follow the proof. Then you can still be related to that fact as knowledge, but only indirectly, through the authority of someone else who can follow the proof. A consequence of this is that the same piece of knowledge can be known directly by some and indirectly by others. Everyone knows 2+2=4 on his own authority; but very few people know on their own authority that Fermat's Last Theorem is true, for its proof is so sophisticated that only the most educated mathematicians can follow it.
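For reference, the theorem itself is simple to state, which makes the contrast with its proof all the sharper:

```latex
\textbf{Fermat's Last Theorem.} For every integer $n > 2$, there are no
positive integers $a$, $b$, $c$ such that
\[
  a^n + b^n = c^n .
\]
```

Anyone can grasp the statement, but the proof (which proceeds through the modularity of elliptic curves) is accessible only to specialists - exactly the gap between direct and indirect knowledge under discussion here.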

It can be seen that indirect knowledge depends on direct knowledge. If I'm taking something on the authority of another, it is not unreasonable for him to be taking it on the authority of another as well, but somewhere the chain has to end with someone who simply knows it directly. Otherwise we have a train with nothing but freight cars and no engine. (An example is a child who believes in the Big Bang on the authority of his teacher, who in turn believes it on the authority of cosmologists. But the cosmologists know it directly because they have gone through and understand the scientific case for the Big Bang.)

What about science? Jerry Coyne tells us on page 187 of Faith vs Fact that "I see science as a method not a profession... Any discipline that studies the universe using the methods of 'broad' science is capable in principle of finding truth and producing knowledge. If it doesn't, no knowledge is possible." So to have "science" in the strict sense we must produce it through the method that defines science. Unfortunately, very few of us - actually no one - has the time or resources to develop his entire base of knowledge through the application of scientific methods. We must, to a great degree, rely on the application of the scientific method that others have performed and take their results as a given; or, rather, we can only be related to their scientific knowledge indirectly through appeal to their authority as scientists.

The irony of the advance of science is that the more it advances, the less it becomes directly available to any individual man. Back in the early days of modern science, an intelligent amateur could keep abreast of, and perhaps reproduce, most of the crucial scientific results. It is not hard for such an amateur to reproduce Galileo's experiments with rolling balls or, if he can get his hands on a telescope, to verify the existence and movements of Jupiter's satellites for himself. And he can easily reproduce Franklin's experiments with electricity or Pascal's with atmospheric pressure. But as science advances, it requires increasingly expensive and elaborate apparatus to construct experiments; and those experiments themselves require a much larger base of knowledge to understand. A high school student can be brought to an understanding of Galileo's experiments in acceleration in the course of one day's class. He'll need another four or so years of intensive education, at least, to understand how and why recent experiments have demonstrated the existence of the Higgs boson, assuming he is capable of mastering the relevant material at all. And that student, while mastering physics, will not be spending his time mastering biology and genetic science, so that, however directly he might end up related to knowledge in physics, he will still be indirectly related to all that genetic science produces, and all that the other sciences produce. So the more science advances, the more all of us are indirectly related to scientific knowledge, including scientists themselves.

It thus becomes crucial for us to understand the distinction between direct and indirect knowledge, how they are related, and how to handle each type of knowledge appropriately. I've already discussed the distinction between the two types of knowledge. How are they related? As pointed out above, indirect knowledge is dependent on direct knowledge, since indirect knowledge is really just direct knowledge removed some number of times from the original source.

But indirect knowledge is dependent on direct knowledge in another way, and that is subjectively. By that I mean the only means we have available to evaluate indirect knowledge is through direct knowledge. When a scientist says that the Big Bang is true, how do I know whether to believe him or not? I could appeal to some other instance of indirect knowledge, for instance that other scientists agree with him, but this only pushes the problem back a step, since I now have to think about how to evaluate that piece of indirect knowledge. Again, at some point I must have recourse to something I simply know directly, through which I can evaluate competing claims of indirect knowledge.

The process of analyzing and appropriating direct knowledge is philosophy. The crucial distinction with direct knowledge is that it is not mediated; that is, it must ultimately be known without reliance on anyone else. Kierkegaard discusses this in his analysis of Socrates in Philosophical Fragments. A true teacher - that is, in my terms, a teacher of direct knowledge - is only the occasion by which someone comes to know, and the process is only complete when the teacher has become dispensable. It is for this reason that philosophy does not "progress" or produce "results" - one of the perennial charges against it. A "result" is knowledge that can be appropriated without reproducing the process by which it came to be known - for example, when an engineer uses the facts about electronic devices to design a system without first proving all those facts scientifically for himself. "Results" are therefore by nature indirect knowledge. Philosophy cannot produce "results" without falsifying itself; and everyone who would make progress in philosophy must reproduce for himself the process by which philosophers have come to know - and in the process, make those philosophers dispensable. There are no "results" that can be handed on from Plato's Republic. But someone who reads it may come to know things for himself that he might otherwise not know.

The fact that the teacher becomes dispensable is one characteristic of philosophy; another is that it appeals to direct experience as its evidential basis, on the eminently reasonable principle that it is the only possible basis. For my own immediate experience is the only direct contact I have with reality (if in fact I have contact with reality at all); anything else is mediated and therefore a matter of indirect knowledge. This too, like the fact that philosophy doesn't produce "results", sometimes puts people off philosophy, for it makes philosophy seem a matter of purely "subjective" preference. And it is subjective, in the sense that it is only I who have access to my own experience. This is true, necessary and unavoidable, nearly tautological, yet it is frequently overlooked. From p. 195 of Faith vs Fact:
"I'm hungry," my friend tells me, and that too is seen as extrascientific knowledge. And indeed, any feeling that you have, any notion or revelation, can be seen as subjective truth or knowledge. What that means is that it's true that you feel that way. What that doesn't mean is that the epistemic content of your feeling is true. That requires independent verification by others. Often someone claiming hunger actually eats very little, giving rise to the bromide "Your eyes are bigger than your stomach."


The fact that you feel hungry is a fact concerning reality as much as any other. Whether you really need to eat or not is irrelevant to the truth that you in fact have the feeling. Ultimately, science itself depends on subjective knowledge, because scientists must read meters and look through microscopes - "I am seeing an amoeba through this lens" or "The voltmeter says 5 volts." There is really no way to escape the subjective nature of these experiences. Trying to "independently verify" them as Coyne suggests - for instance, by asking someone else whether they see 5 volts as well - may be a reasonable procedure, but it only works because we take our subjective experience of what someone else tells us - "I am hearing Joe say the voltmeter reads 5 volts" - as itself not in need of independent verification. Otherwise, we are back to the familiar infinite regress that comes up so often in this context.

The philosopher faces the fact that all our knowledge - direct, indirect or otherwise - can ultimately be evaluated only in light of our own personal experience. The philosopher serves as an ultimately dispensable aid in analyzing and discovering the significance and meaning of that experience. The scientist simply takes the meaning of personal experience for granted so he can get on with his science. And he is perfectly justified in this, but he is in danger, like Coyne, of misunderstanding the real relationship between science and philosophy - which is really a misunderstanding of the basic human condition.

Coming next: How direct knowledge is used to evaluate indirect knowledge. Hint: Read Plato's Apology.

Thursday, June 27, 2013

Gilson on the Modern Philosopher

"Many of them live by what they choose to forget."
 - from The Spirit of Medieval Philosophy

Saturday, June 15, 2013

Scientific Absolutes

Steven Novella has a post on SETI (Search for Extraterrestrial Intelligence) here. What interests me are his comments on science and absolutes. Using the classic question concerning the possible existence of a black swan, he has this to say:

Sometimes a hypothesis can be stated in such a way that a single counter-example will disprove it. The now classic example is that all swans are white. A single non-white swan will falsify this hypothesis. How thoroughly do you have to search, however, before we can conclude that all swans are white? Would you have to simultaneously survey every swan in the world? If it takes 10 years to conduct a thorough survey can you be sure that a black swan was not born in the last 10 years? 
The problem here is in thinking in absolutes. Scientific theories, rather, often deal with probabilities and are not necessarily wrong when exceptions are found. In the case of swans, the more thoroughly we look for non-white swans without finding them the greater our confidence is that all swans are white, and we can certainly conclude that most swans are white and that any exceptions are rare.

While it is true that science does not deal with absolute conclusions, it doesn't follow that science doesn't involve absolutes at all. In fact, science can't be done without some thinking in absolutes. Consider those swans that are the subject of a worldwide survey. Assumed in the story is that scientists have no problem distinguishing swans from non-swans, be they black or white. We might say that, as far as the experiment is concerned, scientists are absolutely able to distinguish swans from non-swans.

Why, for instance, on encountering a creature that is furry, has floppy ears, and barks, doesn't a scientist announce a revolutionary discovery: Not only can swans be black, but they can have fur and floppy ears! Because, of course, what the scientist has encountered is a dog and not a swan. Experiments like the one described by Novella presuppose, albeit unconsciously, an Aristotelian natural philosophy - specifically, the distinction between essential and accidental properties of being. An essential property is a property that makes a being the kind of thing it is; an accidental property is a property that, whether a being has it or not, does not change the kind of thing it is.

How is it possible for us to make absolute statements regarding essential properties? How can we know, for instance, that while all swans may not be white, all swans are naturally born with the ability to fly? (I qualify that statement with "naturally" because, through accidents of birth or injury, a particular swan might not be able to fly. This does nothing to change the fact that its nature is directed toward flight and would have achieved it but for accidents of fate). Hume famously denied such a thing was possible with his criticism of induction. But what Hume overlooks is that when we analyze something, we not only understand it as a catalog of properties, but we also understand its mode of being, the why behind its collection of properties. A swan has a mode of life peculiar to it, and very different from the mode of life of a dog, that accounts for the different essential properties of swans and dogs. A dog is an animal that hunts prey through smell, and so is built low to the ground with a wet nose and an extraordinary sense of smell. The swan eats plants at the bottom of ponds, and so has a long neck and a bill, but a poor sense of smell since it doesn't need one. The dog's nose is essential to its mode of being, so we can be sure we will never encounter a dog with a bill, and similarly we won't find a swan with a soft wet nose. But being black or white is irrelevant to the mode of life of either, so we should expect that we might find different colored dogs or swans. And in that case, statistics tell us the probability of occurrence of the various colors.

If we don't like Aristotle, Kant saw the same thing with respect to the distinction between essential and accidental properties, but he hoped to avoid any metaphysical assertions concerning being. His solution was to relocate the essential/accidental distinction from being (i.e. in the world out there) to the subjective (i.e. in your mind). Kant argues that in order for experience to be possible for us at all, it must be organized by our cognitive faculties into some sort of coherence - otherwise our experience would be the "blooming, buzzing confusion" of William James. Actually, it would be worse than that, for Kant insists that it wouldn't be experience at all, not even a confused one ("confusion" still implies a relationship between the confused elements, some stable background with respect to which they are confused.) So the mind organizes experience spatially and temporally, with space and time being the terms in which the mind constructs that organization. Essence and accident are categories within which the mind refines experience. For Kant they are imposed on nature rather than read off it as with Aristotle. But it doesn't really matter for the purposes of this post, for they are just as absolute for Kant as they are for Aristotle; they are just subjectively absolute rather than objectively absolute. Either way, empirical investigation is impossible without some thinking in absolutes.

Saturday, March 23, 2013

Augustine on Leisure

"The charm of leisure must not be indolent vacancy of mind, but the investigation or discovery of truth, that thus every man may make solid attainments without grudging that others do the same."
- The City of God, Book 19, Ch. 19.

Sunday, February 17, 2013

Goldberg on the Meaning of Life

In his latest G-File (an emailed newsletter), Jonah Goldberg ruminates on the meaning of life. After mentioning Robert Wright and Wright's interview with Edward Fredkin, and Fredkin's take on information as fundamental to the universe, Goldberg gives his view of things: 
But here's the thing. It doesn't matter whether it is literally true. It is metaphorically true. And in a way, metaphorical truth is more important. The meaning of life is found in the living of it. This is not a materialistic, "you only live once" argument for hedonism. Rather, I'm simply acknowledging the fact that whatever meaning there is to our existence can only be gleaned from existence. If all you've got are shadows on the wall of Plato's cave, you learn what you can from the shadows.

We are all individually working out the math. I don't mean to belittle or sidestep religion, but to bolster it. Religion is metaphorical too, insofar as God's will is always a mystery and out of reach. But religion helps most people look beyond the material to the deeper purpose of all things. Atheists who hate religion, it seems to me, often really hate the language of religion because it doesn't speak to them or because they lack the imagination to see it in anything but the strictest and most literal terms.

Meanwhile, John Donne was right in the small-c catholic and big-C Catholic sense: No man is an island. And whether you want to say that we are "Each ... a piece of the continent, a part of the main," or whether you want to say that we are each working on our own little bit of the big math problem, you are still grasping at the shadows to describe a truth too big for your hands to recognize, but that your soul can feel.

There are some things well said here, in particular "The meaning of life is found in the living of it." The reason is that what is known in the meaning of life is one and the same with the process of knowing it; the life that knows the meaning of life is no other life than the one for which the meaning is known. This is different from an objective pursuit like science, where the life of the scientist knowing science is entirely separate from the science known. It is thus possible to know science well but be utterly confused about the meaning of one's own existence. On the comic side, this results in shows like The Big Bang Theory, which feature brilliant scientists who give disquisitions on quantum mechanics one minute and display a childlike level of sophistication in social relationships, empathy and ethical reasoning the next. On the sinister side, it results in the phenomenon of the Nazi scientists, who could perform experiments on human subjects that followed the strictest scientific protocols but were justified by the crudest moral reasoning. In contrast, a man such as Socrates, who understands the meaning of his own existence, necessarily reveals this knowledge in the manner in which he lives (see the Crito and Phaedo). Offered an opportunity to escape from prison in the Crito so that he might pursue philosophy in some city other than Athens, Socrates demurs because such an escape would prove he is not truly a philosopher. The knowledge Socrates seeks is self-knowledge, which is nothing other than the meaning of his own life, and since his reasoning has convinced him that the citizen must submit to the law, he must live that meaning in his own life or prove he doesn't really know what he says he knows.

But where Goldberg goes astray is in the common assumption that since the meaning of life must be found in the living of it, that meaning must be murky or only something you feel. The example of Socrates contradicts this. There is nothing murky or merely emotional about what Socrates tells us in the Crito. In fact it is perfectly clear and stated with Socrates's customary equanimity. What confuses us is that, unlike a solution to a math problem, we can't really know what Socrates tells us merely through his telling it; we can only know it to the extent that we have subjectively appropriated it, and no one can do that but ourselves. As Kierkegaard tells us, the subjective thinker understands that the difficulty with subjective knowledge lies not in knowing what is required, but in doing it. The modern way of thinking, however, recognizes only the objective aspect of knowledge; sensing that something is missing in the objective assertion of the meaning of life, but not grasping the missing element as a subjective thinker would, the modern thinker collapses that missing subjective element into the objective equation, perceiving what is in fact plain to be murky or merely a matter of emotion.

There is also the unwarranted conclusion that since God is greater than us, God's will is always a mystery and out of reach. This is true if the only way we can know God's will is through our own efforts, i.e. our own attempts to reach up to God. But what if, rather than leaving us to our own devices, God chooses to reveal Himself to us in a manner that we can appropriate? Then we again have the situation where the real problem is not the objective content of what is known, but the manner and fact of its subjective appropriation.

Tuesday, August 21, 2012

Doubt as the Engine of Inquiry?

The Maverick Philosopher has a post here where he asserts that doubt is the engine of inquiry. But surely more fundamental to inquiry, as Aristotle says, is wonder. For you can doubt something without any urge to find out the real truth. In fact, that seems to sum up the ignorant cynicism that passes for sophistication these days.

The oracle at Delphi told Socrates that he was the wisest of men. This puzzled him because he knew he possessed no special knowledge. Did Socrates, then, doubt the oracle? Perhaps, but more significantly, he set out to find someone wiser than himself and prove the oracle wrong. The motivation for this was not doubt, but an urge to see the truth vindicated. If Socrates had merely doubted, he would have heard the oracle, not believed it, and just gone home.

Sunday, July 15, 2012

Method and Dialectic

In the comments section of this post on atheistic teleology over at Edward Feser's blog,  I had an extended discussion with an intelligent non-Aristotelian regarding the nature of final causes. As often happens in these cases, my interlocutor demanded in one of his comments that I present and defend a method for investigating final causes as a necessary prerequisite to continuing the discussion. As he put it, how could we know we are tracking the truth and not merely indulging in wishful thinking without an established method up front? I resisted, for I well knew that the insistence on an a priori establishment of method involves a host of philosophical assumptions (and mistakes, in my view) that pretty much give the store away to Enlightenment style thinking vs an Aristotelian approach. As an alternative, I proposed dialectic, which my interlocutor interpreted as a form of method (naturally, as he thinks thought must begin with method), and a poor one at that. Well, he was right to the extent that dialectic is a pretty shabby thing if it is interpreted as a degenerate form of the modern methodical approach. But it isn't such a degenerate form; it is a genuine intellectual alternative to method and is necessarily distorted if it is interpreted under the category of method. In fact, it is more accurate to conceive of the modern insistence on method as itself a degenerate form of dialectic.

We can see this through two people, one at the beginning of modern thought and the other a contemporary modern thinker. The first is Descartes, who, it may be argued, was the foundational thinker of modern thought in establishing the insistence on method as an intellectual first principle. The other is Daniel Dennett, a champion of method in contemporary thought and the supposed scourge of Cartesianism in the philosophy of mind. But as I argue in this post, Dennett is mistaken in seeing dualism as a foundational principle of Cartesianism; it is method that is the foundational principle, and it is the insistence on method that has dualism as its consequence. This is why the contemporary philosophy of mind is so haunted by Cartesianism, and why thinkers like Dennett, the more they struggle to free themselves of Descartes through method, only find themselves more tightly bound to him.

The problem with insisting on method as primary in thought is that it isn't. It isn't personally, as we all explore and come to know the world in our youth without an a priori method in hand. It isn't historically, as there was an undeniable body of knowledge accrued through cultural accumulation over millennia. Nor was it even true of the seventeenth-century philosophers who began to insist on the primacy of method. Descartes begins his Discourse on Method not with a method, but with an extended justification of his insistence on method in terms of his historical views and personal experience. He dismisses the philosophical tradition with this comment:

I will say nothing of philosophy except that it has been studied for many centuries by the most outstanding minds without having produced anything which is not in dispute and consequently doubtful and uncertain. (Discourse on Method, First Part)

He then goes on in the Second Part to describe the rules of his method and later, in the Fourth Part, summarizes the method thusly:

... I thought that I should take a course precisely contrary, and reject as absolutely false anything of which I could have the least doubt, in order to see whether anything would be left after this procedure which could be called wholly certain.

When he began implementing his method, Descartes doubted many things, but not everything.  He certainly didn't doubt his original historical assessment of philosophy as doubtful and uncertain, his main justification for method in the first place. If he had, he might have considered points like this: If the mere fact of disputation is sufficient to condemn something as uncertain, then uncertainty is a self-fulfilling prophecy, for we can render something uncertain merely by disputing it. And this, in a just historical irony, is exactly what later happened to Descartes and his method.

The point is that Descartes did not doubt everything because he couldn't. However skeptical he wished to become, Descartes remained a man nonetheless, an embodied knower forced to found his thinking in common sense and nowhere else. There is an entire worldview of thought and history buried in the first part of Descartes's Discourse; his insistence on method has the effect of protecting that worldview from criticism (dialectical criticism) or even acknowledgement that it exists. This is why the insistence on primary method can be considered a debased form of dialectic. All it does is hide the non-methodical assumptions from view and protect them from dialectical criticism. It doesn't transcend dialectic or protect itself from the pathologies of a degenerate dialectic, but only provides the illusion of transcending dialectic by assuming dialectical conclusions without argument.

We can see why philosophy has gotten such a bad reputation in the modern world. The reason is that modern philosophers, beholden to method, find it nearly impossible to engage on the issues that really separate them. For those issues involve the pre-methodical views of the world they must have and that inform their selection and establishment of method. In that earlier post I mention John Searle, who in his books on the mind states flatly his assumption that the fundamental particles described by physics are also the metaphysical fundamentals of reality, and that anyone who disagrees with this should be summarily dismissed. Searle, like Descartes, makes an a priori insistence on method in thought, only Searle's method is the method of science rather than Descartes's method of universal doubt. But in any case, science is a product of the mind, and to insist a priori on the non-negotiability of the metaphysical significance of science is to already cast in concrete certain conclusions about the mind, e.g. that the mind is such that its methodical conclusions are more certain than any other conclusions it might make. Naturally, the view of the mind baked into Searle's assumptions about the metaphysical meaning of science is not indisputable, but disputing it is very difficult, because Searle has made his scientistic assumptions a barrier to entry to conversation (if you don't accept them, you are not worth talking to). Searle is not unusual but typical in this regard. And since not every thinker will make the same pre-methodical assumptions, discussions between modern thinkers have the flavor of circling the real issue in dispute without ever quite getting to it; for the disputants assume that "real thought" can only begin with their preferred approach, and when it becomes clear that their interlocutor does not share the same pre-methodical assumptions, bad faith or naivete is concluded.

In the dialog in response to the post on Feser's blog, my disputant insisted that I propose a method before we could discuss the metaphysical ordering of final causes. Without that, he asserted, we could be the victims of wishful thinking, psychological bias, and have no way to know whether we were tracking the truth. Now if I were to accept his demand, I would be implicitly agreeing to his views on wishful thinking, psychological bias and the rest as pre-methodical established facts, for those facts would stand in judgment of any method I might propose. But my insistence is that final causes and their ordering are primary facts about nature that stand in judgment of method rather than vice versa. For instance, why are we concerned with wishful thinking at all? Wishful thinking is a form of error, that is, failure to fulfill the final cause of the intellect to know the truth. Methods are to be judged in light of the extent to which they fulfill the final cause of the mind to know truth. But if the final cause of the mind is doubtful, what objective reason is there to prefer truth to error? Moreover, to allow facts like wishful thinking to implicitly stand in judgment of facts like the final cause of the mind is to already concede the case against final causes; for if final causes are not basic facts of nature at least as transparent as facts about wishful thinking, then they are nothing at all, or at best the illusions modern thinkers think they are.

The point is not that the brief analysis I just gave is a knockout argument in favor of final causes, but that the real point of disagreement between us is at a level that can only be resolved dialectically rather than by an a priori assertion of method. Were I to give in to his demands for method, I would be conceding the case against final causes at the outset. The discussion would then be a playing out of that logic to its inevitable conclusion against final causes, or would end in the eventual realization that the real argument lies in the category of pre-methodical facts of nature rather than post-methodical conclusions, at which point the original concession would have to be rescinded (perhaps generating an accusation of bad faith).

I brought up Daniel Dennett in the conversation, and he is a good example of how pre-methodical facts are unconsciously assumed in the development of a method that is then used to bludgeon all rivals. In Consciousness Explained, Dennett develops a method for exploring consciousness called "heterophenomenology." Like Descartes, prior to explaining and deploying his method, he gives reasons for its development. For Descartes, the villain was the endless disputes of philosophers; for Dennett it is "introspection":

Or perhaps we are fooling ourselves about the high reliability of introspection, our personal powers of self-observation of our own conscious minds. Ever since Descartes and his "cogito ergo sum," this capacity of ours has been seen as somehow immune to error; we have privileged access to our own thoughts and feelings, an access guaranteed to be better than the access of any outsider...

But perhaps this doctrine of infallibility is just a mistake, however well entrenched. Perhaps even if we are all basically alike in our phenomenology, some observers just get it all wrong when they try to describe it, but since they are so sure they are right, they are relatively invulnerable to correction. (Consciousness Explained, p. 67)

The qualification "some" Dennett puts in front of "observers" may seem inconsequential, but it is crucial to the further development of his project. But more on this in a moment. The idea is that Dennett plans to put the subjective experience of consciousness under suspicion. Rather than accepting at face value what subjects say about their consciousness, Dennett will take a step back and only commit to saying that, when a subject describes his consciousness, we can only strictly conclude that what they are describing is how their consciousness seems to them, not necessarily how their consciousness really is. It may be that how their consciousness really is may coincide with how it seems to them, but then again it may not. Dennett uses the example of anthropologists encountering a primitive tribe. The natives speak of a being called Feenoman, whom the anthropologists gradually figure out is a kind of forest god. The "heterophenomenological fact" here is the fact that the natives are truly and sincerely speaking of Feenoman. But that Feenoman is fictional rather than real is something known to the anthropologists and not the natives; in other words, the anthropologists are able to discern how the native's world seems to them (Feenoman is just as real as anything else) compared to how it really is (Feenoman is fictional). More deeply, the cause of the natives belief in Feenoman is not the existence of the real Feenoman (as the natives think), but some other complex of causes unrelated to an actual Feenoman (since their isn't one).

Dennett's plan is to extend this procedure beyond native beliefs in forest gods to the contents of consciousness in general. His ultimate target is the experience of the unitary "I" itself, which Dennett calls the "Cartesian theater", and which he claims, in good heterophenomenological fashion, only seems to be real rather than really being real. The procedure is to interview subjects about their experiences of consciousness, and remain objectively neutral as regards their veridical nature until a complete account has been given. The interviewer is happy to grant the subject the authority to define how his conscious experience seems to him, but certainly not how his subjective experiences ultimately relate to reality. That is determined later, in a manner analogous to the anthropologists described above. And if the interviewer finds, like the anthropologist, that there are no objective reasons to believe in the reality of some element of the subject's conscious experience, and he can provide an alternative causal explanation for the subject's seeming to experience it, then the element in question can be reasonably dismissed as illusion. At the end of the process, you end up with what is really true about consciousness, results the interview subject should then accept as a generous gift, but usually rebels against because his cherished beliefs in things like the self and gods are shown to be illusions.

Now I mentioned that it is critical that Dennett used the phrase "some observers" in the paragraph quoted above. This is because of the problem of the Prime Interviewer, a problem latent in the heterophenomenological procedure but not acknowledged by Dennett. With respect to subjects who do not meekly submit to heterophenomenological conclusions but have the temerity to question its authority to debunk consciousness, Dennett has this to say:

If you want us to believe everything you say about your phenomenology, you are asking not just to be taken seriously but to be granted papal infallibility, and this is asking too much. You are not authoritative about what is happening in you, but only about what seems to be happening in you, and we are giving you total, dictatorial authority over the account of how it seems to you, about what it is like to be you. (p. 96, emphasis in original.)

Well, who has the authority to pronounce on how things really are, and not just how they seem? The interviewer, naturally. But the interviewer is just a man like the subject, and so presumably subject to the same cognitive suspicion as everyone else. He is only authoritative about what things seem like to him, not how things really are. If he is to legitimately serve as a heterophenomenological authority, it stands to reason that he must have already gone through a prior heterophenomenological vetting, so that he could know what is real and what is illusion with respect to his own consciousness. No doubt you can see the infinite regress coming; at some point there must be a Prime Interviewer, an individual whose consciousness is itself not under suspicion, so that it can serve as the methodical anchor for all other consciousnesses. In the paragraph quoted above, if Dennett had written that "Perhaps even if we are all basically alike in our phenomenology, all observers just get it all wrong when they try to describe it..." rather than just "some" observers, the problem of the Prime Interviewer would have been made explicit. If all observers get it wrong, how can the method get going? No points for guessing who gets the mantle of implicit Prime Interviewer in Consciousness Explained.

The method Dennett describes in Consciousness Explained is really just a way of privileging certain pre-scientific, pre-methodical scientistic assumptions by baking them into the cake of his method, then demanding that all other accounts of consciousness submit to the strictures of his method, which of course means that his pre-methodical account of things must triumph. Dialectic has not been transcended through method; it has merely been avoided through pre-methodical assumptions, and will inevitably reemerge when those assumptions are disputed. Which is why Dennett must order his reader to accept his results.

There are of course cases in which it is appropriate to establish a method and remain suspicious of conclusions not arrived at methodically. But the establishment of method cannot substitute for a dialectical justification of the method itself. And, finally, method cannot be the first thing in thought, for method is not self-evident; otherwise everyone would have it and there would be no disputes about methods. This is one of the many ironies of the modern age. The Enlightenment insistence on method was supposed to put an end to the endless dialectical wranglings of philosophers; instead it merely substituted endless wrangling about method, a wrangling worse than the dialectical kind, because here the dialectical conclusions were assumed and then forgotten inside assumptions about method.