
  View the latest questions and answers at askaphilosopher.org

Ask a Philosopher: Questions and Answers 16 (1st series)

Here are some of the questions that you asked a philosopher from March 2002 — April 2002:

  1. How well did Socrates defend himself?
  2. Origin of 'tree falling in the woods' question
  3. Some puzzles about memory
  4. Hobbes and Locke on September 11th
  5. Is science the new religion?
  6. How a theory of truth can be 'true'
  7. God, conscience, free will and evil
  8. Is Abdel Magid right to steal?
  9. Roger Scruton on the imagination
  10. Do we know anything?
  11. God and logical positivism
  12. Influence of Descartes on our idea of 'the mental'
  13. Art, poetry and science
  14. Burning your boats
  15. What would happen to religion if aliens appeared
  16. The problem with the young today...
  17. How philosophers defend their jargon
  18. Gareth Evans and the name 'turnip'
  19. Does omnibenevolence entail benevolence towards bad things?
  20. Philosophical considerations on 'expectation'
  21. Applying utilitarianism and categorical imperative to vivisection
  22. What is a university?
  23. Moral consideration towards artificial intelligences
  24. Did Pythagoras discover that Hesperus is Phosphorus?
  25. Does knowledge entail belief?
  26. Is life necessarily good?
  27. Role of justification in defining knowledge
  28. Value of philosophy today
  29. Categorical imperative vs. virtue ethics
  30. Karmic effects of suicide
  31. Learning to be
  32. Why computers can't have minds
  33. 'To reach our target everything is permissible'
  34. Anti-humanism of Heidegger's Letter on Humanism
  35. If everything has an opposite, where is the middle?
  36. Hume on liberty and necessity
  37. Questions on The Cloud of Unknowing
  38. Can there be laws of war?
  39. Origin of 'practice of the presence of God'
  40. Hobbes on human nature
  41. Heidegger, Medieval thought and Nazism
  42. Questions with self-evident answers
  43. Aquinas on natural and revealed theology
  44. Contribution of science to the meaning of life
  45. Meister Eckhart on why we can't talk about God
  46. Choosing a topic for the IB extended essay
  47. Does philosophy consider pop music art?
  48. Proportionality and US response to September 11th
  49. Killing in war, and what we owe to a starving world
  50. Book list on ethics
  51. Should we obey immoral laws?
  52. Defining reality
  53. How non-human animals 'think'
  54. Being useful vs. being popular
  55. God and free will
  56. Frequency theory of probability vs. Bayesian theory
  57. Persons and their bodies
  58. Merleau-Ponty on the role of the body in perception
  59. Soccer and masculinity
  60. How much of our brains do we use?
  61. Why determinism might be needed for free will
  62. How to view the Big Bang
  63. Our responsibility for things that happen to us
  64. Questions on geography, language, literature, history and technology
  65. Problem of evil and the Afterlife
  66. Readings on autonomy and mental illness
  67. Knower's point of view as asset or obstacle
  68. Hart vs. Dworkin on legal principles
  69. Problems with the inductivist model of science
  70. Books for a philosophically inquisitive 14 year old
  71. Reflecting on one's own reflections
  72. Bluff your way with the ontological argument
  73. Why Aristotle was wrong about motion
  74. Descartes on why God is not to blame for our mistakes
  75. Best book to read on Medieval philosophy
  76. Illustrating whether moral truths exist independently of us
  77. Ethics of abortion
  78. Understanding that we have fallen in love
  79. Contemporary vs. Democritean views of ultimate reality
  80. How we are self-destructive and brainwashed by advertising
  81. 'Meaning' in postmodern philosophy
  82. Did A.J. Ayer know C.L. Stevenson?
  83. Searching for originals of Plato's and Aristotle's works
  84. Difference between syntactically true and analytically true
  85. Source of Epicurus quote on human suffering
  86. Teaching philosophy to impoverished teenagers
  87. Definition of life and the Turing Test
  88. Is humanity alone? and why must all things die?
  89. Is meat eating empty gluttony?
  90. Considerations on scepticism
  91. Question about mowing the lawn and boyfriends
  92. Is belief in God realism or escapism?
  93. 'I don't know anything about art but I know what I like'
  94. Tolstoy's Anna Karenina

Noele asked:

In "The Apology", how well did Socrates defend himself?

First of all it is interesting to follow the line of argument — is it convincing? Has Socrates adequately addressed the charges against him?

Socrates spent considerable time addressing the "old" charges before answering the actual affidavit against him, and it could be questioned whether he did not make his case worse by doing so.

Defense against the "old" charges

Socrates is able to appeal to the jurors themselves as witnesses that he does not discuss natural sciences. But he does not directly reply to the charge of making the worse case appear the better. In fact he could be accused of rhetorical trickery in linking educating people with charging a fee, since the latter was not part of the accusation (even though it could be understood to distinguish him from the sophists, the confusion with whom Plato thought instrumental in Socrates' indictment). Basically he dismisses the "old" charges as routine charges against anybody who seeks wisdom, and says the court case really stems from the resentment of the poets, artisans and orators, whose ignorance he uncovered by his practice of examination. This could have been understood as belittling the charges and attributing bias to the jurors, many of whom must have belonged to the groups mentioned.

Regarding the actual court case, Socrates sets out to address Meletus' charge, saying he will reply to the others later (which he doesn't).

Defense against Meletus' charge

Socrates addresses the charge of corrupting the young by trying to demonstrate that Meletus has not thought a lot about the education of the young, an ad hominem argument, which fails to convince, since Meletus may be an idiot but he could still be right in claiming that Socrates corrupts the young. Socrates' arguments: 1) he concludes from the analogy with horse training that it is not the case that only a minority corrupts but that on the contrary experts are few. Therefore it cannot be the case that only Socrates corrupts the young. 2) The bad harm everybody who is in contact with them; therefore Socrates cannot have corrupted anybody intentionally (because they would harm him). Therefore he either does not corrupt the young or does it unintentionally; in both cases he should not be punished. 3) He points out that Meletus has not called as witnesses for the prosecution the supposedly corrupted or their relatives and asks what reason they could have for not coming forward other than that the charge is false.

Regarding his arguments it could be argued that Socrates debates side issues, i.e. whether Socrates is the only one to corrupt the young and whether he does it intentionally or not, but does not really address the issue of corrupting the young itself. Also one would have to ask — is education really like horse training? Do the bad really always harm everybody they are in contact with? Would that not mean that no one corrupts the young willingly? And could anybody ever be punished for a crime if no one does evil intentionally? Finally one could think of reasons why the corrupted youths or their relatives would not appear as witnesses, e.g. because they did not realize they had been harmed, or because they did not want to make a public spectacle of themselves or be implicated.

Regarding atheism Socrates says that 4) Meletus claims both that he does not believe in gods and that he introduces new gods, which is a contradiction. 5) When Meletus accuses him of teaching the sun is a stone and the moon a mass of earth he confuses him with Anaxagoras.

Regarding the atheism claim: demonstrating that two claims cannot be jointly true does not rule out that one of them is true. It could be true that Socrates does not believe in the City's gods and introduces new ones instead (his "sign"). It is noteworthy that at no point does Socrates proclaim belief in the city's gods. Regarding the confusion with Anaxagoras: even if Anaxagoras held these beliefs first, it could still be true that Socrates shares and/or teaches them.

His final claim to be pleading in fact on the judges' behalf must have enraged the jury.

In summary a number of problems can be found with the arguments, which in fact failed to convince the jurors.

One could speculate why Socrates' defence was so curiously ineffective. In the Memorabilia Xenophon seems to suggest that Socrates (who at the time of trial was 70 years old) felt his time had come and preferred death to life (i.e. to old age with sickness, senility and a lingering death). Socrates may therefore have provoked the jury ("assisted suicide") or at least have not thought it necessary to appease them.

Another interesting speculation has been advanced by I. F. Stone, who claims that the charge (intentionally misrepresented by Socrates' pupils Plato and Xenophon) actually was a political one — that Socrates was charged to have continued antidemocratic teaching even after the general amnesty for the collaborators of the oligarchic rule of terror of the Thirty (404-3 B.C.), and that he was executed because the democrats feared another anti-democratic coup and held Socrates responsible for the education of Critias, one of the Thirty.

See STONE, I. F. 'I.F. Stone Breaks the Socrates Story: An old muckraker sheds fresh light on the 2,500-year-old mystery and reveals some Athenian political realities that Plato did his best to hide' New York Times Magazine, April 8, 1979, pp. 22 ff. The article can be found online at:


Another angle would be to consider whether Socrates wanted to be acquitted at all. In that case the Apology would have to be read as addressed to posterity, for which it has proved surprisingly effective — Socrates as 'philosophy's martyr' — the individual following his conscience over the secular authorities, and his death as proof of the claim that nothing — not even death — can harm the just man, and that the greatest evil is injustice, i.e. the evil that we inflict upon ourselves.

Further literature

SUDDUTH, Michael: Arguments in the Apology. 1996. [Plato2]

SUDDUTH, Michael: Socrates and the Apology. [Plato3]

N.N.: A Brief Comment on the Query: "Is Socrates Guilty as Charged?" History of Political Thought 47.230 B Mini-Essay for Discussion Group #3

Helene Dumitriu


John asked:

Who originally asked the question, "If a tree falls in the woods and there is no-one there to hear it does it make a sound?"?

I would like to know the answer to this one too. I'm afraid that the best that I can offer is a partial debunking of the most popular answers.

1) I have seen it claimed that the question is a Zen koan. This is true in a way: it could be read as a koan — a paradoxical question, intended as thought-provoking rather than straightforwardly answerable. However, it is not to be found in any of the principal collections of canonical koans (the Blue Cliff Records or Pi-yen-lu; the Gateless Gate or Wu-Men Kuan; the Book of Serenity or Ts'ung-jung lu) and I've never seen a specific attribution to any such source.

For more information on koans see:


2) Scientists and engineers sometimes argue that the question is a straightforward one, intended to illustrate the distinction between 'noise', radiant mechanical energy in air, and 'sound', our perception of such energy, that is, heard noise. Hence the tree makes a noise, but cannot make a sound, a heard noise, because there is no one to hear it. This answer tends to irritate people whose dictionaries are less prescriptive. A more sophisticated way of expressing the same point would be to say that the appearance of paradox is merely a result of equivocation between the two senses of 'sound'.

It is possible, but seems unlikely, that the question may have originated as an illustration of this contrast in a physics or engineering textbook.

3) The question is sometimes attributed to George Berkeley, and routinely comes up in philosophy tutorial discussions of his work. This is understandable: Berkeley's metaphysics has the apparent consequence that unperceived objects do not exist. Only apparent, since God plays a central role in Berkeley's system as the guarantor of the continued existence of all objects. Although unperceived trees are amongst Berkeley's favourite examples, he does not consider their falling or making sounds.

For examples of what Berkeley did say about trees, see his Principles of Human Knowledge, Part I, 23, or this passage from the first of the Dialogues between Hylas and Philonous:

Phil.: How then came you to say, you conceived a house or tree existing independent and out of all minds whatsoever?

Hylas: That was I own an oversight; but stay, let me consider what led me into it. — It is a pleasant mistake enough. As I was thinking of a tree in a solitary place, where no one was present to see it, methought that was to conceive a tree as existing unperceived or unthought of; not considering that I myself conceived it all the while. But now I plainly see that all I can do is to frame ideas in my own mind. I may indeed conceive in my own thoughts the idea of a tree, or a house, or a mountain, but that is all. And this is far from proving that I can conceive them.

or, from the third dialogue:

Phil.: ... Ask the gardener, why he thinks yonder cherry tree exists in the garden, and he shall tell you, because he sees and feels it; in a word, because he perceives it by his senses. Ask him, why he thinks an orange tree not to be there, and he shall tell you, because he does not perceive it.

The association between Berkeley and trees has been reinforced by Mgr. Ronald Knox's celebrated limerick:

There once was a man who said, "God,
Must think it exceedingly odd
If he finds that this tree
Continues to be
When there's no one about in the Quad."

and its reply (attributed to Bertrand Russell).

Dear Sir, Your astonishment's odd:
I am always about in the Quad.
And that's why the tree
Will continue to be,
Since observed by, Yours faithfully, God.

For an explanation of how these limericks misrepresent Berkeley's position, see:


Andrew Aberdein


Jeff asked:

What is memory? How important is it to my identity? Why is my short term memory getting worse as I age? Why am I starting to remember random things from my childhood that I haven't thought about since they happened? Along the same lines (I think), what is the current thinking on the phenomenon of deja vu?

An interesting set of questions. I don't really know how to answer the first one, because I don't know in what sense you're asking it. The recall of past events and objects? But surely you're asking more than that... you mean, what are the mechanisms of memory? That's still being researched... off the top of my head, here's some of it. You have, say, a visual experience: you see something, and you're paying attention to it. The first thing that happens is, maybe, due to reverberating circuits in the visual cortex (neural discharges which regenerate themselves): you have a very clear visual impression which lasts for a few seconds. Second, that visual impression fades and is replaced by a less clear visualization, if you make some effort, which lasts a few minutes: that is, if I recall correctly, "short-term" memory. Then that fades, and you have an "intermediate-term memory" which lasts for a few hours, perhaps days, during which you can recall the object fairly clearly (usually). Then if you paid attention, you have a "long-term" memory, lasting for days, weeks, or whatever, in which, given some cue, like a word, etc., you can recall the object, i.e., visualize it, attach meaning to it, etc., fairly clearly. I think that's about it... there may be another stage in there that I'm forgetting (haha).

Now, what is happening in all that. Well as I say, the first is probably reverberating circuits. The second probably has to do with both the reverberating circuits (which are fatiguing) and evocations from other sensory modalities and association areas... like, red is associated with apples, which feeds back into the fire engine you're seeing to keep it active. Or something like that. The third has to do with establishing those associations, and also with creating a trace of some sort in the hippocampus, which somehow, no one knows how, stores memory for a while (hours or days, maybe) while it somehow creates (probably by creating maps to and from) long-term memories, particular neural paths, in the visual cortex and associated areas.

Loss of short-term memory with ageing is thought to be associated with gradual damage, basically loss of cells, in the hippocampus. This area of the brain seems to be very sensitive to damage, Alzheimer's, etc.... why, no one knows (and I'm not saying you have Alzheimer's... everyone has this memory problem with age). So the hippocampus either doesn't store the short-term memories well, and/or doesn't "write" them into the cortex well. Or both.

As for remembering things from childhood... again, no one knows. I did see an article once, quite a while ago, in which someone had studied some computer simulations of neural nets, and found that when the nets were saturated, i.e., when learning something new caused something old to be partially lost, old patterns would spontaneously emerge. I'm sorry that I cannot recall anything more about this study... it was quite a while ago that I read it, and it may have been disproved in the interim. The other explanation advanced is merely that repetition makes similar patterns more likely to be evoked. Both of those, as you can imagine, have problems and are incomplete as explanations. I don't know about the current thinking on deja vu. There is work being done on "feelings of knowing" (FOKs) by several people, and they find that FOKs are real but not very reliable.

There's lots of literature in the area of memory, most of it pretty technical. You need some background in neuroanatomy and neurophysiology to really get into it, so I don't know if it's worth my giving you many references. You might just browse around the Web until you find information; lots of labs have pages on this topic. However, if you want refs:

Ebbinghaus, H. (1913). Memory; a contribution to experimental psychology. New York, NY: Teachers College.
Kahneman, D., & Treisman, A. (1984). Changing view of attention and automaticity. New York, NY: Academic Press.
Hasher, L., & Zacks, R. T. (1979). 'Automatic and effortful processes in memory'. Journal of Experimental Psychology, 108 (3), 356-388.
Koriat, A. (1994). Memory's knowledge of its own knowledge: the accessibility account of the feeling of knowing. Cambridge, MA: Bradford.
Reisberg, D. (1997). Cognition: exploring the science of the mind. New York, NY: W. W. Norton & Company, Inc.
Wegner, D. M., & Bargh, J. A. (1996). Control and automaticity in social life. Boston, MA: McGraw-Hill.

These are just the tip of the iceberg... not even that, really.

Steven Ravett Brown


Sarah asked:

What would Hobbes have thought about the events of September 11th? Also, what would Locke have felt about September 11th?

Briefly, Hobbes was a maximal statist and Locke a minimal statist, and each would regard the events of September 11th as a horrible consequence of the failure to implement his respective political philosophy.

Hobbes believed that the primary motivator of human action is not so much a positive goal as the avoidance of what is most feared. He further held that human beings fear nothing as much as the prospect of a violent death. Consequently, they would give up freedom to say and do as they please with their property if they knew that such forfeiture were a necessary condition of avoiding a violent death. In Hobbes' view, the forfeiture of freedom requires transferring all individual rights to a monarch. The monarch's job is to protect his subjects from danger, terrorize those who would contemplate endangering them, and punish those whom terror does not deter.

Hobbes would point out that because there is no such monarch in the United States, those who inflicted violent death, on a scale unimaginable to people in Hobbes' day, were not deterred that September morn. He would also note with some satisfaction that Americans now seem willing to forfeit their freedom — airport by airport, stadium by stadium, street by street — to a monarch-substitute, the Federal Government. To the extent that they do this, to that extent they demonstrate their preference of safety to freedom when the choice is put before them starkly enough.

Locke thought quite differently. He believed that the sole job of government is to protect the people living under it as they peacefully deploy their property in individual pursuits of happiness. He was not favorably impressed with the record of absolute monarchs in protecting their subjects from violent death. Rather, he regarded absolute monarchs as a major threat to the lives, limbs, and property of their hapless subjects, a threat that had to be reined in through a system of governmental checks and balances.

Were he resurrected to comment on September 11th, I predict that Locke would note with horror the historical record of governments, even the one explicitly founded on his political philosophy, of accumulating powers that go far beyond protecting property rights. He would also cite the frequency, in the two centuries between his time and ours, with which governments have militarily collided with each other in the furtherance, not of the universal interest in peace and prosperity, but rather of the particular, private interests of a few. He would lament that this has happened at the cost of millions of lives and trillions of dollars coercively taken via taxation. He would, I suspect, argue that modern world wars, all fomented by non-Lockean states, are hardly more desirable than the "war of all against all" that Hobbes' absolute monarch is supposed to prevent. He would not take seriously the suggestion that the answer to squabbling megastates is a global totalitarian regime from which no refuge is possible. Locke would make clear that the American government's steadily increasing involvement over the last century in the affairs of Europe, Asia, Latin America, and the Middle East can find no justification in his doctrine of government. Finally, he might conclude that such involvement has only made America vulnerable to attack from those who resent that involvement and who would, absent that involvement, not be sufficiently motivated to cross land and sea to kill thousands of Americans in a terroristic assault.

Anthony Flood


Phil asked:

Is science the new religion?

Ooh, you've pushed one of my buttons here. The real-world answer, of course, is both yes and no, depending on how one views science, and religion. But let us take the ideal case.

In religion, one sooner or later comes to something that must be accepted unquestioningly, on faith: a dogma.

In science, ideally, one may not have dogmas. There is, for science, nothing in any principle, methodology, or idea that cannot be investigated as to its validity, applicability, and so forth. Including that statement. Nothing is sacred, above questioning, including scientific methodology. Nothing.

And that, in a nutshell, is the difference between science and religion. Now I am not saying that scientists as individuals and as schools have no dogmas, assumptions, and so forth. But the history of science is a history of the investigation of those assumptions, their overthrow and replacement by other principles.

Now, is this a religion? Is the principle that everything, including this principle, can and should be investigated by any methodology available, and checked and rechecked for accuracy and validity, a religion? Well, if it is, then there is nothing that is not a religion, and the terms "religion" and "non-religion" become meaningless distinctions, don't they? If science, as this ideal, is a religion, then that's the end... everything is a religion.

Contrariwise, one can ask something like, "do too many people have a blind faith that science will benefit them?" And if that is making science a religion, then, given the current political and ideological climates world-wide, my own very personal response would be that we need much more of that version of science. The world now seems to me to be in the grip of various religious frenzies; a little more science would be wonderful at this point, in my very politically-incorrect opinion. To put it more calmly... science is a tool, and results in tools. Tools can be used for good or for bad; one can use a plowshare to beat someone else over the head. Science per se is something that must be properly directed; and by the same token, it will always both be used properly and misused, just as all tools are, by human beings.

Steven Ravett Brown

When I first read this question, my initial response was to write down a list of aspects of the situation that I thought could be the basis for an answer. The list consisted of:

1. Faith and falsifiability
2. Comfort, comforting and comforter
3. Certainty and being certain
4. Explanation and explanatory power
5. Protection and protector
6. Control, controlling and the controlled
7. Defender, protector and weakness
8. Value source
9. Forgiveness source
10. Transforming, life changing and change
11. Belief code
12. Moral code
13. Culture

I thought I would then construct an answer in terms of the interaction of these terms with the key terms of the question, which I identified as: science, religion, new and old. However it occurred to me that it would be interesting to take another approach in which we assume that science is the new religion, in effect asking, "supposing science were the new religion?" We can generate a surprisingly rich field within which to pursue this question just by considering it in the context of the categories of change, time or sequence, person, and the logical operator 'not'.

If we add two examples for each category you can begin to see the complexity that is being generated by the question. For the category of change we can introduce the possibility of 'change for the better' or 'change for the worse'. We can divide the category of time or sequence into 'now' or 'later' by which I mean we can look at 'now' in the sense of the present and 'later' in the sense of the future or we can read 'now' in the sense of what is the case and 'later' in the sense of what follows or what it leads to. Persons, we can consider in terms of 'self' or 'others'.

When we take into account the logical operator of negation, taking these terms as its object to produce terms like 'not change for the worse' or 'not now', we can construct a total of sixty-eight paradigms or thought vehicles within which to examine the original question.

So the original question has now become a question generator that produces sub-questions like:

Given that science is the new religion,

1. Does it bring about change for the better for me, now?
2. Has it made things better and not worse?
3. Will it make things worse for others later?

Within which you can provide examples, evidence, counter examples and argument for each case and from which you can later look back and search for patterns and generalities that may have appeared within all your answers.

In a sense my answer has also been a non-answer in that it has offered a technique from which you can develop and debate your own answers rather than offer you a specific analysis. It has also offered a general pattern of question generation for developing arguments in the form of the inference pattern:

Given situation S then Question Q?,

Where Q is a question sentence that contains the constituents:

[? Assert or Negate ( Category1 or Category2 or Category3)],

as in the example above.

From which we generate paradigms or pseudo-sentences of the form:

?C1,C2,C3, or ?NC1,C2,NC3. We could for example go back to my initial approach and let C1 = Falsifiability, C2 = Faith etc and proceed with the analysis of your question from this starting point.
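The combinatorial construction above can be sketched in code. This is a hypothetical illustration, not the author's own procedure: it assumes each of the three categories (change, time, person) contributes one of two example values, plain or negated. Under that assumption the enumeration yields 64 triples; the author's count of sixty-eight evidently includes a few additional forms not captured by this simple scheme.

```python
from itertools import product

# Three categories, each with two example values taken from the text.
categories = {
    "change": ["change for the better", "change for the worse"],
    "time": ["now", "later"],
    "person": ["for me", "for others"],
}

def paradigms():
    """Enumerate every triple where each category contributes one value,
    either asserted as-is or negated with 'not'."""
    options = []
    for values in categories.values():
        # each category offers its two values, plain or negated: 4 options
        options.append(values + [f"not {v}" for v in values])
    return list(product(*options))

all_paradigms = paradigms()
print(len(all_paradigms))  # 4 * 4 * 4 = 64 under these assumptions
print(all_paradigms[0])    # ('change for the better', 'now', 'for me')
```

Each resulting triple is the skeleton of a sub-question such as "Does it bring about change for the better for me, now?", which can then be examined with examples and counter-examples as the answer suggests.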

All of which says something in a very complicated way that we say very simply to children when we teach them to classify biological objects using 'keys'. This approach may lead you from your initial question to many further questions, and possibly some particular answers, but some might consider that this is the most that philosophy can achieve.

Neil Buckland


Thomas asked:

How are we to tell which of the theories of truth (e.g. pragmatism, correspondence) is true? Surely we would already have to be within one of these systems to discern truth?

Your question is a very reasonable one; circularity of this sort does hamper many fundamental enquiries in philosophy. However, this isn't one of them.

Theories of truth are concerned with what truth is, not with what is true. Hence proponents of different theories need not disagree over which propositions are true and which false (apart from the propositions which articulate their competing theories, of course). Rather, they disagree over what it means to say of a proposition that it is true.

Correspondence theorists believe that 'p is true' means that p corresponds to the facts; pragmatists believe that 'p is true' means that p is a useful thing to believe; coherentists believe that 'p is true' means that p is consistent with all the other true things; and disquotationalists (aka minimalists) believe that 'p is true' is just another way of saying p.

It is no part of the responsibility of a theory of truth to help us to identify true and false propositions. Indeed, proponents of the competing theories are often in agreement about the sort of methods we use to do that. Should these methods identify one theory as true and its competitors as false (by no means the most likely outcome) their different theories of truth need not prevent them from recognising this.

Andrew Aberdein


Tom and Isabel asked:

To what extent does conscience prove the existence of God?

Tom and Isabel also asked:

What is the connection between free will and the problem of evil?

There is no question of the pangs of conscience being sufficient to prove the existence of God. There is a question, however, of whether the felt "lure" to approve (or disapprove) of freely chosen actions is entirely understandable in terms of socially ingrained and reinforced habits.

"Freely chosen" implies the reality of alternative possible courses of action competing for one's adoption. This invites the question of the status of possibility itself. Is "possibility" a mere word with no real reference? That is, is everything determined? Could nothing be other than the way it is now given the way everything was? If everything is determined, then there is only one logical modality, not three. There is just the necessary: "possible" refers to nothing real, while the impossible is reducible to negative necessity.

Such is one implication of the denial of freedom. If freedom to choose among competing possibilities is real, then possibilities cannot be nothing, even if they have no agency of their own (because they are by definition not actual). But then where do possibilities "reside," and how do they get actualized in the world of actual things?

From Plato to Whitehead there have been philosophers who have hypothesized God as the answer to the last question. God envisions all possibilities and guides their actualization in things. In the case of human beings, God's guidance can enter into their conscious, self-reflective awareness. An initial aim from God would be felt as a lure to the greatest good possible at each actor's choice-point, but he or she would be under no compulsion to act for or against that feeling. God, according to such a philosophy, is therefore the primary, nonsociological, nonbiological source of the feelings we associate with the goals we entertain, the feelings we call "conscience."

The problem of evil is that of reconciling the great goodness and great power of God with the existence of great evil. The "free-will defense" of God's goodness is that human beings, who are free to choose among competing possibilities, are indictable — and therefore God is not — for the evil consequences of their freely chosen actions. Embedded in the problem is the presupposition that God is the creator, not only of the contingent order of this cosmos, but also of the sheer existence of the things ordered: if God wanted to, God could literally annihilate the whole realm of nondivine beings, i.e., "turn it off" as we would flick off an electric lamp. This notion seems to follow logically from the idea that God is the exnihilator (to borrow a term coined by Mortimer Adler). That is, God does not merely transform and rearrange pre-existing things, but rather brings things into existence from nothing (ex nihilo), and can interrupt natural processes. In such a cosmology, it would seem that God is morally indictable for any evil that may result from the interaction of created things, insofar as they exist and have the natures that they do only because of his creative fiat. The traditional free-will defense therefore only lessens the length of the indictment without reducing its gravity.

The God of classical theistic philosophy is certainly not responsible for the evil that men do, for they freely undertake to do it. (Let us grant that the world is better with free human beings in it than it would be without them, and that God could not have made free human agents without incurring the risk that they would bring some evil into the world.) There is, however, the evil of natural disasters and the pain and suffering thereby visited upon all members of the animal kingdom. There is the evil of painful and debilitating disease not traceable to human malice. God the exnihilator could interrupt such natural processes, and perhaps God does from time to time. There are also the excessively evil consequences that sometimes follow from mere human error. In the great majority of cases, however, cases in which God is implored to interrupt them, God does not. Even one instance of natural evil that a human being was able to prevent, at no risk or cost to himself, but failed to prevent would mar forever that person's reputation. Yes, the careening car that is about to mow down that unsuspecting three-year-old child has faulty brakes due to human error, but that does not excuse any passerby from failing to try to move that child out of harm's way if he or she can. Nor does it excuse God.

There are alternative philosophies of God, however, particularly those inspired by Whitehead's Process and Reality, in which God is neither the exnihilator of the cosmos nor its miraculous interrupter. In them, God is the supreme being who incessantly interacts with all other actual entities. Actual entities are not the gross things we perceive, but submicroscopic processes whose time span is a fraction of a second. All actual entities other than God partly create their successors at each choice-point, even as they receive influence and ideals from God and from other things. That is, all of them, not just human beings, have a degree of freedom to choose among real possibilities. The cosmos at any given moment is therefore the resultant of the choices of all actual entities, not just God's choices. God is the primary cause of the contingent order of the cosmos, but God does not make the cosmos either absolutely or unilaterally. God orchestrates the symphony, but doesn't play all the instruments.

In short, such philosophy more fully develops the insight of the free-will defense, without which such a defense is too little, too late. Creativity is characteristic of all things, not an anomaly on a small planet.

Anthony Flood


Mohammed asked:

Please try to answer this problem:

Abdel Magid is a son of a bus driver, his father earns less than 100 B.D. a month, and he has three brothers and two sisters. After high school he could not afford going to college because of his family situation. As time passed, his situation became even worse, he had no money, no job, and no one to help him. Yet like all young men, he needed and wanted a lot of things, so whenever he needed something he simply stole it. Although he was aware of the moral law: stealing is wrong, he did not stop. He thought that stealing from the filthy rich is permitted. To Abdel Magid, he is poor because someone out there has more than he or she needs, and therefore he thought that taking whatever he needs from the rich is not wrong, rather it is his right.

Question: Is there anything wrong with Abdel Magid's thinking?

Well, firstly Abdel is acting under the false belief that stealing from the rich is permitted. His belief that he is poor because others have more than they need may also be false; that depends on how far he could have gone in finding work, since there obviously is work available, such as bus-driving. Even if his beliefs were true, he is acting irrationally. He couldn't make it into a principle on which he believed everyone should act, since he would be inviting competition and making his life of thieving more difficult; and if everyone stole from the rich, there would no longer be any rich to steal from. Another irrational aspect is that if he were rich he wouldn't think it right for the poor to steal from him.

Apart from being irrational, his behaviour is immoral and lacking in prudence. We see the lack of prudence in his failure to think about the future, when he is likely to be caught and punished. This is unsuccessful behaviour. The immorality is in not respecting others' feelings about their property. The filthy rich might have worked hard for their money. Further, needs vary. The richer people become, the more money they need to support new ways of life, so he seems to lack social understanding.

It might also be thought to be politically mistaken. He doesn't have a right to take what he needs, especially when these needs are those of a young man who has developed a desire for a lot of "things".

If he really thought that people are poor because others are rich, and people have a right to take whatever they think they need, perhaps a better course of action would be to set up a political party.

Rachel Browne

Well, you're presenting us with an oversimplified situation and question... why do I say that? Well, how do I know how much in the above is really true, and what is exaggeration? For example, what does "need" mean? How did the rich in that society get rich, by their own efforts or by inheriting wealth? How does someone know that a) he has what he needs (or does not), b) someone else has "more" than they need? What is a "right"? How does someone know how much the person he's stealing from "needs" what he's stealing? And so forth. I'm not even getting into the basis for the Koran's admonition against stealing, am I (which I'd have to research, anyway).

But I'll go ahead and give you an oversimplified answer. If Abdel Magid lives in a just society... that is, one that provides the basic needs: food, clothing, housing, etc., and basic education for its citizens, and, if they educate themselves and work hard, the opportunity (not the certainty, mind you, nothing can provide that) to advance in the society and in their earnings, then his stealing is immoral. If he lives in an unjust society, that is, one which does not provide for its citizens' basic needs, nor does it provide opportunity for education, or at the least, self-education (i.e., free public libraries), or advancement, and in addition has a ruling class of hereditary rich, then his stealing is (or could be, depending on situation and circumstance — for one thing, he is stealing only from the rich) moral, and good luck to him, although I think that it would be much better and more moral if he got out of there to a just society, as so many are doing, and tried to educate himself and advance by clearly moral means.

Where are there just societies in the above sense? Scandinavian countries are probably the best examples. Then most of Europe, Canada, the US. There are others. None of those are heaven, all have some injustices, but all, especially the Scandinavian countries, provide (to greater or lesser extent) the basics, and the opportunities, especially if one becomes fluent in their language.

That's all pretty simplistic, and probably won't be a popular viewpoint, but one must consider that there are times it is moral to steal bread to survive.

Steven Ravett Brown


Delocks asked:

What is imagination?

Can you comment on Roger Scruton's Imagination I & II in his book Art and Imagination?

What are concepts and concept formation? what is thinking?

I don't have a theory of imagination, so can only comment on Scruton's. In Imagination I, the first feature, that we imagine something, or that there is an object of imagination, seems undeniable. The second feature, that in the normal case imagination is subject to the will, seems difficult to criticise when we think of bringing images to mind, although the will doesn't seem to be involved in aspect perception, such as seeing the duck in the duck-rabbit in a creative way. The third feature, that we have incorrigible knowledge of what we imagine, might be criticised on the ground that knowledge is not appropriate to internal experience, since there is no means of justification and no possibility of confirmation. I'm not sure about the fourth feature, that there is a verbal criterion for saying that a person is imagining something, because sometimes a statement can falsify what is imagined, or doesn't fully express what it is we imagine. Scruton himself thinks there is more to what is imagined than that which can be expressed verbally, since a person has to experience an aspect to understand what another is imagining, although this itself has been criticised by Malcolm Budd.

Scruton thinks that imagination covers a wide range of activities, but I think he goes too far. In Imagination II, Scruton thinks that the man who sees the duck aspect of the duck-rabbit 'must say something like "It is as though I were seeing a duck"', which indicates an imaginative experience as an unasserted thought. This seems false. We actually are seeing the shape of a duck and we are able to do this because we know what a duck looks like. Similarly, the claim that "It takes imagination to see the sadness in X's face" seems doubtful. If we know what sadness is and how it is expressed, this is an ordinary perceptual experience.

The idea of imagination as unasserted thought, thought that goes beyond what we believe, is also difficult to accept. Music is not the sort of thing which can be sad, and to perceive it as sad is not to say that it is sad, so Scruton avoids the apparent nonsense of ascribing an emotion directly to music. However, I'd be more inclined to say it is true that the music is sad and, for sure, that I believe it is, rather than that it is merely appropriate to say so. However, this requires a different approach to truth, and would require an argument to the effect that metaphor is true (and since a word used metaphorically is used falsely of its object, this is a difficult approach to take). The concept of sadness is of an emotion felt by living beings, and it is part of the concept that it is expressed in a certain way, since the way it is expressed allows us to understand sadness as felt by another. So although the concept cannot apply to music, as an alternative to the problems raised by consideration of metaphors, I'd be inclined to think that sadness has a certain something, an inner movement, that cannot be put into words or conceptualised, and that it is this we find in music.

You ask what a concept is, and very simplistically (because we don't know for sure), it is taken either as a particular mental representation or as a word for something. As a mental representation, concept formation would be determined by memory. As a word, concept formation is learnt socially, as we learn a language.

Thinking is the movement of conceptual content, and Scruton would say that, in contrast to imagining, it is asserted. This doesn't seem to be so, but it is the case that a thought is either true or false, and if we think something is true, we would assert it. Normally we only ascribe thought to a being with a language, although this isn't necessary if we think by means of mental representations, or if concept formation is a facility for ordering information.

Rachel Browne


Tookee asked:

Do we know anything? Give reasons for your answer.

We know that we are reason-demanding beings. The very skepticism with which one might greet the preceding statement would be sufficient evidence of its truth, for skepticism is nothing if not at least implicitly reason-demanding. We have experiences (something is given to us in sense, memory, and imagination). We inquire into those data ("What is it?") and form hypotheses about them. We reflect on our understanding and ask if there is sufficient evidence to affirm or deny that the understanding is true ("Is it so?"). When we believe there is sufficient evidence, we pronounce judgment. Knowing may reasonably be defined as this compound activity of attending to data, understanding what we attend to, and judging our understanding to be true or false. One cannot coherently affirm that one has never had an experience, has never understood what one has experienced, or has never verified or falsified one's understanding. (Take reading and disagreeing with the preceding sentence, for example.) Therefore, one cannot coherently affirm that one has never known anything.

Anthony Flood


Mode, Azim, Ramy, and Niya asked:

We are students from Maldives (where philosophical resources are scarcely available and philosophical enthusiasm hardly found and philosophical expressions highly restricted). We would like to know whether a god as held by major religions of the world exists, and to know whether what the logical positivists say about the meaninglessness of such questions as the existence of a god is really true?

Azim asked:

I would like to know with the incorporation of latest philosophical reasoning which position (theism, agnosticism, atheism,etc) as regards a god is logically sound or which a rational mind will go for!

Thank you for your interesting question.

I am sure you know that many people believe in some god, but I am not clear which religious conception of God you have in mind. There are many major religions in the world, and each of them has a different conception of God. But at least some religions believe that there is a God who is, among other things, the creator of the universe, all-good, all-powerful, all-just, and so on. As I am sure you know, whether such a Being exists is a matter of disagreement. But I think that none of the traditional arguments for such a Being succeeds in showing he does. The criticisms of those arguments by the 18th century British philosopher, David Hume, and the 18th century German philosopher, Immanuel Kant, seem to me decisive in that respect. Of course, the failure of those proofs in no way shows there is no God, but, on the other hand, if for such a long time very intelligent and industrious people have tried to prove there is a God and have so far failed, then that might be taken as reason to believe that the existence of God is unlikely.

The Logical Positivists developed a theory of meaning known as the verifiability theory of meaning. According to this view, unless it is possible even to conceive of evidence that a sentence is true, that sentence is "cognitively" meaningless (although the sentence may have emotional meaning for people). If this theory is true, and if it is true that no evidence for the existence of God could even be conceived of, then the sentence, "There is a God", is cognitively meaningless. A good way of understanding the idea of cognitive meaninglessness is that a cognitively meaningless sentence is neither true nor false.

The Logical Positivists did hold that the sentence "God exists" was cognitively meaningless (neither true nor false). An interesting question is whether that meant that the Logical Positivists were atheists. The Logical Positivists claimed that they were not atheists because atheists believed that the sentence, "God exists", was false, but since the Logical Positivists did not believe that the sentence was either true or false, they did not believe the sentence was false, and therefore they were not atheists. Although the Logical Positivists did not believe in God, neither did they disbelieve in God.

I hope this is clarifying. I am very glad you are interested in philosophy.

Kenneth Stern

A logical positivist, as far as I know, would say something like: unless there is an operational definition, i.e., something like: a definition resulting from a methodology for testing the existence and/or characteristics of a god, then the question is at worst meaningless, at best not worth pursuing.

Let me give you my take on this issue. First of all, there are many philosophers who are and have been theists. But they are in a minority today, as far as I know. The reasons for this are that philosophers cannot help but (and should) ask questions such as the following: how could humans possibly have knowledge of a being which cannot be investigated or experimented with? Why is faith knowledge, when it is clear that people have faith in many contradictory things, many of which seem absurd to most other people? What, more precisely, justifies faith, and more importantly, what justifies any particular faith over any other? Getting more specific, why should one believe in any one particular god, when there are so many different possible choices, and when there seems to be no reason to prefer one over another? Thus, humans have believed, through history, probably in thousands of different gods. All of these are different: in appearance, in emotional characteristics, in their goals, in the way they regard humanity, in their power(s), in their demands. What could possibly bias someone who has not been trained since childhood to believe in a god, to believe in some particular one or group of them from all those choices?

Let us take something we all will agree is ridiculous: there are little humanlike beings with wings, called fairies, living at the end of my garden. I don't see them because most of the time they are invisible, but they are there, and are responsible for things like bees going to certain of my flowers, etc. Now this is absurd, right? But there have been and are people who believe in fairies, much like the ones I describe here. Why shouldn't I? Just because I can't see them? Well... then I shouldn't believe in Allah or Yahweh either, right? I can't see them either. Because I have other explanations for why bees like my flowers? Well... there are other explanations for the existence of the world, for the existence of animals, etc., etc., also. Need I go on? And I have just started a list of possible gods and reasons for gods to exist. There are, as I say, literally thousands I could go through: the Hindu gods, the Norse gods, the Native American gods... and on and on and on. And on. It really gets depressing to me, I'm afraid, to just contemplate the full extent of humanity's time and energy in creating gods. God after god after god... all for what? Explanation? Comfort? A father figure? A mother?

Why not ask this: suppose you were a (or "the") god? Where would you get your values from? A god's god? Why? What justifies a god's actions and goals? What, more importantly, justifies a god's actions and goals that wouldn't be just as valid a justification for us? Gods are smarter? Oh? Which one(s), particularly? Gods are more compassionate, more "ethical"? Really? Need I list the atrocities committed in the name of (and supposedly at the command of) virtually any god one can think of? If you want the basis of a value system, why not create one yourself instead of picking the one of the god you've been raised to believe in? Or you could just obey because you're afraid of punishment. Now, what's the difference between that and animal training: reward the dog with food or a pat, punish the dog with a slap or a shout, and you've got an "ethical" dog, right? Is there a difference between rewarding people with promises of heaven and punishing them with threats of hell, and training your dog or horse or whatever? I don't see one.

Perhaps we want to be assured of "life after death"? And what would that assurance consist of? Your favorite religious book telling you so? And we believe that because... someone tells us that it's true, who has been raised to believe that particular religious book is true... because...? They've had a vision. Yes. Well, what of all the people who have had visions of other gods? They must be illusions, because your god is the true one... because... the person who tells you so had a vision... and around and around and around. Why don't people get tired of this? You know, the only theory I can come up with is that we have an extremely powerful instinct a) to belong to a group, and b) to be dominated; and if people need to be in a dominance/submission hierarchy, but cannot stand to be dominated by a real person, they invent a god.

But there are lots of theories. You might take a look at Eliade, M. (1959). The sacred and the profane: the nature of religion. New York, NY: Harper & Row; and Frazer, J. G. (1951). The golden bough: a study in magic and religion. New York, NY: The Macmillan Company, just for starters. Dawkins has some nice stuff to say, also, and there are some recent studies on the neurobiology of religious feelings, for example: Giovannoli's The Biology of Belief.

See also: Piattelli-Palmarini, M. (1994). Inevitable illusions: how mistakes of reason rule our minds. New York, NY: John Wiley & Sons, Inc.
Shermer, M. (1997). Why people believe weird things: pseudoscience, superstition, and other confusions of our time. New York: W. H. Freeman and Co.
Radner, D., & Radner, M. (1982). Science and unreason. Belmont, CA: Wadsworth, Inc.

Steven Ravett Brown


Natasha asked:

Please can you explain Descartes' influence on philosophical considerations of the idea of the mental?

Descartes introduced the idea of the subject and individual consciousness into philosophy. He highlighted man's rational nature at the expense of his emotional, moral and intersubjective side, and it could be said that Descartes had a de-humanising influence on the philosophy of mind. The criticism that there need be no more to the thought than the propositional content, and that the "I" is superfluous, makes Cartesianism even more anti-humanistic, since the mental becomes pure conceptual content and the subject himself is lost. Analytical philosophy of mind has concentrated a great deal on the nature of rationality, the proposition and the acquisition of concepts. All this stems from taking man as essentially rational, and it has dominated philosophy of mind at the expense of the phenomenological nature of the mental, which is explored in continental philosophy.

Descartes' position that the mental cannot be explained by the physical is still widely accepted, and although the problems of Cartesian dualism (combined with advanced biological and neurological knowledge) have led to a massive swing in the direction of identity theories, the problem of consciousness which Descartes introduced into philosophy mind remains problematic for these modern theories.

The cogito gives rise to scepticism about the external world, so another thread in philosophy is how the mind is related to the world. The res cogitans is a subjectivity with no objective counterpart. This has given rise to consideration about the nature of subjectivity and objectivity. A slightly different question is how we can know things about the world given that we might be dreaming. This does not simply give rise to problems of knowledge, but affects philosophy of mind because it means that we need to make a distinction between philosophy/metaphysics and psychology. We, as Descartes noted, would have to be mad to really doubt our senses, which brings to light a strange duality between rational thought or theory (or philosophising), and real practical life.

Another thread in the idea of the mental, consciousness itself, has taken a curious turn. Since Freud introduced the idea of the unconscious it has been difficult for us to take ourselves as thinking things or purely conscious beings. Rather, we are driven by unconscious forces, keeping the dark side of our nature at bay through repressive mechanisms. However, in the post-Freudian Lacan we see a return to the Cartesian subject: the subject, as far as he is known to himself, is a rational subject, or at least a subject immersed in language.

Rachel Browne


Chesca asked:

Please cite some philosophers who dealt with similarities between poetry and science and/ or similarities between art and science. if possible, cite some webpages where I can research more about them.

On art by an artist:

Kosuth, J. (1991). Art after philosophy and after: collected writings, 1966-1990. Cambridge, MA: The MIT Press.
Shahn, B. (1957). The shape of content.
Fenollosa, E. (1968). The Chinese written character as a medium for poetry. (Edited by Ezra Pound)
Tanizaki, J. (1977). In praise of shadows.

Art & psychology:

Arnheim, R. (1974). Art and visual perception; a psychology of the creative eye.

Philosophy of art by a philosopher:

Santayana, G. (1955). The sense of beauty.
Goodman, N. (1976). Languages of art. Indianapolis, IN: Hackett Publishing Company.
Levinson, J. (1990). Music, art, and metaphysics: essays in philosophical aesthetics. Ithaca, NY: Cornell University Press.
Kivy, P. (1994). 'How music moves'. In Alperson, P.(pp. 147-163). University Park, PA: Pennsylvania State University Press.
Kivy, P. (1990). Music alone: philosophical reflections on the purely musical experience. Ithaca, NY: Cornell University Press.
Turner, M. (1996). The literary mind. New York, NY: Oxford University Press.
Maritain, J. (1968). Creative intuition in art and poetry.
Holt, M. (1971). Mathematics in art.

Incredible and unique:

Kepes, G. E. (1965). Structure in art and in science. New York, NY: George Braziller.
Kepes, G. E. (1965). Education of vision. New York, NY: George Braziller.
Kepes, G. E. (1965). The nature and art of motion. New York, NY: George Braziller.

Steven Ravett Brown


Barbara asked:

I know there's a term for 'by deciding to act one way, you've precluded acting any other way' — I've checked categorical imperative, but I'm looking for some term I've heard lately that's less moralistic, and more just practical. Hm.

Well, there's jacta alea est, the die is cast; attributed to Caesar on crossing the Rubicon. And there's always "crossing the Rubicon".

Steven Ravett Brown


Paul asked:

What would be the effect, if the existence of alien life were to be proven, on the Judeo-Christian philosophies of God which focus on earth?

I'm not going to try to answer this, because I think that the science-fiction writer Philip José Farmer did a wonderful job in several of his novels on just this theme. Try Night of Light; Inside Outside; the Riverworld series; The Lovers; Dare. There's also A Case of Conscience by James Blish.

Steven Ravett Brown


Annie asked:

Would your philosophers give their opinions on whether there is a link between the increased aggressiveness and lack of respect of school aged persons and the lack of encouragement by parents towards a belief system?

As a mum of two teenagers I've seen at first hand the attitudes of youngsters and how different they are to the ones I was brought up with. I am speaking as someone who was a teenager in the punk era and who was into punk and rock music, but I was brought up (at least until high school) to go to church each Sunday, to respect the law and teachers (I left school in the late 70's when school children were still caned or clipped round the ear). Has my generation produced a society of youngsters who hold nothing in respect? Few of my generation send their children to church on Sundays, most of us still hold anti-establishment views, albeit toned down. However, despite all our shouting we do still hold onto certain values we were brought up with. Has the generation of punk rock and new romantics totally stuffed up society?

I don't agree that the generation of punk rock and new romantics totally stuffed up society. This charge has been leveled at many generations before — Socrates was put to death for it 2500 years ago.

I work in schools and I don't agree that young people hold nothing in respect. I think some of them do (as some of every generation always did), but many don't. In fact, I was talking to one of my classes about this today.

I agree that attitudes are different. However, you need to draw a distinction between the forms of respect and respect itself. I mentioned to my class that some of the forms of respect that were widespread in my youth (standing when a teacher enters the room, calling teachers 'sir' or 'miss') have virtually disappeared. Nevertheless, these students still respect teachers. If the well understood forms of respect of the past disappear, it is easy for older people to falsely conclude that respect itself has gone.

I don't think that sending children to church is any more likely to teach respect than not sending them, nor clipping them around the ear. How they are brought up in the home has much more influence. The best way to produce respect is to model it.

To my mind (and this is an incredibly complex issue, so I am only addressing a few aspects), increased aggressiveness and any failing respect is more likely to be a product of the rise of individualism and greed in public life. Economic rationalism, market driven economies, politicians with an eye only for the main chance: these are modelling a lack of community respect that is really damaging. The real villains in my eyes are George W Bush, John Howard, Maggie Thatcher, Ariel Sharon, Osama bin Laden, Yasser Arafat and lots of others.

Respect is vitally important — it is a complex of basic foundations for morality. But it is also a difficult and contentious cluster of concepts, and the forms of respect can change quite a bit without respect itself disappearing. Respect can best be built, while it is simultaneously being clarified, by working together in communities.

Tim Sprod

I have been working in a Montessori nursery school for a few years, teaching and looking after children aged 2 — 5 years. Older colleagues I've worked with have expressed the view that concerns you, that aggressiveness and lack of respect have greatly increased in young children during their teaching career.

But my first thought is that it is certainly not just 'school-aged persons' who display increasing aggressiveness and lack of respect! You only have to consider the many reported incidents of road rage, attacks on health service workers, teachers and so on to realize that this is a more general trend in society.

I would place some of the blame with the increased use of drugs, which reduce people's control over their actions, and also lead to related crime as addicts are prepared to use any means to finance their addiction. But another factor, which has been suggested to me by my present employer, is that of individualism — which I suspect was mitigated by Christian belief, as it has a strong emphasis on helping others.

Most people today seem to be out to get what they can for themselves, without concerning themselves about how that affects others. There is a general assumption that everyone should be ambitious, aspirational, go-getting, with regard to both careers and leisure time. It's expected of you. It is difficult to get away from this attitude, as it is expressed in the media, and parents unconsciously pass on their attitudes to their children.

My employer suggests that eventually individualism will reach an extreme, after which attitudes will begin to swing back the other way, towards a greater sense of social responsibility.

Katharine Hunt

You have most certainly raised a question which is at the forefront of concern for both government and society at large. For someone, like myself, who can go back much further than yourself, the state of society in this country, and particularly the behaviour of many young people, is very disturbing. People of my generation feel they are living in a totally different country, with radically different values from those which provided the foundations of our upbringing. You have, in my opinion, touched on some of the causes of our present decline; however, there is no simple explanation for what we see going on around us, particularly in the area that your question focuses on. I am in some agreement with the thrust of your argument, that the belief system to which you and I were exposed now has little or no value in modern society. Also, the false urgency and pace of society does not allow time for reflection and taking stock.

In former days most schools actually taught religion; bible study was often the first lesson of the morning. Children learned the ten commandments, the sermon on the mount, selected psalms, parables and miracles: as opposed to the modern idea of religious study, which usually means a study of comparative religions or the odd hymn at assembly. In addition children attended Sunday school, and sometimes more than one church service each Sunday. In most communities the church was the central focus; children were involved not only on Sundays but during the week also, in social activities, choir practice, preparing for special events, etc. In my day the only distraction was the radio, which, apart from "children's hour" and one or two nature programmes, had little impact on our activities. Children played out more, they were more involved in make-believe, and were seemingly more inventive than most modern children. They were involved in children's things: the songs they sang were children's songs, they listened to children's stories, and they spent more time being infants, progressing more slowly into and through adolescence.

This, I believe, is at the crux of your concerns. Children are not allowed to be children very long; they assume more grown-up ideas too early. One of the major reasons, in my opinion, is the massive influence of TV. The sorts of things that children are exposed to would certainly never have been allowed when I was a child, nor when our own children were young. The soaps are certainly not the sort of fare that young children should be exposed to. Many children are not living in the real world anymore; they know the characters of Coronation Street, Eastenders, etc., better than they know the people in their own street. These programmes contain a great deal of violence, bad manners, bad grammar and sex, hardly the stuff for an impressionable child trying to learn the basic values and ethics of life. Much of the pop music that is commercially thrust upon them from 'the box' has an aggressive base or contains sexual innuendoes, quite unsuitable for youngsters who should still be singing nursery songs. The role models are hardly suitable for children; moreover, children are pushed into all this by massive peer pressure, so the problem becomes cyclic, fed and stimulated by high-pressure advertising. I think you are right to be suspicious about the influence of the punk music you were involved with. In my opinion the 60's and 70's have a lot to answer for in regard to the decline of conventional values and the rise of aggression and drugs.

Although I hesitate to say it, and indeed do not want to believe it, I would be hypocritical to my own conscience if I did not state the belief that the 'do-gooders' in society must shoulder a great deal of the blame for what we see today in the attitudes of young people. The removal of strict discipline in schools was fatal: children will always take the line of least resistance, and suddenly to find the roles reversed, so that they could exert pressure on teachers who were unable to respond without themselves getting into trouble, provided those who disliked school with ideal opportunities. I knew a teacher who almost lost his job for placing a hand on the shoulder of an unruly thirteen-year-old girl who was refusing to stand in line in the playground. She accused him of assault, and he had to appear before the Head the following day, in the presence of the girl's parents, to receive a ticking off and be warned as to his future conduct.

Another area where do-gooders have intervened concerns parents. Children are encouraged to ring a help line to report their parents if they believe they are being ill treated. Now, obviously, no one in their right mind condones violence inflicted on children, whether by parents, teachers, or anyone else. However, there is a danger here: perfectly innocent and good parents could be, and, I understand, have been, compromised by a vindictive son or daughter. I vividly remember as a child being smacked by one of my parents; for a few moments I hated that parent, and did not consider the fact that I had done wrong and deserved what I got. Possibly, under the conditions appertaining today, I might have picked up the phone in my rage and reported my parent for assault. Children are prone to act on the spur of the moment, and this is the great danger. Some might say better safe than sorry; I don't know. My point in all this, regarding teachers and parents, is to show that some of your concerns must be brought about by the state handing greater power to young children who have not yet acquired the education or the ability to reason that you would expect to find in someone holding such powers. Where parents are lax, this power is carried onto the streets and manifested in vandalism and violence. The do-gooders have also reduced the powers of the police and modified the law in such a way that the means to punish these youngsters in a way that would leave a lasting impression are no longer available to them.

However, as you say, the belief system is a problem: the former belief system has been abandoned with no attempt to replace it. When you tell a child that something he or she is doing is wrong, how do we back up our assertion? In the past we would be able to point out that God would not be very pleased, or to indicate that he or she would jeopardize their chances of getting into heaven. Say that to most children today and they might give you some very odd looks. Also, based on the Christian ethic was the firm idea of 'family': mother, father and children in a secure home, where father was the breadwinner and mother was the manager of the household, with close bonding to her children (privileged households excepted). I am not saying whether this is right or wrong, but it certainly worked to the advantage of children. However, human rights, do-gooders, and women's lib, amongst other factors, put paid to this. There is also a biological basis to the family system which people find it convenient to ignore. Split families, single parents, step-fathers, step-mothers: no matter how good these parents may be, and I know some excellent ones, there is often something missing where the children are concerned.

I must conclude by saying that despite the depressing scenario I have painted, there are still many young people who, thankfully, recognise a moral structure in society, who do respect their parents and teachers, and who do not mug elderly people, steal cars and vandalise property. It is just unfortunate that there are now rather more who do these things than there used to be. Also, I appreciate that there are still good parents like yourself who hold on to the basic values of society.

John Brandon


Catrina asked:

I do not wish to be disrespectful, but why do philosophers use so much jargon? Everything can be explained in a way that the majority can understand; in plain easy to understand English so why use words that don't make sense to most people? Doesn't this make philosophy elitist?

Some philosophers are elitist and consequently ineffectual. They use jargon because they will not take the time to clarify their meanings even to fellow philosophers, let alone to the average person. They are the ones showing disrespect. The success of the Dummies and Complete Idiot's Guide series confirms your point that experts can express themselves in a way that interested non-experts can understand.

There are times, however, when specialists properly resort to a technical vocabulary when speaking only to and for each other, and they violate no ethics of discourse when they do so. This is certainly true in fields other than philosophy. Ask your question again after substituting the words "physicians" and "medicine" for "philosophers" and "philosophy" respectively. We should all be the poorer were physicians, when consulting with each other about the human body and its maladies, to impose on themselves the high transaction costs of translating their terminology into the vernacular and explaining their methodology.

The great champion of grace and clarity in philosophical writing was Brand Blanshard. I give him the last word:

"But on the great issues of philosophy many of men's hopes and fears do hang, and plain men feel that their philosopher should be alive to this and show it. It is not that they want him to give up his intellectual rigour and scrupulousness; at least they do not think that it is; it is rather that when men with hearts as well as heads are dealing with themes of human importance, they should not deal with them as if nothing but their heads, and somewhat desiccated heads at that, were involved...

"I do not know why a biologist, presenting a paper on a technical point to colleagues, should not write in a way as unintelligible as he pleases to those outside the circle, provided it is no obstacle to those inside. But suppose that his subject is one of general interest, that the session is open to the public and that he knows many of his audience will be drawn from that public. Should he then travel the same high and unheeding road? ... He would not whisper a fascinating titbit of information to one friend while another who is equally interested is present, but feels no hesitation in talking to an audience in a language lost on half of them. The French, who have earned a right to speak on these matters, have a saying in point: La clarté est la politesse. In philosophical speaking and writing, one's manners are connected very intimately with one's manner." (On Philosophical Style, 1954)

Anthony Flood

Suppose I wanted to express precisely the relationship between mass and energy. How should I do that in English? Should I say, "energy equals mass multiplied by the speed of light squared"? The problem is that "energy" is a word that, in English, can mean many things, but in physics, means something very specific. The other words in that phrase suffer from the same ambiguity in English. So a physicist uses the equation, "E = mc²". But that's jargon, right? So now what?

Philosophy is a very old and difficult area, much older than physics (which branched off from philosophy perhaps around the time of Galileo, or perhaps a century or so earlier), with problems that have been kicked around, analyzed in various ways, developed, elaborated, and so forth. A lack of jargon, that is, of precision in expression, would merely indicate a lack of progress in understanding and analyzing those problems.

Does it make learning and doing philosophy harder? Yes. But one can say the same about medicine, mathematics, chemistry, and indeed all the disciplines which have achieved specialized knowledge and techniques, and need a means of expressing them precisely. One might, in fact, consider learning the various disciplines equivalent to learning their languages.

So when you say, "Everything can be explained in a way that the majority can understand; in plain easy to understand English" would you include physics in that? Formal logic? Computer programming? Electronics? Neurochemistry? Statistics? Cognitive science? Then why philosophy?

Steven Ravett Brown

Why do philosophers use so much jargon? Good question! One of the reasons I didn't go on to do further study in philosophy, after taking a first degree in the subject, was the specialized, academic nature of the subject as studied in universities. I have always been against the use of too much jargon in philosophical writing, and try to make my own writing easy to understand.

Why is philosophy often hard to understand? Sometimes — particularly in the case of classic texts — because it has been translated from the idiom of a foreign language. Other times we may suspect the writer of hiding second-rate ideas behind large, impressive-sounding words!

On the other hand, I disagree with you when you say that everything can be explained in a way that the majority can understand. There is always the danger that, in making a subject so simple that it's possible for everyone to understand it, you may oversimplify it — losing much of its original exactness and depth. I remember a quote I once read, which advised that one should "make everything as simple as possible, but not simpler". There is a place for technical terms within philosophy, just as there is in many other areas of specialized study. Some examples: chromosome, zygote, enthalpy, atomic mass, logarithm, dodecahedron, tempera, impressionism. Such words can often be used with greater precision than ordinary, everyday language. Or they may provide a 'shorthand' way of referring to commonly discussed ideas, which will be familiar to all those who have studied the subject to a certain level — in which case they have a place when those experienced students are writing for each other, but should be avoided in writing intended to be accessible to those with no previous knowledge of the subject. This explains most of the jargon use in philosophical writing — it is usually written with the assumption that it will only be read by people familiar with the subject.

If you're looking for some jargon-free philosophy to read, I would recommend you try Philosophy Now magazine (http://www.philosophynow.org), which has always impressed me by its relatively low jargon content.

Katharine Hunt

Philosophers do use a lot of jargon, but so do those who have studied any field in any depth. Jargon is just a specialized vocabulary for a particular subject. Why do specialists use jargon? Because by using words with a specialized meaning, or by inventing terms to stand for particular ideas, they can talk to each other in a way that saves time and increases precision. Rather than saying things that can, as you say, be said in plain, easy to understand English, but which also need a lot of words to capture the precise meaning, they use the jargon as shorthand.

So, jargon is an essential, and probably unavoidable, feature of any deep study. As such, there is nothing wrong with it. It makes clear thinking about complex matters easier, for those who have been initiated into its use. The last phrase is, of course, vital.

It leads to two observations. Firstly, jargon is only useful when talking to someone else who understands it. Thus, anyone who is familiar with jargon ought to keep their audience in mind. If they don't, then their audience won't understand them. So, when I write to this forum, I try to use plain, easy to understand English as much as possible, even if it means I have to write a whole lot more to get my ideas across. The 'crime' is not using jargon, but using it in the wrong place.

Secondly, much writing in any subject is written for other initiates. If one wishes to become more knowledgeable about any subject, one must be initiated into the use of jargon. That, at least in an important part, is what getting an education in a subject is about — learning how to use its vocabulary accurately and well. Hence, when I write here, I introduce and try to explain some technical terms, because I believe that readers of this forum are concerned to become more able to read philosophy. But we cannot expect an initiate writing for other initiates to write in such a way that 'the majority' can understand, for that is not their audience. If you wish to understand technical writing in any field, you must become initiated.

Of course, I have over-simplified above. Some experts like to use jargon for its own sake, or to show how clever they are, or to dress up poor ideas as good ones. This is a misuse of jargon, and we can criticize such writing for being obscure.

Does this make philosophy (or any other specialist discipline) elitist? [Warning: the next sentence is highly typical philosophical jargon]. It depends on what you mean by 'elitist'. If you mean 'exclusionary', I think not, because it is open to anyone to learn the jargon if they wish (and provided they are able). If you mean 'confined to those who have put in the effort', then I guess it is, at least at the level of specialized philosophical discussion. Yet there are many who try to make philosophy open to the majority by interpreting it for a more general audience. These are people who try to translate complex, technical ideas into plain, easy to understand English so that the majority can understand.

Tim Sprod

Why so much jargon in philosophy? Well, as the Bible story says, Adam "named all the animals", and the structure of the Hebrew language gives a clue as to how he would have done so: by taking basic words with their basic meanings and mixing them together to form new words that effectively describe the animal. The principle here is that language expands as knowledge expands. Jargon is nothing but an expansion of language to accommodate increased knowledge.

But would that everyone could be included in learning that knowledge in such a way that they do not need to learn the jargon before learning the subject! People should not be expected to learn the jargon as a sort of prerequisite for learning the subject. The jargon used while instructing them should be limited to the amount and kind which they can easily digest while learning the subject. Expecting people to learn more jargon than subject is a very inefficient way to teach, and often simply shuts their minds down. It is much like trying to lift a weight up some steps when the weight is much too heavy to lift even onto the first step. Jargon is no replacement for a good grounding in a subject. If you were well grounded in a subject, you could easily come up with jargon of your own — and that is precisely how we have jargon in the first place.

There is, in fact, such a thing as an elitism which disdains, or is even malignant toward, the "uninitiated", and this is bad. No good parent or other teacher treats their children this way. If all parents and teachers did, then most children would reach adulthood having developed, toward many common subjects, the kind of non-comprehending response which, in the current world, some children have toward "higher math": the glazed-eyed, mentally numb response in the face of the jargon.

Daniel Pech


Bilal asked:

I have a question about Gareth Evans' example of the name 'Turnip'. What two major views are illustrated in this example and which one seems to be the stronger position?

I'll come to 'Turnip' in a minute.

I was lucky to attend Gareth Evans' seminars on Reference in the Summer of 1977 while I was a graduate student at University College Oxford. The seminars were widely thought at the time to be as important as anything that was going on in the philosophy of language — on this side of the Atlantic. John McDowell, Evans' close collaborator at 'Univ', was my graduate studies supervisor.

I spoke up a lot during those seminars. At the last meeting I approached Evans to ask if he would be interested in looking at some of my work. He said he would try to fit me in, but the meeting never materialized.

Some time later — it could have been that summer, I'm not exactly sure of the time scale — I heard a horrifying story of an incident in Mexico, where Evans survived a bungled kidnap attempt by bandits on his friend Hugo Margan, the son of the US Ambassador. Both men were shot in the leg. Margan bled to death. Evans nearly died, enduring a desperate taxi ride from one hospital to another, being refused admission because the doctors would not treat gunshot victims.

By a cruel twist of fate, Evans died in Oxford not long after the Mexico incident, from lung cancer.

So much for personal reminiscences.

The two theories of how proper names refer are known as the 'cluster theory' and the 'causal theory'. You might wonder why on earth we need a theory of how names refer to objects. In fact, the two theories represent sharply divided views on the metaphysical question of how our thoughts relate to reality, as I will explain.

A baby is born, and given the name 'Turnip' by its loving parents. (This is my version of Evans' story about Turnip, with a few added details.) Turnip grows up to be quite a character, and many stories are told about his escapades. As the centuries pass, however, some stories are mistakenly associated with the name 'Turnip' when in fact they were about someone else entirely. Let's say, Turnip fought at the Battle of the Boyne. That's true. But Turnip was not, as many falsely believe, the anonymous author of the bawdy novel, The King's Mare. The true author was Swede, not Turnip.

In recent times, the errors have become so compounded that most of the beliefs associated with the name 'Turnip' are in fact true of Swede. But what exactly does that mean? When someone says, 'Turnip wrote The King's Mare', to whom are they referring? Is that statement a false statement about Turnip, or is it a true statement about Swede?

According to the cluster theory, what determines the reference of a proper name is the cluster of descriptions we associate with it. The object to which the name refers is the object about which a sufficient majority of the descriptions are true. In that case, 'Turnip' just means 'the author of The King's Mare'.

According to the causal theory, on the other hand, what determines the reference of a proper name is the initial act of 'baptism' when the object was first given the name, and the causal chain of speakers who each pick up the name from the previous speaker in the chain, with the intention of referring to the object to which the name was originally given. In that case, 'Turnip' still refers to the individual who was originally given the name 'Turnip' even if all the things we have subsequently come to believe about 'Turnip' are in fact true of Swede and not Turnip.

The causal theory was first proposed by Saul Kripke in his paper 'Naming and Necessity' (1972). Gareth Evans wrote a paper, 'The Causal Theory of Names' giving his version of the causal theory. Up until Kripke, the cluster theory was widely, or possibly universally believed to give the correct account of the semantics of proper names.

The metaphysical significance of the causal theory is that it rejects an idea which we find quite plausible when we first think about the reference of a name: that whatever we are talking about depends upon our knowledge of whom or what we mean to refer to. The causal theory says that's wrong. The true significance of our thoughts depends not on the way the world looks to us, from inside our minds, but on an external view which includes information which we do not possess, or which at least is not accessible to conscious reflection.

The metaphysical point about the external view is an important one. But it is not sufficient to vindicate the causal theory. In fact, it seems quite clear to me that both theories are false. In many cases there is no correct answer to the question, 'Is this a true belief about X or a false belief about Y?' The answer is simply indeterminate. Don't bother asking the speaker, because they can't tell you, and no-one else can tell you either. In short, there can't be a philosophical 'theory of proper names' of the kind that proponents of the causal and cluster theories supposed. Human linguistic intentions are messy and complex, and refuse to align themselves to any precise theory.

In his Oxford seminars, Evans was quite critical of his original formulation of the causal theory, pursuing some very interesting lines of inquiry later described in his book, The Varieties of Reference (John McDowell, ed., OUP), published posthumously. Evans combined the 'external' idea with a much tougher line on the question of the conditions under which a person really understands a name. The (sometimes tenuous) causal 'chain of communication', Evans thought, was merely a reflection of the way we use language without always grasping the meaning of what we are saying — what Hilary Putnam calls the 'division of linguistic labour'.

Geoffrey Klempner


Daniel asked:

If logic requires that omnipotence is properly defined as power that has absolutely no limits (the ultimate extent of power imaginable, including the power to make 2+2=5), then does logic require that omnibenevolence is defined as benevolent toward absolutely everything no matter how good or bad? This poses the problem of whether power, and even benevolence, is a real thing in itself, or is only relative to other things. Is power and benevolence like the problem of the 'horseness' of a horse? And, if benevolence is necessarily partly a subjective feeling inside yourself in regard to something of which you approve, then would omnibenevolence include approving of logically (truthfully) self-contradictory arguments against omnipotence?

Several questions compete for attention here. One regards the status of attributes. Another, their mutual compatibility. Yet another is about the status of essences ("the 'horseness' of a horse"). The following is the best I can do given my imprecise grasp of Daniel's point.

Logic requires only that a definition be internally consistent. Apart from that, one may stipulate a word to mean whatever one wishes. Logic cannot require one to stipulate that omnipotence means a power that has absolutely no limits, including logical limits (as Daniel's example implies).

Commonly, the job of defining omnipotence and omnibenevolence (and omniscience) subserves the goal of formulating a philosophy of God. The ideal of the mutual coherence of God's attributes in such a philosophy will guide their definition. Doing this successfully requires more than stipulation. It requires understanding how God functions in one's cosmology. Through a process of mutual adjustment, the definition of each attribute emerges in the light of all the others. I see no reason to accept (what strikes me as) caricatures of attributes so that they are seen as mutually incompatible.

What is the "problem" of the "horseness of a horse"? Perhaps it concerns whether we may affirm that it exists just as we affirm that a horse exists. It seems to me that "horseness" is just an abstraction from our understanding of what it means for something to be a horse. The status of such an abstraction is one thing, and the status of an attribute is quite another. An attribute is as real as the thing of which it is an attribute. For example, Celine Dion's voice is as real as she is. It is not independently real, but neither is it an abstraction. My concept of Celine Dion, however, is a horse of a different color: "Celine-Dion-ness" is only an abstraction.

Anthony Flood


Peter asked:

Hi, here I'm sitting in the middle of Bangkok and hesitating.

What are expectations?
How do I define expectations?

This definition is going to be the key element in research done amongst tourists. I'm trying to find out what the expectations of eco-tourists are (nature, forest, people etc). My problem is that I can't formulate questions without a theory, and the theory needs a basis. If I use your answer (which I hope you will allow) I will refer to the source in my Masters thesis at the Agricultural University of Norway.

I have had problems with finding an answer, and for two days I have been stuck in thoughts and boring descriptions from the web.

Ok, nobody else did it... Let's differentiate between goals and expectations. When you do that, what do you get? A goal is a state or situation, something like that, which you want to attain, to make something into, etc., in other words, a goal involves, usually, it would seem, some sort of effort on the part of an agent, in order to reach it, construct it, or attain it in some fashion. An expectation, on the other hand, is either a state or situation to which you assign, formally or not, some valence as to whether it will exist, usually independently of your effort, for the most part, although I imagine that the two can overlap: one could have an expectation of reaching a goal. But that latter would imply that you think it probable that you'll reach the goal, which again puts a kind of passivity or inevitability into the dynamic, perhaps. Or it's just that valence.

So a goal is a kind of concrete state or thing which you have no necessary likelihood for; an expectation is either a) the likelihood that you assign, or b) a state which you do have a likelihood for. A likelihood is the evaluation, the valence, which can either be a probability or an estimated probability or merely an emotional bias which is attached to... well, whatever you want to attach it to.

How's that?
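If it helps in operationalizing the distinction for your questionnaire, the goal/expectation contrast above can be sketched as a tiny data model. This is a hypothetical illustration of my own: the class names and the one-half threshold are not from any standard theory.

```python
from dataclasses import dataclass

@dataclass
class Goal:
    """A state an agent wants to attain; no likelihood is attached to it."""
    description: str

@dataclass
class Expectation:
    """A state together with the likelihood (the 'valence') assigned to it."""
    description: str
    likelihood: float  # 0.0 = certain not to happen, 1.0 = certain to happen

    def is_positive(self) -> bool:
        # A rough cut: call an expectation 'positive' when the assigned
        # likelihood exceeds one half. (The threshold is arbitrary.)
        return self.likelihood > 0.5

# An eco-tourist may hold the goal without any likelihood attached,
# and separately an expectation with one attached.
trek_goal = Goal("see untouched forest on the trek")
trek_expectation = Expectation("see untouched forest on the trek", 0.8)
print(trek_expectation.is_positive())
```

The point of the sketch is just the structural one made above: a goal is a state with no likelihood built in, whereas an expectation is that same state plus an assigned likelihood.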

And so we can extrapolate and claim that there will be both negative and positive expectation hierarchies, right? That we can have gradations of things more likely, and also of things less likely... at least if we think about it. It's interesting, in a way, that we can't, really, have negative expectations... we can say that things are less likely, down to zero, they won't happen; but the only way we can say there is a negative likelihood is if something prevents something else from happening... I wonder if there's any theory for that one; I mean, that might give you negative probabilities, mightn't it?

Steven Ravett Brown

What I want to offer you is first of all a dictionary definition of 'expectation' and then an idiosyncratic theory of expectation as an aspect of the phenomena of 'thinking' which I will place in the context of the concept of a knowledge schema. All of which should give you some food for thought or further research for your Masters.

Collins Cobuild Dictionary, a dictionary of modern English usage, defines 'expectation' as: a strong hope that something will happen; a strong belief that something will happen; a strong belief that someone will develop a specific identity; or a strong belief that someone should behave in a particular way. Lawyers, for example, understand promises to create an expectation in the receiver in the senses given above.

So clearly 'expectation' falls squarely within the field of the concept of belief, which could take us into well-trodden territory concerning knowledge and its pragmatic definition as 'justified true belief', a definition used by Nonaka and Takeuchi in their seminal work, The Knowledge-Creating Company, in the context of their paradigm-shifting work in 'Knowledge Management'. I mention this because you could consider the project you are undertaking as a problem of knowledge schema creation and management.

We could find some disagreements with this definition of knowledge and its translation into 'justified true belief' as used in this context, given that both are based on the implicit propositional content of knowledge, with its associated problems of verifying opinion or attitude as objects in a private language, and some doubts over the logical meaning of the phrases 'true belief' or 'false belief' except as metaphor masquerading as theory. On the other hand, there is a growing body of thought supporting the view that non-propositional knowledge is not simply the absence of thought and the presence of random emotional impulses, but that many aspects of thinking include an inseparable propositional and value content, although the balance of one over the other may vary according to specific uses or linguistic 'forms of life'.

Goleman, for example, argues in his book Emotional Intelligence that the view of knowledge that governments and parents colluded in developing up to now was based on a system of education and a concept of intelligence geared towards the acquisition of propositional information and the inhibition of 'emotional intelligence', to use his umbrella term. Similarly, Stevenson argued in his work Ethics and Language that ethical thinking contains both propositional content and, separately, emotive content, and that both channels of thought support systematic reasoning, though they always remain mutually exclusive.

What I want to argue is that we can develop a system of knowledge evaluation called 'Sift'. This system contains some logic-like features and is based on the idea that there is a central organisational unit of thought consisting of the inseparable conjunction of factual or propositional thought with emotive or value-based thought, the underlying vehicle of which is an 'expectation nucleus', though the balance of the two channels of thought differs for different contexts. In mathematical thinking, for example, the value channel of thought is dominated by the propositional channel at the object level, but the balance switches round at the heuristic or problem-solving level.

In this system knowledge is structured in terms of the categories we can impose on a situation. These divide into static values and dynamic agents, both of which occur together, as distinct from occurring alternately or dependently. The static values consist of the mutually exclusive categories of satisfaction and non-satisfaction. The dynamic agents consist of the mutually exclusive categories of promissory value, those agents that produce satisfaction or reduce dissatisfaction, and of threat value, those agents that produce dissatisfaction or reduce satisfaction.

The static values are not simple, in that they are made up from the distribution of elements of the primary 'Qualitative Expectation Field', which is itself generated from the fundamental components of the expectation nucleus, as shown schematically below.

Qualitative Expectation Field

Have and Want = Positive satisfaction

Have but not-Want = Frustration

Not-Have but Want = Dissatisfaction

Not-Have and Not Want = Negative Satisfaction
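For concreteness, the four combinations above can be rendered as a small lookup over the two dimensions Have and Want. This is only an illustrative sketch in Python; the function name and representation are mine, not part of the Sift system itself:

```python
# Illustrative sketch: the Qualitative Expectation Field as a lookup
# table over the two boolean dimensions Have and Want.

def qualitative_expectation(have: bool, want: bool) -> str:
    """Map a (Have, Want) pair to its static value category."""
    field = {
        (True, True): "Positive satisfaction",
        (True, False): "Frustration",
        (False, True): "Dissatisfaction",
        (False, False): "Negative satisfaction",
    }
    return field[(have, want)]

print(qualitative_expectation(True, False))  # Frustration
```

Nothing hangs on the choice of representation; the point is only that the field is exhaustive and the categories mutually exclusive.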

The agents of change are not simple either, in that each contains two further general subtypes: those that increase their promissory value and/or decrease their threat value, represented symbolically as Pv+ or Tv+, and those that maintain their promissory value or, equivalently, counter the effects of reducing agents, represented as Pv0 or Tv0. One further division of the static values allows us to complete the schema for a 'Sift' evaluation of knowledge in terms of qualitative expectation. This comes with the introduction of a category for those objects in the knowledge field on which we are not currently focussing our attention, which provides us with a vehicle for an attitude of indifference. Static values then divide into those things on which we are focussed and those things of which we are aware but in which we currently have no interest. We could think of this as a division between objects of knowledge that we currently consider essential and those that we consider inessential.

Objects in the category of indifference are often assigned the numerical value zero, while satisfactions are assigned positive numerical values and non-satisfactions negative values, in games and decision theory for example. There is a danger that we read the assignment of the value zero as an indicator of non-existence, precisely because this is what it often does signify. In the context of the analysis of knowledge in terms of qualitative expectation it is important to realise that the category of indifference is not necessarily empty, and further that we can recognise two special kinds of indifference: positive indifference, representing those objects of knowledge that we have but to which we are currently indifferent, and negative indifference, representing those objects of knowledge that exist but that we do not have and to which we are currently indifferent. One final distinction needs to be made, and that involves the placement of the sub-units of frustration and dissatisfaction into the earlier major, or top-level, category of non-satisfaction.

In general then, the Sift schema provides you with the following categories into which you can analyse your own or others' thoughts about situations, and which represent "knowledge schemas that are built up from past experience to make sense of the world such that our past experience leads us to form expectations about objects or events and from which we anticipate what we are likely to encounter" (Butler & McManus on Ulric Neisser in Psychology: A Very Short Introduction, Oxford).

[Situation description] can be analysed into examples of [How things are], placing them into categories of:

1.1 Positive Satisfaction
1.2 Negative Satisfaction
2.1 Frustration
2.2 Dissatisfaction
3.0 Indifference

And at the same time examples of [How these things could be changed], placing them into the categories of change:

1. Change for the better
2. Value maintenance
3. Change for the worse.

The concept of 'expectation' has a well-defined use in game theory, in which the expectation nucleus or 'act-conditioned pair' is the combination of a 'cost' and a probability, which is then combined mathematically with other weighted pairs in the field of analysis to give an overall value for an event outcome.

There is also a sense of 'likelihood' attached to the ordinary sense of 'expectation' as exemplified in the language of promising in that an object, event or agent, including the speech event or act of making a promise, has promissory value if it is more likely than not to bring about the desired outcome.

Wilfrid Hodges' book, Introduction to Logic (Penguin), has a very interesting section in which he offers a qualitative analysis of likelihood in terms of 'inequality' operators. So, given this gradation of definitions of expectation, ranging from the mathematical, through relational or qualitative logic and the cognitive schema, to the dictionary-exemplified ordinary usage, what are the grounds for believing that 'Sift' analysis is an analysis of expectation? The grounds are that the concept offers a semantic, as distinct from a syntactic, explication or mapping of the properties of the concept, in that it creates categories which are sensitive to the structures in the concept of expectation and represents them in a schema through which it is possible to undertake symbolic manipulation, an essential feature of 'thought schemas' (Luria, The Working Brain, and Fodor, The Modularity of Mind), without over-reducing them to inappropriate logical entities. A complete field of study, for example, is concerned with exactly the problem of how to represent 'expert' knowledge without over-reduction, in the context of knowledge elicitation for 'expert systems', computer programs designed to reason.

The key concept of expectation has been represented in the Sift system in terms of the image of change embodied in the mathematical, relational and usage aspects, via the concept of promissory and threat value as representatives of agents of change. A full analysis of expectation, then, I would argue, leads to the development of an expectation schema, a part of which involves the assignment of likelihood to the elements in its expectation nuclei. For example, when you have classified aspects of the knowledge field for eco-tourism into the Sift categories mentioned above, you can then also assign likelihoods to their occurrence. You can also assign likelihoods to the promissory or threat agents your data surveys produce, and these would contribute significantly to the way in which the primary expectations about eco-tourism are represented in marketing and advertising. (See my earlier answer, below, on analysing the Guinness advertisements.) Attached to the transformational and 'marketing' aspect of expectation is an inferential aspect. For example, teachers often find that setting high expectations for students leads to higher achievement and, conversely, that setting low expectations leads to lower achievement. Expectation in this sense has more to do with creating and sustaining motivation and confidence than with assigning probabilities of exam success.

Similarly, expectation has a 'look ahead' aspect that can lead us to reject our present position if we are dissatisfied with the future position to which it leads us in a form of inference by modus tollens.

Finally, and most important, is the inductive effect or 'lending effect' of promissory value, in which present satisfaction is 'lent' from likely future satisfaction: 'I have satisfaction now because I believe I will have satisfaction later'.

Again, the complete structure of an expectation schema for eco-tourism must take account of the inferential aspects of the concept, to give you some idea of the way that people could make decisions based on the embedded inferential structures.

Neil Buckland


Claire asked:

I am doing a 6—8 page essay on the ethical issues surrounding vivisection, I am getting along with this pretty good, but need to apply two or more ethical theories in my paper. I am thinking of using Utilitarianism and the Categorical Imperative, do you think these would apply well and could you give me some ideas of how these theories would think and apply to Vivisection.

Interesting task.

Utilitarianism is pretty straightforward, at least in theory. Vivisection is morally right if and only if it leads to greater happiness for a greater number than banning it does. There are, however, a number of pitfalls in interpreting this. What is happiness? How do we add it up? How do we know what the sum total of outcomes would be for each alternative? Even more importantly in this case: whose happiness? Is happiness the sort of thing that animals have?

Peter Singer, who is a type of utilitarian, talks more of minimising suffering than maximising happiness. Animals clearly suffer. Therefore, the suffering of animals must be considered, and weighed against any reduction in suffering that research involving vivisection might produce.

Kant's Categorical Imperative has a great deal more difficulty in dealing with moral questions concerning animals, though some Kantians have advanced ways in which it can. This is because the CI only seems to deal with rational beings. The Kingdom of Ends version of the CI, for example, claims that we must treat others as ends and never merely as means. The others referred to here are other rational beings — those who can impose the Moral Law on themselves.

In this view, we can clearly treat inanimate objects as means. We cannot do the same with humans. But where do animals fit? They are not inanimate objects. They are not rational beings. Standard Kantian theory seems to imply that they belong with the objects, and so there is no moral question concerning vivisection. As I said, some Kantian scholars would dispute this, and Kant himself does say that we owe some moral duties to animals (though I can't remember the details now). I would think it is worth your while chasing this up.

Tim Sprod


Arabella asked:

What is a University?

and Gina asked:

We are writing an essay of 3500 words on the topic, What is a University? For example, when Stellenbosch University became a university, it changed from a college to a university. I have to get one big question, but I can't only find one. My idea is: to build it around the student and from there to the knowledge and spirit.

"The true character of a university is the 'will to knowledge'. It is a collection of students who possess the will to knowledge — the will to possess it and still more the will to advance it. A university is constituted by its students, and by this alone." — Shand.

"A university is not a lecture-theatre, or a library, or a laboratory, it is not a building or a place at all, its essence is a frame of mind." — Shand.

Webster defines universities as institutions of higher learning providing facilities for teaching and research and authorized to grant academic degrees. A university differs from a college in that it is usually larger, has a broader curriculum, and offers graduate and professional degrees in addition to undergraduate degrees. The term also refers to the members of such an institution collectively, and to a team, crew, etc., representing a university. Modern Western universities have their beginnings in the 12th century with the foundation of universities such as Paris, Oxford, Cambridge, and the oldest still-functioning university, Bologna. They were called "universitas magistrorum et scholarium". Here the word "universitas" identified the fact that this institution of masters (magistrorum, or professors) and scholars (scholarium, or students) was a company of persons, a community, a body, like all other medieval guilds, organized for the sake of its protection from hostile outsiders. The university, from its origins, was not only a center of discussion but also of critique. It considered issues as objectively as it could, and thus often disapproved, implicitly or explicitly, of policies endorsed by the State and the Church. In an era of authoritarian control by both secular and ecclesiastical authority, this trait surely needed protection.

Another aspect of the university was that of a Studium Generale, or "School of Universal Learning." According to its website, this is very much the mission of the University of Stellenbosch: "to create and sustain, in commitment to the academic ideal of excellent scholarly and scientific practice, an environment within which knowledge can be discovered, can be shared, and can be applied to the benefit of the community".

From these ancient beginnings, modern universities developed and in the nineteenth century the number of universities expanded considerably. The German model of university education with its emphasis on higher degrees (doctorates) has been influential particularly in America, and now universities focus on research as much as teaching.

University education focuses on theoretical analysis and concept-forming, rather than technical skills or development of techniques.

The typical modern university may enroll 10,000 or more students and educate both undergraduates and graduate students in the entire range of the arts and humanities, mathematics, the social sciences, the physical, biological, and earth sciences, and various fields of technology. Universities are the main source of graduate-level training in such fields as medicine, law, business administration, and veterinary medicine.

If you want to build your essay more around the student, the following passage, taken from Robert M. Pirsig's Zen and the Art of Motorcycle Maintenance, might be worth thinking about:

"The real University, he said, has no specific location. It owns no property, pays no salaries and receives no material dues. The real University is a state of mind. It is that great heritage of rational thought that has been brought down to us through the centuries and which does not exist at any specific location. It's a state of mind which is regenerated throughout the centuries by a body of people who traditionally carry the title of professor, but even that title is not part of the real University. The real University is nothing less than the continuing body of reason itself. In addition to this state of mind, `reason,' there's a legal entity which is unfortunately called by the same name but which is quite another thing. This is a nonprofit corporation, a branch of the state with a specific address. It owns property, is capable of paying salaries, of receiving money and of responding to legislative pressures in the process. But this second university, the legal corporation, can not teach, does not generate new knowledge or evaluate ideas. It is not the real University at all. It is just a church building, the setting, the location at which conditions have been made favorable for the real church to exist."

Simone Klein


Amy asked:

In light of the looming possibilities of artificial intelligence and nanotechnology, just what exactly does it mean to be human? If we could create intelligent "beings," would they be tools with our intentions, or is it possible that through emergent consciousness that they may develop their own teleological goals? Would these "beings" warrant moral consideration? Who or what does warrant moral consideration?

I just got back from a conference where this question was posed and answered by Ray Kurzweil (http://www.kurzweiledu.com/index.html), in a manner I find reasonably convincing. He argued that any entity capable of suffering deserves moral consideration. So this would include what animals we could determine actually suffer (rather than behave as if they are suffering — an objection of Descartes which might conceivably apply, say, to insects), and machines we build or cause to be built at some time which also suffer, as best as we can determine.

But one could object that any conscious being deserves moral consideration. The questions then become, 1) can an entity be conscious and not be able to suffer, 2) can an entity suffer and not be conscious? As far as 2) goes, one would have to say no, I would think... but then we get into considerations of degrees of consciousness. Sartre and others have described first- and second-order consciousness, and one might carry that further and hypothesize degrees within those categories. Then we have a much harder question, i.e., how far down can we go and still suffer? No one knows the answer to that.

The next question is, what is "moral consideration"? Assume that a dog is conscious, but not as conscious as a human. Does it deserve less moral consideration because of that? The operative answer, as we see it instantiated, is yes. But that assumes we can say what "less" is, and that we have means of knowing that dogs are less conscious than we are, and that indeed our logic making that connection is correct.

Well, I'm going to leave it here. Books have been written on this type of thing, not to mention tons of sci-fi. On the latter, you might check out P.J. Farmer's stuff; he's fascinated by morality as it relates to aliens.

Steven Ravett Brown

I want to pass on the first question, as it is a bit too big for me! I'll tackle the questions on beings with Artificial Intelligence.

I would go for the second of your possibilities. It seems to me that consciousness must be an emergent property, underpinned by a certain complexity of 'brain' development. While this has to be an empirical question, so we will have to wait for the answer, I don't see any reason why such complexity is only achievable in carbon-based life forms. We are probably a long way from achieving it in our inventions yet, but I would hazard that we will reach it sometime.

Such beings would warrant moral consideration, it seems to me. Any being that is conscious of its own actions and the effects (for better or worse) of those actions on other similarly conscious beings would count for moral consideration. Such a being can have a reflective image of itself as a moral actor, and hence can develop a moral character.

I suspect, though, that there is another requirement for the development of both sufficient intelligence and a moral character: these beings would have to grow up in a community of intelligent beings, maybe beings just like themselves, but equally possibly in a human community. I won't go into all my reasons here, but I believe that the same holds for humans.

Tim Sprod


L asked:

I've only seen "Pythagoras" as the originator of the Hesperus/ Phosphorus distinction. Someone on a mailing list said it was attributed to Parmenides. Is there anything to back this up?

As Pythagoras wrote nothing, it is hard to say how much of the doctrine we know as "Pythagorean" is due to the founder of the society and how much is later development. According to the book Die Fragmente der Vorsokratiker by Hermann Alexander Diels, which can be said to be one of the most reliable presentations of presocratic philosophy, Pythagoras (or at least one of his disciples) was the first Greek to recognize that the morning star (Phosphorus) and the evening star (Hesperus) were in fact one star. After his time it was called Aphrodite, and nowadays we know it as the planet Venus. He was also the first to note that the orbit of the moon is not in the plane of the earth's equator but is inclined at an angle to that plane. Though there is reason to believe that Parmenides was highly influenced by Pythagorean cosmology, I couldn't find any indication that Parmenides discovered the identity of Hesperus and Phosphorus.

Simone Klein


William asked:

A question about belief: If I know 'p' to be true, there would be no need to believe it, since I know it to be true. However, if I know 'p' to be false, again there would be no reason to believe it since I know it to be false. Conclusion: belief should have no epistemological status.

Well, first, there seem to be many people who know things to be true but still manage not to believe them. Perhaps knowledge should imply belief, but does knowledge that something is true imply that one knows why it's true? In the latter case, one might doubt one's knowledge, i.e., that one actually does know. The same would hold for knowing that something is false. Looking at it the other way around, if knowledge is justified belief, then belief itself, without justification, is insufficient for knowledge. So we can set up the proposition: if knowledge, then justified belief; and say that the contrapositive holds: if no justified belief, then no knowledge. But neither the converse (if justified belief, then knowledge, because we can doubt the justification) nor the inverse (if no knowledge, then no justified belief, for much the same reason) is necessarily true.
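The logical relationships involved here (a conditional is equivalent to its contrapositive, but not to its converse or inverse) can be checked exhaustively by truth table. A small illustrative Python sketch, with variable names of my own choosing:

```python
from itertools import product

def implies(a: bool, b: bool) -> bool:
    """Material conditional: a -> b."""
    return (not a) or b

# K = "S has knowledge", J = "S has justified belief".
for K, J in product([True, False], repeat=2):
    # The contrapositive (not J -> not K) agrees with (K -> J) in every case...
    assert implies(K, J) == implies(not J, not K)

# ...but the converse (J -> K) can fail when J is true and K is false:
assert implies(False, True) and not implies(True, False)
print("contrapositive equivalent; converse not")
```

The case where J is true and K false is exactly the case of a justified belief that falls short of knowledge.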

But if we can say, "if no justified belief, then no knowledge", then we have to say that belief plays at least some part in epistemology, since if we believe in something, but cannot justify it, we must say that we have no knowledge of it. And indeed one might consider the whole above discussion as working out some of the more obvious epistemological implications of knowledge and belief (if you take knowledge to be justified belief), which does bring belief into meta-epistemology, at least.

Steven Ravett Brown

Sorry, I can't buy this one. I think that, whatever else 'knowing' entails (and people argue at length about that), it must entail 'believing'. I can't see how you could consistently say 'I know that grass is green' but also say 'It doesn't matter if I believe it or not'. Knowing is a type of belief — belief plus something(s) else. The standard somethings are 'justified' and 'true', but both of these are contestable. I can't see how you can contest the 'belief' part.

Tim Sprod


Ivan asked:

I am trying to determine whether the ultimate goal of species is to survive (continue living), that is, to prevent their extinction. In order to be able to support this, I must answer the question, "Is life good?" My answer is, Yes. My question to you is, "Why is life good?" (or why isn't it?).

This is a very interesting question. I would like to address just one aspect:

A simple thought experiment shows that, if evolution is true, then given enough time every species that is not interested in its own survival will be less successful in evolutionary terms (and will eventually very likely become extinct, especially if conditions in the environment change for the worse or competition increases). This sits well with the fact that in all species we know so far we observe a marked instinct to survive/procreate. Therefore it would seem that, for conscious life-forms, to negate life would be incompatible with long-term survival. This would suggest a) that to negate life (i.e. to decide "life is not good") cannot be advantageous in evolutionary terms (considering the whole species and not individuals), and b) that, being ourselves the result of natural selection, we may be naturally disposed in favour of the decision "life is good". In short, evolutionary considerations would suggest that "life is good" is the right choice both from a rational point of view and from an instinctive/emotional point of view. (In other words, if one had to construct a successful species, one would have to give it exactly this survival instinct: a fundamental "yes" to life.) So under these considerations the answer would be that, given enough time, one would expect most species to show this survival instinct to a marked degree. (In other words, a mutation without such an instinct can always pop up, but it would be 'weeded out' over time.)

In summary this shows (if true) that the question "why is life good" in this meaning is somewhat of a tautology — life must be considered good by definition, otherwise there can be no life.

*This is not meant in any way to belittle individual suffering (which may lead to a "no" to life) or indeed the individual's right of choice in any way. (However, consider this question regarding individuals: if, after some catastrophe, there were only one man and one woman left on Earth, would this situation not impact on their personal plans for life, marriage, offspring etc.? And if it did not, i.e. they preferred not to survive/procreate, would that not prove the above-mentioned theory, through the extinction of a species whose individuals lack the survival instinct?)

P.S. Background information: I am a pharmacist with a Dr in natural sciences, and currently studying for the BA in philosophy at Birkbeck, London. — I find the questions very interesting and hope I can help a little in exchange for the information I find for myself...

Helene Dumitriu


Emma asked:

What is the role of justification in distinguishing between knowledge and true belief?

Offhand, I'd say that just because a belief is true doesn't necessarily imply that you know why, i.e., that you know its justification, i.e., that you know that it's true. So knowledge, as justified belief, can imply that you also know the justification, depending, of course, on how you understand "justification". On the other hand, can something justify your belief, so that it becomes knowledge, yet you don't know of that justification? Well, why not? Then you'd have knowledge but not know you have it. But in that latter case, you could not distinguish between knowledge and true belief.

Steven Ravett Brown

Perhaps I might expand on this answer a little?

Justification is typically understood to be constitutive of the difference between knowledge and true belief. For a pithy explanation why, see Kenneth Stern's answer to a related question at: Answer page 15, Question 19

The account of knowledge as justified true belief is sometimes called the 'tripartite definition' — justification, truth and belief being the three parts. It originates with Plato (see Theaetetus 201 and Meno 98) and it is very widely accepted as at least giving necessary conditions for knowledge. (Although some social scientists refer to any justified belief as knowledge, irrespective of whether it is true, this is a deplorable habit which leads to unnecessary confusion and hostility when they address wider audiences).

The tripartite definition has been the focus of much intense scrutiny since Edmund Gettier published apparent counterexamples to its sufficiency in a hugely influential and charmingly brief paper 'Is justified true belief knowledge?' Analysis 23 (1963), reprinted in A. Phillips Griffiths Knowledge and belief (Oxford, 1967) and many other places. Gettier cases involve beliefs which are true and apparently justified, but which do not seem to be knowledge.

One of Gettier's own examples should make this clearer: Smith and Jones are to be interviewed for the same job. Smith observes Jones, whom he has good reason to suppose is much the better qualified candidate, nervously counting his pocket change. Hence he forms the justified belief that the man who will get the job has ten coins in his pocket. However, unbeknownst to Smith, he too has ten coins in his pocket, and, unexpectedly, he gets the job. So Smith's belief is true and justified, but it doesn't seem like knowledge.

Here the truth of the belief and the justification for believing it are not linked in 'the right way'. Needless to say, it is a far from simple business to specify what the right way is in sufficiently watertight terms.

Andrew Aberdein


Michael asked:

I am preparing an exercise with the topic: "Philosophy — has it any value today?"

Yes, this is a tough one but I just wonder if the basics of philosophy should be reconstructed to meet present day issues at a level equivalent to them and not using answers developed during past days when present problems did not exist.

My first reaction is that most present _philosophical_ problems did exist in the past. They are the problems that arise from the human condition, just as much now as in the past. Clearly, grappling with them is just as valuable now as it has ever been.

My second reaction is that you are probably referring to the application of philosophical positions to other sorts of problems. Again, though, I would think that the majority of these other problems are basically pretty similar now to what they were in the past. Human condition again.

So the third reaction is that you might mean the application of philosophical positions to problems that arise from new technologies. While some of these are just variants on older problems (e.g. euthanasia, abortion), others may be more genuinely new (e.g. human cloning, artificial intelligence). As I say, these are a small sub-set of issues to which philosophy can be applied, so we can clearly say that philosophy has a good deal of value today, even if it can't deal with these. However, I think that these problems are susceptible to approaches drawn from the tradition.

As to reconstruction, I'm not sure what that would entail. In one sense, philosophy is continually reconstructing itself — new movements (pragmatism, post-modernism to name a couple of the last century) appear that reconstruct philosophy. On the other hand, they all take a philosophical approach, so the broader subject is evolving within the area that it has always covered. Approaches (as opposed to answers) developed in the past are often applicable to new problems.

Finally, those who reconstruct philosophy generally end up in all the textbooks. If you have an idea of how to do it, go for it! But I suspect that actually reconstructing philosophy is a lot harder than suggesting that it ought to be done.

Tim Sprod


Felicia asked:

What does Kant's categorical imperative offer to ethics? What are its drawbacks?

What is the advantage of a virtue ethics approach?

The categorical imperative brings together the idea of the objective requirements imposed upon us by others and our ability to act towards an end in itself, with the notion of a pure will free of subjective inclination.

However, I start with the drawbacks. Kant's idea of the rational free will has been criticised on the ground that it is detached from the motivational will. Normally, an agent acts on reasons that have some force with him and fit in with his overall goals, desires and intentions (Gilbert Harman in Relativism Cognitive and Moral). But when we act on the categorical imperative, we simply act towards an end, performing an act on the basis of duty. Another form of this criticism is that if we internalise social mores, so that duties become instinctual, we are not acting according to rational principle but from subjective or personal motivation (Bernard Williams, Ethics and the Limits of Philosophy). It is also pointed out (Williams) that the categorical imperative doesn't really reflect the nature of our moral agency: we don't see ourselves as legislators when we act morally, and there doesn't seem to be any reason why we should. On the other hand, some philosophers (Tom Scanlon, What We Owe to Each Other and Roger Scruton, Kant) find that the categorical imperative reflects our moral nature. Firstly, there are the general points that we are governed by rational principles and that we do aim at an ideal ethical community; secondly, our moral intuitions about reverence for others, the worth of duty, the struggle to overcome desire and the sense that morality is not a matter of self-interest are reflected in the categorical imperative.

I believe it is understood to be a valid objection to Kant that we don't possess an autonomous rational will and that we cannot act as abstract rational agents towards an end which is non-hypothetical and not based upon our own goals or desires. However, Kant's characterisation of the rational will sets up the principle of the categorical imperative as based on an aspect of our common nature. The categorical imperative as a maxim is both a principle and a motive, and the objection is that it cannot be a motive. (There is also the objection that the CI is an empty principle without material content, but that has been dealt with before on this site and you can use the search engine at philosophos.org.)

Personally, I see no problem with the idea that when we act on the categorical imperative, our will is in accordance with a rationally universalizable principle, since it is only then that we are acting without reference to goals and desires, and this is what a good will is. Sometimes we simply do act for the good of other persons, and it is at such times that we do not hold reasons that refer to our own interests and goals. Those who object to this would hold that when we act from non-self-interested reasons, simply for the benefit of another, this is not grounded in a rational autonomous will, but in desire or instinct, which for Kant are non-moral since they don't involve obligation. But an instinct towards an action that could be made into a principle in the form of the categorical imperative is the nature of the operation of a good will: there is nothing hypothetical or conditional upon desire involved. The categorical imperative embodies the idea of the purity of morality, and this is what Kant intended when, right at the beginning of the Groundwork, he says that the concept of duty has no application to a good will. Considerations to do with the autonomous rational will and legislation have no bearing on particular moral acts, but only function to set up a characterisation of morality as having a command upon us because of human rational capacity.

A more serious drawback, as far as I'm concerned, is that the categorical imperative has force because we are rational beings and moral principles are only directed at human beings. If our ethical attitude to animals is based in compassion and the theory doesn't extend to the recognition of a moral call from the animal kingdom then it is seriously species-ist.

I'm not sure whether you are saying that Kant's is an ethics of virtue. Duty is regarded as a virtue, but so are benevolence and compassion, and all are Aristotelian virtues of character which need not underlie the Kantian moral act. As Kant recognises, we are not good all the time, even if we do develop virtuous qualities. However, it is possible — now and again — for us to act purely and virtuously. The advantage of Kant's approach to ethics, as I understand it, is that it gives a highly realistic description of our moral acts and our moral nature.

Rachel Browne


Courtney asked:

What are the karmic effects that are implicated on one who commits suicide?

If you're a Christian of any sort, absolutely none, since Christianity does not have a concept of karma. It's a mortal sin for some Christian sects, but that's another story. If you're Islamic, absolutely none, for the same reason. If you're a Zen Buddhist, none as far as I know. If you're a Taoist, none as far as I know. If you believe in the Greek gods, absolutely none. Same with the Norse gods, with Voodoo, and various Native American religions, and so forth. So I guess you have to have a rather specific set of beliefs to take karma seriously. My question is, why hold those beliefs, in particular? Let's generalize. Why not be able to change your beliefs to fit your circumstances, and find the set of beliefs that will bring you and those around you the most fulfilling life? Most people are not able to do that. Why not, do you suppose?

Steven Ravett Brown


Reza asked:

I'm a student and am actually doing a module on philosophy of education. I would like to have a better idea on "learning to be".

I take it that you are referring to the recent UNESCO report on education — the Delors report, Learning: The Treasure Within (1996). It had four strands: learning to know, learning to do, learning to be and learning to live together.

I take the intention of these (roughly) to be about the traditional academic disciplines, about vocational training, about personal development, and about social, moral and political issues. The one that interests you, therefore, is the third — personal development.

I'm not entirely happy with this interpretation of the four strands. It is to my mind somewhat superficial. Here's what it does to the third: it can be about engaging in physical education, learning sports and leisure pastimes, perhaps drug and sex education, maybe even the fine arts subjects — those subjects that are about forming leisure interests and about personal development. This superficial interpretation of the four strands looks at the present curriculum and tries to split it up into the four areas. The first gets the 'serious' subjects, the second those subjects that are narrowly focused on the workplace, the fourth gets the 'citizenship' oriented subjects (those few that exist in the present curriculum) and the third is a grab bag for anything left over. This includes those subjects that have relatively recently appeared in school curricula because 'schools ought to be doing something about ...' — drugs or vandalism, or the lack of manners in young people, or youth suicide or... All of these are worthy, but they are also often a shifting of responsibility from parents and communities onto schools.

The way I would prefer to characterise the third strand, though — becoming a person — means it would need to take centre stage. All the others would contribute to it (although, as I will explain, this is to understate the role of learning to live together).

This is because 'learning to be' now refers to becoming reflectively able to pull together the disparate elements of one's self, and to create a coherent character of which one can be proud. Everything else we do can contribute to this. Much of what is placed in the 'learning to be' area above then gets shifted to the first two categories. We don't make the snobbish distinction between learning to know about maths (serious, traditional, a 'proper' school subject) and about our health (marginal, new-fangled, a 'trendy' subject), or between learning to do carpentry (get you a proper job) and learning to 'do' footy (just fun).

Further, we have to take learning to live together more seriously, because in order to become ourselves, we must learn in a community. This is the insight due initially to Lev Vygotsky: "What the child can do in cooperation today, he can do alone tomorrow". See Jerome Bruner's "Actual Minds, Possible Worlds" (1986, ch. 5) for a good brief introduction, or my own "Philosophical Discussion in Moral Education" (2001, Routledge) for a fuller treatment. As that title indicates, I believe that to implement learning to be properly, we need to engage children in philosophical discussions in classrooms through a community of inquiry.

Tim Sprod


Nick asked:

My question is about the relationship between the mind and the computer. My thesis is that having a mind is a necessary consequence of having a body that is designed to receive information in a variety of ways, through the senses, interactions with the environment etc. In this context, there has to be an organising principle that makes sense of all this incoming information — in other words, the mind is there because it needs to be. Do you think that a computer could ever be made to behave as a mind, given that it lacks the fundamental element of 'need'?

You mean could a computer behave as a person? In certain respects, possibly all, a computer could behave as a person. It couldn't be a person though. The human mind may have developed because of need. But perhaps also because of desire to continue to live, individually and collectively. Could a computer have such a desire? Maybe. But consciousness would be essential to a desire for survival and I don't think self-consciousness is programmable.

And even if a computer could behave like a person, it couldn't be a person as there is more to a human than determinate programmable input can create. How can we programme in the ability to love, for instance? We can't even define it.

Rachel Browne


Korhan asked:

I'm looking for some knowledge about a man Macywelii or Mac Gawelli or something like that...who is that man and did he say something about:

"To reach our target everything is permissible."

Please I need your help about that...I need the name of that guy or to whom that idea belongs.

Niccolo Machiavelli (1469-1527) was an Italian political philosopher and political functionary (secretary to the war council of the Florentine Republic, for example) who wrote a book called The Prince, which certainly advances ideas like the one you quote. His views are (as one might expect) somewhat more complex than that quote might indicate.

Tim Sprod

The famous claim that the end justifies the means is commonly attributed to Niccolo Machiavelli, whose greatest work, The Prince, was written in 1513 and published after his death in 1532. The Prince is considered one of the most influential works of Renaissance philosophy.

Machiavelli (1469—1527), the Italian statesman and political theorist, turned political thought in a new direction.

While traditional political theorists were concerned with moral evaluation of the state in terms of fulfilling its function of promoting the common good and preserving justice, Machiavelli was more interested in empirically investigating how the state could most effectively use its power to maintain law and order (political science). The claim that the end justifies the means seems to advocate the use of even immoral means to acquire and maintain political power. What Machiavelli seems to mean by this is, that sometimes in order to maintain law and order it is necessary for a ruler to do things that, considered in themselves, are not right, but which, considered in their wider context, are right because necessary to prevent greater evil.

Simone Klein


Jessica asked:

How is Heidegger's "Letter on Humanism" really anti-Humanistic?

"Letter on humanism" seems to be humanistic since it centres on the nature of man and Heidegger describes it as humanistic, but only as "primordially" so, prior to all conceptual and theoretical accounts of mankind. The Letter is not humanistic in an anthropological, biological, social or ethical sense because these are theories constructed by man in his language and do not contain truth about being, since being stretches into the future, and man himself, and the theories he constructs about himself do not give shape to the future.

Heidegger does claim that thinking about the truth of being is ethics, but not in the sense that philosophy gives to ethics. Philosophy centres ethics on the subject in his relations with others and particular actions, becoming a social or psychological science, or a theory about God, all of which is humanistic in the reductive anthropological sense of humanism. But being spans the spiritual and the real and, for Heidegger, ethics lies in this wider realm. This realm cannot be reduced to theory, or it would not be the realm of being, which is best glimpsed, as happens in poetry and literature. So the Letter is an attempt to establish a different form of humanism, a different stance on man. However, it is not only anti-humanistic in being anti-anthropological and anti-biological, but it loses sight of the individuality of man as a subject in favour of man as having being in a general, non-individualistic, form. This is an anti-humanistic outcome that Heidegger might not have intended.

Rachel Browne


Angie asked:

Please could you help me with the following...if everything is equal and opposite (which seems to be the case) how do we find the central place? Does it exist? As everything is equal and opposite it can't can it?...but everything has a central point...so how does that work??

I don't understand why you say that everything is equal and opposite. Equal and opposite to what? Are you saying that, for everything that exists, there is something else that is equal to, and opposite from, it? If so, I don't see why. Some things are the opposites of others, but some aren't. Some are equal to others, but some aren't. Some things lie in between other things.

Tim Sprod


Lule asked:

According to the writings of David Hume, where do we get our idea of liberty? Are liberty and necessity opposed? Can they be reconciled? If they are not opposed to each other, what is each opposed to?

In the Enquiries Hume says quite plainly, "By liberty, then, we can only mean a power of acting or not acting, according to the determination of the will." The word 'power' was commonly used by Hume to mean 'causal efficacy', i.e. the 'power' that A has to cause B. He goes on to offer the example that we can choose either to stay at rest or to move, a universal hypothetical liberty afforded to anyone who is not a prisoner; and it is not a subject of dispute.

We are here involved with Hume's notion of causality: just as causation applies to material objects in space, so, according to Hume, it also applies to mental or internal events. His approach to this topic is epistemological. He questions, not whether there is causal necessity, but what we can know about it. Empirical knowledge of juxtaposed events is easily obtained through the senses; Hume, however, claimed that the only knowledge available to us is just this sensa, i.e. what is given to us through the senses. "The impulse of one billiard-ball is attended with motion in the second. This is the whole that appears to the outward senses... There is not in any single particular instance of cause and effect anything which can suggest the idea of power or necessary connexion." The notion of 'cause', according to Hume, is a metaphysical concept: we are not made aware of it through the senses; it is a mental construct applied or added to the received sensa. On Hume's view, two kinds of thing are causally connected if and only if things of the one kind regularly follow things of the other. However, there could be a time when they did not! There is no guarantee that the sun will continue to rise every morning as a causal effect of the earth's motion. More pertinent to your question is the sometimes unpredictable behaviour of individuals, in either exercising liberty or demonstrating what is considered to be determined action.

Hume asserts that the absence of any feeling of constraint or compulsion when we take ourselves to be acting freely is no evidence in favour of libertarianism: for it to be, we would have to be thinking of causality in terms of force or necessitation. As we have seen, Hume is sceptical of any such force once it is realised that "we know nothing further of causation of any kind than merely the constant conjunction of objects." Reflection seems to reveal that what we call free actions "have a regular conjunction with motives and circumstances and characters," so as to enable us to draw inferences from one to another.

From Hume's interpretation of causal events in our everyday thinking, it seems that the answer to your fundamental question, 'Where do we get our idea of liberty?', lies in the notion that some decisions are taken in the absence of feelings of compulsion or constraint. Failure to feel forced to act when I think of myself acting freely has no bearing on whether the act was caused or determined in accordance with physical law. This answers your question: Are liberty and necessity opposed? Free actions are determined by motives conjoined with circumstances, and the determining factors themselves are the effects of other determining factors. But this in no way undermines the everyday distinction between free and unfree actions. A free action is not an uncaused or undetermined action; it is, rather, an action which is determined by our own choice, as opposed to an unfree action, one imposed on us by the choice of another or by circumstances we have no control over, and "this hypothetical liberty is universally allowed to belong to everyone who is not a prisoner and in chains."

To make things a little clearer, Hume states that free will and determinism are compatible. The compatibilist position amounts to saying that all actions are determined, but that some (the free ones) are determined from within, while others (the unfree ones) are determined from without. He also claims that the libertarian is in effect thinking of free actions as uncaused ones, but, as he says in speaking of voluntary actions, the ones for which we are to be held responsible and for which we are praised or blamed are ones which are caused by us.

References and quotations are from David Hume, Enquiries Concerning Human Understanding, Third Edition, Oxford University Press.

John Brandon


Jean (Mr) asked:

Is there a connection between The Cloud of Inconnaissance, the writings of Jan Van Ryusbroeck and those of Gregory Palamas? Are their ideas similar? Are they theologians, philosophers or mystics?

Sometimes, I ask myself whether their respective messages were or were not warnings announcing the coming of Renaissance, Reformation and today's anthropocentrism.

The anonymous author of the Cloud of Unknowing (2nd half of 14th century), Van Ruysbroeck (1293-1381) and Palamas (1296-1359) came from very different climes: England, Flanders and Mount Athos respectively. Palamas, as you probably know, was the head of a large, wealthy and influential household in Constantinople which included a retinue of servants. When he decided to renounce the values of worldliness and take the monastic garb, his mother, his brothers and sisters and all their servants were forced to do likewise. They went into monasteries around Constantinople, while Palamas went to Mount Athos.

I tend not to like 'connecting' theological ideas which belong to very different climes and places, because then you have a merely abstract connection, which is against the spirit of the very ideas that ostensibly connect. However, the question still arises: how do you explain the similarities? The explanation is a shared tradition. All three writers presuppose a Christian Platonist metaphysic come down through the Fathers of the Church. You might like to read some of Père Garrigou-Lagrange to get a sense of this common structure. His Perfection chrétienne et contemplation and, particularly, The Three Ways of the Spirit (1938, tr. 1950) are worth reading in this regard. Theologians, philosophers or mystics? I would argue that thinking out of that Christian neo-Platonist metaphysic makes you all three, because i) it is theological, ii) it is also philosophical, and iii) all mysticism rests on some concept of this structure. That is why 'modern mysticism' is an oxymoron. I doubt the spiritual writers you refer to had a prophetic sense of things to come. Rather, out of their profound understanding of the self as a relation between itself and a creator comes a knowledge which, projected, is a fore-knowledge of what happens when this understanding can no longer be upheld. They would not have conceived the Renaissance as a renewal, but as a falling back into heathen and pagan ways of thinking. The Reformation would have been unthinkable, even as it was for so many at the time.

Matthew Del Nevo


Edward asked:

"Can there be laws of war?"

I think this can be interpreted in two ways. It could mean, "Are there any patterns in war?" Or, "Should there be guidelines dictating both whether it is valid to enter a war and how a war should be waged?" I have chosen to assume it is asking the latter. This is what I have done so far:

I have dipped into Thomas Nagel's Mortal Questions which has a fascinating essay on War. Whilst listening to BBC Radio 4's 'Today' Programme, I heard that a philosopher called 'Grontius' had some thoughts on this although I have not been able to find out what he wrote. Would Aquinas be relevant?

I might start with a synopsis of how wars begin (with the help of A.J.P. Taylor's How Wars Begin). This would bring me on to the point that they begin from a breakdown of diplomacy or of laws. Thus it would seem ridiculous to suggest that laws can dictate whether a war should be and how it should be waged.

The next paragraphs would discuss the different standpoints: absolutist and utilitarian. I would need to back them up with original and convincing historical examples. I could bring in the Christian standpoint on war and the criteria that would need to be fulfilled for it to be valid.

Any source suggestions, ideas, comments on structure would be greatly appreciated.

Just a couple of suggestions. The philosopher you heard mentioned is Hugo Grotius (1583-1645), an important early theorist of the just war (briefly, he claims a just war must be fought in a just cause, and fought using just means). Well worth chasing down; and yes, Aquinas's discussion of just war is also relevant.

I don't see that it is ridiculous to say that there can be laws of war, just because the war is caused by the breakdown of laws. There can be different sets of laws for different purposes. We are familiar with this idea. If, for an individual, the laws of society break down, we can put them in jail where different laws apply. Or (probably more convincingly) if civil law breaks down, the army can impose martial law. So we can have a set of laws that govern peacetime, but which are replaced with the Rules of War when war breaks out.

Tim Sprod


Debbie asked:

In which philosopher's ethical work can the idea of "the practice of the presence of God" be found?

This is the title of a work by Brother Lawrence (c.1610-1691), The Practice of the Presence of God, tr. John J. Delaney, Image Books, NY, 1977. He, I believe, made this phrase famous in our time. But Self-abandonment to Divine Providence by Jean-Pierre de Caussade presents the same kind of understanding. It is better known, a classic of Christian spiritual writing, available in English in various translations. Also Fenelon's works are good in this regard, if you can find them in translation. If you can get hold of it, The Spiritual Letters of Dom John Chapman OSB, Fourth Abbot of Downside, ed. Dom Roger Hudleston OSB (Sheed and Ward, 1944) is a brilliant intimate view of the subject. In it he gives instructions for the practice of the presence of God to religious, whose job it is to engage in and realise this worthy practice. None of these writers are philosophers in the strict sense — they are philosophical — so they may not be what you are searching for.

Matthew Del Nevo


Ferhat asked:

Is Hobbes right about human nature? If there were no legal restraints, how would human beings behave toward one another?

Why is it that private property and other material goods cannot be "shared" equally by everyone? Why cannot honor or fame be shared equally by all?

I don't believe that Hobbes was right. Hobbes took, as his starting point, that humans can be considered "as if but even now sprung out of the earth, and suddenly, like mushrooms, come to full maturity, without all kind of engagement to each other". His idea of human nature itself springs from this radically disengaged picture of humans.

But humans do not arise in this way. As Seyla Benhabib points out, "the subject of reason is a human infant whose body can only be kept alive, whose needs can only be satisfied, and whose self can only develop within the human community into which it is born. The human infant becomes a "self," a being capable of speech and action, only by learning to interact in a human community. The self becomes an individual in that it becomes a "social" being capable of language, interaction and cognition. The identity of the self is constituted by a narrative unity, which integrates what "I" can do, have done and will accomplish with what you expect of "me," interpret my acts and intentions to mean, wish for me in the future etc." All of this entails that humans are not egoistic pure rational calculators, but connected individuals who have empathy and commitments to others. See also Mary Midgley for a good critique of Hobbesian views.

Without legal restraints, humans would probably act with hostility and nastiness to some, and great kindness and care to others — as we do with legal restraints. I suspect that we would work out some rules pretty soon anyway.

I don't know the answer to your second set of questions. I suspect humans will always need to feel that they own some things.

Tim Sprod

I do not wish to inhibit Ferhat from asking questions, but I would encourage him to sharpen them in order to get more satisfying answers than what follows. His first question is not specific enough. No great philosopher was 100 per cent wrong about human nature. What specific Hobbesian claim about human nature does Ferhat have in mind?

His second question almost answers itself: without legal restraints (penalties for engaging in certain behavior) there would probably be more of that behavior. A better question might be, What behavior, if any, ought to be legally penalized?

His third question is also not well-framed: there logically cannot be private property "shared equally by everyone." Perhaps the intended question was: under what circumstances, if any, should an individual have the exclusive right to control and dispose of a given material thing? This is another way of asking whether the term "private property" can refer to something real. I'm not sure how "everyone," all six billion of us, can share something equally in any meaningful way. The only exception would be goods that are abundant, like air: I can breathe all the air I need without diminishing anyone else's supply. Scarce (or non-abundant) resources, however, have to be transformed or produced at someone's cost in scarce time, scarce energy and other scarce resources. Since that cost is not equally shared, it is not clear why such goods should be enjoyed equally. Finally, honor and fame are merited by individuals and consequently cannot be shared by non-meriters.

Anthony Flood


Chris asked:

Heidegger seems very similar to medieval thinkers — there are parallels between the coincidentia oppositorum and Heidegger's notion that truth and falsity lie at the same essence. Also, I know that Heidegger did his essay on Duns Scotus. Heidegger has also been quoted as saying that he would close down his thinking shop if ever he were called into the faith.

My question is threefold:

  1. Is Heidegger trying to establish a revival of medieval thought without the emphasis on God?
  2. If so, do you find his attempts convincing?
  3. Furthermore, how could such a 'religious' person support Nazism?

I expect you know John Caputo's early work on Heidegger: The Mystical Element in Heidegger's Thought (1978) and Heidegger and Aquinas (1982). Caputo became disoriented about truth and method in Demythologizing Heidegger (1993) and then dramatically converted to Derridean theory, which of course brought him the kind of success reserved for such things. Heidegger does not believe a return to the Medieval, or to Medieval ways of thinking, is possible, even were it desirable. He is very Hegelian (after his own unique but profound manner). He reiterates Hegel's point that we are at the end of history, only he is better heard and understood, perhaps. This, in his interpretation of Hegel, does not mean that history stops like a clock, because that is not how the being of time is. What it means, Heidegger thinks, is the end of metaphysics. Heidegger did not think the end of metaphysics meant theology had to shut down or that God was finally 'disproved'. We need to think about God otherwise, was his view. How so? Out of the question of Being. The Seinsfrage, Heidegger's one and only question, was the key to the way ahead.

Yes, I think there is something to be said for a thinking in ontology which is irreducible to Kant-based epistemology (i.e. a theory of knowledge that would subsume ontology and ontological discourse in its entirety). I think the best commentator on Heidegger is Levinas, who basically thinks that Heidegger's redemptive reasoning is not 'otherwise' enough, that we need a reasoning "beyond totality" and "otherwise than being". Most commentators write about Heidegger's philosophy, which Heidegger, while he was alive, said again and again was a mistake to do, and he never did it in his discussions of past thinkers. For him the history of thought was contemporaneous, all of it equidistant from that which is most worthy of thought (in a play on Ranke). As an attempt, and I think you have used the right word, it is thought-worthy (to use the Heideggerian expression). As for Nazism: Heidegger's silence about the Holocaust has mostly been interpreted as a denial of any engagement and implication of guilt, but I believe in all sincerity that it shows exactly where he is at a loss for words, and where, therefore, thought needs to begin, even thought that would think out of his thinking. Emile Fackenheim's To Mend the World (1982) is therefore the place to start reading. That Heidegger was involved in the early 30s in the Nazi movement, and that he then distanced himself from it, though not far enough for witch-burners, is a fact. That he received Holy Unction and died a Catholic is also a fact. None of us are innocent.

Matthew Del Nevo


Christopher asked:

What do you call a question that someone asks you in reply to something you have said — something that you think (and in reality, is) blisteringly obvious and self-evident?

To give an example, I once asked someone about his obvious New York Brooklyn accent and he replied, "Do you think I have a thick Brooklyn accent?" Is there a phrase in philosophy (or perhaps rhetoric) for a question posed like this in which the answer is decidedly yes to everybody except the person who asks the question?

A short answer is that I haven't found a quick answer for you, except one you may have already thought of and dismissed, but I have mentioned some readings you might want to follow up. I have also provided a long answer, which might indicate that the response you received was more complicated than could be adequately treated by a single phrase or name. Part of the answer leads us into the fields of language, communication and ethics. The English philosopher R.M. Hare suggested that most sentences bearing an ethical message contain two elements, a phrastic (propositional) element (p) and a neustic (force/mood) element (n), on which there can be p-p logical properties and n-n logical properties even if we are unhappy about claiming logical properties for combined (np) elements. In particular he was approaching the problem of not deriving an 'ought' from an 'is'.

He suggested the pseudo-sentence form:

  • Your doing A, Yes (and not No) = Command and Disagreement Denial.

  • Your doing A, No (and not Yes) = Negative command and Disagreement denial.

  • My doing A, Yes? = Permission request and expected assent/agreement.

  • It is P, Yes! = Strong assertion and counter evidence denial.

  • It is P? Yes! = Assertion and expected agreement, possible but unlikely dissent.

We can group communications like this into three blocks:

M(0) =

1. Sender message M(1)

[Sender Asserts to Receiver an act or proposition]

2. Expected receiver response: M(2)

[Sender Asserts to Receiver response message.]

3. Expected receiver compliance: M(3)

{[Sender Asserts to Receiver Sender-Receiver Disagree if expected receiver response denied] But

[Sender-receiver Agree if expected receiver response affirmed.]}

If we look at the actual response to the question we can see that it was in the form of another question:

"Do you think I have a TBA?" (TBA = Thick Brooklyn Accent)

which came as response to your question in P-N form:

"You're having a TBA? Yes!, Comply!"

Your received response could be translated into P-N form as:

"((My having a TBA? No!, Noncomply!), Yes!, Comply?)"

In which you have been challenged to disagree with the receiver's response, which has been embedded into a new question to which you have been expected to assent. You could, if you like, call it the De Niro response: 'Are you talking to me!?' (Polite English version).

From the earlier systemic structure you would also be expected to understand that failure to comply is likely to end in interview termination and possible generalised disagreement with sanctions. I think this was also the drift of De Niro's speech.

You could also construct a response subtext profile in terms of the tacit satisfactions, non-satisfactions, promissory and threat values. This is an approach I am trying to systemise through what I have called 'SIFT' analysis, in which pn junctions are the basic unit of qualitative expectations. The logical conjugations of these produce the static elements (satisfied/non-satisfied) and the dynamic agents of change that act on them (promissory and threat agents), which together produce an expectation field through which a situation may be perceived and acted on. In terms of your situation, the receiver's response field could be represented as follows:

Positive Satisfactions: 'I am happy with my accent.'

Negative Satisfactions: 'I don't expect it to be mentioned.'

Frustrations: 'Any mention of my accent.'

Dissatisfactions: 'Differentiating my accent reveals my roots and weakens my professional status and wrong foots me.'

Positive Indifference: 'I am not inhibited by my accent.'

Negative Indifference: 'I don't think with an accent.'

Promissory Value: 'Reduce my embarrassment by not reaffirming attention to my accent and comply to continue the interview.'

Threat Value: 'Increase my embarrassment by affirming my accent and non-comply with the answer I want and we discontinue this interview or continue with animosity/antagonism possibly leading to general disagreement with sanctions (possibly terminal in the extreme De Niro case)'

In reduced heuristic 'Sift' form the response could be understood as the double question:

("Has your question improved the interview? I don't think so!") in conjunction with:

("Will countering my tacit 'don't mention my accent assertion' make this interview worse? I think so!")

The name of the category of response I think you may have thought of but dismissed is that of the 'rhetorical question', in which a question is both asked and answered by the same individual in one sentence. This does seem to fit the bill, but it also hides other interesting complexity.

For example the response you received has some characteristics of a 'performative utterance', the most general formulation of which describes it as an utterance in which, 'saying it, makes it so', like 'I promise to...'. In the saying you raise an expectation of a specific performance and simultaneously commit yourself to the implied performance and consequent sanctions on failure to execute. I think though that not all aspects of 'performatives' have been thoroughly explored in the literature of the field. For example such utterances simultaneously name an act and exemplify it and as such could be considered to contain a 'demonstrative act' that provides a clear paradigm or central case from which a number of 'prepackaged' inferences can be unfolded. They also have some of the unassailable characteristics of what used to be the logician's holy grail, the analytical sentence, from which other indisputable truths could be derived and against which no blaspheming counter evidence could be produced.

Your respondent has sent you back your original message embedded in two other messages, the next layer of which denies communication (1) and expects your compliance and your completing of the blanks in the next/top layer communication (2) to produce communication (3), in which you have complied with your respondent's compliance request and agreed with your respondent's evaluation of your initial assertion, i.e. that non-TBA is the case or indifference to TBA is the case. In other words, the multilayered response asked and answered the question, demonstrating the required response, which you could perform by an indifferent or non-response and continue the interview without sanction, or terminate the interview, since denial was in effect impossible.

Of course if it was said with supporting facial expression and tone your received response could have been ironic dark humour given that you both probably have a TBA.

Having said all of this, I hope that you have received this message without it engendering a feeling of extreme terminal prejudice towards the sender!


R.M. Hare The Language of Morals Oxford 1952
M.A.K. Halliday & R. Hasan Language, Context and Text: Aspects of language in a social-semiotic perspective (OUP)
J.L. Austin How to do things with words Oxford 1962
J. Searle Speech Acts Cambridge 1969

On rhetoric, fallacies, informal logic:

I. Copi & Burgess-Jackson Informal Logic
A. Fisher, Ed. Critical Thinking, First Conference Proceedings University of East Anglia 1988

Neil Buckland


Lindy asked:

What is the difference between natural theology and revealed theology according to Aquinas?

Natural theology is what we would now tend to think of as philosophy, in that it is what is revealed to the light of natural reason, e.g. not to murder. Revealed theology comes from 'outside' or 'beyond' reason, e.g. to honour your parents. It is what God teaches us that we might otherwise never know. Analogous to the Christian distinction between natural and revealed theology, and underlying it, is the more ancient rabbinic distinction between the Noahide laws, the 7 laws that we have in common and which bind us as humans, and which must bind us if the world is not to end in disaster (as in the time of Noah), and the Ten Commandments that were handed to Moses on Mount Sinai. The first is a natural revelation of the law, the second a divine revelation. The fact that there is overlap between what man can discover for himself and what only God can reveal is interesting, for it tells us something about the relation of the divine to the human — that a person is (potentially) god-like, that humanity is unique.

Matthew Del Nevo


Ana asked:

I'm a Portuguese student and my question might be difficult to answer: does science contribute to the search for the meaning of life? Does it give reasons to live and meaning to our lives?

This is a very interesting question that touches upon a very wide range of issues. My belief is that whilst science is incredibly effective in providing us with one particular type of knowledge, the assumptions upon which it is based prevent it from answering other questions. When we think that science is the only way of gaining objective knowledge we seriously restrict our quest for a total understanding of reality.

Science can a) tell us about the workings of the natural world, b) help us make predictions concerning the natural world, and c) provide the basis upon which incredible technological advances rest.

Science undoubtedly performs these tasks extremely well. As a rigorous and systematic method of inquiry, science is unique. It is one of humankind's great achievements that has benefited the lives of us all.

Science is however based on certain assumptions that render it incapable of answering other questions, such as the meaning of life. Scientific knowledge is based on a "split" between the realm of the observer and the realm of the observed. This split has its origin in Descartes' division of the world into mental substance and extended substance (between mind and matter, basically). Within this division the observer (mind) is the locus and source of all value. In contrast, the observed (matter) is value-less. It exists without intrinsic value. Any value it has must be imposed on it from the observer. The observer can manipulate this value-less matter in any way she or he chooses in order to understand its functioning. This is what happens in an experiment.

The scientist's manipulation of this matter, if performed properly, will give us information about how this matter functions. As a result of this it may be possible to make predictions about how we expect this matter to behave and we then may be able to use this information to build new technology.

It would, however, be absolutely impossible for the scientist to derive any objective meaning to life or any other philosophical knowledge from the data she or he collects. The "raw material" she or he works with is, by definition, value-less. To derive any philosophical truths from this data would therefore be a mistake. Any philosophical knowledge must come from "outside" this value-less indifferent matter. The scientist may be able to explain and predict the outcome of a particular chemical reaction for example. The answer to the question of whether this chemical reaction is morally good or not cannot, by definition, be found in scientific knowledge. The answer to this question could only be found through some other form of enquiry.

This is not to say that science should be rejected. Rather, science should be seen as one extremely effective method for gathering a certain type of knowledge about the world. It does, however, have its limitations. There are certain questions which science cannot and should not attempt to answer.

In today's age when we are presented with wonderful technological advances on a daily basis there is an unfortunate tendency however to see science as the only means by which true, objective, knowledge may be obtained. This belief can be described as scientism. The rigorous experimental methods employed by scientists, the status (and funding) that science enjoys all contribute to an increasing culture of scientism.

The problem with a scientistic culture is that it assumes that the questions which science cannot answer do not therefore have an objective true answer. Questions concerning meaning in life and ethics for example are assumed to have no objective truthful answer. As a result, relativism in the field of ethics, politics and indeed all subjects that are not amenable to scientific investigation dominates.

Simon Drew


Maximiliano asked:

I read many years ago a book about Meister Eckhart, where he said: "Are you talking about God? Anything you could say about him is a lie". Do you know the name of the book?

Breakthrough by Matthew Fox, I thought, but having said that I can't find on which page. This saying reads like a gloss on 1 John 1:10; 2:21-22; 4:6, 20. It is also a statement of apophatic (negative) theology, in which the presupposition is that God is not defined by his being or non-being, but by what he is not (his beyond-being). This can be read about in the Mystical Theology of Pseudo-Dionysius (in print in English in the Classics of Western Spirituality series, NY: Paulist Press). I wonder about the translations of Eckhart. There are a lot, some of them very loose, even, I would say, made up. Sermon XXXII Homo quidam fecit coenam magnam (Lk. 14:16) in the Pfeiffer collection, with Spamer's insert from his Texte, has: "We can say nothing of God because nothing is like him." Actually Eckhart reiterates this idea throughout his sermons, tractates and sayings. The idea that anything said about God is a lie imputes moral fault. It is unlike Eckhart to be this loose-tongued with language — more like an American translator who wants to underline what they believe to be Eckhart's anti-authoritarian disestablishment theology. There is also the logical difficulty which Eckhart would have seen had 'liar' been his actual word: Eckhart's statement is either true or false. If it is false, it would suggest that Eckhart is a liar; if it is true, it would mean Eckhart is certainly a liar.

Matthew Del Nevo


Carmen asked:

I am trying to choose a topic for my IB extended essay. It should be about 4000 words and I wanted to do on some aspect in philosophy. I was wondering if you could suggest some interesting topics besides Theory of Knowledge that I could encompass in that word limit.

The important thing to remember when planning your extended essay is to choose a subject a) that you find interesting, and b) one which has easily accessible resources available.

You say that you want to write one from philosophy. If I were in your situation I would probably structure my essay around some ethical/ moral issue. You could, for example, choose an issue such as animal welfare and examine the various conflicting arguments used by philosophers. A good starting point would be the book Animal Liberation by Peter Singer. It is a very straightforward account of a utilitarian stance on the moral status of animals.

Animal welfare is, of course, only one of many ethical issues you may choose to write about. There are lots to choose from. Perhaps there is one that you feel particularly strongly about. The Internet is an invaluable tool in searching for resources/quotations on any issue imaginable.

Remember that the IB examiners are not expecting some groundbreaking piece of research. They are more interested in your ability to research an issue and to present your findings in a systematic way. You must also avoid writing summaries of what various thinkers' positions are and leaving it at that. Your essay must involve the development of an argument and lead to some tangible conclusion. Ask your teacher for a copy of the Extended Essay marking scheme. It will tell you exactly what the IB examiners expect from you.

Good luck with it!

Simon Drew


Here are some questions for you.

Are you studying Philosophy in the IB? If not, then I suggest that you think very carefully before taking on a Philosophy Extended Essay. You would need to show that you understand how to do philosophy, and that you have a familiarity with standard arguments, in your essay. This is, of course, not impossible to do if you do not study the subject, but it is harder.

If your answer to the first question is 'no', do you have a potential supervisor in your school who has a good knowledge of philosophy? If not, I suggest you forget it. It is hard enough for you to get up to speed on philosophy, but it would be doubly difficult if your supervisor also did not have a good philosophy background. Read the examiner's reports on Philosophy Extended Essays — they continually talk about poor essays done in situations like this.

If you have a philosophy background, or a good supervisor and a lot of determination, then answer this: what areas of philosophy interest you? Which of the standard problems that you have seen really grab you? This is where to start. Look carefully at one of these problems and pick a small part of the argument to study in more detail. Read one or more of the famous treatments, and find a part that puzzles you. Narrow down on this. You must do your essay on something that is going to keep you going, not on something that someone else is interested in.

I don't want to sound too discouraging, but the Extended Essay is a big task, and it has some fairly stringent requirements as to what counts as a good essay, so you need to pick your subject area and topic with care. If you want to discuss this further, please email me.

Tim Sprod

Ben asked:

Would you consider all reality to be virtual reality? How can you back up your argument?

You might take a look at Putnam's article: Putnam, H. (1973) 'Meaning and Reference', The Journal of Philosophy 70 (19), 699-711. It's not quite about what you're asking, but close... and Putnam has another more relevant article called 'Other Minds' in Mind, Language, and Reality (Cambridge: Cambridge University Press, 1975). That latter has the old "brain in a vat" idea. That is, if we are actually brains in vats, with data fed to us simulating reality, could we tell the difference? As you can see, the movie The Matrix was about 25 years out of date. Putnam answers in the affirmative. I and many others disagree... we find his argument to be based on a linguistic trick, basically. But you may like it.

Now, when you say "all reality", just what does that mean? Some reality is certainly virtual, in the sense that we imagine things, hallucinate them, even. What you're actually asking, I believe, is the question above, viz., are we brains in vats, and if not, how could we tell? But, you know, backing up this argument in a sense depends on your metaphysical position. If you believe that we have something like "souls" (and I do not understand what that word really means) and some sort of intrinsic connection to "reality" (and I'm perhaps a teeny bit more clear on that latter, but not very), then you believe, probably, that we can intuit (know) that we are or are not brains in vats, and your argument then depends on your conception of what a soul is. Good luck avoiding circularity on that one. If you're a materialist, then you're really stuck... in the vat, since any signals can be externally duplicated, to put it simply. Or so I think. Putnam, as I say, disagrees.

Steven Ravett Brown

No. For there to be a virtual reality, it has to be in contrast with [real] reality. Otherwise, it is just reality.

Tim Sprod


Mark asked:

Does philosophy consider pop music art?

As there are no objective criteria regarding art, and therefore no definition of art in absolute terms, it is very hard to make an objective statement about whether some piece of human work is art or not. For this simple reason "Philosophy" as a whole won't give an answer to your question. All I can give you are some approaches to handling this question.

If we define art as the conscious use of skill and creative imagination especially in the production of aesthetic objects and an aesthetic object as a work of aesthetic value, then from a pop music composer's perspective pop music is art. Therefore pop music might be at least some kind of art for other people including philosophers, too.

It is often said that there is:

I. high art in contrast to popular art, and/or
II. good art in contrast to bad art.

Both have their problems: Why should high art not become popular (Ex.: Nigel Kennedy's interpretation of Vivaldi's Four Seasons) and why should some work regarded as good art not be (regarded as) bad after all? This can happen for example, when methods of critique change. So, these distinctions won't do the job.

Now someone could define art from the perspective of the listeners, focusing on the intellectual powers of the audience, saying "music can be composed for intellectual delight or for the satisfaction of primitive instincts", and concluding "the elite listens to intellectually high 'classical music', meaning absolute music and therefore art per se, while the masses settle for primitive pop music, which is of course non-art." Now, if for example Tony Blair admires Oasis, he admires the art of the masses and, with the definition above, suddenly belongs to the masses, or at least indulges in primitive instincts. This pseudo-sociological approach is no good either.

Another weak argument: Gilles Deleuze and Theodor W. Adorno argue that popular music is nothing but a giant exercise in money-making, and thereby completely devoid of aesthetic value. Though there is no doubt about the commercial character of pop music, in such comments another category mistake is committed: a million-selling pop song still has some aesthetic value when it is well composed.

One reason for all these discussions going in circles lies in the fact that the term "pop" is a very blurred one: it can refer to a music category called "pop music" (meaning non-classical music, which doesn't automatically mean non-art) and it can refer to the many listeners, therefore meaning being "popular" (which has nothing to do with the piece of music itself).

To cut that story short: I believe any piece of music can be regarded as art as long as it is composed by a human being regardless whether anyone likes it or not.

Simone Klein

What are the parameters for art, and who sets them? Art is highly subjective: what pleases one person might disgust another. Philosophers are no exception to the rule. I can only speak for myself, and having been involved with music all my life, and having for very many years appeared before audiences as a semi-professional classical baritone, I am obviously biased in favour of what I understand to be real music, performed by highly talented musicians. A person with real talent is born with it; it is not manufactured to suit a commercial purpose. This I recognise as true art. For me pop does not fall into this category; in fact I find myself unable to refer to this stuff as music. I refer to it as pop, as I am unable to find a category for it in the range of what I regard as acceptable music. It seems to me that anyone who knows the difference between a crotchet and a quaver can soon put together a pop tune, and there is certainly no need to have qualifications or talent as a lyricist, because most of the stuff comprises no more than half a dozen words repeated over and over again to the point of monotony, often meaningless and banal.

The basis of pop is not art, in my opinion, but commercialism, and thus has as much value in the art world as a large financial institute or a large international bank. No one will ever convince me, and probably most of those in my generation, that pop is anything other than a ruthless commercially driven enterprise. The whole process seems to be driven by agents and record companies, the stuff is churned out as though from a production line: it remains popular for a short time, then, because there is little substance or depth to it, it is replaced by something much the same; a necessary process in a massive profit driven enterprise.

Compare this to the music of the great masters, much of which has been popular for hundreds of years and still pulls large audiences to concert halls all over the world. Audiences are also drawn by the greatly talented artists who perform these works, performers who have spent years of training and practice on developing their talents to the full. One of the things that pleases me about many of the young people in this day and age is the way they have sidestepped the pop world with its peer pressure and clever advertising, and gone on to develop their natural musical talents, to continue to bring this great music to the ears of the world. Despite what I previously said about record companies, they earn a great deal of appreciation for bringing expertise and modern technology to the production of superbly enhanced past recordings, which sound as if they were originally produced just yesterday, and give us the benefit of hearing great music performed by great artists long since passed away. All this I claim is art. Pop has no comparison either in its composers or its performers, who leave a great deal to be desired and seem to depend on visual attraction rather than anything else. What passes for singing here, to me, bears no resemblance to real singing, and this becomes painfully obvious when a singer and a pop performer appear together on the same bill.

The fact that pop is pushed down the throats of people, particularly the young, day after day by the media for commercial gain certainly raises the question of whether this can be art, or anything to do with art. To me, pop is not a progressive move in music, but a backward step to a more primitive time in our history. Anyway, these are my views for what they are worth; take them or leave them; others will no doubt have very different opinions, and, as I said at the beginning, art is subjective. Believe it or not, many of my family and friends are pop addicts; I have been trying to change them for years, with little success.

John Brandon


Rafaela asked:

Is the principle of proportionality being applied by the United States in their response to the New York and Washington terrorists attacks?

Your question assumes that the US ought to limit itself to a 'proportionate' response. However, the main line that has been taken is that the terrorist network needs to be eradicated in the interest of self-protection. In that case, a response which was out of proportion to the original terrorist act might conceivably be justified on the grounds that it prevented future acts of equal or possibly greater magnitude.

To see the problems with this argument, imagine a possible world where owing to a tip-off, the would-be hijackers were intercepted minutes before they were due to board. Their intentions, and the intentions of the people who sent them were exactly the same: to take thousands of innocent lives. But it would surely have been impossible for the USA to have used the incident as justification for the invasion of Afghanistan. Everyone would have breathed a sigh of relief, and redoubled their efforts to increase Airport security.

'Justice' has been spoken about a lot. It is clear, however, that a truly just response would be one where only the guilty suffered: i.e. hunting down and bringing to trial the actual persons responsible. Despite the propaganda, therefore, I don't think that what the USA is seeking is justice.

A third possible role for the idea of proportionality is as a prudential rule of warfare. There are two reasons why, in a war situation which is not all-out war, there might be self-interested reasons for limiting an aggressive response to an enemy attack to one that is in proportion to the harm caused:

1. The first rule is familiar to chess players: "A threat is more powerful than its execution." So long as there is something you could still do, perhaps use a type of weapon which you have not yet used, or attack a particular target, your power over the enemy is greater than if you actually use that weapon, or attack that target.

2. The second principle acknowledges the importance of winning the peace. The chances of winning the peace are greater if the victorious side has been seen to exercise self-restraint in its pursuit of victory.

It is clear that neither of these principles has played much part in the thinking of the USA in conducting the invasion of Afghanistan. In their eyes, the terrorists and those supporting the terrorists seem quite impervious to any threat. For the terrorists, it seems, death at the hands of the hated enemy is a form of victory. As for winning the peace, how can that even be contemplated in the face of so implacable a foe?

If this is true, then the only ground for self-restraint is the moral ground of saving innocent lives. However, the moral argument is open to the response that, on balance, more innocent lives will be lost if the war is not taken to the enemy.

In either case, whether one is considering self-interest or morality, the decisions being taken should be open to continuous review. The situation is fluid. One may be forced by an unexpected turn of events to reassess one's view of the psychology of the enemy, or of the consequences in terms of innocent lives lost of a particular course of action.

In practice, however, it is very difficult to stop a juggernaut once it has started rolling.

I also fear that another, more shady principle might be playing a role in the thinking of the US military. This is the theory which first emerged during the Vietnam war, at the time of President Nixon and Secretary of State Kissinger: the so-called 'Madman' view. The source of the idea is from game theory. If you are seen to be rational in the way you conduct a campaign, then the enemy has a greater chance of predicting your actions, and consequently of manipulating you. So the idea instead is to convince the enemy that you are completely out of control. You are sufficiently 'mad' to do anything.

The problem with this scenario is that you may begin to suspect that your opponent has adopted this game plan. Then what do you do?

Geoffrey Klempner


Gary asked:

What, if anything do we owe the starving world?

How if at all is killing in war time different from other forms of killing?

To owe something to anyone we would necessarily have to be in debt. Hence we could pose your question as: are we in debt to the starving world? The answer is yes if we are in some way responsible for their plight. Perhaps you are referring to the relationship of the affluent western world to the starving millions of Africa? If the western world is responsible for the plight of the starving millions in Africa, then it owes them some recompense. There is no doubt that in the past Africa was exploited by the west. The disgusting and unforgivable slave traders ruined families and communities, kidnapping large numbers of fit men and women to be sold into slavery, leaving behind the elderly and the very young to cope with food provision, and to survive as best they could. Before and since the slave trade, western countries have exploited large parts of Africa, taking their mineral wealth, occupying their land and making Africans slaves in their own countries. Many parts of Africa have never recovered from these impositions, particularly where the west has eventually pulled out and left behind chaos and disaster.

Some African countries are up to their necks in growing financial debt to the affluent west, with no way of paying off those debts and relieving themselves of the massive interest burdens, their economies have been crippled. When drought, disease and disaster strikes, they have no means of coping with the situation and are left relying on aid, which is often too little too late. Do we owe the starving anything? Decide for yourself; perhaps you can say that we might not owe them much but our ancestors certainly do, however we have been left with the bill. Anyway I don't agree with the universal we, the blame and the debt should be placed at the door of those directly responsible, i.e. the exploiters.

Your question about killing in wartime is interesting but highly complex. There are many forms of killing, but I trust you mean the killing of one human being by another. There is a range of these too: there is murder, which is against the law of the land; there are executions, carried out through the law of the land; euthanasia, described as mercy killing; killings on road and rail, sometimes described as manslaughter; and so on. However, I take your point: killing in war seems to be something separate and in a category of its own. Killing enemy soldiers is the legitimate if deplorable business of war. However, over the ages this has become true of civilians also.

Killing in war is very different from the types of killing mentioned above, simply because war is a conflict between nations. The causes are usually political, the instigators usually politicians or the leaders of nations. However, the instigators themselves do not get involved in the actual killing. It is the job of the instigators to instill into those who are going to do the killing a sense of duty, and of the moral acceptability of what they are being persuaded to do. Religion has often been used to provide the moral grounds for their activity. In both world wars God was on the side of the allies, but, so far as the enemy was concerned, he was on their side.

Language is used as a powerful tool in both rhetoric and propaganda: nowhere is Wittgenstein's original maxim, 'don't ask for the meaning of a word, ask for its use', more in evidence. Words used to great effect for conditioning purposes are duty, morality, God, right, self defence, hatred, threat, enemy, sacrifice, cowardice, cause, loyalty, patriotism, and so on, each having many connotations, but limited to a specific use for maximum impact. Thus killing in wars becomes a duty, a moral right, a form of self defence, an indication of loyalty to the cause, destruction of the enemy, a just cause, etc.

I was fortunate: I missed the actual fighting of the second world war by a whisker, but I was in the army of occupation in Germany just after the fighting ceased. We were, however, still in great danger, and I found myself in one or two hazardous situations, but I was never obliged to kill anyone; whether I could have or not is another question. Another conditioning word is 'target', and I had been trained to shoot at targets. In wartime something moving two hundred yards ahead of you is a target; you can then substitute 'I have hit the target' for 'I have killed someone', less personal and more comfortable to the conscience. However, the bayonet and the handgun at close range, well, that is a different story, and the term 'self defence' now comes into play. "It's 'im or you lad," the NCOs would shout, "make sure it's 'im." There are no debates on how you come to be facing "'im" in the first place, and are not instead at home weeding your garden. Also you have never met "'im" in your life before, so how can he be your enemy? Simply because someone has told you he is; logic and rational thinking go out of the window in wartime.

Bomber crews dropped their bombs on targets, and these targets eventually became towns and cities. In the case of the allies, bombing was switched from the munitions factories to the populace which produced the tanks and guns, a switch claimed to be legitimate because it would bring the war to an early end: simply a different target. There was also a sense of revenge for the indiscriminate bombing by the enemy of not only British towns and cities but towns and cities across Europe. As war progresses less and less attention is paid to loss of human life, which somehow becomes an accepted consequence of what is happening, and numbers of deaths become meaningless as representative of lost human lives: ten thousand here, twenty thousand there; nearly twenty thousand British soldiers were killed on the first day of the battle of the Somme in the first world war. Eventually only bare figures are quoted; no one refers to x thousand people/soldiers, but simply x thousand. Towns, cities, ships, planes, factories, etc. all become targets; the fact that people are inside them somehow becomes a side issue.

Are wars ever just? Well, that is another question for debate; however, when we consider the atrocities committed by the Nazis and the Japanese before and during the second world war, and what might have happened if no one had stood up to them, I am left believing that there was some justice in that conflict. The first world war was a different story, and the meaningless carnage of that conflict for no gain will, to me, always remain a stain on the sanity of the human race.

John Brandon

1. It seems to me that a Kantian ethics — based on the Categorical Imperative — requires us to treat everyone (including ourselves) equally, and so to give away our goods until we are as poor as the poorest. Our duties towards others are identical, and depend only on the situation in which they find themselves, not who they are. Utilitarian ethics — that the right thing to do is whatever leads to the greatest happiness of the greatest number — also points in the same direction.

This is a hard demand to live up to, and I don't think that any but (maybe) a small handful of people even seriously try to do so. It has the unwelcome implication that we owe no special care to our family and friends. For this reason, many philosophers (especially those of a feminist bent) think that we need an ethical theory that recognises the moral worth of caring and community. Aristotle's virtue ethics is often cited as such a theory.

One interesting area where this has practical applications is in the study of moral development. Lawrence Kohlberg set up, on the basis of a long-term study, a six-stage model of moral development, where the highest stage is adherence to moral principles (very like Kant's theory). Unfortunately, he did all his empirical work on boys. When Carol Gilligan tried to replicate this work with a group of girls, she found that the girls did not reach Stage 6, coming instead to a moral stance that emphasised caring for others, which Kohlberg had called Stage 3. So, it seemed, boys were, on average, more moral than girls. Given, amongst other things, the relative numbers of males and females in jails, this seems bizarre. (See Gilligan's excellent and easy to read book "In a Different Voice" for more details).

Note that any ethics of care that emphasises our own kin and community too much runs the danger of warranting ignoring those we don't know, which many find equally distasteful.

2. As a pacifist, I don't believe there is any significant moral difference. There may be circumstances that justify killing in peacetime (e.g. self defence), and they may also justify a killing in wartime. However, one deliberately puts oneself in a position where one is likely to invoke self defence in a war, so this may militate against one's justification.

If, on the other hand, you believe there is such a thing as a just war, then killing may be justified in pursuit of a just war. Just War Theory usually invokes two conditions: that the war is fought in a just cause (e.g. to prevent a greater evil), and that the war is prosecuted using just methods.

Tim Sprod


Gonzalo asked:

I would like to know if you could provide me with a list of books that you find fundamental to the study of ethics. I would also like to point out that I am not very familiar with the subject, but I am rather interested to find out what it is all about.

Also, I wanted to ask you if you think that morals, values or traditions present, or, could present themselves as an obstacle to the development or realizations of the objectives of one's lives. I ask you this because I want to ask your opinion on the following: I have a friend (female) who does not want to express her feelings to a person of the opposite sex, because she is afraid that she will lose her "dignity" (she is very conservative and traditional). Now, I wonder if concepts such as this could undermine, as I have said before, the realizations of really important objectives in one's life, such as meeting the person who one will marry.

In short, I think that honesty is more important than dignity (especially if "dignity" is causing you distress and doubt). It may not have the results that she desires, but it will allow her to be responsible for her actions and she won't be able to blame concepts such as: tradition, values (dignity), morals for her failures.

"But I am easily hurt and afraid of being hurt." — To protect oneself in this way (dignity) is the death of all love. For real love one needs courage...

Don't be too cowardly to put a person's friendship to the test. "The walking-stick that looks pretty so long as one carries it, but bends as soon as you rest your weight upon it, is worth nothing." Ludwig Wittgenstein.

— I hope that I have managed to present my case in a coherent manner.

I'll have a go at a couple of things you ask. Firstly, I would say that Aristotle's Nicomachean Ethics, Kant's Groundwork for the Metaphysics of Morals and J. S. Mill's Utilitarianism are central to the study of ethics. They are, to my mind, the classic texts that cover the three main ethical theories (though there are many more, of course). If you want an introductory text, James Rachels' The Elements of Moral Philosophy (McGraw-Hill) is excellent.

As to your question about your friend, I think that you need to make distinctions between the three things that you seem to lump together — morals, values and traditions. I don't think they are at all the same things, or even the same sorts of things. She values dignity, maybe because it is traditional to do so, but I am not sure that I would call dignity here a moral value. It is a tricky question, though!

Even granting that dignity is a moral value, I don't find it surprising that one moral value should interfere with another. Unfortunately, it seems to me that our moral values are bound to come into conflict with one another, and with other imperatives we have — to happiness for example. Making the best choices in the midst of this complexity is difficult, and we don't always get it right. To open ourselves to love, as you say, is risky, and we have no guarantee of success if we do. Your friend is responsible for her actions whether she chooses to take the risk, or to keep her dignity. It also seems to me that we can — indeed, must — blame our values if the choice turns out badly, because our choices spring from our values, be they moral or otherwise. That does not necessarily mean the values in question are bad ones to have: just that giving them the major say in this case turned out badly.

Nor is it to say that all our present values must be the ones we live by, either. If choices based on a certain set of them turn out badly for us (judged, as they must be, by other values we hold), then we may decide that we wish to readjust our hierarchy of values, or even change those values altogether. So, we may find that what we valued as dignity is now seen more as an aloof reserve, or a too extreme shyness. As Aristotle says, we must take care with our values, and hence care about the sort of person we are.

Tim Sprod

At university, our basic reading on ethics was the utilitarian philosopher who bases morality on the greatest good (J S Mill), the rational philosopher who bases ethics in duty (Kant), and the philosopher who bases morality in sentiment (Hume). Sadly, we were even made to think of ethics in terms of "rational bargaining" — Gilbert Harman, I think.

I would suggest:

J L Mackie: Ethics: Inventing Right and Wrong (as a general introduction). And more interesting books that seem to me to be along the right lines: R Gaita: Good and Evil: An Absolute Conception; P Winch: Trying to Make Sense; I Murdoch: Metaphysics as a Guide to Morals; M Buber: Knowledge of Man. And though this includes the notion of dignity, I think it is obligatory to read this book: Kant: Groundwork of the Metaphysic of Morals.

Dignity is a concept belonging to ethics as duty. The Kantian idea is that man is worthy of respect insofar as he is a rational, and thereby dignified, being. It also arises in the ethics of Emmanuel Levinas. For Levinas, man does not have dignity because he is rational but because he has the power to command an ethical response. But this is moral dignity rather than repression posing as dignity. Moral dignity is about recognising commands upon us from others. When a person makes claims about their own dignity, they are talking about how they want to be seen, and this is the psychological sense of the term. There is also a descriptive sense to the term: there are people we call dignified because we respect them for their personal qualities.

I absolutely agree that honesty is a better human quality than dignity — though not everyone would. I'd say that lack of dignity goes hand in hand with honesty and must do so, because man is not really dignified (so a small amount of dignity or repression is needed if the nature of one's honesty is to be appropriate). Ethical theories can be understood as idealizations attempting to account for moral feeling, or as descriptions of how things seem to be, and in both cases the non-moral language of psychology gets used. In the non-moral sense, dignity is a mode of behaviour much favoured by the Victorians and now favoured by those who aim to receive respect, or it's a defence mechanism. In the case of your friend, the claim to dignity seems to be defensive, a form of repressed closure against others, which is sad; but perhaps you have to let such people be and respect their claims to dignity. We can change people's natures to a certain extent, through closeness and friendship, but defences run deep.

Ethics isn't an obstacle to personal development, though social values might be. But even then I'm not sure to what extent social values in Victorian times would have halted personal development, rather than limiting freedom. In the case of your friend, the obstacle to her personal development lies, perhaps, in her history, which determined the nature of her personality.

If a person is sensitive and easily hurt, she has to find a way of dealing with the possibility of hurt which is appropriate to her nature. A person can't just harden up. And deliberately putting a person's friendship to the test seems a bit cruel and pointless and Wittgenstein probably didn't mean that this is something we should do, but rather that real friendship would stand up to a test. Friendship is built on trust and trust doesn't naturally give rise to the requirement for test. But I do think a defensive claim to dignity or any such repression makes the development of a relationship very difficult and your friend's encounter with your own honesty may be good for her.

Rachel Browne

Well, here's a list:

Very good intro book: Sommers, C., & Sommers, F. (1985) Vice & virtue in everyday life; introductory readings in ethics Fort Worth, TX: Harcourt Brace & Company. Good: Sinnott-Armstrong, W., & Timmons, M. (1996) Moral knowledge?: new readings in moral epistemology New York: Oxford University Press. Audi, R. (1997) Moral knowledge and ethical character New York, NY: Oxford University Press. May, L., Friedman, M., & Clark, A. E. (1998) Mind and morals: essays on cognitive science and ethics Boston: Massachusetts Institute of Technology.

Specific points of view: MacIntyre, A. (1984) After virtue Notre Dame: University of Notre Dame Press. MacIntyre, A. (1988) Whose justice? Which rationality? Notre Dame: University of Notre Dame Press. Dewey, J. (1988) Human nature and conduct, 1922 Carbondale: Southern Illinois University Press. Williams, B. (1985) Ethics and the limits of philosophy Cambridge: Harvard University Press. Rawls, J. (1995) A theory of justice Cambridge: Harvard University Press. (Nussbaum has a book out recently which is supposed to be very good; look it up also.)

General issues in meta-ethics: BonJour, L. (1985) The structure of empirical knowledge Cambridge: Harvard University Press.

That should get you started, anyway. There are many others; these are just some off my shelf.

As far as "dignity" goes... I could get into a real rant here... but, you know, I sympathize with her, to some extent. I don't think that this is, actually, so much an ethical issue (of course it is to some extent) as it is a social one. First, how much does or should one expose oneself and risk rejection and hurt? Second, what social consequences would there be for her, from her family, friends, etc.? Well, I don't know, do I? Only she can evaluate that. Now, on the other hand, as far as I'm concerned, there is no doubt that unthinking adherence to tradition ruins lives. You're asking a philosopher about thinking about things? What other answer would I give? On the other hand, there are many people who get a very clear sense of identity and security from their traditions, etc., and as long as they can be reasonably flexible (which, unfortunately, they usually cannot, in my experience) about adhering to them, why not? So again, it's a matter of balance, isn't it?

I actually don't think that you need or want a strictly philosophical answer here. It sounds like you, or both of you, need to go talk to a counselor, an older wiser friend, or something like that.

Steven Ravett Brown


Judy asked:

If something is the law, do I have to obey it? For example, some legislation can be seen as inherently immoral. I might say that I disagree with war, but if I were drafted to fight, it is law, and I must obey. But should I?

Martin Luther King grapples with this question in his Letter from Birmingham Jail. It is one that interests me greatly, as a former draft resister. My view, like King's, is that there are certain laws which are inherently immoral, and that therefore (provided the immorality is serious enough) I am morally bound to disobey them.

The difficult philosophical question — one that still worries me — is what grounds I have for claiming that a law is inherently immoral. King had an answer to that one — an immoral law is one that is at odds with God's Law. However, there are serious problems with basing morality on God's Law, as Socrates/Plato pointed out in the Euthyphro. Does God promulgate His law because it is right (in which case what makes it right is something other than God's word), or is anything that God endorses therefore right — even the murder of innocent children (cf Abraham and Isaac)?

Of course, many answers have been advanced as to how we can tell what is inherently moral or immoral, while others have argued that there are no inherently moral truths — that morality is relative. The latter view makes civil disobedience very problematical, and I don't hold it. But this is not the place to defend my particular moral theory.

Tim Sprod


Christelle asked:

What is Reality?

There is a trite but obviously not satisfactory answer to this question: reality is what you make it! Over the years, from before the early Greeks to the present day, reality has been defined in many ways. There are those who are convinced that reality refers to tangible things existing in a material world; reality to them is solid, consisting of objects that have shape, weight and measurable dimensions. Those who disagree with this concept claim that no proof can be provided to establish beyond doubt that external objects exist; they claim that reality is based upon the perceptions given to us by our senses, i.e. sensa or sense data. These sensa, though considered by some untrustworthy, are, unfortunately, the only form of reality we have available to us.

Some claim that to say sense data represent objects in an external world is, to say the least, presumptuous. The theory is that we cannot know anything outside the restriction of the five senses, and to pretend that we can is silly. The only way we can investigate our senses is by using our senses to carry out the investigation; hence, we are locked within our sensa, which comprise reality. Some philosophers carry this idea further: they consider that the senses stimulate ideas in the mind, and these ideas are what is regarded as reality. Everything is a product of mind; there is no 'out there' in the material sense. The argument is complicated, but I am sure that it has been discussed in these pages before; if you care to pursue it, you may find something under the heading of "Idealism" or "Empirical Idealism."

There is also a sense in which the things we consider as abstract are more real than the things we consider to be solid. The Greek mathematician, Pythagoras, not only believed that mathematics (geometry) was the ultimate reality, but that it also contained a mystic element. The Greek philosopher, Plato, believed in the reality of "Forms"; this concept later graduated into the notion of "Universals."

Plato, in his search for definitions, i.e. an enquiry into the nature of things, came to the conclusion that things like beauty, courage, love, etc., had genuine existence and were, therefore, real things in their own right: he called them "Forms". Thus a beautiful flower, a beautiful scene, a beautiful woman, etc., share in the Form of Beauty. Form has been translated as "Idea," but this is misleading in that, unlike the English word 'idea', it does not carry the suggestion that the entity in question exists only in someone's mind. According to Plato, then, Beauty itself or the Form of Beauty would exist whether or not there were any particular beautiful things.

A later development of the notion of Forms produced the notion of "Universals." Some philosophers believe in the existence of Universals, some do not. The arguments for and against are complicated, but the general notion of one school of thought is that if things resemble each other they share a Universal quality, a quality that exists independently and, hence, is a real entity: e.g. white things share in the quality of "Whiteness," triangles share in the quality of "Triangularity," and so on. (If you are interested, I recommend an interesting little book called "Universals" by Hilary Staniland, Anchor Books, ISBN 0-385-04481.)

The idea of reality, then, is rather more extended than is generally considered; it covers material existence, ideas in the mind, phenomena, Forms, Universals, monism, dualism, nature, God, and Kant's idea of "Things in Themselves" (again rather complicated). Some ideas are self-eliminating: if you believe one you falsify another; some could exist side by side, suggesting that reality is a plurality. As I said at the beginning, at the moment reality is what you make it; the arguments for and against the proposed ideas have to be weighed very carefully. Then again, it might be very different from anything so far envisaged.

John Brandon

Hey, let's start with the Big One!

I can see two possible answers. The first is that Reality is the sum total of everything that really exists. Which leads to several more Big Questions: What really exists? How can we know what really exists? This sort of answer paints reality as existing independently of us. The trouble is that we have no direct access to such a reality — all our access is mediated by our senses. Or maybe we have access to some parts only — our thoughts, for example. Then, maybe, our thoughts aren't real — only material things are.

The question as to how we can know what exists hints at the second answer. Reality (for me) is all of which I am aware, or (alternatively) reality (for us) is all of which we are intersubjectively aware. This overcomes the problem of access, but creates problems of its own. If we construct reality, then is there any objective reality, independent of us? It seems that there can't be. Further, if you and I (or my society and yours) disagree about what is, then is there any way to decide?

As you can see, there are many, many questions opened up by your question.

Tim Sprod


Tak asked:

I'm a Malaysian Chinese, studying in the Sedaya International College. I'm currently taking my philosophy courses there, doing my major in Philosophy.

Lately I have discovered a difficult matter to be understood:

"Some people argue that because non-human animals can think, humans are not unique at all. What is the difference between thinking and reasoning? What mental states indicate a thinking process? Would you say that reasoning presumes thinking but that thinking does not presume reasoning?"

Since you haven't defined what you mean by "thinking" or "reasoning", I cannot answer your question. If you look at the literature, you find an enormous amount of controversy on the issue of human vs. "animal" (quotes because I consider humans to be animals) thought. In addition, this is not (I believe) really a philosophical issue, but a psychological (cognitive) one. You might take a look at: Bickerton, D. (1990) Language & Species. Allen, C. (1997) Species of mind: the philosophy and biology of cognitive ethology Cambridge, MA: The MIT Press. Midgley, M. (1995) Beast and man: the roots of human nature New York, NY: Routledge. Armstrong, D. F., Stokoe, W. C., & Wilcox, S. E. (1996) Gesture and the nature of language Cambridge, England: Cambridge University Press.

Also, there are two conferences being held:

Oxford, 3-4 October 2002

Royaumont Abbey, France, 24-26 May 2002

You might look into them.

Now. Here are some of the issues, as I see them. First, the only people who have settled, for themselves, the question of what language is are those with very particular positions on that matter. There is no general consensus. Do apes have language? Can they be taught (sign) language? Can grey parrots speak (i.e., do they have language)? What about feral children? According to Bickerton, as I understand him (and others), one major difference (maybe the only real categorical one) between animals' "languages" and ours is the absence of various markers for tenses, action, etc.; and "pidgin" languages are the equivalent of what apes have when they learn sign language. Suppose this is true; how significant a difference is this; what does it indicate about ape vs. human thought? Suppose it is not true, and apes, etc., do not have language in any generative sense, but only conditioned responses. Can they nonetheless think? Are they rational? Well, without language, how do you test that? Defining "rational" would help. There are recent studies that seem to indicate that mice dream of the mazes they've run that day. Is that thinking? Köhler, a long time ago, seemed to observe an ape make an intuitive leap involving a stick and a banana. Was that thinking?

Apes don't do abstractions to the extent we do; I think there's no doubt about that. The level of abstraction that we can manipulate and comprehend seems beyond them. They can't do mathematical equations or symbolic logical thought, for example. Is that the difference between "rational" and "non-rational"? What about the next level of abstraction, the one beyond us, that we cannot imagine, just as apes cannot imagine mathematics? Are entities who think at that level "rational", while we are not truly rational? What about the level beyond that one, etc.?

If your question is, at base, "are some animals conscious?", then the answer, for apes, is almost certainly "yes". Their neuroanatomy is just too similar to ours. And by "conscious", I do indeed mean self-conscious. And this has been tested; look up the "mirror test", where apes recognize themselves in a mirror. Again, there is a huge literature in this whole area; you can start with the refs in the above books, etc.

Steven Ravett Brown


Craig asked:

Is it better to be useful or popular?

The most problematic part of this question is not the word 'useful', or the word 'popular'. We understand them well enough. The problem here is the word 'better'. Better for whom, or what?

The answer to that depends on what we're asking the question about. I think I would rather choose a useful but unpopular rucksack for walking in the mountains. But someone who wanted a fashionable pack for shopping in town might choose what we could call a 'popular but useless' bag.

Of course, I know perfectly well that what you really want to know is, whether it's better for a person to be useful or popular. I suspect that we would all prefer to be both useful and popular! But suppose we had to choose one or the other — and someone might really have to, for example in making career choices. Is it then better to be: useful and unpopular, or...popular and useless?

Well, if I had to choose one or the other, I would rather be useful and unpopular, because part of my satisfaction in life comes from thinking I am doing something to help others (e.g. in my job — I work in a nursery school). So I have taken 'better' to mean 'more fulfilling for me'. I wouldn't be too worried about being unpopular. Even unpopular people are usually valued and admired by somebody, even if only by their family or a few close friends. And being popular doesn't necessarily mean you have any really close friends.

Philosophy is a bit unpopular with many people. I wonder whether it's better for philosophy to be useful or popular — it's frequently regarded as neither! If philosophy is made more 'popular' (i.e. less academic), is anything lost, does it become less valuable? Actually, I think it becomes more useful — and by that I mean it becomes accessible to more people, more relevant to everyday life, with more practical uses.

So you see, whether it is better to be useful or popular depends very much on what we're talking about, and what exactly we mean by 'better'.

Katharine Hunt


Chris asked:

Since the existence, or the non-existence, of God can't be empirically demonstrated, would it follow that free will or a person's choices cannot be (assuredly) attributed to the person as a moral agent? Is man a free moral agent?

This troubles me. Firstly, I am not convinced that the existence of God cannot be empirically demonstrated — just that it hasn't been (with no implication that it ever will be). But let's just, for the sake of argument, accept that it can't.

Even so, I fail to see why it is that this fact has anything to do with the existence or non-existence of free will (or moral agency). The two are quite distinct questions, to me. My answer to your last question is 'yes', but for reasons that are totally independent of the existence (or not) of God. [My reasons are to do with the inescapable experience we have of choice.]

Tim Sprod


Roslyn asked:

Can you explain the Frequency and Propensity Theories versus the Bayesian Confirmation Theory?

Well, I'm feeling particularly masochistic this evening, so I'll actually attempt some sort of answer to this question. First, I have some weak understanding of the frequentist position, and some weak understanding of Bayesian statistics, but I do not know what you mean by "Propensity" theory. So I'll skip that. Moving right along, I should tell you that this area is really a horrible one, and not only to outsiders. There has been, and still is, a continuous and sometimes acrimonious debate between frequentists (the school founded by Fisher, Neyman and Pearson) and Bayesians. There are journals which do not allow confidence intervals, or frown on them (some Bayesian journals), and others who will not publish an article without confidence intervals. Article after article, book after book have been written expounding and arguing for one and against another of these viewpoints. So any summary I can give will probably be too brief, inconclusive, confusing, and debatable, if not simply incoherent. But here goes.

The difference is between a picture of reality in which there are definite properties of things that we can discover if we're ingenious enough, and one in which we are floating in a kind of flux, from which we (hopefully) extract regularities. That is, the former, frequentist approach assumes that there are properties of coins, say, which ensure that if we flip them enough times, we will get half heads and half tails (neglecting the edges). The Bayesian approach says something like, that's all very well, but in the real world all we can do is start by assuming something like the coin will land half the time heads and half tails, toss the coin, and see what happens. There are no assumptions in the latter approach as to what the coin "really" is, just an estimate of how the tests are going to turn out. One starts by assuming something or other (a "prior"), flips the coin, and modifies one's initial estimate on the basis of that flip. And so forth. At this point, the frequentist starts making remarks to the effect that there is a real world out there somewhere, with real things that have real properties, and we need to find out what all those things are, so let's sample those properties with some coin flipping. The Bayesian replies, yes, sure, and how do we do that, all we're really doing is flipping the coin, right? They start different journals, and off we go.

So given the frequentist approach, what you want to do is, under suitably controlled conditions, take samples and compare them with the image of reality you have: a real coin has an equal chance of heads or tails. You look at your samples, and use the math to see how likely those samples are, given what you think a real coin is. If they're really off, then you either look at how you're sampling, or you have to revise your ideas about coins.
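That frequentist recipe can be sketched in a few lines of Python (this is purely my own illustration; the function name and the numbers are made up for the example). We fix the hypothesis that the coin is fair, then ask how probable a result at least as lopsided as the one we observed would be, if that hypothesis were true:

```python
from math import comb

def binomial_pvalue(heads, flips, p=0.5):
    """Exact two-sided binomial test: the probability, if the coin
    really has bias p, of any outcome at least as unlikely as the
    observed number of heads."""
    probs = [comb(flips, k) * p**k * (1 - p)**(flips - k)
             for k in range(flips + 1)]
    observed = probs[heads]
    return sum(pr for pr in probs if pr <= observed)

# 60 heads in 100 flips of a supposedly fair coin:
print(round(binomial_pvalue(60, 100), 3))  # 0.057
```

A small result says: if the coin really were fair, samples this lopsided would be rare, so either the sampling was odd or the idea of the coin needs revising. Notice that nothing in the calculation updates our picture of the coin; it only compares the data against a fixed one.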

In the Bayesian approach, you start with a prior estimate, that heads and tails have equal probability, for example, take the samples, do the math, and find what the new probabilities are based on the initial estimate and the new data. You just keep doing that until you get a good idea of where the system is going, i.e., until things don't change too much after a while. At that point maybe you say something about what real coins are like, or maybe you don't, if you're really hard-core.
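That loop of assuming a prior, flipping, and revising can also be sketched concretely (again, a toy illustration of my own, using the standard conjugate Beta-Binomial update rule rather than anything from a particular text):

```python
def update(alpha, beta, heads, tails):
    """Beta(alpha, beta) prior over P(heads); each observed head
    adds 1 to alpha, each observed tail adds 1 to beta."""
    return alpha + heads, beta + tails

def mean(alpha, beta):
    """Posterior mean estimate of P(heads)."""
    return alpha / (alpha + beta)

# Start from a uniform prior: heads and tails equally likely.
a, b = 1, 1
for heads, tails in [(7, 3), (55, 45), (480, 520)]:  # successive runs of flips
    a, b = update(a, b, heads, tails)
    print(f"current estimate of P(heads): {mean(a, b):.3f}")
```

Each batch of data just shifts the running estimate; there is no separate claim about what the coin "really" is, only the ever-revised estimate, which is exactly the hard-core stance described above.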

Both of these approaches have problems, but the Bayesian one is really the one favored now in many areas, and I also like it myself. After all, shouldn't you do science by taking into account as much of the previous data as you can, changing your idea of reality as you go, based on that data, rather than keeping a constant idea with which to compare your data? Perhaps it comes down to a kind of Kuhnian question: shouldn't one be on one's toes in case a paradigm shift has to be made? I think so, anyway.

You might take a look at what Popper (frequentist) and Carnap (Bayesian) have to say on this, but really, there's tons of recent literature, easy to look up. I liked Resnik, M. D. (1997) Choices: An Introduction to Decision Theory, but it's not my area, and there are probably better introductions.

Steven Ravett Brown


Jessica asked:

"A person has a body."
"A person is a body."

Which of these statements more accurately expresses the truth about personal identity?

To me, they are both misleading half-truths. "A person has a body" implies that the person is separate from the body, and stands in the relationship of owner to possession. "A person is a body" contains the implied phrase "nothing more than" between 'is' and 'a'. Neither implication is warranted. The first encapsulates a dualist answer to the mind-body problem, while the second conveys a materialist answer.

I follow neither view. Rather, I see a person as a mind-body, or (as Peter Strawson puts it) an individual. This view, also advanced by Spinoza, is called a double-aspect theory. Minds and bodies are merely different ways of looking at persons, which are in themselves one multi-aspectual sort of thing.

Tim Sprod


Oneide asked:

How to think about the world having the body as foundation?

To Descartes, the foundation wasn't the body, but reason. What are the consequences of thinking like Descartes?

The body as foundation is actually a growing interest for a group of cognitive linguists and some phenomenologists. First, there is always Merleau-Ponty, M. (1968) The Structure of Behavior. Boston, MA: Beacon Press; and even better, perhaps, Merleau-Ponty, M. (1970) Phenomenology of Perception. New York, NY: Routledge & Kegan Paul.

Merleau-Ponty was really one of the first to emphasize the role of the body in generating our conceptualizations of the world and of others. Husserl did so in some of his late works, but not as clearly. You might also look at Leder, D. (1990) The Absent Body. Chicago, IL: The University of Chicago Press for a more modern treatment, and also: Johnson, M. (1987) The Body in the Mind. Chicago, IL: The University of Chicago Press; Johnson, M. (1993) Moral Imagination: Implications of Cognitive Science for Ethics. Chicago, IL: The University of Chicago Press; Lakoff, G., & Johnson, M. (1999) Philosophy in the Flesh: The Embodied Mind and its Challenge to Western Thought. New York, NY: Basic Books; Lakoff, G. (1990) Women, Fire, and Dangerous Things. Chicago, IL: The University of Chicago Press. There's also Stern, D. N. (1985) The Interpersonal World of the Infant. New York, NY: Basic Books.

Anyway, you can find a multitude of refs in the above books, and you can find lots of commentary on Descartes and how miserable (in most of the above people's opinion) his ideas were. I actually mostly agree. But I'm not going to go into it here; there's just too much, and the above covers it very well.

Steven Ravett Brown


Shelly asked:

What separates men's football from women's football?

Why is football classed as a man's game?

and Michelle asked:

Is football unethically masculine?

What is football? The obvious answer (at least, to me) is Australian Rules, but some misguided people think that it is Soccer, Rugby Union, Rugby League or even (arghh) Gridiron.

What separates men's football from women's football is who plays it: men, or women, or both. It is common, for example, for boys and girls to play soccer together before puberty. Football is classified as a men's game (when it is) for purely historical reasons: historically, only men played it. I include the caveat because I am told that in the USA, Rugby Union is considered a women's game, an idea that astounds most Australians, because women here are (quite sensibly) far more likely to play soccer.

For me, only Michelle's question touches on the philosophical. Is it unethical that, historically, women were excluded from playing certain games, notably football? I'm inclined to give two answers: on equity grounds, yes, it is unethical. However, turning to more caring grounds, the answer is a little less clear. In football, especially the rougher codes, less physically strong people are likely to get hurt, so there is some justification for excluding women from men's teams. There are two problems with this, however. Firstly, it does not seem to rule out women-only sides, which have been frowned upon or even banned until recently. More seriously, this argument seems also to rule out less robust males from playing football, something that has seldom or never been enforced.

Tim Sprod


Olivia asked:

What percent of our brain capabilities do we use?

Well, no one really knows, but it's probably as near 100% as we can get. We do badly enough, with all the effort and time we put into working on problems, don't we? As for what you might be thinking of, i.e., "we only use 10% of our brains"... take a look at the sites below, which explode that myth.


Steven Ravett Brown


Damon asked:

Is it possible to combine the ideas of free-will and determinism, by saying that you cannot have one without the other?

This is the approach taken by the view commonly called soft determinism. It has some attractive features but, in my view, it ultimately fails because it collapses into what is known as hard determinism. Perhaps the best defence of (his own particular version of) it that I have seen is Daniel Dennett's Elbow Room, a very readable book (as is usual with Dennett's stuff) and quite persuasive.

Here's my version of the soft determinist argument. Free will requires that our choices are caused by us and, further, that we make our choices for good reasons. If we merely make our choices randomly, then we can escape determinism, but it seems that those choices are not really ours. However, good reasons require a whole chain of reasoning — they cannot be arbitrary either. So our reasons are caused by events that happen, the values we have, the thoughts that we have and so on. Indeed, for our decisions to be ours, these warrants for our decisions must lead directly to those decisions — they cannot leave it as a matter of luck. In other words, they should determine the choice — that is the only way that the choice can be ours, and we can be said to have made it. Hence, our free will requires that our choices are determined by our thoughts, values etc.

The reason that I think this account collapses into hard determinism (where our choices are determined by events completely outside of us) is this. What is true of any particular choice, which depends on other of our mental events (as well as events outside us), is also true of those precursor reasons, thoughts, values etc. So, they themselves depend on former thoughts etc — and so on, until eventually all the determinants of all our decisions are events outside ourselves. That is hard determinism.

Tim Sprod


Rick asked:

If I understand this correctly, when one looks to the night sky, and sees all those beautiful, twinkling stars, one is looking into the past.

Therefore, very hypothetically speaking, could there not be a place in the universe, or even a universe next door to ours, where one could stand, and look out upon the night sky and see the earth as it was after the Big Bang? Or indeed, see the Big Bang itself?

It's not hypothetical at all; that's why the Hubble telescope was put up into orbit. Take a look at the pictures and text at this site:


You'll love them.

That's for seeing the Big Bang (or as close as we can get at this point)... as for seeing the earth... are you kidding? First, we'd have to look around the curvature of space, but at this point the latest theories are that space in fact isn't curved enough for that. Second, even if we could, the earth is much more recent than the origins of the universe. Go look around that Hubble site; you'll learn about all this.

Now, the interesting question is, why, when we look at the night sky, don't we see nothing except bright light? Why are there spaces between the stars? After all, there are stars in every direction we look, and in between those, and so forth. This question was posed quite a while ago, and was only relatively recently answered. I won't tell you the answer... have fun looking around for it.

Steven Ravett Brown


Danielle asked:

Are you responsible for everything that happens to you in your life?

To me it seems that the obvious answer is 'no'. I am not responsible for things which are beyond my control. If a meteorite were to crash through my roof right now and smash off my big toe, I would not be responsible for the loss of my big toe.

However, if I have a choice which directly, in a manner I have foreseen, leads to the outcome, then I am responsible. So, if I know that pushing this glass will knock it off the table, and that it is then likely to break, and do it anyway, then I am responsible for breaking the glass.

Complications enter in several different ways. What about when I ought to have foreseen the outcome, but did not? To me, it seems that I am still responsible to the degree that it was an easy and obvious possible outcome to predict, and that I was negligent in not foreseeing it. This sets up a continuum of responsibility, from full to lesser.

Similarly, I may be an actor in a complex situation where the actions of other actors all contribute to the outcome. Here, I can also be only partly responsible, to the extent that my own actions contributed foreseeably to the whole situation.

I might add that it is becoming increasingly common (in our culture of victimhood) to claim that I am not responsible for my actions because some outside event means that I did not choose freely. For (hypothetical!) example, I am not responsible for my choice to rob you, because my parents beat me as a child. Or (not such a hypothetical example!) a Prime Minister of Australia is not responsible for lying to the electorate and fraudulently stealing the election because he didn't hear his public servants telling him that his claim that refugee children were thrown into the water was inaccurate. I must say that I find this trend deeply disturbing and dangerous to our society.

Tim Sprod


Shirley asked:

1. Are people free to move anywhere in the world?
2. What is the advantage of speaking more than one language?
3. How can you encourage young people to read books?
4. If history is the thing of the past why are we concerned about it today?
5. How has technology invaded our privacy?

To answer each of these questions fully would require 5 essays, and there isn't space for that on these pages! So how can I best help you? Perhaps the best thing — and I'm guessing that you're stuck on these questions and don't know where to begin — is for me to split each question up into several questions.

1. Are people free to move anywhere in the world? Is everyone in the world equally free? Are people in some countries more free than others? Are you free to move anywhere in the world? Is it a good, a desirable thing, to be free to move anywhere in the world? Does money give you more freedom to move?

2. What is the advantage of speaking more than one language? Can you speak more than one language? If not, do you wish you could? Why? What benefits might it bring you? Are some languages more advantageous to master than others? Why?

3. How can you encourage young people to read books? There are lots of different ways in which teachers, librarians, educators, parents etc. are actually trying to encourage young people to read books — for example, Storysacks, in which a reading book is bagged up together with a non-fiction book, a soft toy, and a game, or something similar. A local school or library might be able to tell you other methods they are using, and perhaps give you their opinions on the success or failure of those methods. When you have found out about how people actually encourage young people to read books, you could then try to think of other possible ways of doing it.

4. If history is the thing of the past why are we concerned about it today? Are you concerned about history? If not, who is? Why are you / they concerned about it? Do you agree that history is 'the thing of the past'? Can knowing about history help us in any way? Is it necessary for historical knowledge to be useful for us to be concerned about it?

5. How has technology invaded our privacy? Has it invaded your privacy?...now you continue!

Katharine Hunt


Keelie asked:

How is the Problem of Evil overcome by the existence of an Afterlife?

Quite simply, provided the afterlife is run properly. Any evil you do in this world is infinitely paid for in the next. Any evil done to you in this world is negated by an infinite good in the next. In these two ways, the balance of good and evil is always skewed towards greater good, the world as a whole is as good as it could be, and there is no problem of evil.

Tim Sprod


Rebecca asked:

I am writing a dissertation entitled, 'Freewill, autonomy, and mental illness', any suggestions? I have yet to begin it!

I can only suggest reading Ilham Dilman's book Free Will and a book called Love and Will by Rollo May, but both will lead you to further reading.

Rachel Browne


Carmen asked:

I'm having trouble with a question on which I'm doing my 'theory of knowledge' essay. I must talk about knowledge and I believe the question is about belief, truth and epistemology but I don't know the answer. Maybe you could tell me what it addresses specifically and what you believe is the answer. Here it is:

"Is a knower's personal point of view an asset or an obstacle to be overcome in his pursuit of knowledge?"

As a Theory of Knowledge teacher, my first comment is to say that nobody is looking for 'the answer' to this question. What is wanted is your answer. That is a different thing. My second comment is that you have 10 questions to choose from. If you don't understand this one, you should look for another you do understand.

Having said that, here is some guidance. Every person has a point of view. We have a history, a set of experiences, a culture, a place in society and so on. We see the world from this perspective, influenced by all these, and other, factors. What you see as a chaotic scramble of people on a large oval piece of grass, I see as a great game of Aussie Rules footy.

The question here is: does this existing set of experiences, knowledge and beliefs aid you in gaining knowledge, or does it hinder you? You should also think about whether your answer would differ if the sort of knowledge you were after was of a different sort. Does your background make it easier to understand football matches and harder to understand integral calculus? Think about the different Ways of Knowing you have studied in your course.

Above all, remember the following questions lie behind this essay topic: What is your view? How would you support that view?

Tim Sprod


Nathan asked:

Who would you regard as triumphant in the war over legal principles, H.L.A. Hart or Ronald Dworkin or neither?

Neither can be taken as triumphant in a general way, partly because each philosopher can be understood as taking in different aspects of the concept of law. One is interpreting the nature of law with reference to that which lies behind the law, and the other is describing the actual structure of the law. A difference between them concerning legal principles follows from this. Dworkin sees adjudication as drawing on moral integrity, whereas Hart takes it that the judge interprets the letter of the law: legal principles are written down, we have a rule which enables us to recognise what a legal principle is, and there is a system within which to determine what these principles are.

On this, I'd be inclined to go with Hart. It may well be that the operation of the law is posterior to integrity, and it may well be that the judge needs integrity, but as I see the process of case law, there is very little reliance on integrity. Rather, the process seems to be one of consistency, and an attempt to abide by the letter of the law. A judge cannot simply make new law, so how far can moral integrity be involved in actual fact? It may be a necessary quality of judges, but it doesn't follow that there is individual input to any major extent. Also, I think that law has to be divorced from morality, because it is essentially a generalisation of rules for particular situations and has a different sort of force from the moral.

Have you read Hart's Postscript edited by Jules Coleman comparing Dworkin and Hart?

Rachel Browne


Darren asked:

What is the inductivist model of science and what are the problems associated with it?

I'll try to put it in a nutshell. The inductivist view is that a scientist looks at a whole lot of similar events or facts (evidence), and comes up with a generalisation that covers them (a scientific law). A major problem with this view is that it assumes that facts are neutral things that are just given to us. They stand out without any need to have a theory under which they can be seen as facts ("the theory independence of facts"). However, when we identify facts in the world, we use some sort of theory to allow us to carve out some part of the world as a distinct fact. We also use a theory of some sort to group together a number of distinct events as facts that need a common explanation. Our 'facts' are theory-dependent.

This is very well covered in an excellent and very readable book called "What is this thing called Science?" by A.F. Chalmers (University of Queensland Press). I highly recommend it.

Tim Sprod


Gary asked:

I am 14 years old and I am reading Sophie's World and Plato to Nato and before I started I wrote a small book on my ideas of everything, religion, the world, life after death etc. Can you send me few more ideas to cover which you think I might be interested in?

There is an enormous range of philosophical issues you can cover. You might consider the nature of morality, personal identity, time and space, or whether man has free will, for instance. You could look at Thomas Nagel's book Mortal Questions and also his book What Does It All Mean?. All Nagel's papers are of general interest and raise more particular issues than those I mention. You could also look at a new book by Simon Blackburn called Think, which is introductory and easy to read.

Rachel Browne


Daniel asked:

Can you reflect over your own reflections?

Of course. Try it now. Reflect on what you had for breakfast today (maybe it was muesli). Now reflect on why you are reflecting on your breakfast (probably it was because I suggested it). You are reflecting on your reflection. Here's another example: reflect about the last time you made a choice (perhaps it was the choice to have a look at this web page). Now, reflect as to whether it was a good choice. Again, you are reflecting about your reflection.

In this latter example, however, your second-level reflection (your meta-cognition, to give it a more technical term) enables you to judge whether your first-level action (the choice) was done well or badly, and gives you a chance to do it better in the future. You can go to a higher level again, by reflecting on what makes a choice a good choice.

Meta-cognition is important because it enables us to improve our thinking. Philosophy often involves meta-cognition.

Tim Sprod


Serge asked:

Could you kindly give an account of the ontological argument, which is meant to be one of the attempts to prove God's existence?

There are various forms of the ontological argument, the most famous ones being by St Anselm and Rene Descartes. A slightly tongue-in-cheek summary of Anselm's version is to be found in Bluff your way in Philosophy (Jim Hankinson, Ravette Books):

It goes like this: think of something greater than which nothing can exist; but existence itself is a property that makes something better. So if this greatest thing (i.e. God) doesn't exist, there would be a yet greater thing imaginable, namely an existent God, having all the same properties as the other one with the added bonus of existence. But we can conceive this. So God must exist.

Katharine Hunt


Emily asked:

I have to do a report for my 8th grade science class. The question is to compare Aristotle's ideas on motion with the ideas of motion of current philosophers. What are the differences between the two, if any and who are the current philosophers who discuss this?

I can offer a little guidance. Aristotle had a number of ideas about motion that were accepted as true up until around the time of Galileo (the early 1600's). For example, Aristotle believed that when we throw a ball, some of the force we put on it remains with the ball to keep it moving, but gradually gets used up.

Galileo had a different explanation — one we now accept. The force only acts on the ball while we are in contact with it. Once it leaves our hand, it is the inertia of the ball (its resistance to a change in its speed and direction) which means it keeps going, and only another force (air resistance) will stop it.

I haven't given a full account of the differences between Aristotle's understanding and ours. You can look them up elsewhere. Nevertheless, there are several interesting things to say about this.

Galileo used a different way of working out motion from Aristotle. Aristotle observed and then thought. Galileo did these two, but then he tried experiments to test whether his thoughts were accurate. With this change, Galileo moved the study of motion from philosophy to science — indeed, some people will say that he invented modern science by doing so.

Consequently, it is no longer philosophers who generate ideas about motion. Now, it is scientists. So, I cannot answer your last question. (Philosophers discuss the importance of Galileo and how science works, but they don't discuss ideas of how motion works any more).

Another interesting thing, though, is this. Many people, when they think about motion, think about it in ways very similar to how Aristotle did. This is because Aristotle's way seems more in line with our experience. The "scientific" way is not so easy to understand. This is one of the reasons why learning the Laws of Motion can be so hard — we have to learn to think about the world in a different way.

Tim Sprod


May asked:

What is Descartes's argument to show that God is not a deceiver? In Meditation 4 Descartes describes the role of "understanding" and "will" in human judgements, in an argument to show that God cannot be blamed for our mistakes. But I'm having trouble understanding the two roles.

In his 4th Meditation, an analysis of the origin of error, Descartes first considers whether God could be the cause of errors directly. He rejects this, since God cannot be a deceiver: "For, first of all, I recognise it to be impossible that He should ever deceive me; for in all fraud and deception some imperfection is to be found, and although it may appear that the power of deception is a mark of subtilty or power, yet the desire to deceive without doubt testifies to malice or feebleness, and accordingly cannot be found in God. [since God is perfect]."

In other words: God might deceive us:

I. to spare us the truth, or
II. to manipulate us.

But if so:

III. why doesn't God, being all-powerful, simply adjust reality instead of the appearance? and
IV. that would mean God needed something from us; but being all-powerful, God wouldn't need anybody!

So either way, a perfect being can't be a deceiver (and therefore not the source of error).

Later, Descartes offers this argument, involving the faculties of understanding and will:

Fact: God gave me finite understanding and infinite will.

Suspicion: Error is caused by the discrepancy between my finite understanding and my infinite will.

The Argument in short:

V. God can't be blamed for giving me finite understanding.

VI. God can't be blamed for giving me infinite will.

VII. But error springs from the fact that "the scope of the will is wider than that of the intellect". Therefore, again, God can't be blamed for error; it's my fault.

The Argument in detail:

VIII. Our reason cannot be faulted since, first of all, the ideas we do have cannot be considered formally false. And, secondly, concerning the ideas we do not have, this lack cannot be counted as a positive defect of the intellect (a flaw), but merely as a characterisation of its finitude (the fact that my printer cannot also create the documents that it prints out would not normally be considered a flaw in the device, merely a limitation of it). I cannot complain of God that He did not give me a greater faculty of understanding (could the printer complain about not being the writer?). Therefore, the intellect has no inherent defect, but it does have limits.

IX. The faculty of the will itself does not produce error since the will is a perfect faculty (I can will anything) — indeed, my will is as perfect as God's (God's is only greater in terms of power, knowledge, and the objects He can affect).

X. As Descartes can find no reason to hold either of these faculties individually responsible for error, he concludes "Whence then come my errors? They come from the sole fact that since the will is much wider in its range and compass than the understanding, I do not restrain it within the same bounds, but extend it also to things which I do not understand: and as the will is of itself indifferent to these, it easily falls into error and sin, and chooses the evil for the good, or the false for the true."

Taking all this into consideration, error seems actually to be caused by the discrepancy between my finite understanding and my infinite will. Convincing?

Simone Klein


Geoffrey asked:

Could you recommend a good book on Medieval philosophy?

There are so many books on the subject of Medieval philosophy, but The History of Christian Philosophy in the Middle Ages by Etienne Gilson has the best authority. Few writers in the area have Gilson's philosophical weight. Also, he can really communicate, like a great teacher. Gilson is really inside what he is writing about, but fluently speaks the language of the outside (the rest of us). This is a philosophical history too, not just a chronology in which events, whether mental or physical, are sequenced according to the speculations, become conventions, of 'cause and effect'. As a Christian philosophical history it presupposes (without the vulgarity of 'mentioning it') the hand of providence subtly at work in the freedom of the will and operating through the life of the mind to reach the tongues, hands and feet of the people.

Matthew Del Nevo


Jen asked:

I am really stuck on "Moral truths exist independently of us." I get the bit about Plato etc. It's the next bit that gets me, since we haven't really got many notes on it:

"Discuss using examples and illustrate your view."

What does it want me to do?

Think. That's what it wants you to do. The question asks for examples. Think whether there has been any time in your life, or anything that has come within your experience, that raised the question whether right-and-wrong is something real, independent of the moral attitudes of this or that person, or group of people. That would be an example which illustrated the question 'whether moral truths exist independently of us'.

It won't be in your notes, because your teacher hasn't lived your life, hasn't experienced the things that you've experienced.

Let me give just two examples, to start you off.

On this page, there is a discussion of abortion. Those people who are against a law allowing abortion in cases where a pregnancy is unwanted are passionately against it, and think that abortion is a great evil. Those people who are in favour of 'a woman's right to choose' believe just as strongly that it is the anti-abortionists who are in the wrong. Is there a right answer to this question, in reality? How would we know? And how can we discover what that 'right answer' is?

Here's another example, which might seem to point in a different direction. Not so very long ago, slavery was thought to be morally acceptable. Nowadays, most of the people you are likely to meet would say that slavery is morally unacceptable. How did this change of view come about? Is it just an example of different groups of people holding different views? Or is it a case of something that really is wrong, although people at first did not accept that it was wrong, and only later came to see the error of their ways?

Over to you.

Geoffrey Klempner


Sam asked:

I would like to know what the moral and philosophical view on abortion is.

There is no "the" view. There are as many views on this as there are people to have them, almost. This is, contrary to the seeming simplicity of the question, an enormously complicated issue, and not just from the social/cultural implications. Just what is "abortion", anyway? It has something to do with stopping the development of a human fetus. Does it necessarily imply that the fetus is killed? What is a human fetus? Is it the fertilized egg? The first two cells resulting from the initial division of that egg? A three-month embryo? Six-month?

What are the ethical principles we should use to evaluate this issue, once we've decided what the issue is? We certainly can't use religious ones, because we have no way of deciding which religious ones to use... that would depend on some religion being "the" correct one, and which would that be? So we have to use something else. Ok, what? Human life is sacred? Great... and just what does that mean, exactly? Human babies are sacred? Ok... is a fetus a baby? But we don't even know yet what a fetus is, do we. How sacred? Enough to kill the mother for, if it came to that? Enough to continue the pregnancy even though we can tell that the baby, once born, will die soon after?

I haven't even touched the social issues, etc., and probably even the minimal set of issues I've raised above will anger many people. So how do you even approach this issue, much less "solve" it? Let me demonstrate the difficulty.

Let us take my usual vague ethical criteria: the enhancement of life. That's what I'll use to evaluate this issue, and I'll just assume we know what it means, even though we actually don't, with any clarity. But I'm going to use it for simplicity. Let us say, in addition, that life has value. We don't know how much (or even, really, what "how much" means), and how it depends on circumstances, etc., but human life, I'll assume, virtually always has positive value. So, the first issue: is a baby human? Well, we pretty much have to answer that positively, since we grant that children are human, and there's no real boundary between a "baby" and a "child". So a baby is valuable. Next issue: is a fetus a baby? Well, we've come to bomb number one, haven't we. How about a fetus just literally in the act of being born? What distinguishes that fetus from a baby? Well, it's still attached to the mother by an umbilical cord, and it hasn't started breathing yet. Now, are we to use these as criteria for distinguishing a "human" from a "not-human", so that different ethical criteria apply? What arguments would support this distinction?

There are two possible arguments I can see that might have any validity. One is that the more "potential" or likelihood a fetus has of surviving, the more we should consider it human. That is, given that there is risk in going through the birth canal, in removing the umbilical cord, and in starting to breathe, an unborn (but in the act of being born) baby has somewhat less potential or likelihood of surviving to adulthood than a baby who has already been born. There's no doubt that this is true, but should it count as an ethical point? The second possible criterion for being human that seems at all reasonable to me is the possession of a functioning human central nervous system (CNS). But this is an iffy one also, because we don't really have a complete functioning nervous system, according to the latest studies, until we're in our late teens/early twenties. That's how long it takes for the prefrontal lobes to mature physically (never mind learning how to use those faculties). Even earlier studies put a fully functional physical CNS no earlier than puberty. So by that criterion, pre-teens (and now, teenagers) are not fully human. Well, it's getting sticky, isn't it?

But first, what other argument could there be? The act of giving birth can't be used, or anyone born through a Caesarean wouldn't be human, right? How about independence? A baby after being born is independent of the mother? Really? That's absurd, as a moment's thought will show. The type of independence? A baby is no longer nourished by blood? Well, that's a bit weird, but true enough... but most babies are nourished by another bodily fluid, i.e., the mother's milk. It seems a bit much to use the type of bodily fluid ingested to distinguish human beings from non-human. Breathing? A baby breathes, and a fetus doesn't? Yes, true... so we're going to distinguish human from not-human on that basis? But what if, in the future, technology improves to the point where a child could develop and grow without breathing, using highly oxygenated blood or some other fluid? Would they not be human, and thus have value? We could ask the same about any continuing physical connection to the mother, or some technological substitute. After all, there are people on kidney machines, artificial hearts, artificial lungs, and so forth. So that doesn't seem to work as a criterion. And I've run out of alternatives. Maybe you can think of something, but I'm going to move on.

So what I'm going to try to do, then, is more-or-less wave my arms around and say that I'm combining the two criteria above, just to give myself some ground to stand on. The better developed the CNS and the more likely to survive to adulthood, the more "human" a fetus is. Notice that I'm only putting the likelihood of survival in terms of a fetus, its own development, the mother's womb, etc... not in terms of wars going on around it, etc. Ok, so that means, that given some likelihood of survival, and some CNS development, we have to some extent a human being, with some "degree of" value. The CNS starts developing at week 3, roughly speaking, and becomes progressively more differentiated and developed. An embryo is termed a "fetus" at about week 8: two months, and that's really when there begins to be real coordinated neural activity, although it's still primitive stuff. Meanwhile, there are all sorts of things that can go wrong up to (and later than) then, and many spontaneous abortions and losses of embryos and fetuses.

So what do we have? Well, given all the arguments above, before week three we've got at most half of the criteria for a human being; that is, we have some potential for survival of the embryo, but no CNS. Up to about week 8, it's still pretty weak. Not really a CNS, just neurons developing, and a better chance of survival. After week 8, we've got something resembling a very undeveloped baby. But there is no time when we can say that we have absolutely no criteria for some degree of humanness, given the above arguments. But does that mean aborting a 2-week embryo is immoral? After all, we've legalized killing of adults, haven't we, in appropriate circumstances: war, self-defense, murderous killers, and a few others. We (most of us, let us say, for simplicity) regard killing in those circumstances as moral, so why can't we regard killing an embryo or a fetus, under appropriate circumstances, as moral? All right... what circumstances? Risking the life of the mother sounds reasonable to start the discussion, but then we have to ask questions like, how much risk?

Let's take another scenario: right now, with present technology, we can take an egg from a fertile woman, artificially fertilize it, implant it into a host mother, and raise a baby. But if that's the case, don't all eggs of all women have some potential for becoming adults? Yes, they do; however impractical that is for some woman in, say, a village in China vs., say, a rich woman in the States, it is not impossible. So if we're taking potential as an absolute criterion, then we have a moral obligation to extract all the eggs of all fertile women, fertilize them, and raise all those babies. ...no? Why not?

You see the problems here? If we're taking any potential as an absolute determiner of humanness, we're stuck with setting up a huge host-mother program, extracting the eggs of all fertile women, etc. But no one will seriously hold that position. So we can't take potential as an absolute criterion. But then we've got real problems. Just how much potential qualifies? We can get the CNS part pretty well down to a reasonable time period (which assumes we're actually going to use that as a criterion, remember)... but "potential"? And this argument eliminates, as far as I can tell, the position that claims that a fertilized egg alone is enough to qualify as a human being, because any egg can now be fertilized. There isn't anything special, now, about that one sperm happening to hit that one egg... we can take them all out, dump sperm all over them, and fertilize them all, if we want to badly enough (and we're rich enough to afford it).

Um... have I mentioned the mother's desires and circumstances here? The father's? How do they weigh in this?

Do I need to go on? Does this demonstrate the difficulties of the issues? Where does that leave us? Certainly not with "the" philosophical position on abortion, I can tell you. My own position, for what it's worth, is that one should take the above as a beginning and fumble through as best one can.

Steven Ravett Brown


Fille asked:

How can we understand that we have fallen in love?

If we think of philosophical investigations as a learning activity then I think most people would agree that it is probably a bad thing if the activity kills the pupil.

When the subject of enquiry is one such as this, we have to consider how to proceed safely for both the subject and the owner of the question. Philosophical thinking would seem to be the most harmless activity in the world, yet it can be quite harmful in two distinct ways. The first is its tendency to pare away at an object until it has been reduced to nothing, or to something quite different from its original form. Secondly, our thinking can be taken in completely the opposite direction and lead us to over-simple generalisations that leave no scope for tolerance, compromise, revision of ideas or moderate action.

So, given these safety warnings, how should we advise someone to undertake the philosophical analysis of the concept of love? We might begin by asking a question that could stop us in our tracks, which is: given the criteria we often attach to knowledge and understanding, is this a question that is possible to answer even in principle? The question "when do we know that we are in love?" seems clear and simple, but if we restrict our thinking about knowledge of love to a classical philosophical approach to the subject of knowledge, then we shall be forced to demand that only those objects capable of carrying truth values, and in particular only the value 'true', can be objects of knowledge.

A lover-philosopher of this sort would say that they could only know that they have love when they are certain that they have true love. The Platonist lover-philosopher restricts their thinking even more, and demands of themselves that the object of love is not simply a true love but a necessarily true love. They would follow this line of thinking as if they were a very unsubtle automaton, and force on themselves the view that nothing 'in-the-world' can be necessarily true, and that the only things they can love are abstract entities that have truth by virtue of their internal form, independently of any material or contingent fact.

While I'm not sure that the Platonists intended to characterise the fault-blind stage of interpersonal love in their theory of knowledge, it does seem to approximate one phase of the phenomenon. Love on their terms would be of an idealised individual whose material form represented a sort of matchmaker's profile that actual individuals fit to a greater or lesser degree. One problem for an individual who tries to fit the person they love into such an abstract, unchanging framework is that real people are very material and guaranteed to change. The holder of such a view, then, is almost certain to experience catastrophic dissatisfaction with love sooner or later, and probably to induce dissatisfaction in their partner sooner rather than later. In terms of non-destructive testing of the idea under consideration, the approach taken so far to analysing the problem of knowledge of love has done no more than introduce into the discussion a truth-based philosophical concept of knowledge which, when applied to love, yields a model of some aspect of the experience. The game being played is one of: what if love were like this, or suppose love is thought of in this way.

Although we should be protective of the ideas, and the owners of ideas, undergoing philosophical surgery, we can and probably should be quite entertainingly vicious about the techniques or tools we use for such dissections. So we would be quite justified in raising doubts about the logical possibility of undertaking any analysis of love, in the same way that we can doubt the possibility of ever teaching an automaton to genuinely have humour until it also has a nervous system, language and cultural experience that differ very little from ours. The objection to analysis here is based on the idea of analysis as reduction, in which to analyse something is to identify the parts of which it consists, but at the same time to insist that those parts cannot belong to the same category of thing as the object of analysis. Under this theory the analysis of a joke would take the constituents of the joke apart, but the parts could not be jokes themselves. For a good example of this, look at the closing jokes in the British sitcom, The Vicar of Dibley.

Similarly, the analysis of love must be explained, under this theory of analysis, in terms of non-love constituents.

Therein lies an interesting paradox. If a good analysis implies that everything that is true of the object of analysis must also be true of the parts into which it is analysed, then how can what is true of 'A' also be true of non-A, if 'A' and non-A cannot be true at the same time? J. O. Urmson, an English philosopher, suggested an approach (Philosophical Analysis, OUP 1956) which he called 'same-level analysis', which had some similarities to the concept of 'explication' advocated by Rudolf Carnap (An Introduction to Symbolic Logic, Dover). On this approach, analysis can be understood in the imagery of the unfolding of a curled-up flower, so that the parts and interconnections that were wrapped up inside a sphere become visible and separated as the petals containing the parts lie out flat. The analysis of the parts of the flower does not then require equating them to objects that are non-flowers. Similarly, the analysis of love on this model requires only the unfolding of the concepts shielding its internal workings from our view, without the requirement to surgically remove them from the living organism.

Neil Buckland


Estella asked:

How does contemporary theory about the ultimate nature of reality compare with the theory of Democritus and the atomists?

You know, it's a minor pet peeve of mine: the idea that some people (I'm not saying you) have that the Greeks did it all, that "all philosophy is a footnote to Plato" or whatever, and that we're just sort of sorting it out or elaborating on it. No. There is basically no resemblance at all between Democritus' idea of "atoms" and the contemporary idea that "strings" or even "elementary particles" are fundamental to physical reality.

First, what you have to remember is that the ancient Greeks spoke and mostly thought in, yes, ancient Greek. What was the Greek word that Democritus used that could be translated "atom"? Here are some possibilities, all of which might translate as "atom" in some sense or context: athroisma, eidos, phusis, kataxaino, sphairion, suntribo, schema, and schematismos (as we would spell them in our Latin alphabet).

Democritus believed that people were made of very small globules of fire, and I believe that "sphairion" was probably the term he employed. Why fire? Well, the elements were, roughly, earth, air, fire and water. Humans, according to him, were fire. What was fire, for Democritus? Probably something vaguely like our conception of a glowing hot gas. So human "atoms" were little globular bits of hot gas, roughly speaking. But remember, they did not separate the "physical" and the "mental" as we do, nor make any number of other distinctions that we are not even conscious of, until they are brought rather forcibly to our attention. He did not, most emphatically, see, think, react to the same world as we do. A globule of "fire", for Democritus, had both of what we would now term physical and mental attributes, and there were no connotations of oxidation, chemical reactions, the nature of heat as motion, and so forth and so on. Therefore, does his conception resemble an atom in contemporary physics? An electron, etc.? Well, we can make nice metaphors if we want, but otherwise, no.

The Greeks (some few of them) were, in the West, the pioneers of rational thought. They spun off some brilliant ideas in attempts to go beyond the religious and cultural "explanations" of the time for various phenomena. Those explanations consisted mostly of stories, myths, parables, mostly verbal, intended to instruct people in what their culture wanted them to learn about the world, about morality, about how to act in various situations. A remarkable set of efforts, and it is amazing that more of them weren't executed by their societies, like Socrates. But it's one thing to try, however valiantly, to institute reasoned explanations as a reaction to learning sets of precepts and stories, and another thing entirely to get those explanations right.

Steven Ravett Brown


Catrina asked:

Do you ever get the feeling that we are moving with ever increasing speed towards self-destruction?

Do you think that humanity, in its lust for power, is sealing its own fate?

Are we all brain-washed by advertising? Do you ever get sick and tired of being swamped by advertising?

My answer to every question is yes, apart from "Are we all brain-washed by advertising?", to which I can say no, I am not; but I see your point, many people are. The reason? Probably the use of professional psychologists, who now go into advertising as a career and find it easy to pull the wool over the eyes of most of the populace. To stand up to these people you now need to be a philosopher or a logician. It is a pity that young people are not taught logic in school at an early age. When I tried, along with others, to establish this we found little support; as we expected in a capitalist society, too many 'thinkers' could be detrimental to profits.

More and more people are beginning to recognise the doom and gloom scenario that you portray, which is a good sign that there is still some common sense and moral awareness left in the world. Protests against global capitalism are now on the increase. In my opinion, and I believe I am supported by many others, the great threat to this world has always been capitalism, rather than socialism. So far as I know, socialism, in its true concept of rule by the people for the people, has never been established anywhere. Police states and oppressive dictatorships have masqueraded as socialist states, and provided the excuse for capitalists to identify them as such and to condemn them.

One Prime Minister of this country was famously allowed to say, without challenge, that socialism had been brought to an end in Eastern Europe by the demolishing of the Berlin Wall. The same Prime Minister said that there was no such thing as society, which led to the "never mind you Jack, I'm alright" society which is prevalent today. Following this, unrestricted capitalism was let loose upon us in this country: the national assets of water, energy, railways, public transport and telephones were virtually given away to profit-motivated organisations. Prices rose, profits soared, and members of society ceased to exist; they became known as 'consumers.' To make use of consumers and exploit them to the full, you have to constantly let them know what you have for sale, and keep updating what is now loosely known as a 'product.' This ethos has now rubbed off onto all kinds of profit-inspired business. This brings us full circle to the answer to your questions on advertising.

The unfortunate thing about all this is that capitalism has been equated with democracy; the public are conditioned to refer to Western capitalism as Western democracy. There is a rather weird idea abroad that capitalist societies are free societies; I harbour the idea that you can be free in these societies if you can afford it. But there again, I'm old fashioned, I belong to another generation. Banks, financial institutions, insurance companies and big business all seem to have loads of freedom; perhaps this is what is meant.

Your questions have to be modified a little. "...we are moving...towards self destruction" would be better posed as "...we are being moved towards...etc." Secondly, "...humanity, in its lust for power, is sealing its own fate" would be more acceptable as "Certain elements, in their lust for power, are sealing the fate of humanity." These 'certain elements' want us to believe that what is happening is the universal fault of all humanity; it relieves them of blame.

As governments of all political persuasions condone this state of affairs, it is difficult to predict the outcome. The worst scenario would be global anarchy, but perhaps nature will intervene before then. The button for self-destruction, as you call it, has already been pressed: global warming is no longer a myth but a reality, the rise in sea level is evident, and millions of people are starving in droughts that seem to have no end in sight. Capitalist greed has been let loose on the world, and who is powerful enough to stop it?

John Brandon

Regarding your third question, I am not sure that I can answer the question you actually asked, but I can offer you an answer that may provide you with a 'defence against the dark arts'. You might find it interesting to read an article in The Guardian Unlimited (the online version of a UK paper) by Madeleine Bunting entitled 'Slaves of our desires', in which she discusses evidence of the disturbing, systematic and deliberate use of psychoanalytic thinking at corporate and later state level, originating with Edward Bernays, a nephew of Sigmund Freud, to promote a concept of self and a view of human nature that could be manipulated. In particular, the view, almost certainly not true, that humans are nothing but a bundle of irrational, emotional responses and desires, often contradictory and often infantile. She argues that clever market research enabled corporations, and later politics, to understand and respond to those emotions and desires so as to manipulate them, based on Freud's view that 'democracy is impossible because people are irrational and ignorant'.

Bunting argues that there is evidence that the strategy of social engineering in conjunction with capitalism is so powerfully adaptive that even counter-culture movements such as those of the 60s and 70s were transformed into virtues of the cultures they opposed. The prisoners became the guards and imprisoned their own children. Freedom-loving hippies wanted their true selves, so insurance companies presented their financial goods as though they would provide exactly those ends. So how can you defend yourself against the dark arts? (Cf. Harry Potter and the Philosopher's Stone.)

The logical analysis of advertising and political spin is difficult partly because logic is like the T-Rex's food object in Jurassic Park: if it doesn't move, it ain't there, so you can't eat it. In logic's case, if it's specific types of imagery, then it ain't there, so you can't criticise it. So how do we make the images of advertising and spin arouse our critical hunting instinct? If we take one example from UK TV advertising, you may begin to see how you can create a logical potion powerful enough to counter the bewitching charms of the media.

If we consider that TV adverts contain two types of strategy, then we can characterise them as property-affirming or perception-denying, or a combination of the two. Property-affirming advert elements simply list the properties of a product without touching on the effects that ownership could have on the individual. More sophisticated versions of this type also list the properties of 'toy' rivals, compare the benefits of the two products and declare theirs the winner. These types are often given credibility by actors playing the role of experts or scientists (men with glasses, white coats, a serious expression and tone of voice). Perception-denying adverts concentrate more on the effect of ownership on the individual. The example I want to look at is mostly of the second kind. There is an alcoholic drink advertised in the UK, Guinness, which was not being bought for reasons partly to do with its properties but more to do with specific absolute perceptions of the drink, and relative perceptions of the drink compared to new drinks such as lagers. My knowledge of this subject, by the way, stems more from observation and inquisitiveness than from professional or personal experience in either the advertising or drinking fields.

If we characterise the perceived dissatisfactions, i.e. those features the drink is perceived to have but which we do not want, we could list them as follows:

  1. It was drunk by safe, cautious, unadventurous, unattractive, elderly people in dingy run down pubs (UK drinking houses).
  2. It was boring, conventional, unexciting, and serious.
  3. It took a long time to pour. (One of my sons tells me that some pubs will pour the drink by 'phone order half an hour before you go to the pub. This is now considered a quirky good thing.)
  4. It was messy to drink. It has a large, white, frothy head that sticks to your lips and dribbles down your chin.
  5. It was a drink of the poor and lonely.
  6. Drinkers of Guinness were quiet, isolated, unapproved by the majority and especially the young.
  7. The taste was sour and metallic.
  8. It was a form of medication. (Iron for the anaemic.)
  9. It was drunk in very small measures.

Guinness advertisers have countered all these perceptions or truths with at least two particularly successful TV advertising campaigns. The first featured a young, male, different-looking, individualistic, idiosyncratic, understatedly uncaring (of approval) comic dancer who hesitatingly but deliberately delayed his approach to an already-poured drink, and deliberately delayed the slaking of his thirst.

The imagery was so successful because it created and showed controlled frustration leading to positive satisfaction, through the medium of understated visual and musical humour (the accompanying sound track). Humour is a particularly good medium for delivering adverts and spin, because it is part of the logic of humour that it is essentially unanalysable: analyses of jokes are not themselves jokes, and we think of that which is self-sustainingly true as necessarily true. (There is a famous argument concerning the 'analytic-synthetic' debate and the attack on the notion of analytic truth by the mighty Quine, a noted US logician.) So humour is the analogue of unanalysable truth and may not switch on the critical brain. The packets of information being carried in the humour parcel may well slip into your mental mailbox without you even noticing. It is only possibly later that you might begin to realise that there is critical food in front of you, and begin to question what has now become visible, particularly when the humour begins to fade through over-frequent exposure. Though by now the drink and its alleged effects may have taken on mythical proportions for the enchanted.

What dark magic did the advert work on you? First it did not attempt to send a double negative message of the form:

'Some people say that Guinness is bad because a, b, c, ..., n; but in fact, a is not bad because p, b is not bad because q, ...etc.'

The advert restated the denial of the dissatisfactions in positive property form: the denial of dissatisfaction (a) = x, the denial of dissatisfaction (b) = y, ...etc. It then built all of these counter-properties into the image of the individual personifying the ideal Guinness drinker, and in some small measure into the drink as well. So the advert asserts that:

G (the drink) and G (drinkers) have (x, y, ..., z) characteristics.

In particular, it showed or demonstrated, rather than stated, the argument that, 'if you drink Guinness then you will create the following perceptions in your observers, including yourself as self-observer: you are amusing, idiosyncratic, strongly independent, admirably eccentric, unconcerned, easy going but discerning.'

We could summarise this advert as containing the following elements:

  1. An image consisting of positive versions of counters to specific dissatisfactions,
  2. The meta-message that having 'strong frustrations that will eventually be satisfied' is a good thing, i.e. something you ought to want and can get by having ownership of Guinness. Guinness, in other words, has promissory value for the frustrations and dissatisfactions identified.
  3. Protagonist-viewer identification. For the target audience, and possibly beyond, the message carrier is likeable and like the message receivers.

Following Bernays' principle, the advertisers have focussed on the general cognitive effects of frustration and satisfaction in the inference pattern:

  1. The frustrated drinker becomes the satisfied drinker,
  [which leads to]
  2. the frustrated person becomes the satisfied person, and
  3. any frustrated person can become a satisfied person,
  [following from]
  4. a frustrated person becoming a frustrated drinker.

In short, a general or specific sense of frustration can be satisfied by drinking Guinness specifically.

The advertisement does not need to create a sense of frustration or dissatisfaction in the viewer, but only needs to show that when it exists it can be changed to satisfaction by ownership of the product, given that the first cognitive module of the thinking process is concerned with motivational arousal determined by the perception of frustration or dissatisfaction. (See Luria, The Working Brain, ch. 'Thinking', Penguin.) The brain has been charmed into letting images into its thought production mechanism which carry highly charged frustration-satisfaction images. You might consider this process to be brainwashing.

This advertising campaign was followed by an even more visually striking one that could be considered to have some of the features of a work of photographic art, independent of its dark purpose, and therefore a source of self-sustaining and unanalysable 'truth', akin to the 'do not attempt to analyse this' prohibition warning attached to humour and to performative utterances in general, like promises. (See Austin, How to Do Things with Words, Oxford.)

The dissatisfaction features it worked hard at countering were the sources the previous campaign did not touch: the drink's medicinal associations, its unpleasant metallic taste, and the small amounts drunk at one time. This advert featured an individualistic, eccentric, idiosyncratic surfer who, against the odds, rode and tamed a towering white, foaming wave that transformed itself into white, foaming horses (probably mares, since they were tamed by males (not my view!!)), to the acclamation of his surfing friends. The salty taste of the sea horse would be a token of the trial and triumph (cf. the labours of Hercules) and the power of the victor over the wild, dangerous challenge from nature. (There may even be a sexual Freudian subtext going on here.)

The towering, foaming sea became a large glass of foaming Guinness, and the challenge was to drink large amounts of it. The advertisers have again slipped past the sleeping analytical dragon without arousing it, by wrapping their message in the invisibility cloak of artistic pleasure. This, like humour, allows the Trojan message to jump out into your thinking mechanism, resulting in your thinking that you want to be a hero among men, that when you are in a pub you can satisfy that goal, for yourself at least, by drinking a lot of Guinness, and that you will probably feel you are being applauded for it.

This advert works harder than the last one at creating a motivational sense of desire, via an exciting, compulsive, throbbing bass line together with a dour, insistent 'beat' poem voice-over carrying a counter-mechanistic message. I suspect it may also be countering, by making it a virtue, the throbbing headache you can get from the drink, or asserting the Zen-like quietism of the surfer that a drinker would need in the midst of the noise and babble of a noisy, aggressive pub.

What transfigurations and meta-messages has this potion charmed you with? All of the previous adverts plus:

  1. The re-enforcement of universalised dissatisfactions: the desire to stand out from the crowd and the desire to be a hero,
  2. Protagonist-viewer identification, and the tacit universal assertion that all life is competition, a producer of frustration and dissatisfaction, and a denier of satisfaction; you have to make a sustained effort to achieve satisfaction through adversity,
  3. Specific tacit assertions: effort makes you thirsty,
  4. The means for current goal satisfaction: having Guinness will deny all of these non-satisfactions and lead to immediate satisfaction,
  5. The means for repeatable goal satisfaction: defeating the taste will give the drinker a universal token of triumph as a universal (every time) satisfaction.

I have yet to see how they deal with the exorbitant price dissatisfaction associated with the product.

In summary, then, I am suggesting that the dark arts of the advertisers, spin-doctors and financial advisers can be countered by conducting an inner dialogue in which you construct first-person observation sentences of the form:

  1. "The images that I am now seeing have the following characteristics..."
  2. And asking yourself the questions:
    (i) "These images are countering what product dissatisfactions?"
    (ii) "These images are countering what personal dissatisfactions?"
    (iii) "These images are asserting what tacit universals?"
    (iv) "What goals are these images asserting I should want to satisfy?"
  3. Finally, identify the agents of promissory value by asking yourself, "What product do they want me to buy in order to achieve these goals and deny these non-satisfactions?"

Formally we can think of these adverts, though not all adverts, as having the following logical structure:

  1. assertion of frustration and dissatisfaction examples; characteristics: initial conditions, premises, problem identification, motivational arousal,
  2. assertion of achievable goal; characteristics: terminal condition, conclusion, desired end, satisfaction,
  3. assertion of plan, schema and tactics; characteristics: agent of change for the better, transformational agent, product ownership.

Another defense would be to switch the TV off, but you can also turn to your advantage the frustrations and dissatisfactions that adverts induce in you, by treating them as a very expensively produced resource for philosophical analysis from which you can develop and strengthen your own analytical potions.


The Guardian Unlimited
The Working Brain A.R. Luria
Word and Object; 'Two Dogmas of Empiricism' W.V. Quine
Contemporary Readings in Logical Theory Copi & Gould
Philosophy of Logics Susan Haack
How to Do Things with Words J.L. Austin: Oxford
The Zen Doctrine of No Mind D.T. Suzuki
Harry Potter and the Philosopher's Stone J.K. Rowling

Neil Buckland


Eric asked:

I'm inclined to believe that logical languages are definitionally true. Yet, I still perplex. Some think that logic is the analysis of propositions and their implications, and others think that logic is a study concerned with things and their relations, and still others think logic is a cognitive model employed to increase consistency within the system of knowledge per se. I'm not sure if this is much of a difference. Meaning that, although the thing and the symbol representing the thing are different, we only speak of one and the same. My question may or may not have an absolute answer, but I am just searching for one more well-developed than my own — I am only an undergraduate philosophy student.

My questions are: What are desirable attributes of logical languages? i.e. provability power, consistency, strong or weak entailment rules, few assumptions and/ or definitions, etc.

Also, If no logical system can prove or speak of itself and meta-languages are needed to do so, are the relationships between things or propositions (or both) representative of the nature of reality or is all that can be hoped for definitional truths like '2+2=4' or 'All men are mortal'?

I'll say at the outset: this is not really my area of expertise, but I am interested in some small and specialized aspects of the field of logic and analytic philosophy, so I'll say a bit on this question.

Now, as far as logic goes, there are people who hold a) that logic involves only propositions and their implications, some who hold that b) logic is concerned with things and their relations, and others who maintain that c) logic is a cognitively-based and derived model. The first two may not be mutually exclusive, depending on your metaphysics. If you believe that the world is ultimately composed of some sort of "atoms", in a very general sense of that term, and that they interact and interrelate through rules or laws, then you can hold both a) and b) above. In order to hold all three, however, it is necessary that one believe, in addition, that the mind is describable as (indeed, equivalent to) a Turing machine (to put it rather baldly). These are extremely contentious issues, and there are and have been enormous debates around all of them. My own tendency is decidedly nontraditional; I think that the world is not (ultimately) composed of atoms, and in addition I do not think that the mind is Turing-equivalent. But I am nonetheless an empirically-oriented materialist. A nasty position to hold, I'll tell you.

I'm not even sure what you mean by "logical language". According to people like, say, Chomsky and even Pinker, and those of similar bent, all languages are "logical" in the sense that they all consist of a finite set of elements operated on by a finite set of rules. I do not agree with this interpretation, but that puts me in a distinct minority. So what "desirable" means is up for grabs, depending on what you want out of a language. You want ease of translation? Ease of computability? Ease of metaphorical expression? Ease of application to some particular area, like mathematics or logic? All those requirements might impose different restrictions, and result in different languages.

Are the relationships between things and propositions representative of the nature of reality? I'm tempted to be flippant... what do I or anyone know about the nature of reality and its relation to language? That question is one that people have been contemplating for... oh... 3000—5000 years, I'd say offhand. At the least. It's great that you're thinking about all this, but you've just jumped, as far as I can tell, head first into an issue that has been around forever, and has been hotly debated for about the last 200—300 years.

If you really want to even begin to understand how to address these questions, much less actually address them, you need to read, read, read. And read. I mean, I don't want to discourage original thought... but, you know, it's hard to have an original thought (i.e., one that someone else hasn't had) in this area. Why not take advantage of others' thinking? After all, that's what you'd want, isn't it?

I am not going to suggest a reading list. Any reading list, any set of intro courses, especially analytically oriented, will get you into this area. If you really want hard-core stuff, look at Alonzo Church, but he's not easy; Turing and Kleene, among others, were his pupils. If you specifically want language/ thought relationships... well, I'd really recommend starting with Kant before you get into modern analytics. Then there are the empirically-oriented fields of linguistics, psycholinguistics and cognition, not to mention various aspects of computer programming. Have fun.

Steven Ravett Brown


Katy asked:

What significance does "meaning" have in postmodern philosophy? How does it differ from other branches of philosophy?

Post-modernism describes a wide range of philosophy, but as far as meaning is concerned the post-modernist takes a stance against logic, realism and truth as correspondence to reality. Meaning is created, made, rather than about something beyond the word. So a theory of reference such as that proposed by Kripke, or the account of sense and reference as put forward by Frege, are rejected as theories of meaning. Traditional theorists presuppose that there are determinate states of affairs and things out there in the world which we talk about. The postmodernist, on the other hand, thinks that we have a creative and changing language and that this is not determined by the way things are out there in a world that is independent of us. Postmodernists are against theories, really, so they cannot be held to have a "theory" of meaning. Meanings shift and slide away, they suggest, they are supple and they are supplementative. They are indeterminate. This seems to be anti-meaning, but meanings do suggest and imply beyond accepted senses.

Post-modernists do not agree, because they have no theory to agree upon, nor is there a particular post-modernist stance. Post-modernists are into irony and interpretation and vary in degrees of weirdness. But Richard Rorty, who is relatively readable, says "We need to make a distinction between the claim that the world is out there and the claim that truth is out there. To say that the world is out there, that it is not our creation, is to say, with common sense, that most things in space and time are the effects of causes which do not include human mental states. To say that truth is not out there is simply to say that where there are no sentences there is no truth, that sentences are elements of human languages and that human languages are human creations." (Contingency, Irony and Solidarity). That is acceptable to non-post-modernists, and Rorty has said (Ironists and Metaphysicians in The Fontana Post-modernism Reader) that post-modernism is compatible with nominalism, which is an anti-realist theory of meaning acceptable to analytical philosophers. It is also compatible with historicism. Both theories reject truth and knowledge as elements in meaning, since we create our own language. Both theories are compatible with the characterisation of meaning as part of a language game, allowing that meanings can change over time. In both cases the meaning of a term or proposition is determined by reference to our conceptual scheme; terms are translatable and meanings are elucidated by other concepts, rather than by states of affairs in the world.

This characterisation of postmodernism, however, allows theory back in. Nominalism and historicism are theoretical constructs, and the very strong postmodernist, such as Derrida, would find such theories of meaning unacceptable. A strong post-modernist moves away from the concept of meaning as that which can be reduced to theory. Although nominalism and conceptualism, and the whole idea that we make meaning, indicate that meanings are not determinate because they can change, the stronger postmodernist thinks that writing can introduce new words and ways of thinking. Words can be created and meaning is creative, and so we have writers like Heidegger, Derrida and Levinas who introduce new words, the meaning of which is supposedly irreducible to and non-translatable into determinate concepts.

However, that newly introduced words and concepts define new ways of thinking isn't any refutation of realism. Nominalism isn't a refutation either. Newly introduced words and concepts can supplement realism because words are supple, they suggest and can be used metaphorically, but they can be taken as supplementary only insofar as realism about meaning exists. If all we had was postmodernism there would be no determinate meaning and we wouldn't have a grasp of basic meanings on the back of which postmodernism rides.

Realism and nominalism share in the idea of common understanding and given that we have an understanding of what words and concepts mean, we can trifle in postmodernism, introduce derivative words, veer in and out of interpretative suggestions as our fancy takes us. But we can only do this if we accept some form of conceptual determinacy. Derrida's position is that intentional content, the meaningfulness of words, the terms in which we think, isn't exhausted by theory of meaning.

Rachel Browne


Alex asked:

My question isn't really about philosophy, but about philosophers. Did A. J. Ayer and C. L. Stevenson know each other? Did they ever spend time working together?

Let me make a guess about where this question is coming from. In 1937, C.L. Stevenson wrote a paper called 'The Emotive Meaning of Ethical Terms' in which he argued that ethical assertions are not factual statements which can be true or false, but rather expressions of emotion. An expression of emotion does not 'say' anything. Rather, it shows that you have a certain attitude towards the thing that prompts the expression of emotion, which can be favourable or unfavourable, for or against.

In his book Language, Truth and Logic, which was first published in 1936, A.J. Ayer proposed a very similar, or identical account of the significance of ethical statements. Ayer argued that if ethical statements were factual, then according to the verification principle, they must be capable, in principle, of being verified by empirical investigation. But there is no way of empirically verifying ethical statements. Therefore, ethical statements are not factual, but merely expressions of emotion.

Given the closeness in time, it is a matter of speculation which philosopher thought of the emotive theory first. However, it wasn't necessary for them to have met or spent time working together, or even influenced one another. Ayer's book brought the 'logical positivism' of Rudolf Carnap (Der logische Aufbau der Welt 1928, later translated as The Logical Structure of the World) to the attention of British philosophers — I am not so sure about the USA — and made a big impact when it came out. The most likely explanation is that both philosophers were influenced by the prior writings of Carnap.

Geoffrey Klempner


Sid asked:

Could you please tell me where (in which museum/ library) the originals of Plato's and Aristotle's writings are kept? What I am interested in is to know the original text on which the current English translation is based.

If the available English translation of the writings of the ancient Greek philosophers are based on their Latin/ Arabic translation, then could you please tell me where those Arabic translations are held at present?

I was leaving this one for someone else, because I don't really know the answer. But since no one replied, I'll say that, as I recall, the place where the originals of Plato and Aristotle were kept, if there was such a collection, was the library at Alexandria, in the city founded by Alexander of Macedon; it was totally destroyed by fire over 1000 years ago. One of the most barbaric acts, and irreparable losses, in Western history. Thousands of original manuscripts from antiquity were destroyed in that fire. The only other place those manuscripts might possibly reside is the Vatican library, but I doubt that they do. The English translations of those writers, then, are based on both Greek and Latin revisions and translations, and the latter, I believe, are in the Vatican library (remember Thomas Aquinas). But there are, I'm sure, copies of them available pretty much anywhere (take a look, for example, at the Hippias site on the web for the Greek)... you might ask the British Museum, for example. I have no idea whatsoever about Arabic translations.

Steven Ravett Brown


Esmond asked:

I am a university student in Ghana.

I actually want to know in plain language the difference between an analytically true statement and a syntactically true statement.

An "analytically true statement" is a statement that is true solely because of the meanings of it terms and would be true whatever happens in the world. For example: The statement, all tadpoles are frogs is analytically true. That is because the term "tadpole" is English means "little frog." So what is really being said is that all little frogs are frogs, and that must, of course, be true. Again, all brothers are male persons, is analytically true, since (as you know) the term "brother" means "male sibling," and, therefore, the statement that all brothers are male persons means that all male siblings are male persons, and that, again must be true solely because of the meanings of the terms involved.

You also ask the difference between analytic truths and what you call "syntactic truths." I have never heard the term "syntactic truth," but I think that would mean a statement true just because of its syntax or grammar. I don't believe that, if we take that idea literally, there are any "syntactic truths."

I think, however, that you do not mean "syntactic truth," but rather "synthetic truth," a concept that is contrasted with the "analytic truth" which I discussed above.

Now, a synthetic truth is one whose truth (or falsity) is not dependent solely on the meanings of its terms. For instance, consider the statement that all dogs are meat-eaters. It is not a part of the meaning of the word "dog" that it is a meat-eater, so that is not an analytically true statement. A person who knew the meaning of the word "dog" but who knew nothing about dogs (the animal) could not know that it was true that all dogs eat meat. In order to know that, the person would have to have knowledge about dogs and their eating habits. In other words, to know whether or not dogs eat meat, it is not enough to discover the meaning of the word "dog." This is very different from the statement, "all dogs are animals." To know that, you have only to know what the word "dog" means. If you do not know all dogs are animals, then you cannot know the meaning of the word "dog." But you may not know that all dogs eat meat, and yet know the meaning of the word "dog."

Ken Stern

I think the questioner might mean 'syntactic', not 'synthetic'. A syntactic truth is a truth which is guaranteed by the syntax of the language alone. In other words, a syntactic truth is a logical truth. An example of a syntactic truth is, 'If it is windy and it is raining, then it is raining'.
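The idea that a syntactic (logical) truth holds under every assignment of truth values can be checked mechanically with a small truth table. The sketch below is an editorial illustration rather than part of the original answer; the function name `is_tautology` is my own.

```python
from itertools import product

def is_tautology(formula, num_vars):
    """Return True if `formula` (a function of boolean arguments)
    comes out true under every assignment of truth values."""
    return all(formula(*values)
               for values in product([True, False], repeat=num_vars))

# 'If it is windy and it is raining, then it is raining':
# (W and R) -> R, writing 'A -> B' as '(not A) or B'.
windy_raining = lambda w, r: (not (w and r)) or r
print(is_tautology(windy_raining, 2))  # True: a syntactic (logical) truth

# By contrast, 'If it is windy then it is raining' is not:
contingent = lambda w, r: (not w) or r
print(is_tautology(contingent, 2))     # False: its truth depends on the facts
```

The first formula is true however the weather turns out, which is exactly what makes it a truth of syntax alone rather than of meteorology.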

In order to know whether a statement which is not syntactically true is analytically true, you need to know certain facts about the language in question, namely what certain terms in the language mean. This is not a problem for an artificial language, where we simply stipulate that term A is to be interchangeable in all contexts with term B. The problem, as Quine argued in his famous paper, 'Two Dogmas of Empiricism' (1953, reprinted in From a Logical Point of View) is that when it comes to the language we actually use, the question whether or not two terms have the same meaning becomes quasi-empirical. Our intuitions about equivalences of meaning are not always correct. This led Quine famously to attack the idea of an 'analytic' truth.

Geoffrey Klempner


Jean asked:

"Empty is the argument of the philosopher which does not relieve any human suffering".

Would you please give the exact reference of this saying by Epicurus?

According to my copy of The Hellenistic Philosophers (A.A. Long & D.N. Sedley), the saying is quoted by the 3rd century AD philosopher Porphyry. The reference is given as follows:

To Marcella [Ad Marcellam], ed. A. Nauck, Opuscula selecta (Teubner, Leipzig, 1886), 31

Katharine Hunt


Carlos asked:

I'm finishing my graduation in Philosophy and I have an opportunity to teach philosophy to a group of teenager students with poor backgrounds who live precariously in "favelas" (a sort of shantytown dwelling place). I guess this course should emphasize vivid experiences rather than conceptualization. Anyway, I'm confused about how to approach this and I would like to have your suggestions.

There was (and I believe still is) a program involving teaching philosophy to prisoners, sponsored by the University of Chicago. In general, my advice to you for finding philosophy sources and directions in your situation would be something like that program. There are also societies which teach therapy based on philosophy: philosophical counseling. Try the ASPCP: American Society for Philosophical Counseling and Psychotherapy, at http://www.aspcp.org/. Also, American Philosophical Practitioners Association, at http://www.appa.edu/ and Café Philosophy, at http://www.philosophy-shop.com/cafeinfo.html.

Steven Ravett Brown


Peter asked:

How would you finish this sentence?

"To live is......"

When we answer the questions set in Pathways with only the web site as silent witness, we are in a similar position to that occupied by a questioner and responder in Alan Turing's famous test for determining machine intelligence. In this test, if the answers sent back to the questioner could not be distinguished from the answers a human would send, then we could conclude that the machine is responding as intelligently as a human would, so that we could not be sure whether the sender is a living human or a non-living machine.

In the game we are playing, we can't be sure whether the questioner or the answer provider is taking the position of the machine, or whether in fact there are only machines communicating. I am pretty certain that if I am a machine then I am indistinguishable from every other human machine. Furthermore, if I thought I was responding to a non-human machine, the kind of answer I would give would be different from the kind of answer I would give if I thought the receiver was human and I also had some idea of the context in which the question was being asked. I could, with no concern for the consequences of the discussion, pursue an analysis of the concept of 'being alive' if the analysis were only a word game. But if the philosopher-player believes there could be life changing consequences for the questioner-player, there should be constraints on the scope of the answer provided.

What follows from these considerations is that philosophical investigations that take on practical issues ought to work on a principle of non-indifference with respect to the learning and actions that may follow from the interchange.

Suppose that 'Peter' is a pseudonym for a woman in the UK who is currently seeking the right to die because she does not want to continue her life in a severely reduced and dependent form. She is alive but not independently alive. Should the correct philosophical Turing response be to neutrally elicit from the questioner what her understanding is of the phrase 'to be alive' as she was before her present condition, what it is now and what she thinks it will be? Given the questioner's position, would it also, for the sake of logical completeness, be the correct response to offer interpretations of the key phrase not included in her perspective, and persuade her that she might not choose to reject some of those meanings? For a person in this position the question we are thinking about is clearly very heavily weighted with both issues of fact and issues of value, so a logico-linguistic approach to the analysis of the question may provide only one route to unfolding the complexities of the issue, or even to changing minds.

If the questioner has full mental competence, as in fact the individual in question does, then Descartes might try to persuade her that she is no less alive in her present position than she was before, given the belief that the essence of being is thinking. If thinking, though, was not a source of satisfaction, either because it was not something she particularly excelled in or practised very often, or because it was not something that she would place in her list of preferences, then an approach that would complement the one previously mentioned would be to elicit from the questioner what the satisfactions and non-satisfactions of being alive are.

The Turing P-game has now altered so that it is not simply a matter of discrete question, response, evaluation, decision and closure, but more a continuum of interchange in which information, ideas, questions, learning and teaching are flowing in both directions. But if the elicitation of knowledge in the context of a philosophical investigation has the form of a directed interview, then the philosopher should have some idea about the directions the interview can be steered in and those it should not, in the context of a philosophical, and not a legal, medical or any other kind of, inquiry. The practical philosopher should be aware of what, in the history of ideas, has been considered a source of satisfaction for individuals in being alive, but they can also take an alternative, more general approach of considering individual satisfaction to be an indivisible part of dualistic satisfaction. Decisions relating to being alive or choosing not to be alive then become inseparable from how the satisfactions and non-satisfactions of others are affected, whether as classes of individuals such as relatives (husbands, wives, children or parents), others in similar positions now or in the future, or others in the abstract: 'the patient' in medicine, 'the defendant' in law, the social services 'client', the 'child' in family law, the therapy client.

In answering this question I felt that it was necessary first to talk about the logical delicacy of philosophical investigations conducted blind, and I suggested that the philosopher-player should work within the ground rules of non-indifference in such contexts.

Secondly I have suggested that in the context of the questioner-player making life changing decisions as a consequence of the game then the philosopher-player can use a variety of techniques for practical philosophy:

identifying a particular philosopher's approach to the question,

a logico-linguistic explication of questioner meaning (mind-mapping),

an explication of questioner-related truths and satisfactions,

question answering as a dual learning experience,

knowledge of the history of relevant ideas,

investigation of the question taking first a monadic view of individuals and secondly a dualistic view of individuals.

Finally, it also seems to me that, given the tendency of most individuals in the position of question producer, answer provider or both, to make at some time, due to stress or material or political circumstances, two kinds of cognitive mistake of scale, which can be characterised as mistakes of over-generalisation and mistakes of under-discrimination, all philosophical investigations should be conducted within the constraints of non-indifference to consequences.


Turing Test: Andrew Hodges Alan Turing London 1985
Monism & Pluralism: D.W. Hamlyn Metaphysics Cambridge 1984
Modern Epistemology: N. Everitt & A. Fisher

Neil Buckland


Kirsty asked:

Why must all things that live ultimately die?


Is humanity alone in the universe?

Two interesting questions, the answers to which are currently well beyond the scope of human knowledge. Before asking the question, Why must all things die?, it might be wise to ask, What is the purpose of life in the first place — why does anything live at all? What seems evident to every observer is that living things go through an inevitable sequence of birth, development, decline and death. In other words, birth is the start of the road to death.

Of course, philosophically, this is a materialistic view of life which is adopted by the majority of the world's human population without question. However, there are those who do not accept the finality of this naive observation. It is, of course, well known that followers of several religious factions believe that life does not end with the death of the material body. Some believe in a future material resurrection, some in a spiritual life hereafter, and some believe that we are reincarnated in a different body to the one we discard at death.

Some idealist concepts, which refute the argument for the existence of matter, find it easier to dispose of the concept of death, because the difficult transfer from matter to non-matter does not really apply. The argument is more complex than this, though, as even for an idealist the 'idea' of death is still a fact. There is also the notion that dualism provides possibilities for immortality.

The unfortunate situation with regard to death is that, although there have been claims for the proof of spiritual survival, most people are fairly certain that no one has been back from the 'other side' to tell us about it. We sometimes hear of those who are brought back from the brink; and, oddly enough, they all relate the same experience of a peaceful drift down a long tunnel towards a bright light, and some are very annoyed at having been dragged back. Of course, neurologists and psychologists do not accept that this indicates transfer to another form of existence; drifting towards a bright light is to them an indication of the last flickering electrical discharges of the dying brain. Until the real truth is revealed it seems that the answer to your question is confined to the simple scientific explanation, that all living things die to make room for the next generation. However, none of us are forced to accept it, and, in philosophy at least, the search for the truth goes on.

As for your second question, we are still confined to scientific speculation. Considering that there are one hundred thousand million stars in our galaxy alone, and that innumerable galaxies exist in the universe, it is highly probable that several of the planets orbiting these stars support life very similar, if not identical, to the life found on our own planet. However, the truth is, as yet, not forthcoming. You may be aware that physicists are now discussing parallel universes in which each of us will be represented. The search goes on and one day, like death and the afterlife, all may be revealed.

John Brandon

Well, look at it this way. Suppose you were immortal, in the sense that you didn't age. This will certainly come to pass for people, as we discover how our bodies work, anyway (assuming we don't destroy ourselves in the meantime). Ok... you cross the street (or whatever) and get killed by a passing truck. A new virus, a mutation that our bodies can't handle, kills you. Your spaceship is hit by a meteor. Someone shoots you. And so forth. In other words, in a universe which we do not control, the odds are finite that we will be killed. In an infinite length of time, any finite probability will happen. So we will be killed. Hey, even the stars die, eventually.
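The step from "the odds are finite" to "we will be killed" can be made vivid with a toy calculation. This is an editorial sketch with a made-up hazard rate, not a figure from the answer: if each year carries some small fixed probability of a fatal accident, the chance of surviving shrinks exponentially with time.

```python
# A non-ageing but accident-prone immortal: with a fixed yearly
# probability p of a fatal accident, the chance of surviving
# n years is (1 - p)**n, which tends towards 0 as n grows.
def survival_probability(p, years):
    return (1 - p) ** years

print(survival_probability(0.001, 100))     # roughly 0.905 after a century
print(survival_probability(0.001, 10_000))  # roughly 0.000045 after ten millennia
```

However small the yearly risk, over an unbounded stretch of time the survival probability approaches zero, which is the point of the argument.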

However, if you are asking why we are now mortal... our bodies are inefficient. One might argue that in the beginnings of evolution we may have acquired this in order to cut down population pressure. My own feeling is that the latter argument is not convincing, because single-celled animals are effectively immortal (they divide, in effect becoming their own offspring). So solving the ageing problem is solving a problem (a very hard set of problems, mind you) with our bodies. Unfortunately (that's my feeling anyway), we ourselves won't live to reap the benefits of that research... maybe in the next century. I don't see this as even a particularly difficult problem, especially given the computing power that will be available in the next few decades. Problems like how people can get along with each other are much harder, in my opinion.

Is humanity alone? No. Period. As Douglas Adams says, the universe is a really big place. The question is, why haven't we had contact? Well, you know, I actually don't think that's a hard question. First, how long have we been looking? 50 years, max? Second, how could aliens find us, out here on the edge of the galaxy, until we waved at them, which we've only been doing for about 100 years, a mere tick of the clock, using a method so primitive that even creatures with our minimal intelligence can employ it (i.e., radio)? Third, and most important, how have we been looking? We've been using telescopes, a ridiculous method, and lately, with SETI, some few radio frequencies. Not unreasonable, given that aliens are like us... but why should that be true?

Let's put this in perspective. Suppose that dogs were trying to signal another species of dog. How would they do it? By barking or howling, right? Would we notice that, as a signal of that type? Would we care? How much smarter, given some theoretical maximal potential for intelligence, are we than dogs? Infinitesimally, I would say. Our brains must fit, badly, inside our heads, folded up, in order to have expanded to the amount they have, which is about all our bodies will take, both in volume and metabolically.

Suppose we found out how to increase head size, or produce more efficient folding, or better, connect ourselves to our computers? Where would our intelligence go then? In the latter case, the practical limits would be... well, I certainly can't even begin to envision it. Now, given that we could be, let us conservatively say, 100 times more intelligent than now, how would we signal... what indeed would our picture of the universe be, our physics, our electronics? We are not now 100 times smarter than dogs. How would our physics compare with our present idea of physics? Etc. You see my point? To aliens, if they notice us at all, we are, until we can consciously increase our intelligence, merely another species of animal on this planet. So why should they want to contact us, any more than we would want to contact those dogs? And how would we notice or understand it, if they did, any more than dogs could conceive, build, and use a radio set?

That's one of my theories, I think the most likely one. The next most likely one is this: look at the way computers are going. What if we are able to "scan" our brains totally, convert our neural dynamics into another form, and embody ourselves within a computer (a very large one, of course, and a radically different type from present digital computers, but those are — for the purpose of this discussion — quibbles). I mean, totally move into the computer, as a dynamic pattern in it. And not just ourselves, but everyone and our civilization... living in a virtual world, having anything we want happen (virtually, of course, but we wouldn't see any difference), and being immortal (as long as the computer wasn't hit by a meteor, etc.). Now wouldn't that be wonderful? Think about it. Anything you wanted, any life, any environment, any physical laws, no risk, no death... as long as you want.

Now, if I were an alien civilization faced with that possibility, vs. living in the cold, hard, limited real universe, hey, why not? So the second theory is that we can't contact them because when a civilization gets advanced to that stage, they just all move into their computer(s) and live happily ever after, in a virtual heaven of their own design, with the computer protected behind layers of armor and powered by something reasonably perpetual. Sounds good to me, anyway. So that's my second theory as to why we haven't and won't contact them. They're out there, zillions of them. They're just living luxuriously in the basement, so to speak, hoping to go unnoticed for as long as possible.

Or it could be a combination of the two above, with extremely advanced virtual civilizations communicating with each other by means unavailable (and incomprehensible) to us, until we get to that point.

Steven Ravett Brown


Jean asked:

Vegetarian scholars say that we can now live free of meat. Today, there are many people (in the United Kingdom, about four million vegetarians, demi-vegetarians and vegans) who do so and live a long time and well. If this is true, then does eating animals constitute what Professor Stephen Clark calls "empty gluttony"? (The Moral Status of Animals p.83).

We don't need scholars to tell us that we don't need meat. It's not an academic issue. All eating is gluttonous when it exceeds fuelling and nutritional requirements. Today our nutritional requirements can be met by dietary supplements, and we don't need meat.

Rachel Browne


Massimo asked:

I'm doing a paper on skepticism, and I was wondering if you can give me a few examples and consequences about this topic.

"Skepticism" derives from the Greek word for "doubt," "Skepsis." The Skeptic is one who doubts. But, doubts what? Here we should distinguish between "ordinary skepticism," and "philosophical skepticism."

1. In ordinary language and circumstances, a skeptic will doubt that something exists or is true. For instance, if he is an (ordinary) "religious skeptic," he will doubt whether God exists (and he may even doubt that God exists). Or, if he is an (ordinary) moral skeptic, he will doubt whether (or even that) anything has any moral value (whatever he may mean by that).

2. But a philosophical skeptic directs his doubts against knowledge. He will, unlike the religious skeptic I discussed above, say, "Neither I nor anybody else can know whether or not God exists." But this is compatible (you should notice) with not being an ordinary religious skeptic: for the philosophical religious skeptic (unlike the ordinary religious skeptic, who doubts the existence of God) may, consistently with his philosophical skepticism, still believe in God! He says that neither he nor anybody else knows that there is a God, but that doesn't prevent him from believing in God. (Of course, the philosophical skeptic need not believe in God either.)

The great 18th century British philosopher David Hume is, I believe, best understood as a philosophical skeptic. He believed many things he thought it was impossible for anyone to know. For instance, he believed (and thought no one could help believing) that there was an "external material world" beyond our senses, but also held that it was impossible to know such a thing.

Ken Stern


Jenna asked:

Why is it that when your neighbour is mowing his lawn you can smell it a good distance away?

I have a friend who likes this guy, but she doesn't know that he and I have been together. What should I do?

This question about the lawn would actually be regarded as a scientific rather than a philosophical question; although science was itself once a part of philosophy, called 'natural philosophy'. Why is it a scientific question? Because it can be explained in terms of the physical structure of things; or to put it another way, the answer can be discovered by investigating, experimenting, observing.

Although I've said your question is scientific rather than philosophical, I do think we shouldn't be over-eager to draw lines cutting philosophy off from other areas of knowledge and study. There are areas of overlap — for example, cosmology and quantum physics seem at present to be leading scientists in the direction of more philosophical questions.

As I studied quite a lot of science at school, I will try to answer your question as best I can. As far as I understand it, when your neighbour mows his lawn, the grass is cut and crushed by the mower, releasing its juice, which is the smelly part of the grass. The juice rises into the surrounding air as a vapour. When this reaches your nose — perhaps quite some way away — you smell the grass.

Your second question could be regarded as a philosophical question — a matter of practical ethics. To put things very simply, in ethics, you can basically either decide what action to take by considering the possible consequences, or you can choose to act in accordance with certain ethical principles.

If you were to consider the consequences of action in this case, you would need to think about what the effects on your friend and the guy she likes might be. Would a possible course of action have good or bad effects? You could, for example, decide to avoid telling your friend that you and this guy have 'been together', if you feel this will cause less pain or unhappiness to your friend than telling her. Alternatively, you might think it will cause less unhappiness if you tell her now, than if she found out later.

If you were to act on an ethical principle, it might be something like 'It is always wrong to hide the truth from a friend'. Do you agree with that?

How will your friend feel if she finds out that you and the guy she likes have been together? Will she be angry with him? Or with you? Do you fear you might lose her friendship? Or his? Do you know unpleasant facts about him that you feel she should know? There are many issues to consider here, and no simple correct answers.

Katharine Hunt

With reference to your first question, I have been mulling it over for a few days, trying to connect with what it reminded me of. I have now remembered that when I was quite young I was blown over by a gust of wind. Picking myself up, I recall wondering how something we cannot see can exist; yet it must exist, because it has force. With hindsight I would like to think that this line of thinking was symptomatic of the beginnings of philosophical rather than scientific thinking, though probably the two are indistinguishable at that age. This aspect of my brain promptly fell asleep for the next fifteen years. Your question can, I think, be seen in a similar way. You may have reasoned along these lines: something is happening some way from me; I am not in contact with the thing that is happening, yet I have experience of it. Can there be action without connection?

I am inclined to think that your question was a crossover between our personal experience of the world and our philosophical and scientific understanding of it, and that it was a symptom of philosophical awakening.

With regard to your second question, I have been thinking about what a philosophical response should be to matters that engage with issues of personal relationships. We can identify three analytical techniques commonly used by philosophers, though not exclusively by philosophers: logic modeling, generalisation, and semantic modeling.

I just want to look at one aspect of one of these approaches and consider how it engages on one aspect of the question. This aspect is the logic modeling of the concept of friendship.

If we think of the values true and false, and their analogues, as images of the symbols that stand for them, then we can also think of the standard logical connectives as directing the flow of images in logical space. When we visualise the world under consideration in this manner, these connectives also direct the flow of mental images and therefore influence the way we think.

It is appropriate to characterise the flow of energy in control systems, such as electronics or even neural networks, by digital values; however, it does not have intuitive appeal to represent concepts such as friendship as having on/off states, rendering them capable of unfeeling calculation.

We can move in a middle ground between rational analysis, represented by binary or first order logic at one end of the analytical spectrum, and response-emotivism at the other end, by choosing the values satisfaction and non-satisfaction to act as the vehicles of cognitive energy, given quantity and direction by the concepts of friend or lover. Supposing we take friendship to require the condition that each friendship pair shares common values, then in terms of the model under discussion this would be the requirement that a pair of friends have satisfaction in common. Referring back to the original question, we could then consider how the situation described could affect both the quality and quantity of satisfaction in the friendship space. So, by informing a friend of a past love-relationship, the question becomes, "Is my proposed action likely to increase, leave unchanged, or diminish the friendship?" Would something be taken away or added, or will the response be one of indifference to your friend but one of loss to you? Will there also be similar transformations between your friend and her lover as a second friendship pair, and between you and your ex-lover as a third friendship pair?

As we are taking logical connectives as our modeling paradigm, we are not restricted to using convergence as a model of the friendship relationship; in fact it would be a bad model if we did, because not all of the abstract characteristics of friendship are captured by convergence of satisfaction. We could, as individuals, have a broader concept of friendship such that each pair has individual satisfactions as well as those held in common. The concept of independent satisfaction allows the possibility that the individuals in each friendship pair may have things that add satisfaction to one but not to the other, without damaging the strength of the friendship or, in the worst case, changing friendship into non-friendship.

In terms of the initiating situation, the love-pair may add satisfaction to their world but not to your world, without reducing the satisfaction held in common by your friendship pair. If your friend has an independence view of friendship, whether you do or not, the new information is not likely to reduce the quality of your friendship. This broader, divergent view of friendship gives the relationship some immunity to differences of satisfaction. There must be some convergence, but total convergence is not required. There can be differences of satisfaction, i.e. there can be situations that lead to satisfaction for either member of the pair without degeneration of the friendship. We could consider that friendship based on divergent-independent grounds is tolerant of differences and indifferent to disagreement, in that the denial of friendship is not brought about by their occurrence. Under this concept the limits of friendship are wider.

We could characterise the view so far taken of the situation as one of constructing a static evaluation in which we undertake to identify the logical constraints the concept of friendship places on the satisfactions owned by the individuals concerned. Within this view of friendship the first model could be considered as reducing or restrictive and the second model as opening/dilating or permitting.

We can take another view of the logical effects of friendship, which we could characterise as a dynamic evaluation, in which we see friendship as an operator for change on the satisfaction states of the affected individuals. Just as we can identify the two main dimensions of satisfaction and non-satisfaction and the third, derivative, dimension of indifference within the static view, we can also identify two dimensions or fields of thought within the dynamic perspective. We could typify friendship as a relationship that at its best has the effect of changing things for the better for one or both parties. Friendship as an agent of change can be considered to always have promissory value and rarely threat value for the individuals within its scope. We can understand the meaning of this when we connect the dynamic viewpoint to the static viewpoint and look at friendship as an agent of minor promissory value, in that its occurrence will produce or increase satisfaction without necessarily having any effect on the involved individuals' dissatisfactions.

So how could viewing friendship as having the logical characteristics I have described offer something to the situation that initiated this response? The originating question can be understood to be a request for guidance on making a decision: to tell or not to tell, to tell some or to tell all and implicitly, would such an action lead to diminishing or strengthening the friendship?

We could argue that if 'telling' as an agent of change reduced friendship then telling is not an act of friendship, since friendship acts do not diminish friendship. Therefore the advice would be not to tell.

We could also argue that if 'telling' as an agent of change increased satisfaction then it is an act of friendship, since friendship acts increase satisfaction; therefore tell. We also have to consider the possibility of the act of not-telling. If not-telling reduces satisfaction then it is not an act of friendship, therefore tell. If, on the other hand, not-telling increases satisfaction then it is an act of friendship, so don't tell.

We also need to consider the interaction of these outcomes with the two models of friendship discussed earlier: first, convergence-reduction-prohibition, and second, divergence-dilation-permission. Looking at the consequences of the act of telling or not telling under the convergent model of friendship, you could not be sure what outcome would follow your action. This may not matter to you. It may be that you want to create some uncertainty in the friendship. If, however, you want to maintain stability in your friendship, it would require that at least one of the friends has an independence model of friendship; and since it is you who is proposing to act, to guarantee a stability-maintaining outcome you should acquire an independence model of friendship if you don't have one already.

All this may seem a long-winded approach to an emotionally charged issue, but the delaying effect may be a good aspect of practical philosophical analysis. Goleman (Emotional Intelligence) suggests that the emotional drives of the 'neural circuitry of fear' are those that can lead us into impulsive and damaging actions.

If you need a quick response to emotional impulses, one that will provide you with breathing space while you reason out the balance of values that could follow an action impulse, then the approach outlined can still help. Ask the question, "Is the proposed action likely to produce a change for the worse?" If you don't want this outcome, then don't act while you think of an alternative. This question essentially identifies the possible threat value of the proposed action. In terms of the question, to tell or not, the uncertainty implies that the possibility of damage to your friendship is a concern; therefore don't act, i.e. don't tell, while you work out other possibilities or the situation changes.

The previous, deeper analysis identifies more delicate possibilities, including those that could bring about changes for the better, those that have promissory value. In particular, it would almost certainly require the revision of your concept of friendship, even if that revision is not expressed in the way it has been in the analysis in the body of this reply. It may be, for example, that you are led to look critically at the particular satisfactions that go to make up your concept of friendship, and not just the formal connections that we have concentrated on in the previous discussion. You may, for example, include in your 'core belief' (Beck) set of friendship satisfactions the universal propositions that friendship always requires truth telling, or that friends always share knowledge. The question you have raised is clearly not a simple one.


Emotional Intelligence, D. Goleman, Bloomsbury, 1996
Cognitive Therapy: Basics and Beyond, J. Beck, Guilford, 1995

Neil Buckland


Colin asked:

Is belief in God realism or is it escapism?

The term 'realism' is used in many senses; I presume that you refer to realism in this case as an affirmation of the 'common sense' standpoint, and contrasted to 'idealism:' it follows from this that God is being considered as an object of reality, independent of mind, in other words, God is not just a thought or idea.

I further presume that your question is aimed at the general concept of God within religious society, as opposed to consideration of the complex ontological and cosmological arguments engaged in by philosophers, and of which many examples are presented in the pages of answers in 'Ask a Philosopher'.

I have no doubt that, in the sense to which I refer, God is a realistic belief in religious communities. You might say that this does not make sense when we consider that naive realists maintain that we can perceive things in the world, that we are able to point to real objects, and that we are able to describe them. For God to be real, then, he should be located in space and we should be able to point to him. Religious believers will tell you that this will be possible when he chooses to appear. It is a bit like knowing that Halley's comet exists but only being able to point to it when it appears. Living, as I do, in the north of England, and believing that they have not been demolished since my last visits, I am reasonably sure that both Wells Cathedral and Westminster Abbey exist without my being able to point to them. These are perhaps not good philosophical arguments, but we are here discussing what religious communities believe, and to them these arguments are supportive of their beliefs.

Many believers base their notions of a real God on the Old Testament of the Bible, where he appears as a very solid being in the Garden of Eden, in his conversations with prophets, his appearance to Moses, etc. However, there is a great contradiction between what is understood here and what is understood from the New Testament, where God is a spirit. The confusion comes in trying to identify the God of Jesus with the Jewish tribal God, Jahweh or Jehovah; they are not the same. Jesus himself told the priests and pharisees that they knew not the Father to whom he prayed. I have always maintained that the two books should not be under the same cover; if Jesus had been born in Poland they would not have been. This is to do with churchianity, not christianity.

The accepted reality of God bears on the second part of your question, in so much as it is the concept of reality which affords escapism for many believers, particularly those who are fearful of the growing secularism in modern society. Surely, they will argue, there must be someone or some universal power to whom there can be a final appeal against the frightening increase in the materialism which is overtaking the world. Such people try desperately to cling on to their religious values, and live in the hope that the world will be put back on track by the creator. To those of an earlier generation, for whom the shallowness of the modern world, with its commercially controlled stuff that passes for art, music, poetry, sport and entertainment, now affords no escape route from humdrum everyday activity, the prospects are scary; if they lost God as well, the results could be disastrous.

John Brandon


Garen asked:

I am a Junior in High School. I am interested in philosophy of many sorts and have been exposed to a fair amount of it through my research as a debater. Recently I took on the task of writing a research paper that critically examines the quote, "I don't know anything about art, but I know what I like." This is in a persuasive essay format, and I am writing from the standpoint that it is, in many ways, an acceptable position as an individual, but when it becomes a societal maxim it destroys the marketplace of ideas and makes it less likely that people will take the time to learn about art. My definition of art is fairly broad, and it in many ways encompasses other things besides painting. If anybody could provide sources or permission to use them as a source with their response I would be much obliged.

I agree with your attitude that art is more than a matter of taste and that the term is not confined to painting, and so do most people. Hence there is a massive amount of literature on aesthetics. Have a look at my answer to VM at answers 15 on the 18th century philosopher Hume's attempt to create a standard. You could also look at an answer, Dear Lima, at answers 14, on the differences of opinion people have as to the status of their own taste and that which they take to be authoritatively good.

As far as visual art goes, which includes painting, it is widely thought that the institution of art sets the standard of what is good. The institution is constituted by galleries, critics, award ceremonies, etc, and Hume might have supported this approach to value in art. There are a vast number of books and journals on aesthetics of music, literature, poetry, architecture, the film and the canon etc and it is doubtful that there is an introductory book covering different standards between types of art but the idea of the institution can probably be applied to all media.

A major problem with the institutional view is that value is determined by social trends. At the moment, good works of visual art need to be conceptual rather than skilled. As Kant noted, the unsuccessful poet cannot be said to produce art if art is that which is acknowledged. If we understand the institution as ideally not influenced by social trends, which is possible, then we can say the great but unsuccessful poet does create works of value, but the institution has yet to recognise them.

An interesting book on the subject of aesthetics which distinguishes art from non-art is The Principles of Art by R.G. Collingwood.

Rachel Browne

Goodman, N. (1976). Languages of Art. Indianapolis: Hackett Publishing Company.
Goodman, N. (1988). Ways of Worldmaking. Indianapolis: Hackett Publishing Company.
Kosuth, J. (1991). Art After Philosophy and After: Collected Writings, 1966—1990. Cambridge, MA: The MIT Press.
Levinson, J. (1990). Music, Art, and Metaphysics: Essays in Philosophical Aesthetics. Ithaca, NY: Cornell University Press.
Maus, F. E. (1997). Music as Drama. Ithaca, NY: Cornell University Press.
Arnheim, R. (1974). Art and Visual Perception: A Psychology of the Creative Eye.
Santayana, G. (1955). The Sense of Beauty.

Steven Ravett Brown


Ris asked:

I have just read Leo Tolstoy's Anna Karenin and am perplexed. I enjoyed the novel but somehow I feel I have missed something. Tolstoy's views on marriage, etc. are apparently quite clear, however Tolstoy seems to be saying something else. I am interested to hear your views on ANY issues raised in the novel — the more the better — and any other site you could point me to in my quest to satisfy my curiosity about Anna Karenin.

See my answer to Amy at answers 10, on what we can learn about people from literature. That answer focussed on a moral change. It's about the passage where Vronsky comes to see Karenin differently, as dignified, and to recognise at the same time, his own weakness and the reality of his position in relation to Anna.

But the novel isn't simply about morality. It's about life and people. At one level the novel is about personality, character development, insecurity, strengths and weaknesses. At another level it's about appearance and reality. What people are really like is evidenced in their behaviour as time passes.

Anna seems at first to be a dazzling and strong person, but turns out to be weak, vain and competitive (with Kitty). Kitty appears to be young and innocent, but she is not susceptible to flattery in the same way that Anna is, and she grows and develops into a happy person with the ability to recognise the worth of Levin in contrast to Vronsky. Vronsky and Anna are shallow and underdeveloped personalities; their relationship is weak because it is based on flattery, and neither has the sense of responsibility which Karenin and Levin have.

Then it is also about suffering. Anna's suicide signifies an inability to bear suffering not simply because of the events, but because she lacks the wherewithal to endure pain.

On the internet most information is about the film. You could subscribe to the Slavic Studies Journal, but since you are at university it would be better to find books on Tolstoy in the literature section of your library.

Rachel Browne