
  View the latest questions and answers at askaphilosopher.org

Ask a Philosopher: Questions and Answers 48 (2nd series)

When referring to an answer on this page, please quote the page number followed by the answer number. The first answer on this page is 48/1.

(1) Penny asked:

This is a question about justice, the law and the duty of bystanders who witness a crime.

A philosopher friend criticised my son as a 'snitch' for going to court to testify in a case as a witness (whereas I thought he was being public spirited, and argued that justice through the courts can only be achieved when people are prepared to testify even in the face of intimidation).

My son and three teenage friends were the only other customers in a family-run Pakistani restaurant which a group of aggressive white men trashed when they were unhappy with the service. They also attacked and injured one of the waiters, a clever sixth former in the same school my son and his friends attended, leaving him with some brain damage.

After much discussion of this and other hypothetical examples, my philosopher friend's reasoning seemed to be that giving evidence against people who have done nothing to you is not your business. If that evidence is given to authorities with coercive authority over people, it constitutes an act of aggression against others. He argued that it goes against Kant's first formulation; and challenged me to devise an appropriate maxim that would always hold true and I couldn't. As he wrote to me about it, 'If a principle cannot be universalised without contradiction it is not true and cannot be true. It may be an emotionally attractive principle and make you feel better, but it still isn't true.'

He agreed that I could report a robbery (or other crime) in progress to the police to allow them to do their duty and then go about my business, or I could intervene directly in the situation myself. But he claimed that I could not justify giving evidence in court after the fact.

I am interested in philosophy but am very poor at following through to logical conclusions. I asked if his was a very hardline Kantian position, as I couldn't imagine any of the usual secular humanist Kantian philosophers whose articles I read in the Guardian or wherever taking the same line, but he claimed that was the logical application of the CI in this case and there was no getting round it.

Is he right?

---

No, your friend is not right. The claim is that witnesses to a crime not only do not have the moral obligation to testify in court, but indeed are morally obliged not to testify. As justification for this claim your friend offers the first formulation of Kant's Categorical Imperative:

Act only according to that maxim whereby you can at the same time will that it should become a universal law.

Immanuel Kant, Groundwork of the Metaphysic of Morals (Quoted from the Wikipedia article on Kant's Categorical Imperative)

So we have two propositions to consider: First, whether witnesses to a crime are under a moral obligation not to offer themselves up voluntarily in order to testify in court; Second, whether this claim follows from Kant's Categorical Imperative, or, more specifically, from the first formulation of the Categorical Imperative.

Let's look at the first claim. One of the basic regulative principles which govern the way arguments in moral philosophy are conducted concerns the way we test proposed moral theories or philosophical claims about ethics against our intuitions, i.e. our ethical beliefs prior to conducting a philosophical examination. The American philosopher John Rawls, author of A Theory of Justice (1971), has coined a nice term for this, which has become part of the contemporary philosophical vocabulary: he calls it reflective equilibrium.

When you make a claim, on the basis of a theory, which goes against unreflective moral intuitions, then there are two possible outcomes. Either one rejects the intuitions, or one rejects the theory. No moral theory is sacrosanct in this regard.

If witnesses to a crime never have the moral obligation or even the right to testify in court that would strike a blow at the very basis of our system of justice. The outcome would be intolerable in a civilized society. You know this. That is why the response from your 'philosopher' friend has left you so perplexed.

Now, it could well be that your friend has seized on this example as an argument against Kant's Categorical Imperative. This is familiar territory for moral philosophers. Even if one does not accept Kant's Categorical Imperative, one would be disinclined to accept the conclusion that Kant was just stupid, and didn't see an obvious negative consequence of his view. (Here, I am invoking another regulative principle, the Principle of Charity.) In other words, even philosophers who are not Kantians, have an interest in showing how Kant might have dealt with this challenge to his theory.

Suppose you were to say, 'Anyone who finds themselves in the circumstances I have described [you then go on to describe the circumstances in detail] is under a moral obligation to testify.' This looks like a cheat, and it is. Kant would reply that more is required to make a maxim truly 'universal' than simply expressing it in the logical form of a universal statement.

Yet surely it is not the case that at all times and at all places, a witness to a 'crime' is morally obliged to testify in court. If as a student during the Third Reich I had the misfortune to hear my professor uttering words of criticism of Adolf Hitler, I am not morally obliged (even though I may be obliged by Nazi law) to attend as a witness for the prosecution. (There is, of course, a potential moral dilemma here for anyone who holds that there is a moral obligation to always obey the law, whether you agree with it or not: The issues are explored in the ISFP Fellowship dissertation by George Brooks on Positive Law Theory and its application to the case of Nazi Germany.)

The challenge for Kantians would be to find an acceptable path between the overly lax and overly rigid formulations of what the maxim of your action would be in this case. The result which we want is one where there is a moral obligation to testify in cases like that of the restaurant thugs, but no moral obligation to testify, or indeed a moral obligation not to testify, in cases like that of the outspoken professor.

One possibility would be to incorporate the caveat that testifying 'serves the interests of justice'. Once again, however, that makes things too easy. The Categorical Imperative was supposed to be the infallible touchstone of moral action, but now we would be appealing to a prior understanding of what is 'justice' or what actions are 'just' or 'unjust'. Nor, indeed, would we want it to be the case that whenever witnesses are asked to testify, they first have to decide for themselves what does or does not serve the interests of justice. That is why we have judges.

In some ways, the challenge to the Categorical Imperative looks similar to the case of lying. Kant notoriously argued that it was never right to tell a lie, even in the case where a crazed axeman is pursuing his intended victim and demands to know, 'Which way did he go?' (In his essay, 'On the Supposed Right to Lie Because of Philanthropic Concerns', Kant argues, unconvincingly, that e.g. if you say, 'He went left' thinking that he went right, and in fact unknown to you the victim did go left, then you would bear full moral responsibility for the outcome.)

Despite the well-known objections, I do think that Kant is onto something important in the case of lying (see Unit 5 of the Ethical Dilemmas program). We have to recognize — as Kant apparently did not — that even for the impeccably 'good will' sometimes there can be irresolvable ethical dilemmas. Whatever you do will be 'wrong', so you have to choose the lesser of two evils.

In the case of the obligation to testify, more is needed than simply the rule that one must always tell the truth. I can simply refuse to enter into the court room. So the challenge for the Kantian in the case of the obligation to testify is, if anything, harder than the challenge in the case of apparent counterexamples to the moral principle that one should never tell a lie.

If the challenge can't be met, then that is bad news for the claim of the Categorical Imperative to provide an infallible touchstone for ethics, and your moral intuitions about your son testifying in court survive. On the other hand, if the challenge can be met, then once again your moral intuitions survive. Either way you are right and your 'philosopher friend' is wrong.

Can the challenge to Kant's Categorical Imperative be met? My hunch is that Kant's strategy would be to invoke the Third formulation of the Categorical Imperative:

Therefore, every rational being must so act as if he were through his maxim always a legislating member in the universal kingdom of ends.

Immanuel Kant, Groundwork of the Metaphysic of Morals (ibid.)

A 'kingdom of ends' in Kant's conception is not a mere collection of isolated individuals, each of whom takes care not to encroach on the moral rights of others. On the contrary, Kant's vision is overtly teleological, something that was not apparent in the first (or indeed the second) formulation. In a kingdom of ends each of us has a responsibility for actively supporting the state and the rule of law.

That doesn't mean I have to set myself up as judge and jury. It does mean that one has to acknowledge one's duties as a citizen. In contemporary terms, that includes voting, jury service, and, where necessary, attending as a witness in court.

My intuition is that there is indeed a fine line between responsible citizenship and being a busybody or a 'snitch'. In a relatively trivial matter like littering or indecent behaviour I would rather not be called upon to play my part in oiling the wheels of justice. In such cases, the Categorical Imperative does look like a rather blunt instrument, but I don't know of any moral theory which would fare better. — So much the worse, some would say, for 'moral theory'.

Geoffrey Klempner

The first thing you need to realise is that philosophy is not science. Kant didn't make some fundamental discoveries like Newton's laws of gravity that we are all agreed on. Like your friend I am also trained as a philosopher and I have read all of Kant's writings. Unlike your friend I think that everything Kant wrote is nonsense. It's not obvious nonsense but it is nonsense and I don't know many other philosophers who agree with Kant about moral principles.

I think if you ask your friends you will find that they don't know anything about Kant and that they have to do their moral reasoning without reference to obscure philosophers.

To argue, using spurious pseudo-philosophical reasoning, that it is wrong to give evidence in court against criminals who have caused severe injury to someone shows that your friend is dangerously wrong. He is not only a poor philosopher but he has a dangerously defective sense of morality. I feel sorry for him and I certainly wouldn't trust him as a friend.

However let's follow your friend's nonsensical reasoning. Suppose your friend saw your son being shot and killed and that he knew who the murderer was. Since the police have no time to prevent the murder and he has no time to intervene, all that is left is bringing the killer to justice. However your friend, who saw the murder, would refuse to give evidence in court and would excuse himself by using his own spurious twisted version of something written by Kant that is not true even when interpreted correctly.

I would suggest that your friend needs philosophical re-education. However he also urgently needs moral re-education. Your son acted bravely and should be praised for what he did, not subjected to the ignorant criticism of a pedantic and ignorant person.

Shaun Williamson

I make no pretence to be qualified to offer an interpretation of Kant's ethical theories. (I am in fact going to be very interested to see what answers those better qualified have offered you, when Geoffrey posts those answers on a new Answers page.)

So I don't intend a direct answer to your question. What I am going to do is offer a few thoughts from a different ethical perspective. The difficulty I find with Kant's approach to 'Duty Ethics,' and his maxim that one should treat other people as if they are ends in themselves and not as means to your own ends, is that it seems to be incompatible with the fundamental observation that human beings are an evolved species. Kant's ethical principles result in behavioural rules that never could have evolved. Hence they must be provided with some other form of evolutionary basis.

Kant would probably argue that his ethical principles are the logically necessary result of the application of reason. And it is our ability to reason that has evolved, rather than the ethical principles themselves. But as David Hume pointed out, reason itself is not a motivator. We can apply all the reason we want, and still not be motivated to implement the dictates of our reasoning. What is necessary is a want, desire, or need (or a 'passion': as Hume calls it) to motivate us to implement what reason dictates.

Kant's ethical system therefore relies for its effectiveness on the education and training of the people. Only by inculcating into the populace a stand-alone (and unquestioning) desire to 'do the right thing' (and/or 'avoid the wrong thing' and/or 'do as reason dictates') can a Kantian hope to motivate people to follow a Kantian ethical judgement with the appropriate action.

To me, this result is unsatisfactory and counter intuitive. I would seek the basis of ethics in an evolutionarily sound fundamental principle that is self-motivating. Oddly enough, you can use such an evolutionary principle as a reply to your friend's challenge. You mentioned that your friend challenged you to come up with a universalizable ethical principle, and claimed 'If a principle cannot be universalised without contradiction it is not true and cannot be true. It may be an emotionally attractive principle and make you feel better, but it still isn't true.'

Try this one on your friend — 'Act always so as to maximize the probability that your genes will maximally flourish over the longest time frame possible.' Of course, relying on this principle necessarily means that you have to dismiss Kant's maxim about means and ends. But the advantage of the principle is that it is evolutionarily sound, and nicely universalizable.

And contrary to a Kantian means-end motivated criticism, the principle does not imply a narrow-minded ego-centric blindness to the interests of other people. Given the social nature of our species, our genes tend to flourish better when we cooperate in mutually beneficial ways. It pays, history has taught us, to voluntarily cooperate with others rather than not. A social environment that is conducive to such cooperative efforts is better than one that is not. As a rule of thumb, therefore, it is better to respect the self-interests of others, than not. And we're back to Kant's means-end principle — albeit as a rule of thumb, rather than a categorical imperative.

How it applies to the particular scenario you laid out in your question is much more readily apparent than might be any Kantian analysis. Your (or rather your son's) judgement was that the social environment within which his genes must flourish would be better if he participated in the judicial treatment of the malefactors, than otherwise. By this ethical rule, therefore, your son did the right thing.

This is not, of course, a direct answer to your question. I don't care whether your friend was right or not in his interpretation of how the Kantian system of ethics should analyze your son's choice of actions. It is rather an indirect answer to your question — I think your friend was applying the wrong ethical principles to your scenario. By applying the principles of evolutionary ethics, it is clear that your son's choice was the right one. And on this basis your friend's criticism was all wrong.

Stuart Burns


(2) Kim asked:

What argument for God's existence was Anselm famous for?

---

Kim, the argument later became known as the Ontological Argument for the existence of God. In Chapters 2-4 of his Proslogion, Anselm [1033-1109] argued that understanding the nature of God entails his actual existence.

Pondering the nature of God reveals that He is 'something than which nothing greater can be thought'. This definition exists in the understanding. However, that than which nothing greater can be thought cannot exist in the understanding alone. For if it were restricted to the understanding, something greater could be thought, namely the same being existing in reality outside the understanding; God would be limited to being a mere idea. But nothing greater than God can be thought. So, something than which a greater cannot be thought exists both in the understanding and in reality.

Secondly, on Anselm's definition, God cannot not exist. He exists necessarily, from out of his nature. If He did not exist necessarily, something greater than Him could be thought, namely a being that cannot fail to exist. To say both 'God is that than which a greater cannot be thought' and 'He does not exist' negates or contradicts 'that than which nothing greater can be thought'. To avoid the contradiction, it must be admitted that God not only exists, but He exists necessarily. He cannot not exist.

Immanuel Kant [1724-1804], in 'The Ideal of Pure Reason' in his Critique of Pure Reason, challenges this argument on the grounds that existence is not a real predicate: it cannot be extracted from a concept by analysis. It is permissible to say that if a triangle exists, it must have three sides and angles summing to 180 degrees; but whether it exists or not is another matter. Similarly, if God exists, then he would necessarily exist. But whether He exists is another matter. The Ontological argument describes God's nature alone; it cannot prove his actual existence.

Martin Jenkins


(3) Mckay asked:

Foucault considers truth to be:

a. Unreal

b. A system of arranged concepts and procedures

c. Relative to institutions of power

d. B and C

e. A and B

---

D.

For Michel Foucault [1926-1984] truth is constructed by various interacting regimes of knowledge. The regimes are scientific, sociological, medical, criminological, psychiatric; interacting to create the matrix of the human 'subject'. The regimes of knowledge are not objective or impartial according to Foucault; they are also manifestations of Power. Building on the insights of Friedrich Nietzsche [1844-1900], Foucault maintains that when applied to society, the will to knowledge as will to truth is also will to power.

As regimes of Power, they can be co-opted by various existing authorities. Consider, for example, what Foucault calls 'Bio-Power': the concern of regimes of knowledge/power with the health and physiology of human subjects. Think of all the 'scientifically based' scares concerning foods to be eaten or not eaten, some of which contradict one another over time. Are they based on objective scientific evidence which is unearthing 'the truth'? Or are they built on constructions which utilise ambiguous propositions but which are not the same as nonsense?

Power is not only centralised in the state; power is everywhere and can only be accounted for genealogically. The state analyses of Power found in Liberalism or Marxism [which Foucault terms Juridical] are too insensitive to the micro and multiple activities of the operations of Power. Moreover, resistance to Power, if arising from revolution at the state level, is too crass and merely seeks to reinscribe new 'global' operations of Power. [Meet the new Boss, same as the old Boss.] So Foucault argues that resistance — which is the continuation of the Enlightenment project of making authority transparent — must be local and specific; if bigger than this, it must be a contingent alliance. Any other act of resistance is too insensitive and, actually, ineffective in pursuing freedom.

Martin Jenkins


(4) Jonathan asked:

Is this a question?

---

Yes it is. It is a self-referential question. Another example of this is 'Does this sentence contain six words?'

Shaun Williamson


(5) Peter asked:

I was watching a video about a Festival, in which participants believe they are fairies in a past life. I thought that like all people they have the right to live their lives as they like, as long as it is not harmful to others. That is debatable but by Western standards I think it reasonable. However I came to think that although there is little valuable evidence that they are faeries, there is also little evidence disproving it. So whether you think they are detached from reality is one thing, but I think this is applicable to many other situations in relation to delusions.

So in short, if a reality is said to exist and there is no proof disproving or proving it, where does one start? If there is a system that is beyond tangible interaction, what proves or disproves that it exists? Is this question too metaphysical, or even relevant to philosophy? If anything I need direction for this subject matter of realities and perceived realities, because I am still in HS and I don't have a class addressing such matters. Any reference would be nice, and if this is not even a philosophical question, I apologize.

---

Not only is your question a philosophical one, but it is a most important one. In the past philosophers distinguished between phenomenal knowledge, known through the senses, and noumenal knowledge, known by the mind. Later, F. H. Bradley made the same distinction in his book 'Appearance and Reality.' And in modern science the same distinction is made between empirical knowledge and theoretical knowledge. The distinction arises, not because of delusions, but because of illusions. (Delusions are clearly false beliefs; illusions are false perceptions: the noumenal-phenomenal distinction again.) The important thing about illusions is that they are unreal; some of them are obviously so because of contradictions between different senses, as with the half-immersed stick in a glass of water, which is bent to the sight and straight to the touch; others are contradictions between what you perceive and well-established belief, as with the apparent sizes of the Sun and the Moon being equal. And no contradiction can be real, so illusions are unreal. If someone asked you to point to an empirical object that was wholly free of illusion, could you do it? And if you thought you could, how would you know it to be so? Or consider our most important sense, vision. Visible size diminishes with distance, in all three dimensions; shape varies with viewpoint; and colours are secondary qualities, manufactured by the eyes; what is left that is real?

So if the common sense belief in realism — the belief that the empirical world that we each perceive around us is real — is false (it is at best only partly real) then we have to speculate about the nature of reality. Such speculation gives us noumenal, or theoretical, knowledge of reality: knowledge that is empirically unknowable. And, also, knowledge that may easily be dead wrong. In the past such knowledge was called metaphysics but nowadays it is theoretical science. Theoretical science is strictly disciplined by having to conform to empirical data; and if you consider such theoretical ideas as the curvature of four-dimensional space-time, the Big Bang, wave-particles, and the like, you realise that theoretical science is far removed from common sense.

Another point in all this is that noumenal ideas are invented in order to explain phenomenal knowledge: theoretical science explains what empirical science describes. Explanation is causal: to describe causes is to explain their effects. We need this because there are no empirical causes, only empirical correlations. All noumenal studies attempt to explain, by means of unperceived entities: myth by means of hidden spirits, theology by means of God, metaphysics by means of substances and attributes, theoretical science by mathematical entities — even common sense, by means of things that cannot be perceived, such as minds other than one's own and the continued existence of empirical objects when no one is perceiving them. If you ask a theoretical physicist what it is that theoretical physics describes, the usual answer is that it describes the underlying causes of empirical phenomena; and 'underlying' is a metaphor for non-empirical.

However, there are difficulties. The distinction between reality and appearance was in the past attributed to the representational theory of perception, which said that empirical objects are not real objects, they are only representations of real objects; and in so far as they are false representations, so are they illusory. Today this theory has become the causal theory of perception, strongly backed up by science: real objects cause images, or representations, of themselves in the brain of the perceiver. Real objects are outside the head of the perceiver, they are public, and they are material; while images are inside the perceiver's head, are private, and are mental. And all empirical objects are outside our heads, public, and material. Therefore empirical objects are real objects, not images of real objects. This latter view is called realism, or, sometimes, common sense realism, or, sometimes, naive realism. And for the past century it has dominated philosophy: English language philosophy has almost all been analytic philosophy, which disallows speculation, and continental philosophy, such as phenomenology and existentialism, has also been realistic. The trouble with realism is that it cannot account for the extraordinary success of theoretical science.

These difficulties can be resolved, but not in the space available here. If you would like to try to resolve them for yourself, try to answer two questions: is your own empirical body a real object, or just an image of one? And, if the latter, where is your real body? Alternatively, you could look at my e-book, 'Belief Shock,' downloadable free from www.sharebooks.ca.

Helier Robinson


(6) Cynthia asked:

Compare and contrast Nietzsche's account of the 'self' with Descartes' version of the soul. After describing each of their views, show why Nietzsche criticises Descartes' theory.

---

Descartes

For Descartes, the 'self' or soul is a non-extended substance defined by thinking. When I think, I am. Its necessary existence is distilled from doubting all that can be doubted. That it exists and is characterised by Thinking is known with certainty: for doubting/thinking requires an existing and thinking thing. It is primary to and more clearly known than the body. It is eternal, living on after the death of the finite body; thus the dualism between mind/soul and body, inherent to orthodox Christianity, is perpetuated. The thinking Mind is the source of knowledge by means of the correct application of judgement to clear and distinct ideas. The mind carries more objective reality than the body and its senses. Mind is primary, it is transparent, it is a single, indivisible thing. This enables human beings to be distinct from Nature.

Nietzsche

For Nietzsche, the 'self' is a transient hierarchy of drives, emotions: affects. Each is driven to seek ascendency as each is essentially power or energy discharging itself [Will to Power]. As hierarchical, order is given to the chaos although it remains malleable. 'Thinking' is not initiated by a 'self', but arrives at awareness after it has been initiated elsewhere. The 'clear and distinct ideas' of philosophy are not pure Thought, predicated to a Thinking Mind; they are rarefied, sophisticated affects. The self is not a closed, private world. It is shaped by external factors [such as pain, socialisation] and continually shaped by the interaction between external and internal factors. Being is becoming. Human beings are not separate from Nature, they are part of it.

Nietzsche contra Descartes

Relating to Descartes' analysis of the self as 'I think' [BGE 16], Nietzsche criticises it by asking: is it 'I' who does the thinking? Is the 'I' always the cause of the effect that is thinking? Does an 'I' exist separately from Thinking? Do we know what thinking is; how is it different from willing or feeling? Far from being clearly known, the self becomes almost mysterious. The apparent certainties of Descartes are far from certain.

In criticising materialist atomism and citing Boscovich [BGE 12], Nietzsche is criticising the need or prejudice in philosophical thought requiring a ground, a foundation for building epistemological structures. This prejudice is found as much in materialism — with its need for atoms — as in metaphysical Philosophy with its belief in substance and, more importantly, eternal soul. Instead of the Cartesian, singular 'self', perhaps it can be thought of as a 'multiplicity of the subject', as the 'social construct of drives and emotions'. The latter are ultimately changing configurations of power/energy.

Against Descartes, the self is not purely subjective and internal — 'coming from within'; it is a transient hierarchy of drives, emotions: affects, where some have been permitted expression, although shaped and rarefied, and others repressed. This morphological authorisation occurs through socialisation in early and, indeed, all life. If consciousness is the awareness of affects post-factum, this has historically and socially developed from once rudimentary but intense sensations such as pain and pleasure [see Guilt, Bad Conscience and Related Matters, On the Genealogy of Morality]. Further, this raises the possibility that consciousness might not be aware of other affects that exist beyond it. So contra Descartes, the Mind is not the source of knowledge of 'reality'.

Martin Jenkins


(7) Ira asked:

What did Friedrich Nietzsche mean when he said 'God is dead' and 'Will to Power'?

---

By 'God is Dead' Nietzsche means that the way of thinking and acting due to a belief in God is over. Remove the keystone that is God, and the supporting moral-epistemological edifice collapses. This means all the existing values and valuations which constituted the edifice will need to be re-evaluated. This is what Nietzsche's philosophy is about.

His philosophy is ontologically grounded in the Will to Power. This is the fundamental 'building block' of all life, of all nature, including human beings. It is a multitude of instantiations of power or energy existing in configurations. These are based on the more powerful incorporating the less powerful into hierarchical structures. Power/energy is discharging itself, being what it is, to grow and become more. Hence power has a 'will', so to speak. This translates into physiological material such as the human being. Being physiological, the stimuli or affects of an external body[ies] upon a body[ies], and the response of that body[ies], can form valuations. This becomes more complicated in social scenarios, with more factors being taken into the equation. Hence Nietzsche uses genealogy to map the complicated, discontinuous development of valuations throughout human history as opposed to the simplistic teleology of a thinker like Hegel.
In his time, Nietzsche sought to account for why Judeo-Christian values were becoming secularised and the consequences of this for Western civilization. Identifying the consequences and the epistemological basis of these values, Nietzsche concluded that they would limit the expressions of life, particularly its highest creative types, with an egalitarian levelling. Life would be asphyxiated, prevented from achieving what it otherwise could. This is unnecessary, as the Judeo-Christian values which formed the basis of European morality no longer held a monopoly and could be challenged by emerging new values. Hence the dictum that 'God is Dead'. New evaluations, naturally occurring through life and living — which is the activity of Will to Power — will be created in the shadow of his death.

For some, this does not entail atheism but the rethinking of a God created under definite socio-political conditions at a specific historical time.

Martin Jenkins


(8) Benjamin asked:

True or False: The fact that we often have difficulty putting our thoughts into words disconfirms the view that we think in the language in which we speak.

---

False. The question isn't whether, in fact, we think in the language in which we speak (e.g. English) or whether we think in some other language (e.g. Jerry Fodor's 'language of thought'). That's a big debate, which we don't have to go into.

What Benjamin's question specifically asks is whether the fact that we often have difficulty putting our thoughts into words is compelling evidence against the view that we think in the language in which we speak, or, what amounts to the same thing, whether it is evidence for the view that we think in some other 'language'.

It is not. The standard reply is that, 'If you can't find the words, you don't have a clear thought.' There's something you think you are thinking about, a thought you are trying to think, but you haven't succeeded in actually thinking that thought. When you do finally find the words, then the thought comes into being and not before. All the feelings that you have prior to that point, the feeling of unease, of something tugging at your mind, or whatever it is, are just that and nothing more.

But how do I know that? I don't need to know. So far as one is able to tell purely from introspection, it might be true that a thought comes into being with the words that express it. It might also be true that the thought is prior to the words, but that's irrelevant. All one needs to defeat the claim in question is that it doesn't follow from the fact that we sometimes strain to express a thought that the thought is prior to the words.

It looks like I don't have much to write today. But actually, there's something tugging hard at my mind that tells me that this is all too superficial. I don't like it.

I read Fodor's book Language of Thought (1975) at the beginning of my first year as a graduate student at Oxford. His thesis seemed rather fanciful to me, not least because of what he said about Wittgenstein's argument against a private language. I'd just picked the book up at Blackwell's Bookshop because it looked interesting. It never occurred to me that it would generate the vast body of literature that it has. — That fact doesn't make me feel the least bit sorry about my initial judgement.

However, I want to come at this from a different angle. I've lost my taste for the technical complexities of this debate, which takes in philosophy of language, philosophy of mind, cognitive science and AI.

At Oxford, there were a couple of other books which I read, by a philosopher who is hardly discussed today, Justus Buchler. (There's no Wikipedia entry, a telling sign.) The titles are Nature and Judgment (1955) and Metaphysics of Natural Complexes (1966). I remember discussing what I'd read with my supervisor John McDowell, who'd never heard of Buchler. But then no-one else I mentioned him to had either.

On Buchler's theory of judgement, a pole vaulter's leap, a painting, a skyscraper, or a sentence in English can all be called, without equivocation or metaphor, 'judgements'. When a pole vaulter vaults, or when an artist paints a painting, or when an architect designs a building, each is engaged in a thoughtful activity which exists side by side with, and in a sense independently of, the thoughtful activity of forming sentences in speech. But also more than just an 'activity'. The final result is an entity that exists in its own right, as a product of what went before, in the same sense that a verbal judgement is a final product of the activity or process of thinking.

Let's just try to imagine what it's like to be that pole vaulter. You've just failed the last jump. You were nearly over, but your left heel just caught the bar. What's going on in your mind? Lots of words, to be sure, perhaps a few swear words. But there's something else there too. As you feel the weight of the pole balancing in your palm, as you get ready to sprint, eyes fixed on the bar, the words that come into your head aren't the essential thing.

The run, the leap, the twist, every part of the choreographed movement is an action, which forms part of an articulated sequence of actions, just as words form parts of a sentence. Just as Frege held that the meaning of a word consists in its contribution to the meaning of a sentence or statement, whose aim is to state something true, so the 'meaning' of that particular vault depends upon its contribution to the attempt to clear the bar. The whole action, the 'judgement' succeeds or fails, just as a statement succeeds or fails in stating the truth.

Contemporary philosophers of mind and action probably wouldn't find too much here to argue with. However, what is important for me is the emphasis. Too much emphasis is placed by philosophers on the language question. What Buchler's account suggests is that there is a far greater richness to our mental life than the verbal thoughts we think. This extra component is not just 'experience' or 'feeling' but rather rational activity, a form of reasoning which exists apart from words, and which cannot be reduced to language.

To be sure, this 'rational activity' is not just some process in the head. If anything, it is far more obvious that doing a pole vault in your head isn't doing a pole vault even though the ability to imaginatively represent, accurately, the intended action or sequence of actions is part of what constitutes the pole vaulter's mastery of this particular field sport.

Could a creature who did not have verbal language 'reason' in this way? Here we are pulled different ways. It is human reason and judgement, expressed in words, that is involved in evaluating a piece of architecture or a work of art. In a similar way, in a diving contest the judges are able to defend, in words, the marks that they award. In a jump or a vault, on the other hand, success or failure is a simple verifiable fact. The bar is cleared or it is not cleared.

What that overlooks, however, is the fact that the ability to reason and form judgements in words is essential in an athlete's training. There is a science of sport. The quest for greater performance is, at least in part, a scientific endeavour.

So what does all this show about thought and language? One's initial reaction might be that Buchler has presented a clear case that there are forms of thinking or judging which do not involve words. There are other forms of ratiocination besides linguistic ratiocination. Perhaps no-one will ever write the definitive 'logic' of pole vaulting, but that merely reflects the unique capacity of language to make a particular and very important species of judgement — linguistic judgement — possible.

On second thoughts, surely what this shows is that we need to rephrase the question. Language is essential to linguistic or 'logical' thought. Other forms of thought require their own media — whether it be the pole vaulter's body and pole, or the painter's eyes, hand and paint brush — which are just as much part of our shared, common reality as words are.

Geoffrey Klempner


(9) Colin asked:

Who are some of the contemporary philosophers who generally accept Plato's metaphysics? Who are some who generally accept neoPlatonic metaphysics? Are these philosophers in the minority today?

---

Not that they mightn't exist, but I can't think of any contemporary followers of Plotinus within academic philosophy — assuming that is what you might mean by 'neoplatonic'. I've a feeling — and it is no more — that I've heard of Plotinus' One being taken seriously by the occasional theologian.

Plato, on the other hand has had at least one rather illustrious minority follower who, if not contemporary, is at any rate deeply engaged with the giants of the contemporary scene and very recent: Iris Murdoch.

I should add, however, that it is not universally agreed what 'Plato's Metaphysics' amount to, and if you want to know what Murdoch agrees with, you will have to read Murdoch. You will get very little idea of what Murdoch is for by reading any of the widely cited commentaries on The Republic.

My own opinion is that Murdoch is right about Plato, so that if one understood Plato aright, one would understand him as Murdoch does, rather than as do Ryle, Popper, Vlastos, Annas, etc. As I see it, it is a tremendous help to Murdoch's discovery of Plato's good sense that she is actually looking for it.

We tend to distinguish Plato scholarship from mere Platon*ism*, as if distinguishing the professional from the mere amateur. My sense however is (and this is an important thought backed up in Murdoch by a great deal of influential argument concerning mothers-in-law and the Ontological Argument) that to understand someone aright it may be necessary to attend to them with love.

David Robjant


(10) Benjamin asked:

True or False: The fact that we often have difficulty putting our thoughts into words disconfirms the view that we think in the language in which we speak

---

A very interesting question. Strictly speaking, yes, it disconfirms the view that we always think 'in' the language in which we speak. But, importantly, it need not suggest that we think 'in' some other language, prior to our verbal articulations. The picture of thinking 'in' a language suggests the use of a canon of symbols which stand for absent objects and which may be rearranged and linked in various ways to 'express' a thought previously held. That picture is adrift from the facts. Sometimes, I think 'in' language in the sense that I am responding (as now) to a written question with a written answer. But whether this is the use of a canon of symbols which stand for objects is rather less clear. Some philosophers say yes, others (I would side with) say no. And there are other occasions where we might say I was thinking about a problem, a musical problem (how to convey a song's mood in percussion), and not thinking 'in' language at all: one thinks then in the sense of trying and considering, or, if one is really good, considering and trying. My suggestion would be that thinking about the *mot juste* is sometimes a thinking of the former 'in' language kind, and sometimes of the latter.

David Robjant


(11) Dave asked:

I have a question regarding the existence of actual infinities. I've heard theists argue that an actual infinity cannot exist, yet claim that God is infinite. Some then say that an actual infinity cannot exist in 'the physical world' or in 'spacetime,' but outside of the physical world (but still in reality) actual infinities can exist. Isn't this an arbitrary distinction? Or are they using a different notion of infinity for God? The existence of God is such an obvious counterexample to their argument that I feel like I'm missing something. Thanks.

---

The contexts of metaphysics and mathematics are, as Dave knows, different, and therefore their use of a common symbol, e.g., 'infinite,' does not entail equivocation. Metaphysics explores the intelligibility of self-subsistent being, which finite beings allegedly participate (or, as modern syntax has it, participate 'in'). In the Thomistic tradition that has influenced subsequent philosophical theology to the present day, the symbol of 'subsistent being' is equivalent to 'God.' Mathematical infinity, by contrast, refers to the possibility of adding a member to a series: if it is always possible to add one more, then the series is infinite. The series of natural numbers (or of even numbers, or of prime numbers) is infinite in that sense. It is a metaphysical claim that no series of existents can correspond in a one-to-one fashion to the series of natural numbers, because such an actual or 'consummated' infinity would lead to absurdities.

For example, suppose an infinity of persons stands in a line to your left and each person has one coin. A superhuman being with magnetic powers causes the coin belonging to the person on your immediate left to travel instantaneously from his or her pocket to yours (so that now you have two coins); simultaneously, coins from the third and fourth persons wind up in the second's pocket; coins from numbers five and six go to number three; from seven and eight to four, etc. What such a transfer accomplishes is a doubling of the number of coins by the mere change of the location of existing coins, that is, without the production of new coins. That is metaphysically absurd, and that is why there cannot be a consummated infinity: it bears within it the possibility of absurdity, which is no possibility at all.

In the future, Dave will go further under his own steam to resolve more quickly, if not dispel, a problem if he clarifies who said exactly what (i.e., not settle for 'I've heard...', 'Some then say...'). I wish to assure Dave that what was 'obvious' to him has also been obvious to the intelligent writers who, he seems to have thought, missed the obvious and then contradicted themselves. Which philosopher forgot that he had said that God was infinite after declaring consummated infinities to be impossible? One should document such a self-damning performance before presenting it as an interesting case for the consideration of others. Having said that, I also want to assure Dave that I enjoyed answering his question and hope he will pursue his metaphysical studies.

Anthony Flood


(12) Reese asked:

The square root of 2:

A) Can be expressed by a ratio of two integers

B) Cannot be expressed by a ratio of two integers

---

You don't ask a question, simply asserting two contradictory statements. I take it that your questions are:

1. Which statement is correct?
2. Has the issue any philosophical significance?

The answers are:

1. (B) — proof in a moment
2. Yes, mainly historical

The Pythagoreans held that numbers were somehow 'the essence of all things'. They recognized square numbers, triangular numbers, perfect numbers, amicable numbers, prime numbers, and much else. By numbers they meant the whole numbers (1, 2, 3 and so on). Reality, somehow, was reducible to these or to ratios between them (hence rational numbers). Consider now the unit right-angled triangle, i.e. one with its short sides each exactly one unit long. By Pythagoras' Theorem, the length of the hypotenuse is exactly the square root of two. So, as part of reality, this length must be the ratio of two integers. Of course nobody had worked out which two, but in due course that might be known. Then one day Hippasus proved that root two cannot be so expressed. Legend has it that he demonstrated his proof while on a boat journey, and his companions were so disturbed by this threat to their mystical numerological cosmology that they turfed him overboard. Geometry was clearly superior to arithmetic: root two could be represented exactly by a line but not by numbers. Plato's model for his views on forms and universals was ideal geometrical objects, and, famously, the sign over his Academy said 'Let no one ignorant of geometry enter this house'. Hippasus' proof:

Assume root two is expressible as a ratio of two integers. Let these, reduced to lowest terms by division, be m and n. It follows that m and n are not both even (at least one must be odd).

So:

root 2 = m/n
Squaring both sides: 2 = m²/n²
Hence m² = 2n²
Hence m² is even
Hence m is even
Hence m = 2p for some integer p
Hence m² = (2p)² = 4p²
Hence 2n² (= m²) = 4p²
Hence n² = 2p²
Hence n² is even
Hence n is even
Contradiction (m and n can't both be even)
Hence the assumption that root two = m/n is incorrect (reductio ad absurdum).
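
For anyone who wants a computational sanity check, here is a minimal Python sketch (illustrative only, and no substitute for the proof; the function name and search bound are arbitrary choices of mine). It looks for integers m and n with m² = 2n², which is exactly what the reductio rules out, so the search comes up empty.

from math import isqrt

def find_rational_root_two(limit):
    # Search for integers m, n (1 <= n <= limit) with m*m == 2*n*n.
    # If root 2 were a ratio m/n, such a pair would have to exist.
    for n in range(1, limit + 1):
        m = isqrt(2 * n * n)  # the only candidate numerator for this n
        if m * m == 2 * n * n:
            return (m, n)
    return None

print(find_rational_root_two(10**6))  # prints None: no ratio m/n squares to exactly 2

Raising the bound changes nothing, of course; the proof above shows why.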

Of course things have moved on since the Greeks with successive acceptance (sometimes resisted by many mathematicians) of irrational numbers (such as root 2), negative numbers, zero, transcendental numbers, imaginary numbers, infinite numbers and nimbers.

Craig Skinner


(13) Sandra asked:

What's the difference between an idea and knowledge?

---

OK, well, let's start out by assuming that by 'idea' you mean something like what the early modern philosophers meant — a mental representation that has meaning (so, kind of like a concept). (That's not exactly how they thought of ideas, but it's close enough.) Now, what's the difference between an idea and knowledge? Well, for one thing, you can have an idea about (or concept of) something without having any knowledge about whether or not that thing exists. Plenty of children have the idea of Father Christmas. But they don't know whether or not he exists. It's one thing to have the idea; quite another to know that the idea corresponds to a real, existing object.

That said, it seems that *sometimes* the mere presence of the idea automatically goes hand-in-hand with *some* knowledge. So, arguably, if you really do have the idea of a bachelor, then you also know that all bachelors are unmarried — if you didn't know that, you couldn't really be said to possess the idea of a bachelor, because clearly you wouldn't know what 'bachelor' means. A lot of philosophers think of such truths as 'all bachelors are unmarried' as analytic truths — they are true in virtue of the meanings of words alone, so if you know what the words mean (in other words, possess the relevant ideas), you'll know that the relevant claim is true. But this isn't knowledge of any aspect of external reality; knowing that all bachelors are unmarried, for example, doesn't mean you know that there are any bachelors out there in the world.

It can be easy to lose sight of the point that ideas and knowledge are different when you start thinking about some of the debates had by the early modern philosophers. So for example Descartes thought that existence is part of the idea of God (because existence is a 'perfection', and God is by definition perfect), so that merely by having the idea of God, you can know that God exists. (A non-existent God would be less perfect than an existent one.) Also, the debate about 'innate ideas' and 'innate knowledge' sometimes doesn't distinguish very clearly between the two. But, again, there is clearly a distinction to be made here. If we think of ideas as ingredients of knowledge, in the sense that in order to know that P (that snow is white, say) you need to have the ideas that constitute P (in this case the idea of snow and the idea of whiteness), then if there are no innate ideas, it looks like there can be no innate knowledge either. (You aren't born with any ideas at all, so you aren't capable of having any thoughts at all, and so aren't capable of knowing anything, until you acquire some ideas.) But the reverse doesn't hold — if we have no innate knowledge, it doesn't follow that we don't have any innate ideas. If I lack the innate knowledge that snow is white (which clearly I do), it doesn't follow that the idea of snow and the idea of whiteness are not innate (although it is independently implausible to hold that they *are* innate!).

Helen Beebee
Director
British Philosophical Association


(14) John asked:

if you would listen to my brilliant advise then all your problems would go away and since your problems have went away you must had listened to my brilliant advise. which part of the question is valid and invalid?

---

Hmm, this question needs a bit of tidying up before it can be answered. (You ask 'which part of the question is valid and invalid', but I don't see any question there!) I think probably you're asking something like this: is the following argument valid?

(Premise 1) If you listen to my brilliant advice, all your problems will go away.
(Premise 2) All your problems went away.
(Conclusion) So you listened to my brilliant advice.

This is obviously an invalid argument. (It's a standard fallacy known as 'affirming the consequent'.) Generally, you can't derive 'C' from 'If C then A' and 'A'. (Think of another example. 'If Sparky is a dog, then Sparky has 4 legs. Sparky has 4 legs. So Sparky is a dog.' Clearly invalid: both premises would be true if Sparky were a cat or a giraffe, but the conclusion would then be false.) Similarly, in your example, there are lots of reasons why your problems might have gone away, of which listening to the advice is just one. Perhaps the problems just went away of their own accord, or perhaps you decided what to do on the toss of a coin, and paid absolutely no attention to the advice. In that case, the conclusion would be false, but both premises would still be true (assuming that listening to the advice would *also* have made the problems go away).
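
To see the fallacy mechanically, here is a minimal Python sketch (purely illustrative; the helper name is my own). It brute-forces the truth table for the two statements and finds exactly one counterexample: the case where the problems went away but the advice was not listened to, i.e. premises true, conclusion false.

from itertools import product

def counterexamples(premises, conclusion):
    # An argument form is invalid if some assignment of truth values
    # makes every premise true and the conclusion false.
    return [(a, c) for a, c in product([True, False], repeat=2)
            if all(p(a, c) for p in premises) and not conclusion(a, c)]

# A = 'you listened to the advice', C = 'your problems went away'.
# Affirming the consequent: premises 'if A then C' and 'C', conclusion 'A'.
premises = [lambda a, c: (not a) or c, lambda a, c: c]
conclusion = lambda a, c: a

print(counterexamples(premises, conclusion))  # [(False, True)]: premises true, conclusion false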

Helen Beebee
Director
British Philosophical Association


(15) John asked:

if you would listen to my brilliant advise then all your problems would go away and since your problems have went away you must had listened to my brilliant advise. which part of the question is valid and invalid?

---

Questions cannot be valid or invalid. Only logical arguments can be valid or invalid. I will try to construct or guess at which argument you have in mind. An argument consists of premises and a conclusion. A logically valid argument is one where if the premises are true then it is impossible for the conclusion to be false.

Premise 1: If you would listen to my brilliant advice then all your problems would go away.

Premise 2: Your problems have gone away.

Conclusion: Therefore you must have listened to my brilliant advice.

This is an invalid argument because it is possible that my problems have gone away for some other reason, e.g. because I have seen a psychiatrist and not because I have listened to your advice. In this argument it is possible for both the premises to be true but for the conclusion to be false.

Shaun Williamson


(16) Benjamin asked:

True or False: The fact that we often have difficulty putting our thoughts into words disconfirms the view that we think in the language in which we speak

---

The short answer is false. In 'Philosophical Investigations' Wittgenstein wrote 'We are like a primitive tribe who hear the expressions of civilised men and interpret them in a bizarre way'. So the fact that we have difficulty putting our thoughts into words gives rise to the superstition that somehow our clear thoughts exist in some 'thought language' but we fail to translate this thought language correctly into English.

If we think in a different language to the one we speak then which language is this? Are there any dictionaries for this language? Is there only one language of thought or are there many?

The things that we call languages are things like English, French, Chinese, German etc. Languages have words, but what are the words of this strange language of thought? How did we learn this language, and how can we be sure that we have learnt it correctly?

Of course instead of thinking that we have difficulty putting our thoughts into words which, to the primitive mind, suggests a sort of translation, we could have said that often our thoughts are unclear.

Here is an exercise that is sometimes given in English lessons: describe how to tie a shoelace. Your description will be clear if someone else could learn how to tie a shoelace by following your instructions.

This is an example of how it can be difficult to express things in words but there is no temptation to think that I have a clear thought in some other 'mental language' of how to tie a shoelace but I have problems translating this clear thought into English.

Another expression that leads to a similar primitive superstition is 'What I really meant to say...' This tempts us to think that somehow my meaning exists independently of my expression of it, and if only I could get a clear look at what I mean, I could express it accurately. We are tempted to take the surface structures of language and construct a metaphysic around them.

Shaun Williamson


(17) Dave asked:

I have a question regarding the existence of actual infinities.

I've heard theists argue that an actual infinity cannot exist, yet claim that God is infinite. Some then say that an actual infinity cannot exist in 'the physical world' or in 'spacetime,' but outside of the physical world (but still in reality) actual infinities can exist. Isn't this an arbitrary distinction?

Or are they using a different notion of infinity for God? The existence of God is such an obvious counterexample to their argument that I feel like I'm missing something. Thanks.

---

I think you are confusing yourself with words here. We are most familiar with the idea of infinity from mathematics. It is a useful idea and a very ordinary one in some ways. So for example mathematics often deals with infinite series of numbers.

However it makes no sense to talk about the existence of an infinite number of physical objects. Every physical group of things in our world is finite and this includes the total number of atoms or even sub-atomic particles in the universe.

God, if he exists, exists outside of our physical world of space and time so the idea of an infinite God does not contradict any of the laws of science. Of course we have no idea of what it really means to describe God as existing outside of space and time or what it means to describe God as infinitely powerful etc.

I'm not even sure if talk about God makes sense so I can't really help you with that, but there is nothing obviously wrong with attributing infinity to God. In fact it would be more difficult to believe in a God who was finite.

Shaun Williamson

back

(18) Dave asked:

I have a question regarding the existence of actual infinities.

I've heard theists argue that an actual infinity cannot exist, yet claim that God is infinite. Some then say that an actual infinity cannot exist in 'the physical world' or in 'space-time,' but outside of the physical world (but still in reality) actual infinities can exist. Isn't this an arbitrary distinction?
Or are they using a different notion of infinity for God? The existence of God is such an obvious counterexample to their argument that I feel like I'm missing something. Thanks.

---

No, I don't think you are missing something. Theologians sometimes get into quandaries when trying to reconcile reason and dogma. In my view the word infinity is empty of meaning: we use it when we do not know the limits of something, just as we use the word chance when we do not know the causes of something. God (if there is a God) does not have to be infinite in order to be perfect; perfect, that is, in the sense of being the best of all possibles. And what is the difference between 'infinity' and 'actual infinity', if any?

Helier Robinson

back

(19) Benjamin asked:

True or False: The fact that we often have difficulty putting our thoughts into words disconfirms the view that we think in the language in which we speak.

---

True. Other counter-examples are having a word or name 'on the tip of your tongue,' and having an idea for which no word has yet been invented. This does not mean that we do not use language when thinking: we do most of the time. But if you distinguish ideas and words, then there are three kinds of thought: with ideas alone, with ideas and words combined (i.e. concepts), and with words alone; the last is rote thinking. And you can also distinguish two kinds of idea: concrete, such as pictures in memory or imagination, and abstract, as in logic and mathematics. Computers do mathematics using words (symbols) alone; this kind of 'thought' is called algorithmic. Creative, or original, thinking uses ideas alone: the words, hence concepts, come later.

Helier Robinson

back

(20) Kalyan asked:

I claim and proclaim to be an atheist as well as a skeptic rationalist. But then, my question is it a contradiction in the sense that as a skeptic and a rationalist, I don't have enough evidence to prove my arguments as an atheist?

---

No, it is not a contradiction. A contradiction is a statement that is both true and false. You claim to be a rationalist and a sceptic, and that you have arguments, but insufficient evidence, to prove atheism. I don't know what your arguments are, but if they are genuine arguments then you start with premises which, if true, logically entail the truth of atheism; but your insufficient evidence means that you do not know if your premises are in fact true. In fact, atheism, like theism, is extraordinarily difficult to prove; each is a matter of belief rather than proof. By the way, if you are a strict sceptic and a strict rationalist then you have to be a solipsist. Since you cannot prove the existence of anything outside of your present consciousness, you have to be skeptical about all such existence, which means denying it, which makes you a solipsist.

Helier Robinson

back

(21) Kalyan asked:

I claim and proclaim to be an atheist as well as a skeptic rationalist. But then, my question, is it a contradiction in the sense that as a skeptic and a rationalist, I don't have enough evidence to prove my arguments as an atheist?

---

The short answer to Kalyan is that you can be an atheist while holding a reasoned skeptical stance ('reasoned' because your skepticism isn't either pathological or mere blind obstinacy) without believing yourself to be in a position to offer a proof that God does not exist. It suffices that you can offer arguments in favour of the view that atheism is the 'best explanation'.

'Best explanation for what?' is the question. The existence of a world (rather than no world) is one possible explanandum, or thing to be explained. Another possible explanandum is the existence of a Moral Law (if you believe in such a thing). But there are many more, maybe as many as there are views on the nature of the godhead.

I have never undergone the experience of a religious revelation. But supposing I did, would I be in a position to consider theism and atheism as alternative explanations and, moreover, choose atheism on the grounds that it provided a better explanation for my experience than theism? Well, yes, that is what one has to say as an atheist. But I admit it sounds rather odd to say it. I can see a case for arguing that an experience wouldn't be the experience of religious revelation if you regarded it as possibly illusory. But then again, that problem doesn't arise if the explanandum is another person's (alleged) religious revelation.

The idea that a scientific theory is an 'inference to the best explanation' goes back to the American philosopher of science C.S. Peirce who distinguished what he termed abduction from the process of Baconian induction. The idea was more recently revived by British philosopher of science Peter Lipton, and has become part of the vocabulary of contemporary analytic philosophy.

My University of London external students taking the BA Philosophy of Science module have been sending me essays on this topic, along the general theme, 'Is inference to the best explanation a distinctive kind of explanation?' I find Lipton's idea somewhat hazy, and yet there seems undoubtedly to be a core notion, which the God question illustrates nicely. You wouldn't seriously claim to have inductive evidence for atheism. Yet it seems to make perfect sense to say that atheism is a better explanation for any alleged evidence that a theist might put forward than theism.

According to Occam's Razor, other things being equal the better explanation is the one that posits fewer hypothetical entities. God is an unnecessary posit. Any explanation that does any work, works just as well without God directing things behind the scenes. That would be the moderate atheist view.

Enter Dawkins. In 1976, in my first year taking the Oxford B.Phil, there was a rumour going round that the redoubtable Gareth Evans was offering his undergraduate tutees and graduate students a free hardback copy of The Selfish Gene (which had been published that year) provided they promised to read it. With such a great testimonial, I could never bring myself to indulge in the fashionable Dawkins-bashing, despite Dawkins' somewhat embarrassing reductive views of the nature of philosophical inquiry, as a mere illustration of the theory of 'memes'.

Apropos of the meme theory, the Presocratic philosopher Xenophanes is the first recorded philosopher to employ a genetic argument against a religious claim:

Ethiopians say that their gods are snub-nosed and black, the Thracians that theirs have light blue eyes and red hair.

Kirk, Raven and Schofield The Presocratic Philosophers §168, p. 169

As Xenophanes must surely have realized, this isn't an argument that God cannot be black and have a snub nose. What the observed 'coincidence' shows, in our terms, is that the Ethiopians' reasoning to the best explanation is likely to have been somewhat biased. Having said that, if you believe that man is 'made in God's image' and your only experience of human beings is of people who are black and have snub noses, then it is surely reasonable to infer that God is black and has a snub nose.

However, by the same token, someone who had travelled a bit and discovered that different races have different physiognomies, would realize that this inference was not reasonable, and that any claim of 'resemblance' between God, or the gods, and man must allow for racial variation.

What this shows, if anything, is that you can undermine a purported inference to the best explanation either by pointing out grounds for possible suspicion of bias, or by showing that the explanation relies on an impoverished evidential base, or both. At any given time, however, the explanation remains in place until either a better explanation comes along, or the grounds for putting forward that explanation are undermined.

I would therefore be quite happy to accept that the belief that atheism is the best explanation for the existence of the world, or the phenomenon of religion — or anything you like — is a 'meme', in Dawkins' sense, whose evolutionary history goes back to the great historic clashes between established religion and the emerging sciences. That doesn't decide the question whether atheism is or isn't in fact 'the best explanation'.

But doesn't our very sense of what makes one explanation 'better' than another depend on prior conditioning, on the memes that have been transmitted to us? Is there a fact of the matter here? Couldn't we be completely wrong about what is or is not a good explanation?

For Dawkins, the spectacular success of science is a major consideration. The kinds of criticism that any scientific claim is subjected to by other scientists do not vindicate themselves (because the same argument can be run with 'the kinds of criticism that any theological claim is subjected to by other theologians'). However, the advantage science has over theology, is in its results. Religious belief has 'results' too, but the results arise from the belief — its psychological effect on the believer — rather than the truth of the belief: a vital distinction.

As I've said, it all depends on the explanandum. Here, there is a nice finesse in that the atheist isn't the one who has to state what the explanation is intended to explain. Atheism is not a claim, but rather the denial of a claim. The onus is clearly on the one who makes the claim — the one who asserts that God exists — either to offer a proof, or, failing that, to justify the view that God's existence is a better explanation for XYZ, whatever 'XYZ' may be, than any alternative.

Geoffrey Klempner

back

(22) Ahmed asked:

Does philosophy make a person happy? If yes, how?

---

No, it can't, not by itself. Happiness comes from setting yourself non-evil, realistic goals in life and then working towards achieving them.

My goal in life was to understand Western philosophy, so philosophy has made me very happy. Someone else might be made very happy by building a model of the Taj Mahal from matchsticks. It all depends on what sort of person you are and what your interests are.

Having goals in life and trying to achieve them is what makes you happy.

Shaun Williamson

back

(23) Alvin asked:

I was reading on the problem of induction the other day and I find paradoxes like Nelson Goodman's 'grue paradox' perplexing. It seems that it is not justified for scientific laws to go from specific to general. I know there is no known generally acceptable solution to the problem and I've read Carnap's and Popper's solution but still find it unsatisfying.

I'm particularly worried with the problem as it poses a threat to the foundations of science. Science cannot progress without employing induction, although it seems obvious, the grue paradox shows that we should not take it for granted.

It seems that Popper's solution has been heavily criticised (i.e. it can be assumed that the sun rises every day until the day we find a counterexample, at which point the initial theory would be falsified). What is the problem with Popper's solution? What about Carnap's?

Generally what are your thoughts on the problem?

---

If you have read Goodman, Carnap and Popper (on the significance and problems of 'grue'), then you know that their discussions were targeted at a particular approach to the justification of inductive inference. Specifically, they are addressing some of the problems inherent in a statistical probability approach to confirmation theory. Goodman's basic argument with 'grue' is that any observation of a green emerald provides equal confirmation for the inductive hypothesis that all emeralds are green and for the competing hypothesis that all emeralds are grue. Both Carnap and Popper tried unsuccessfully to counter Goodman's reasoning.

Check out John Norton's article 'How the Formal Equivalence of Grue and Green Defeats What is New in the New Riddle of Induction' (Synthese, Vol. 150, No. 2, May 2006, pp. 185-207). In that article Norton explains that either (a) grue and green are fully equivalent descriptions of the same underlying scientific facts, and which to prefer is an arbitrary decision; or (b) the frame can be expanded until there arise scientific facts that make it obvious that the green hypothesis is to be preferred over the grue hypothesis.

Goodman's 'grue puzzle' (it is not a paradox) presents a challenge to inductive inference (and therefore, presumably, science) only if one is wedded to the view that inductive inference needs to be justified by some form of confirmation theory. If one abandons that premise, then Goodman's 'grue puzzle' ceases to be problematic.

One needs, instead, to recognize that inductive inference is a 'rule of thumb' reasoning process and not an 'absolute rule' reasoning process on a par with deductive logic. A piece of deductive reasoning follows rules that always yield the desired result, without exception. Inductive inference, on the other hand, merely yields the desired result more often than not. One chooses competing hypotheses not on the basis of any confirmation theory, but on the basis of cognitive economy. One hypothesis is easier to deal with in anticipating the future. The Copernican sun-centered hypothesis was more cognitively economic than the competing Ptolemaic Earth-centered hypothesis.

Stuart Burns

back

(24) Ben asked:

Hi, I'm currently sitting a Philosophy A-level and I'm really struggling to comprehend soft determinism/compatibilism. How can free will be compatible with determinism? Surely by definition, they both necessitate exclusivity to each other?

---

The resolution of your dilemma lies in a better understanding of just what it is you mean by the label 'free will'.

The classic understanding of free will comes from the ethical consideration that given a choice of doing a good thing or a bad thing, you are free to choose to do the good thing. Now this level of understanding certainly includes the proviso that you are not under any physical compulsion. You are free to implement your evaluation of which is the best thing.

But just what exactly does this mean? Given a choice of A or B, you do not just flip a coin, or rely on some other randomizing input to govern your choice. Depending on some sort of randomizer is not what we mean by making a choice. Nor do you simply take someone else's word that A is the thing to do, or B the thing to avoid. Following someone else's commands is not making a choice. And relying on someone else's opinion might be making a choice about how to choose, but is not making a choice about what to do.

To make a free-will choice between A and B, you go through all the reasons you can think of that weigh in favour of or against A or B and evaluate which alternative would be 'better' according to the standards of 'better' that apply. The amount of time and effort you invest in this evaluative exercise will vary according to your expectation of the costs and benefits involved. Whether to pick up and keep this penny you see on the ground before you will command less consideration than whether to pick up and keep this bag full of $100 bills. Whether to have cake or pie for dessert will command less consideration than whether to buy this house or not.

But go back and re-read the first sentence of the previous paragraph again. To make a free-will choice is to marshal and evaluate reasons. But that means that if you are faced with the same alternatives tomorrow as you are faced with today, and the reasons have not changed, then you will make the same choice. In other words, the choices you make are determined by the reasons that you consider. The only way that you would make a different choice, is if the reasons are different.

For any choice to be your free choice, and not the result of someone else's control of, or influence on you, you have to make the choice based on your own understanding of your options, as evaluated by the values and priorities you have learned through experience. A choice is not freely yours if it is not based on your beliefs and your character, your experiences and your goals. Yet with all of these constraints, you still feel that the choice is a free one. That is because these constraints that I have listed are what you are. If these constraints are not operational when you make your choice, then you are not making the choice.

When you make a choice, you bring to that choice your experiences and memories, your character and desires, your goals and your values. That is what 'you' are. And given who you are and what you are made of, in any particular circumstance the only way that you could choose other than how you do choose, is if something external to you forced the issue. In many cases, your friends can predict the way that you will choose, because they know who you are. And if you were to choose other than you would normally choose, your friends would look for some unusual reason for this unusual choice.

But all that is no more than saying that you are the product of your past history. Which is all that Determinism is saying.

Stuart Burns

back

(25) Alvin asked:

I was reading on the problem of induction the other day and I find paradoxes like Nelson Goodman's grue paradox perplexing. It seems that it is not justified for scientific laws to go from specific to general. I know there is no known generally acceptable solution to the problem and I've read Carnap's and Popper's solutions but still find them unsatisfying.

I'm particularly worried with the problem as it poses a threat to the foundations of science. Science cannot progress without employing induction, although it seems obvious, the grue paradox shows that we should not take it for granted.

It seems that Popper's solution has been heavily criticised (i.e. it can be assumed that the sun rises every day until the day we find a counterexample, at which point the initial theory would be falsified). What is the problem with Popper's solution? What about Carnap's? Generally, what are your thoughts on the problem?

---

In my view induction is a special case of animal learning. A dog quickly learns that if you pick up his leash then he goes for a walk; the dog is generalising from the particular to the universal. And we do the same with stereotyping and superstition: consider how many people are starting to claim that all Muslims are terrorists, and how many people you know who cross their fingers, or knock on wood, to avert bad luck.

The point about scientific induction (which is also generalisation from the particular to the universal) is that it is strictly disciplined by three criteria that have, over several centuries, been found to work: (i) scientists must be objective, (ii) quantitative data are better than qualitative data, and (iii) experiments must be repeatable. What these have in common is an emphasis on the public, as opposed to the private. Objectivity is the exclusion of the subjective, which is private. Quantitative data are more public than qualitative data. And the repeatability of experiments ensures that their results are public.

The reason that the public is important is that it excludes illusion. We can define empirical reality as all that people perceive around them that is non-illusory, which is all that is potentially universally public. Illusions are unreal, so they are excluded from empirical reality. Most illusions are private; some are public, such as the railway lines meeting in the distance, but even these are not universally public. And the universally public does not have to be actually perceived, only potentially perceivable. With this definition you can see that empirical science seeks to know empirical reality.

Please note that the problem of induction does not apply to theoretical science, which has different criteria, of which the two most important are that a theory must not be contrary to empirical data, and a theory should predict empirical novelty.

Helier Robinson

back

(26) Malcolm asked:

What does this mean? 'I do not believe we can have any freedom at all in the philosophical sense, for we act not only under external compulsion but also by inner necessity.' (Albert Einstein)

---

If your actions are determined by external compulsion then they are not free; and if they are determined by inner necessity then they are not free; in which case none of your actions are free and you have no freedom. This means that your feelings of making decisions freely, and of acting freely, are illusory.

Helier Robinson

back

(27) Malcolm asked:

What is the easiest way to determine the figure in Aristotelian logic (syllogisms)?

---

If you arrange the figures in numerical order from left to right, with the major premise first and the minor second, then drawing lines through the middle terms gives a picture of a buttoned shirt collar:

 1    2    3    4
M-P  P-M  M-P  P-M
S-M  S-M  M-S  M-S
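
If you prefer a mechanical test to the mnemonic picture, here is a small Python sketch (my own addition, not part of the answer above) that reads the figure off the position of the middle term in each premise; syllogistic_figure and the pair notation are merely illustrative choices:

def syllogistic_figure(major, minor):
    """Return the figure (1-4) for premises written as (subject, predicate)
    pairs containing the middle term 'M'."""
    m_subject_of_major = (major[0] == 'M')  # M is the subject of the major premise
    m_subject_of_minor = (minor[0] == 'M')  # M is the subject of the minor premise
    if m_subject_of_major and not m_subject_of_minor:
        return 1  # M-P / S-M
    if not m_subject_of_major and not m_subject_of_minor:
        return 2  # P-M / S-M
    if m_subject_of_major and m_subject_of_minor:
        return 3  # M-P / M-S
    return 4      # P-M / M-S

# 'All men (M) are mortal (P); Socrates (S) is a man (M)' is in the first figure:
print(syllogistic_figure(('M', 'P'), ('S', 'M')))  # -> 1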

Helier Robinson

back

(28) Ben asked

Hi. I'm currently sitting a philosophy A level and I'm really struggling to comprehend soft determinism/ compatibilism. How can free will be compatible with determinism? Surely by definition they both necessitate exclusivity of each other.

---

According to the usually accepted definition of free will, determinism and free will are indeed mutually exclusive. However, compatibilists have an alternative definition of free will which, they say, is compatible with determinism while still giving us 'free will worth having'.

To explain:

Determinism means that all events, including our acts, are consequences of the laws of nature plus previous events (ultimately in the distant past), so that everything which does happen must happen. It is inevitable. At no point could I have done otherwise than what I did. This is clearly incompatible with free will defined as the ability to have done otherwise, to have chosen to do something different, to have alternative possibilities (AP). So, by definition as you say, no free will (AP) if determinism is true.

The only ways of preserving our free will are:

1. Deny that the world is deterministic, and try to account for free will (AP) by a dash of indeterminism (quantum events and chaotic dynamics in the brain, for example, or the 'self' as a causal agent) without making such indeterminism mere randomness. This is what libertarians try to do (unsuccessfully, in my view) but that's a side issue as regards your question.

2. Accept determinism (no AP) but define free will as being able to act freely even though you have no AP. This is what compatibilists do. But is this 'free will worth having'?

Clearly it's worth being able to do what you want (to act voluntarily) rather than being compelled at gunpoint to do something else, or prevented from doing what you want by being in jail. But this is to confuse voluntary action with free will. John Locke had no such confusion. In his Essay Concerning Human Understanding (4th ed 1699) he speaks of a man being carried asleep into a room and the door being locked. The man awakes, finds the company in the room most congenial and has no wish to leave. Here, Locke says, the man acts voluntarily in staying in the room, yet has no liberty (he can't leave the room even if he wanted to). Some compatibilists try to maintain that because the man chooses or wants to stay, he exercises free will even though (unknown to him) he can't do otherwise.

Frankfurt and others have devised ever more elaborate scenarios where the lock is in the brain rather than in the door (mad scientists remotely monitor your brain and can stop you doing anything they don't like, but, as it happens, they never have to intervene because everything you do voluntarily is OK by them) purporting to show free will without AP. But, whether the brain is monitored or not monitored, actions, including voluntary ones, are still determined. Another argument used is that you could do otherwise if you wanted to, but you don't want to. Of course not, I say, because what you want is determined, and couldn't be otherwise. Compatibilism agrees with this point but holds that you are 'able' to do otherwise in the sense that, had the past or the laws of nature been different, you might have done otherwise. But how is this an 'ability'? It can never be exercised.

So, free will, defined as having AP, is indeed incompatible with determinism, and, defined as acting freely without AP, merely amounts, in my view, to voluntary action which is nevertheless deterministic.

If you still want free will, have a look at the libertarians' arguments.

If they don't convince you, get off the age-old merry-go-round about free will, accept that it's an illusion (very compelling, I admit) and think about two sorts of question:

1. What's the mechanism of this illusion? Do all humans have it? Would all self-conscious entities in the universe feel they have free will? Will future computers think they do?

2. How should we live knowing (or at least justifiably believing) we have no free will? (No praise or blame, of course, but no moral indignation or recriminations either; still right and wrong, approval/disapproval; still quarantining dangerous criminals to protect the public.)

Best of luck with your A level(s).

Craig Skinner

back

(29) Kristen asked:

How do we know if anything really exists?

---

If you are going to use expressions like 'really exists' then you must have some idea of what it means and of the difference between existing and not existing. So for example there is a difference between a bus stop that really exists and a bus stop that doesn't really exist.

I can stand by the bus stop that really exists and catch a bus. I can't stand beside the bus stop that doesn't exist. When I go to catch a bus I always choose a bus stop that really exists and I never go to one that doesn't really exist.

It would of course be strange to say 'I know that some things really exist' and just as strange to say 'I know that nothing exists'. Why does it seem to us, when we are doing philosophy, that such sentences could even make sense? That is the real problem.

Shaun Williamson

back

(30) Alanna asked:

You were born a tyrant. You were taught to think that you are better and more important than anyone else. You killed thousands of innocents mercilessly as though they were just pawns. You delight in bloodshed and battle. You are a master of propaganda and lies.

Suddenly you are overthrown. You live in exile. For the first time you experience friendship and love. You are happy. But a citizen of your ex-kingdom, the son of someone you have ruthlessly murdered, finds you. He has suffered under your control and wants to avenge his mother and all the others you have killed or brought great depression upon. Can you ever be forgiven?

---

The people you have wronged may choose to forgive you or they may not. You cannot demand forgiveness. There is no world forgiveness authority that can declare you to be officially forgiven. You cannot demand forgiveness just because you have changed.

Forgiveness is something that only the individual people you have wronged can do.

Shaun Williamson

back

(31) Martin asked:

If a meritocracy is a type of society where people are rewarded in line with their intelligence or ability, has there ever been talk of a type of society where morals and good will are rewarded? Could this type of society be made possible and how?

---

I think the basic idea of a meritocracy is that people are educated, employed and paid on the basis of what they can do for society. This can be contrasted with the idea of aristocracy where people inherit the right to jobs and money even if they are incapable of carrying out those jobs and don't do anything for society that would make it worthwhile to pay them for what they do.

The idea that we should pay people for being good is ludicrous. Being good is not an economic activity. People should primarily be valued in terms of whether they are good or bad people, but this has nothing to do with their economic value.

Morality is its own reward. If you are a good person then you deserve praise and no one has the right to criticise you. If you are a bad person then everyone has the right to shun and criticise you, including yourself.

A cobbler who is a good person deserves our praise but if he is useless at making shoes then there is no reason to pay him for his efforts. We praise him for being a good person but we pay him for being a good cobbler.

Shaun Williamson

back

(32) Asia asked:

Do holes really exist or are they pockets of non-existence?

---

Whoa! I know someone who would love this question — my erstwhile student and Pathways mentor Brian Tee. Brian got his MA in Philosophy from The University of Sheffield and now owns a bookshop in Sheffield — a nice job for a philosopher. I have to apologize to Asia in advance because Brian would have been able to give a much better answer than me. But I can only try my best.

I remember having a three hour discussion on the philosophical topic of holes with Brian while downing pints of Easy Rider at The Sheaf View pub, just up the road from my office. John Riley, another ex-student who designed the banner for The Ten Big Questions was also there. The discussion was sparked off when Brian pointed to the absence of beer in his glass and reminded me that it was my turn to buy a round.

How can an absence be something? As any beer drinker knows, the absence of beer in your glass is a very serious matter which needs to be rectified as soon as possible. Somehow, that got us onto the topic of holes.

Let's say that holes undoubtedly exist. Then what is a hole?

Consider a hole in a wall. (I think that was my bright idea.) A hole is something you can climb through: an opportunity (if you are trying to get to the other side of the wall) or a threat (if you are trying to prevent someone from getting to the other side of the wall). However, a hole — say, a gap in the brickwork — isn't a hole in the wall if it is too small (then it's a crack — another concept that one could look at), or if air is blasting through at a sufficiently powerful rate, or if it contains a guillotine designed to chop you in half if you try to climb through.

Chicken wire is full of 'holes', but a hole in a chicken wire fence is a matter of concern to the farmer, especially if there are foxes about. Here again, what does or does not count as a hole is relative to the function or purpose of a given item.

Is a hole a thing? Consider the holes in Emmental ('Swiss') cheese. If you bought some Emmental at the supermarket and then discovered that it didn't have any holes, you'd have the right to complain: the cheese may taste the same, but it isn't Emmental without the holes. You'd miss the peculiar pleasure of exploring the holes with your tongue as you bite into the cheese. Visual appearance is also very important. In this and in many other cases, holes are a positive aesthetic feature.

However, so far we are merely skirting round the issue. Talk of the 'functional' or 'aesthetic' role of holes merely underlines the reasons why we take a practical interest in these strange objects. The philosophical question, however, is what holes are, ontologically speaking.

From the point of view of logic, to say that a hole is a 'something' is to assert that it is an 'entity with an identity' in P.F. Strawson's sense: an object of reference whose persistence and identity conditions are sufficiently well defined to enable a speaker and hearer to identify it as the 'same again' on different occasions and say things about it.

One of the things we discussed in the pub was Sartre's discussion of 'the absence of Pierre'. I'm waiting in a coffee bar for Pierre but Pierre hasn't shown up. Wherever I look, Pierre is not in my field of vision. In terms of Gestalt psychology, I perceive the cafe not just as general scenery but as a ground on which I am expecting a figure to appear. All the details fade into a more or less uniform blur. And yet what I perceive is not merely a blur but something positive, Pierre's absence.

To perceive a hole is to perceive a gestalt, a 'figure' on a 'ground'. But, equally, to perceive the absence of a hole is to perceive a gestalt. The hole searched for is not there.

Frege or Russell would say that the absent Pierre isn't a peculiar kind of object inhabiting the 'realm of non-existence'. Rather, the statement, 'Pierre is not here' can be analysed in first-order predicate calculus as, 'For all x, if x is in the cafe, then x is not equal to Pierre', or, analysing proper names à la Quine, 'For all x, if x is in the cafe then x does not have the property of being-Pierre'.
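
For what it is worth, those two paraphrases can be written out in LaTeX notation as follows (my own symbolization; 'InCafe' and 'IsPierre' are stand-in predicates, not anything taken from Frege, Russell or Quine):

\forall x \, (\mathrm{InCafe}(x) \rightarrow x \neq \mathrm{pierre})

\forall x \, (\mathrm{InCafe}(x) \rightarrow \neg\, \mathrm{IsPierre}(x))

The first keeps 'Pierre' as a singular term; the second trades the name for a predicate, so that the formula no longer even appears to refer to a missing object.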

However, this response still fails to address the question why absences, or holes, are philosophically interesting, and indeed why Sartre sees the very notion of 'nothing' or 'nothingness' as having deep phenomenological or metaphysical significance. You don't have to believe that holes are 'made of' a special kind of non-existent stuff, or think of holes as 'pockets of non-existence' in order to sense that holes are somehow problematic and disturbing.

Assume that a hole is, as stated above, an 'entity with an identity'. Holes which meet this criterion are like things, and yet they lack many of the essential qualities of things. Holes lack the defining properties of a 'substance' in Aristotle's sense. In Lockean terms, holes do not have 'primary qualities' from which their 'secondary qualities' flow.

And yet holes are like things in that they have a natural life, a natural history. Consider the hole in a sock, which starts off as a broken thread and then gradually grows and grows until your heel sticks through. The holes in Emmental are produced by a biochemical reaction; their distribution and size are carefully controlled by the precise conditions under which the cheese is manufactured. And yet they are not made of anything. They contain pure carbon dioxide but they are not made of carbon dioxide, any more than a hole in a brick wall is made of air.

Just like physical objects, holes can combine and merge. Two small holes in a sock can gradually grow until they merge and become a bigger hole. Equally, holes can be divided up. Adding a few strands of wire fixes the 'hole' in the chicken wire fence. From one point of view, the larger hole has been divided up into smaller holes, but, as we have seen, the smaller holes are not holes in the fence, which as a result of the timely repair is once more an effective fence, sufficient to keep the foxes out.

In the pub we also considered the idea that the edge or rim of the hole constitutes its actual, physical presence. It is true that in describing the precise dimensions of the rim you have described the dimensions of the hole. And yet logically the rim, qua physical stuff, cannot be a constituent element or part of the hole, because you can fill the hole in (e.g. a hole in a wall) without in any way changing the material properties of the rim. Equally, if I mark a chalk circle where I plan to cut a hole in the wall, I have defined a potential rim which in a sense actually exists (as physical material) and yet does not yet exist — just as for the sculptor the statue already 'exists' in the uncarved stone.

— Come to think of it, what is it that one 'sees' in the hunk of stone?

Last week, I was kicking around possible designs for a new web page, ISFP Publishing. The idea is to help unknown authors promote books on philosophy. Somehow, I gravitated towards the idea that the background should look like old paper. I found something very nice on Flickr. But still, there seemed to be something missing. Then the idea came to me — from I don't know where — that what the page needed was a fly, crawling across the paper. The people I've shown the page to agreed that the fly was just right and nothing else would do. But how did I know this, from just staring at the space where a fly was not? What did I see?

However, I think there's something else that needs to be emphasized, something to do specifically with our psychological attitude to holes in particular, which does not apply to absences or lacks generally.

As a matter of physical fact, our bodies are porous (from the Greek poros, passage or pore). The human body is made of, defined by, its holes. (Something about this reminds me of Tantric philosophy.) Through these passages and channels, information and physical material flows in and out. The miracle of reproduction is the most impressive example.

The very notion of perception involves the idea of holes or channels whereby information is conveyed into our minds from the external world, through the eyes, ears, nose. To be receptive to experience is essential to our connectedness with the world and our surrounding environment, as indeed it is to our capacity to communicate with one another. Yet equally important is the role of holes in relation to physical needs, the need to breathe, eat etc.

Last time, I strayed into Freudian territory in talking about 'male' and 'female' aspects of the impulse to philosophize. Leaving aside the differences between the sexes, the discovery that one has an anus as well as a mouth must be a momentous event for the human infant.

All of which leads me to conclude that what makes the topic of holes so enticing is not just one thing but a potent combination of factors.

— Well, those are some more or less jumbled thoughts. Holes exist. But there is no single, definitive way of stating what makes something a hole. It depends on your point of view, or interest. And I've tried to explain why holes are so 'interesting'. If there is a core or real essence to the 'philosophical problem of holes', I don't think I've found it. Maybe you will, Asia, if you keep looking. Or ask Brian.

Geoffrey Klempner

back

(33) Martin asked:

If a meritocracy is a type of society where people are rewarded in line with their intelligence or ability, has there ever been talk of a type of society where morals and good will are rewarded? Could this type of society be made possible and how?

---

Martin, I must say that I find this a most interesting question. That is, while on the surface there appears to be little difference between the two 'types of society' you mention, on examination, it can be argued, the difference may be quite profound. The apparent similarity lies in the fact that, in a moral society, one holds to the moral or ethical values of the community because one believes that this is the manner in which one, as a member of such a community, should conduct oneself; whereas, in a meritocracy, it is considered, in the main, advantageous to oneself merely to be seen to behave in an upright and moral way.

Edith Stein, in her essay 'The Individual and Community', deals very well with this issue where she describes the difference between what she calls 'the community man' and the 'association man'. Before anything else, says Stein, if you want to understand in what sense you can talk about the universe of sentient reality into which the lone psyche fits as a member, you have to clarify a determinate form of the living together of individual persons. Where one person approaches another as subject to object, examines her, 'deals with' her methodically on the basis of knowledge obtained, and coaxes the intended reaction out of her, they are living together in association. Conversely, where the subject accepts the other as a subject and does not confront him but rather lives with him and is determined by the stirrings of his life, they are forming a community with one another. In the association, everyone is absolutely alone, a 'windowless monad', as Leibniz might say, whereas in the community, solidarity prevails. Thus, whereas a society founded solely on meritocracy can lead to isolation, alienation and loneliness, the reward for a society grounded in moral values and good will is personal security and social stability.

In her essay, Stein takes the demagogue (a popular leader who appeals to the baser emotions of his people) as the purest example of the 'association man' who wants to make a crowd of people subservient for his own purposes. The bond of solidarity is severed between him and those who are objects of his 'treatment'. However, because subjectivity is the object of the association man (because he wants to make the people his 'subjects'), he needs the posture of a community man as an epistemological expedient (it serves his purpose to gain the reputation of a 'community man'). Stein identifies the 'association man' as an 'observer'. What distinguishes the observer from the spontaneous participant is that the observer rationally takes advantage of what community life offers him — and in doing so he uses his 'intelligence and ability' to create the guise of a moral agent for his own personal merit. As a type of Machiavellian figure, he passes over from spontaneous experiencing into a wary posture, he makes everyone else's inwardness into an object instead of immediately 'reaching' to it, and he exploits the knowledge of it for the purpose of his transactions. On the other hand, the 'genuine' man of the people puts himself at the service of the people out of a natural predisposition. What counts for him are the wishes, needs, and the interests of the people, which he allows to affect him directly as a community man. However, whilst the 'impression' he makes is unintentional, once he becomes conscious of his function as a leader in the community, he is put in the position of having to study people in order to be able to guide them correctly. Still, it is possible for him to fulfil this role without passing over to the association posture. Thus, community is possible without association, but association is not possible without community.

Stein distinguishes genuine community or society from other kinds of unions amongst people. In common with the difference between the 'association man' and the 'community man', she holds that the principal distinction between community (Gemeinschaft) and association (Gesellschaft) is that communities are founded on organic relations between individuals, whereas associations are based on more artificial unions. In contrast to communities, which are focused on the well-being of all their members, associations are focused on certain goals and the means by which to attain these goals. Notwithstanding the distinction between association and community, few alliances are pure associations or pure communities; most are a combination of both. However, pure communities are possible, whilst pure associations are not.

(For more on this see Stein, by Sarah Borden, 2003, pp 47-64).

Tony Fahey

back

(34) Krai asked:

I am not clear about the difference between idealism and realism. Could you please give the essence of the two isms/concepts?

---

Idealism

The essence of Idealism is that knowledge is primarily or entirely intellectual. That is, knowledge is dependent upon the mind or ideas [hence idea-lism] of the human being. For the Transcendental Idealist thinker Immanuel Kant [1724-1804], what we perceive and understand is facilitated by the synthesis of intuitions with categories. This synthesis produces a simultaneous identity between intuitions of the senses and the Categories inherent to the Intellect. This act of sandwiching is the condition of knowledge and the possibility of knowledge. What we perceive can only occur on the condition of the categories being involved in the act of perception. The categories also limit or determine what it is possible to perceive. Without the categories, no perception or understanding could occur. Think of a person who wears spectacles. Without them, the world is blurred, out of focus, indeterminate. With the application of spectacles, the world becomes focussed, ordered and coherent. Analogously, the application of the categories makes the phenomena we perceive have quantity, quality, relation between terms, and causality. All these operate in time and space. Without these categories, we could not know the world. An important qualification is that the categories allow us to see the world only as determined by themselves. We can never know the world as it is in-itself.

In essence, without the actions of the Mind, we cannot know anything. Knowledge is mind dependent. This can be described as the defining character of Idealism: human knowledge is dependent upon the Mind and its actions.

Realism

Realism is either Direct or Indirect. With Direct [or Naive] Realism, experience of phenomena through the senses is primary. Humans perceive the world as it actually is. Unlike Idealism, there is no recourse to the Mind with its Categories and Concepts which mediate or determine our experience. The senses of the human being are like a mirror which simply reflects the way the world is. Human beings learn and acquire knowledge by means of experiencing the world. Hence, Realism is an empiricism: knowledge is acquired from sensory experience. What humans have knowledge of is real, as it derives from the world.

This view is subject to the criticism of Indirect Realism: that we do not perceive the world directly as a mirror does. Whilst human knowledge is largely composed of things we perceive and experience, it also contains things we do not experience. Firstly, we do not experience 'numbers'. I see one sheep and another and another, but nowhere do I experience 'three'. The number 'three' is an abstraction made by the human mind. It does not exist in the world to be experienced. Again, why do human beings value consistency and have an issue with contradiction? If I state 'It is raining and it is not raining', I will be accused of committing a contradiction. But is the contradiction experienced? I cannot see, smell, touch or taste it. Arguably, it is derived from an awareness and experience of difference between statements, in the same way that difference is experienced between hot and cold, or black and white. But where does the understanding of difference itself come from? Is it itself experienced, or is it developed by abstraction from experience, or is it an innate ability of the mind? Finally, is colour a property of objects themselves, or is it dependent upon the human senses? A tomato is red in daylight but dark when perceived at night. What, then, is its real colour? Perhaps there is no real colour inherent to the tomato. Its colour is dependent on human senses and the environment. In other words, the tomato itself has no innate colour but only powers to imbue the perceiver with certain colour perceptions.

Nevertheless, Realism maintains that the knowledge a human being has is acquired through empirical experience: experience of a world that exists beyond, and independently of, the perceiver.

Martin Jenkins.

back

(35) Lois asked:

There are situations where the pursuit of our own happiness and peace of mind conflicts with that of another. Must we always put the interests of others before our own? Is there any justification for pursuing one's own welfare at the expense of someone who stands in the way of our goal?

---

Your question — 'Must we always put the interests of others before our own?' — presupposes an Altruistic moral standard (the standard that is the basis of the Judeo-Christian-Islamic religious morality, for example). If you are going to insist that the Altruistic conception of Right and Wrong is the only acceptable basis for moral judgements, then the answer to your question is — No, there is never any acceptable justification for pursuing one's own welfare at the expense of someone who stands in the way of our goal.

However, there are many people (and many philosophers) who would argue that the Altruistic basis for morality is simply wrong. Personally, I would argue that an Altruistic basis for moral judgements is auto-genocidal — designed to ensure self-immolation. Altruism is based on the moral premise that the needs of the other always take precedence over the needs of the self. Since there are always many more others in the world, with much greater needs than your own, there is never any moral justification for pursuing one's own interests at all, ever. Pursuing one's own needs, whenever you do so, is always in the face of the Altruistic directive that doing so is less moral than caring for someone else's needs.

However, if you find this as distasteful as I do, and if you should choose to adopt as the basis for moral judgements an alternative ethical system, such as Evolutionary Ethics, then the answer to your question is — Yes, there can sometimes be an acceptable justification for pursuing one's own welfare at the expense of someone who stands in the way of our goal. From the basis of Evolutionary Ethics, the challenge would be to demonstrate that you judge that the long term best interests of your genes would be more likely enhanced should you, in this particular instance, pursue your own welfare at the expense of someone who stands in the way of your goal.

But you need to be prudent in this cost-benefit analysis, being neither narrow minded nor short-sighted. We are a social species that flourishes best only within a social environment. It is usually counter-productive to pursue your own interests while disregarding the interests of those who constitute your social environment. It is usually best (on average, and in the long run) to find ways to cooperate with those in your social environment. In the particular scenario you describe, try to find a way to pursue your own best interests that is not at the expense of someone who forms part of your social environment. That person has interests as well. Almost always, it is possible to find a way to trade, negotiate, and cooperate. But sometimes, on rare occasion, circumstances (and the standards of Evolutionary Ethics) demand that you bite the bullet and pursue your own goal at their expense.

Stuart Burns

back

(36) Alanna asked:

Some people say that life on earth is just a test. Others say that it is the pathway to heaven. No matter what reason there is, the question is as follows. Why are some people born into a life of luxury and comfort whilst others are forced into one of great hardship and despair? And what about those who die at birth before they even have a chance to live?

---

The simple answer is — 'Shit Happens!'

A more profound answer must rely on a deeper analysis of the premises hidden beneath your question.

Your very posing of this question raises the conjecture that you find the current situation (wherein some people are born into a life of luxury and comfort whilst others are forced into one of great hardship and despair) somehow unexpected or puzzling. Which, in turn, raises the conjecture that your underlying premises are creating some expectation that this obvious imbalance of initial (at birth) fortunes should not be. The question now becomes 'What is it that makes you believe that such a gross imbalance of initial fortunes is not the expected outcome?'

Let's first examine the assumption that the world behaves according to entirely naturalistic rules. The current state of affairs (with its gross imbalance of initial fortunes) would be the result of entirely natural processes. It is an observable fact that the extent of people's fortunes varies widely. It is also an observable fact that the extent of people's talents varies widely. If, by some feat of magic, all the world's total fortune was redistributed equally amongst all people, the unequal distribution of talents would not take many generations to cause the distribution of fortunes to resemble the current state of affairs.

Given a randomly selected birth, therefore, it would be expected that the newborn would arrive into a life of luxury and comfort or into one of great hardship and despair with probabilities according to the imbalance of fortunes in the general population. It makes no sense, from this basis, to inquire as to why some people are born into wealth while others are born into poverty. Any more than it makes any sense to inquire into why some are born in Canada while some are born in Australia. People are born because their parents had sex. The wealthy are as capable of sex as are the poor, just as Canadians are as capable as Australians. The fortunes that any randomly selected newborn actually gets are simply a matter of 'chance' — in the sense that it is random which newborn you happen to select to examine. The fortunes that a particular newborn gets are a product of the talents of the child's parents (and their parents, and so forth). Overall, then, a gross imbalance of initial fortunes is to be expected as the natural consequence of the historical impact of a naturally occurring imbalance in people's talents.

Since the assumption that the world behaves according to entirely naturalistic rules does not appear to generate the expectations that seem to have prompted your question, let's examine an alternative. Let's examine the religious premise that the world is managed by some 'God' (however you wish to conceptualize that label) who both has a purpose in mind, and has the power to implement that purpose. (This is the 'No Sparrow Falls' thesis of Matthew 10:29.)

With this as the premise, it would be a necessary consequence that the initial circumstances in which one finds oneself are/were selected by God to suit his/her/its purpose. Now it becomes comprehensible to inquire why this God might have placed some people into a life of luxury and comfort whilst forcing others into one of great hardship and despair. If we add the premise that this God is also benevolent (to any extent convenient to the argument), then one would come to expect that good things should happen to good people, while bad things happen to bad people. One can then inquire 'Why do bad things happen to good people, and vice versa?'

Unfortunately for the answer, the purpose of this God is indecipherable. Given the absence of any evidence, it is impossible for us to determine what might be God's purpose for any of the things that he/she/it wills to happen. So, from the perspective of this religious premise, there simply is no available answer to your question.

Of course, given the deficiency of the evidence available on the nature, characteristics, and/or purpose of this God, it is entirely conceivable that this God's purpose is to keep us confused. It is also entirely possible, that God's purpose is indistinguishable (by us) from random events. And it is entirely probable, given the extant evidence, that the religious premise is in fact wrong. In which case, your expectations are ill founded.

Stuart Burns

back

(37) Krai asked:

I am not clear about the difference between idealism and realism.

Could you please give the essence of the two isms/ concepts.

---

First, a caveat. 'Idealism' and 'realism' are labels that are employed in many different areas of philosophy — with varying degrees of 'term-of-art'ness. Since you ask your question in the absence of any particular context, I am going to assume that you mean these terms in their generalized metaphysical sense. This is the context of widest general employment of these terms, and involves the least 'term-of-art'ness.

In the history of Philosophy, there are two quite distinct traditions about the nature of the relationship between 'the Self' and what we think we perceive — what we think is real. They are the Idealist, or 'Inside-Out', tradition and the Realist, or 'Outside-In', tradition. (I like the more descriptive labels; I feel they are less confusing, since the Idealist/Realist dichotomy is used in many different ways and in many different places within philosophy.)

(1) The 'Inside-Out' tradition is best exemplified by the famous quote from Rene Descartes — 'Cogito, ergo sum!' — 'I think, therefore I am!' Philosophers of this tradition start with the incontestable premise that 'I think', and deduce from that the inescapable conclusion that consciousness is the fundamental given of metaphysics. Their argument is that to deny the premise 'I think', or that 'I am conscious' is a logical contradiction. The very fact that one is denying it necessitates that one is thinking and is conscious — thus invalidating the proposition.

However appealing this approach is, it suffers from one fatal flaw that no philosopher has ever managed to overcome. Philosophers of the Inside-Out tradition maintain that our modes of consciousness and cognition modify or process the sensory inputs, so that what our consciousness is aware of as sensory evidence must be regarded as the product of our consciousness rather than unbiased evidence of reality. In that event, goes the inescapable logical conclusion, either we can know nothing about the nature of an alleged external reality, or anything that we can know about such an alleged external reality must be provided through means other than our senses.

There is no logical line of reasoning that can proceed from the basic premise that consciousness is the fundamental given of metaphysics, to the conclusion that there is a reality outside of one's own consciousness. Since there is no way to validate the evidence of the senses, there is no basis from which to conclude that the sensory evidence is valid. Philosophers of the Inside-Out tradition are therefore forced to conclude that all that is perceived, as well as all the contents of consciousness, is actively created by the nature of consciousness — the 'Self'. As it is impossible, therefore, to logically derive the existence of an external reality, there can be no logical foundation for any constraints on the nature of the contents of a particular person's consciousness.

So we have philosophers like Berkeley who argue that there is no external reality. What we think of as 'reality' is but ideas in some consciousness — specifically God's consciousness. And we have Kant who argues that our understanding of the noumenal world (the un-perceivable and unknowable reality that is the foundation beneath our sensory perceptions) is governed by the structure of our consciousness.

The proponents of the 'Inside-Out' line of reasoning support their arguments with examples and analyses based on evidence from the senses. This is, of course, a logical contradiction, since they argue that the evidence of the senses cannot be trusted. They assume that consciousness, as prior and primary to the sensory evidence, must generate our understanding from the evidence of our senses. Since this understanding is not a pure product of our senses, what we understand about our sensory perceptions cannot be trusted as evidence of an objective reality.

Therefore, there can be no logical necessity for any standardization or similarity of the contents of consciousness from one person to another. In fact, there can be no logical necessity that there exists anything other than one's own consciousness. Any suggestion that there exists a reality, or that there exists other minds, is founded on untrustworthy evidence from the senses. The purest version of Idealism inescapably drives the logic towards Solipsism. And the only escape is to posit some unsupported additional premise (like Berkeley's addition of God) that can provide a loop-hole.

Because it denies the existence of any form of objective reality, the Inside-Out tradition logically results in 'Subjectivist' notions of Truth, Knowledge, and Ethics. The philosophy of Kant is perhaps the pinnacle of this school of thought.

There is also a sub-tradition maintained by those philosophers who start with the same 'Inside-Out' premise, but despair over the subjective consequences and proclaim the 'Nihilist' school — Truth, Knowledge and Ethics are impossible, illogical, and invalid pursuits for inquiry. The once popular school of the 'Logical Positivists' is more or less of this persuasion, which is probably a good part of the explanation for why Philosophy and Philosophers, as topics of popular awareness, are in such ill repute.

(2) The Outside-In tradition is best exemplified by Aristotle. Philosophers of this tradition start with the premise that thinking and consciousness are processes, not things. By the very nature of what a process is, in order for a process to 'exist' (be in the process of processing) there must be something that is being processed. To think is self-evidently to think *about* something. To be conscious is to be conscious *of* something.

Philosophers of this tradition start with this premise and acknowledge that by the nature of processes there must first be something about which I can think or of which I can be conscious, and deduce the inescapable conclusion that the existence of something is the fundamental given of metaphysics. The argument is that to deny the existence of something is a logical contradiction. The very fact that one is denying that something exists necessitates that one is thinking about and is conscious of something — thus invalidating the proposition. (By the act of thinking, one demonstrates that the thing that is thinking, and the thing that it is thinking about, both exist.) This argument is most succinctly (if not most cogently) expressed in the basic axiom of Randian Objectivism — 'Existence exists'.

Start with the premise of a reality that exists (i.e. is 'real') as the fundamental given of metaphysics. Add to that the realization that if thinking and consciousness are processes that are about and of reality, then reality must exist prior to and independent of those processes. You can't have a process in operation, without something being processed. You can't be conscious, without being conscious of something. But a process is not necessary for the existence of something. Thus the premise of a reality that exists as the object of the process of thinking and consciousness, necessitates that reality is objective and independent of those processes.

If reality is not 'real' (i.e. objectively existent), then the information provided by our senses is not a valid basis for conclusions about the nature of Reality. For Reality to be other than 'real' would mean it would have to be 'un-real' (non-objective and/or non-existent). And 'unreal' means just that: something imaginary, or ideal, or constituted by our consciousness.

The approach that is more in keeping with 'Common Sense' is the view that 'out there' is not 'in here': that there is a reality that is outside oneself, that does not respond to the whims and notions of one's conscious attention, and that does not disappear when one's consciousness is focused elsewhere. If reality is 'real', then the information provided by our senses is a valid basis for conclusions about the nature of Reality.

There are numerous writers of the Outside-In (Realist) school of philosophy, beginning with Aristotle, who have written excellent expositions on the 'real' and 'objective' nature of Reality. Among the more recent of these are Ayn Rand, David Kelley, and William P. Alston. I can do no better than refer you to the works of one of these authors. They have done a much better job than I could possibly do, and at far greater length than this text would permit.

Stuart Burns


(38) Zachary asked:

I was wondering: can you have inalienable rights without the existence of God, and if so, how?

---

Everything depends on just what is meant by the words 'inalienable rights'.

According to the various online dictionaries I checked, 'inalienable' is an adjective that according to common usage means 'incapable of being repudiated or transferred to another; not subject to forfeiture; protected from being removed or taken away; unable to be removed.'

The word 'rights' is especially problematic, since it is so widely used and abused. According to those online dictionaries I checked, even as a noun, the word has many different meanings depending on the context of usage. Here is a selection of common usages that would be appropriate in the context of 'inalienable rights'. A 'right' is a noun that according to common usage means 'something claimed to be due by moral principle: that which is morally good or in accordance with accepted principles of justice, fairness, and honesty; that which is just, morally good, legal, proper, or fitting; anything in accord with principles of justice; an abstract idea of that which is due to a person or governmental body by law or tradition or nature; the interest possessed by law or custom in some intangible thing; a justified claim or entitlement, or the freedom to do something.'

Put the two definitions together and you get quite a mouthful. I'll simplify things a bit, and shorten this mouthful down to the following: an 'inalienable right' is a 'morally or legally justified claim or entitlement that cannot be removed, repudiated, or forfeited'.

In political philosophy, the term 'inalienable rights' is used to refer to the concept of rights that are inseparable from those to whom they belong. The rights are presumed an inherent part of one's existence (as a person, or as a moral agent, or as a citizen, or as a resident — depending on who is doing the presuming). Some supporters of the idea of inalienable rights believe that these are not granted by any human authority, but rather are present in all human beings regardless of whether they are acknowledged or not. Other supporters maintain that these rights can only be granted by human agency.

Based on my simplified definition (or even on the more expansive mouthful), it becomes clear that you can indeed have inalienable rights without the existence of God. God only enters the picture if you (a) restrict the definition to 'morally justified claim or entitlement', and (b) maintain that God is a (or one, or the only) source of moral principles.

It is possible to argue that people have some selection of inalienable rights simply in virtue of their being people. In other words, as a logical consequence of being a self-conscious animal, you have a 'morally justified claim or entitlement that cannot be removed, repudiated, or forfeited' to certain freedoms and liberties. As but one trivial example, simply in virtue of being conscious, you have the inalienable right to think as you choose. You might not be able to do anything about what you think, but no one can remove, repudiate, or forfeit your ability to think as you choose. Or, as another example, consider an inalienable right to pursue your own happiness. Although you might not actually be able to do anything (you might be in chains in prison), you can at least pursue your goal. Supporters of this class of inalienable rights call them 'natural rights', in virtue of their argument that they stem from the nature of Man rather than from the works (laws) of Man.

Alternatively, it is also possible to argue that people only have some selection of inalienable rights in virtue of the laws that govern where they reside. For example, a person might acquire an inalienable right to 'liberty' within a legal environment wherein the term 'liberty' has been given some specific definition, and provided with such protections that it cannot be constrained, removed, repudiated, or forfeited. Supporters of this class of inalienable rights call them 'legal rights', in virtue of their argument that they stem from the works (laws) of Man rather than from the nature of Man. Those who maintain that the only inalienable rights are legal rights argue that there is nothing inherent in the nature of Man that provides any moral justification for 'claims or entitlements that cannot be removed, repudiated, or forfeited.' In the long run, evolutionary survival is the only fact that matters, and survival is a matter of tooth-and-nail struggle.

Stuart Burns


(39) Kristen asked:

How do we know if anything really exists?

---

We can start by examining the premise that 'nothing exists'. But then, what is it that is thinking that nothing exists? Surely, whatever it is that is doing the thinking is at least something. Hence we can conclude that at least one thing exists — the thing that is thinking about existence.

Now we can inquire just what 'thinking' is. Thinking is not a thing, surely. It is not static, but dynamic. Thinking flits from one idea to another. It ponders different aspects of itself. Surely, it is not abstract either. It is not a vague notion like 'up' or 'down'. It is not a generalized notion like 'idea' or 'thought'. Thinking is active, concrete, and specific. It is here. It is 'me'. If it is not a thing, and is not abstract, then perhaps it is a process.

But if thinking is a process, there must be something that the process is processing. Not only that, but a process needs something to be doing the processing. A process is the organized activity of something that does the processing. A process cannot process itself. A process needs inputs. So we can conclude that not only is there the thing that thinks, the thing that does the processing that is thinking, but there are also the inputs that the process is processing.

So now we can conclude that there is definitely something that exists. There is the thing that is thinking about existence. And there is the source of the inputs that the thing is processing. The process could not exist (be in the process of processing) if there were not something that could be processed. So not only does the thing that is thinking exist, but the inputs that are being processed in the process of thinking had to exist prior to and independently of the process that is the thinking.

So that is how we know if anything really exists. To ponder the question is to demonstrate that there exists a thing that by processing some inputs thinks about the question of existence. To think is to think about something. To be conscious, is to be conscious of something. So we can be sure that there is something, in addition to the thing that thinks, for that thinking thing to think about and be conscious of.

Stuart Burns


(40) Adam asked:

I have a question for you regarding an answer you gave to a person struggling with the idea of an infinite past. I've discussed this with friends of mine and your statement 'A universe stretching infinitely back in time is no more difficult to conceive than a universe stretching infinitely into the future.' appears to be the standard response. I find this response problematic because of a puzzle that arises from an infinite past. Suppose that there is an infinite past, and that this current moment (Time X) is temporally connected to another time previous (Time Y) that is infinitely far away from it. If we suppose that time Y happened and then time X happened, how can it be that they were infinitely far away from each other? It seems impossible that two moments can both happen if they are infinitely far away. What is troubling about this example is that every moment in time suffers the same fate, that other moments which are connected to it happen as a part of the same chain, yet on a link that is infinitely far away. If no two moments in time are infinitely far away from each other then I don't see how there is an infinite past, since any two moments are only a finite (even if extremely large) distance from one another.

---

What you term the 'standard response' relies to a greater or lesser extent on Kant's discussion of the infinity of time in the section of the Critique of Pure Reason entitled 'The Antinomies of Pure Reason'.

Kant's solution, in simple terms, is to distinguish between two concepts of 'infinity': an actually existing infinite collection, and the notion of the infinite which is 'set as a task'. All sorts of paradoxes arise if you posit an actual infinity, either of space or of time.

To state that there is no end to time is to state that for every time t there is a time t' such that t' is after t, for any given constant unit of time. A similar statement can be made about the past: for every time t there is a time t' such that t' is before t. This suggests that the past and the future are symmetrical so far as the question of infinity is concerned.
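
A minimal formal rendering of that statement (my notation, not Kant's or the questioner's) makes the symmetry explicit; here u stands for the fixed unit of time:

    \text{no end to time:}\qquad \forall t\ \exists t'\,(t' \ge t + u)
    \text{no beginning:}\qquad \forall t\ \exists t'\,(t' \le t - u)

The second formula is simply the mirror image of the first, which is why, on this reading, the past and the future stand or fall together on the question of infinity.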

The qualification about units of time needs to be added because if the increment halves each time then you get a situation like Zeno's paradox of the arrow: there is an infinite number of smaller and smaller movements of the arrow between the bow and its target. (Hawking in 'A Brief History of Time' exploits this idea in his account of the Big Bang: the closer you get to the Big Bang the faster events occur. So if you were travelling back in time you would never reach the first moment, even though according to the Big Bang theory time as such is finite.)
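
The role of the fixed unit can be seen with a standard one-line sum (not specific to Kant or Hawking): if the increments are allowed to halve each time, then infinitely many of them fit into a finite interval,

    \sum_{n=1}^{\infty} \frac{1}{2^n} \;=\; \frac{1}{2} + \frac{1}{4} + \frac{1}{8} + \cdots \;=\; 1,

so an infinite number of steps need not add up to an infinite stretch of time. With a constant unit, by contrast, n steps always span n units, and 'infinitely many steps' really does mean an infinite past or future.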

There's a very good discussion of Kant's Antinomies in Jonathan Bennett's book 'Kant's Dialectic' (CUP). You might also enjoy reading A.W. Moore 'The Infinite' (RKP).

All the best,

Geoffrey Klempner


(41) Geoff asked:

What job prospects are there for philosophy graduates?

---

Geoff, the simple answer is that philosophy graduates have pretty much the same job prospects as other undergraduates on non-vocational degree programmes. Obviously if you want to be a vet, a philosophy degree isn't the thing to do! But if you want to work in human resources or advertising or the civil service — or even in careers that require further qualifications, such as law or accountancy — you're as well qualified as pretty much anyone else. (So for example in the UK a standard LLB law degree is 4 years. A standard philosophy degree (in England and Wales) is 3 years, after which you can do a one-year 'CPE' conversion course, which will bring you up to the same level of qualification.)

Philosophers are fond of claiming that a philosophy degree equips you with distinctive skills that are highly prized by employers — the ability to think logically, independently and creatively; the ability to defend a position with good arguments; the ability to understand and clearly express complex ideas; etc., etc. I'm inclined to think this is true, although admittedly I don't have any concrete empirical evidence that philosophy graduates do any better on the job market than English or history or sociology graduates! In general, though, in my experience most graduate employers take the view that specific job-related skills can be taught on the job and they can train you; what they're looking for is the more 'transferable' skills, such as those listed above.

The 'Philosophical and Religious Studies Subject Centre' has produced an employability guide for philosophy graduates that you might find helpful — it's available at: http://www.prs.heacademy.ac.uk/publications/emp_guide_for_web.pdf

Helen Beebee
Director
British Philosophical Association


(42) Penny asked:

This is a political philosophy question about the incompatibility of national sovereignty and international institutions such as the UN, EU, treaty commitments and the legitimacy (or not) of enforcement mechanisms. I'm sorry it's so long.

For my entire adult life I have been a strong supporter of the UN and international law as the best hope to prevent and mitigate wars and help bring about, if not perfect global peace, harmony and justice, at least a reduction of conflict and more peaceful coexistence. I dislike nationalism, and particularly superpatriotism, which seem to me one of the principal causes of conflict, and have looked forward to the decreasing importance of nation states.

Now, since I've developed an amateur interest in philosophy and ethics, I discover that national sovereignty is seen by many as key to human progress and civilisation since at least the Enlightenment; that it is inalienable and by definition supreme, meaning that states cannot relinquish any part of their sovereignty, thereby destroying any claim to legitimacy of international law (and the courts to enforce it). I read, too, that while states have the authority to make treaties and sign up to conventions if they wish, they can also break them at will if that suits, and that no other state or institution has (or can have) legitimate authority to prevent them, or penalise them for doing so (or even, it seems, have grounds to criticise them, since states are not moral agents).

So when those of us who were against the Iraq war complained that it was a war of aggression, or we cite the Geneva Conventions (rather than basic morality) on the treatment of prisoners, or the Law of the Sea when unarmed passengers are killed on ships in international waters, or the discriminatory application of the Nuclear NonProliferation Treaty, or we welcome the establishment of the ICC, apparently we haven't a philosophical leg to stand on.

If nothing short of a world state (inevitably oppressive and therefore far from desirable) can legitimately override national sovereignty, what is to be done? Are we stuck forever with a Hobbesian state of nature in the international arena, where the strongest countries can generally expect to prevail over the wishes and needs of the weakest, backed by the threat of superior brute force?

I was warned that studying philosophy would force me to rethink some of my fundamental beliefs, which was true and is stimulating, but I'm finding this very hard to come to terms with. Is there a way round or over the sovereignty stumbling block to greater global justice, a philosophical route to legitimacy for what I think of as progressive international institutions?

---

Despair not. Earnest idealism, such as even this decrepit correspondent once had, will only lead to progress if it takes the time to understand the problem — and an understanding of the problem will only lead to progress with a dose of earnest idealism. You show a promising and may I say rather unusual combination of the two, Kant's precedent notwithstanding.

Indeed, the wish of international law is not the fact of it, and the tendency with progressive journalism and right-thinking persons generally has been, lately, to pretend that it is. You have shaken yourself out of the pretence. Well done. There is something earnest and hopeful about the pretence, which is aware of itself as at least an exaggeration, but which imagines that by the sheer force of prayer we could make order and law in the world by believing in it. And although that is not enough, there is something in the faith of it which is necessary all the same, in as much as we will not make law and order in the world by having the kind of black faith in Power that the Nazis had. But it does not follow, as some leader writers seem to think it follows, that to fight that black idiocy we are obliged to leap to the barricades for any bright foolishness.

The obvious hard case, and the occasion for much warranted and unwarranted idealism, is the European Union. For the EU is not merely a treaty, but a treaty *process*, in which many of our hopes and material interests are invested. As Germany experienced in the 19th Century, a treaty process in pursuit of a customs union (zollverein) can, by stages, effect a political union. As we include other nations in our decision-making process so we include them in one state. For what is a state, if not a system for deciding the regulation and policing of markets and exchange? — At least, this is the question posed by those hopeful of what you call a philosophical route to legitimacy for international law.

But, as becomes evident when the EU hits one of its periodic political hitches, the trouble with 'a philosophical route to legitimacy' is that it is just that. And there is much at stake in the idea of a nation that is not in the least bit intellectual or philosophical. A state is not simply a device for securing one's rational best interests. A nation state develops a kind of collective Ego, however nebulous, which no 'philosophical route to legitimacy' can quite touch.

Like you, I wish that it could. But it strikes me that our efforts would be better directed at building some new common identity than at forcing diverse old identities to comply with a 'philosophical route to legitimacy'. The successful international political entities of the past have done both, in varying proportions. Many states have been, along the way to their Pax Romana, pretty bloody, and the hope of the EU is that it offers a bloodless kind of unification. But there is an obvious sense in which the old pattern, despite the hopes of internationalists, has not quit the scene. Neither the UN nor the EU made the space in which they try to grow. Roosevelt and Truman, and all the allied forces, did that.

David Robjant

Yes, Penny, there is a way. But to understand that way, and to understand why the currently popular concept of national sovereignty seems to be such a stumbling block, you are going to have to recognize that some of your more cherished moral premises are without foundation. (And, of course, if you wish to follow the way that I am presenting here, you are going to have to join the very few of us who are fighting to teach the general population that some of their most cherished moral premises are also without foundation.)

Most people today, including most philosophers today, labour under the premise that there are three different aspects to doing the 'right thing'. First, there is 'things as they are' — in your question you provide the examples that nationalism causes violence, and governments of nations focus on parochial self-interest. Second there is 'things as they should be' — and you provide the examples of global peace, harmony, justice, a reduction of conflict and more peaceful coexistence. And third, there are the 'ethical/moral principles' which, if we all would only adhere to them, would get us from 'things as they are' to 'things as they should be'.

You describe the 'things as they should be' in positive terms (naturally). You have passed a value judgement on the 'things as they should be', and you have judged that they are 'good'. (Obviously, since 'should' and 'good' go together.) You have in all probability inherited a suite of moral tenets from the general Judeo-Christian-Islamic tradition that says that peace, harmony, justice, absence of conflict, and peaceful coexistence are good things. But do you understand why they are considered good things? Do you have any foundation behind the judgement that such things are good things? Have you thought this through yourself, or have you simply adopted the moral tenets of your environment?

You describe the 'things as they are' in negative terms. You look forward to the decreasing importance of nation states as a way of reducing the principal causes of conflict. You complained that the Iraqi war was a war of aggression. And so forth. It is a reasonable assumption, then, that you view the way things are as somehow not desirable. You have passed a value judgement on the 'things as they are', and you have judged that they are 'not good'. But you have made such a judgement (as most people do) on the basis of those moral tenets for which you have no foundation.

The central difficulty that you are facing is that the concept of 'morally good' has lost its anchor. Most people (including most philosophers) use the concept without really understanding its meaning. As a result, we find ourselves in a situation where moral disagreements become one person's opinion versus another's. And the winner is the person who yells the loudest (or most persuasively), or carries the biggest stick. In our modern culture the loudest yellers are to be found in the church pulpits, and the biggest sticks are to be found in the government legislatures (or what passes for a legislature in a non-democracy). Lacking any reason to change things, and any interest in finding one, they have reinforced the moral tenets of their ancestors without understanding their basis. They are commandments with no underlying authority. The only way that people will follow such commandments is if they are persuaded that to do so is a good thing. There is no way to persuade someone who does not agree with you. There are no reasons you can give someone to justify the commandments — other than 'Do as I say, Or else!!'

It used to be, in ancient times, that the foundation of morality was located in the 'telos' (to use a Greek word) of Man. To Aristotle, for example, a 'good' person fulfilled his/her proper function well. And a person had a proper function — he was a husband, father, son, or she was a mother, daughter, wife, farmer, fisherman, warrior, or citizen of the state. Each of these roles defined a well understood functional requirement that a person had to fulfill well to be called 'good' at it. The concept of 'good' was a functional concept. The concept of fulfilling a function well was a matter of factual description. Factual descriptions of how a person fulfilled the functions justified the labels of 'good'. There was no dichotomy between 'is' and 'ought'. If she is a ship's captain, then she ought to do those things that would constitute being a good ship's captain. If she is doing those things that constitute being a good ship's captain, then what she is doing is 'good', and 'a good thing'.

In the Dark Ages, the 'telos' of Aristotle gave way to the 'telos' of God. The source of the function changed, but the basic functional foundation of moral tenets did not. God handed down moral commandments so that we could properly fulfill our function within his grand design. A 'good thing' was something conducive to the fulfillment of God's purpose.

During the Age of Enlightenment (1637-1815) we lost the concept of a proper function of Man. Aristotle was discredited for various reasons, and with him went his 'telos'. God was dethroned, and with her went our 'telos' within her design. But the language of morals did not reflect this historical evolution. So we have lost the basis for our moral tenets. Why is being honest a 'good thing'? Why is justice a 'good thing'? Is it really just a matter of opinion?

What is needed is a new 'telos' for Man that can act as the foundation for a renewed understanding of moral language, moral judgements, moral rules. And the science of genetics has given us that new telos. Genetics tells us that the function of the individual organism (any organism, of any species) is to ensure the replication and flourishing of the genes that encode the recipe that is the organism. This, then, gives us a new functional description of Man. And it provides the basis for a renewed functional understanding of 'good' and 'moral'. That is good and moral that tends, on average and in the long run, to promote the proliferation and flourishing of our genes. That action or choice is good or moral that in our best judgement will most likely promote the proliferation and flourishing of our genes over the long term.

Now, with that basic principle, we can go back and examine those 'things as they are' and see if indeed they are as bad as you initially judged. Wars and conflict, national sovereignty, breaking of treaties and conventions, and so forth, can be morally necessary if they are most likely, in our best judgement, to promote the proliferation and flourishing of our genes over the long term. But this is not a 'free ride' ticket to do just as we please, or whatever might seem in our short term interests. As an empirical observation, Man is a social species. We tend to flourish best when we cooperate in a social environment free of coercion, and free of chaos. So peace, harmony, justice, reduction of conflict and peaceful coexistence are also 'good things'. But so is nationalism to some degree.

The world out there is over-populated with characters who would employ coercion to expropriate what we have without compensation. (They aren't just out there, of course. We have our share of home-grown thieves and extortionists — including many in government. But the focus at the moment is on nationalism.) The only defence that we, as individuals, have against such expropriation is our mutual cooperation in self-defence. It started out with families and tribes, grew to city-states, and then to nation-states. The point is that within the boundaries of the nation-state, people are assumed to enjoy mutual cooperation and to (more or less) voluntarily renounce resort to coercion. Those outside the nation-state boundaries are assumed not to adhere to this 'civility'.

The growth of international organizations and international agreements reflects the trend to find common ground with others in other nation-states in some areas. We all recognize that cooperation for mutual benefit is better than conflict. But we need our guarantees that 'those others' will not resort to coercion to expropriate our wealth. And I should emphasize that this attitude is universal, and active at all levels from the individual to the nation-state itself. It is a natural consequence of our new 'telos'. (It is a natural consequence of our genes in action.) An altruistic concern for the welfare of others, at the possible expense of our own, is self-genocidal and 'morally bad'. A certain amount of xenophobia is a natural and rational self-defence mechanism. (Which is not to suggest that an unreasoned xenophobia makes any sense. Archie Bunker lost out on a lot of things he could have gained by fair-trading with his 'unacceptable' neighbours.) So there is no incompatibility between national sovereignty and international institutions. International institutions are just the manifestation of the growing extent to which we find international cooperation to our benefit, while protecting ourselves from the coercive threats out there. Nation-states, being the embodiment of our mutual cooperation in self-defence, will only decrease in importance as the threats of coercion out there decrease.

(What I personally do see occurring is a shrinking in the effective size of nation-states as the necessary population mass required to ensure self-defence decreases with the shrinking of external sources of coercion. The world's population is no longer really faced with major acquisitive nation-states.)

The philosophical leg that you are looking to stand on, is 'intelligent self-interest'. You will get nowhere as long as you simply proclaim it your opinion that adhering to the Geneva Conventions is the better way to treat prisoners, or that the Law of the Sea should prevent the killing of unarmed passengers in international waters, or that application of the Nuclear NonProliferation Treaty is discriminatory, or that the establishment of the ICC is a good thing. What you need to do is show people how it is in their individual best interests to adhere to the Geneva Conventions, or the Law of the Sea, etc. With a functionally based understanding of basic morality, appealing to a person's self-interest is the proper moral approach.

Finally, I will conclude with a few words on 'super-patriotism' and 'extremism'. There are two different sources for such behaviour. One is simple ignorance. Some people think that they are right and we are wrong, and their morality permits them to employ coercion to attain their ends. They are ignorant of the empirical evidence that strongly demonstrates that whatever your goal, you are far more likely to attain it through voluntary cooperation than you are through coercion. The other source is moral abdication. A lot of people are persuaded (by charismatic religious or political orators) to abdicate their moral responsibility, and let others make the moral judgments for them. Once they abdicate their responsibility to themselves, they become easy pickings for suicide bomber recruiters and other such extremist operators.

If we taught the proper functional meaning of 'moral good' in the schools, we would be faced with a lot fewer people who have abdicated their moral responsibility to themselves. It may not have prevented the Iraqi war (we can have a separate argument as to whether it was a morally necessary war), but it would certainly change the face of modern politics. And it would certainly eliminate such home-grown abominations as religious fanatics.

Hope you found these few thoughts enlightening. I look forward to whatever comments you may wish to offer in reply.

Stuart Burns

The question of how the actions of nation states can be subject to law is the most urgent question of our times. It is, above all, a practical question. If the United Nations and the Security Council are not sufficiently effective to deter or prevent wars of aggression then we should be figuring out ways of making them more effective. Which is of course exactly what political thinkers and political leaders have been doing. If we succeeded, would it really matter if this went against some treasured philosophical principle? I don't think so.

Sovereignty is essential, as Hobbes argued in Leviathan, because in the absence of a sovereign to whom one cedes the power to enforce law, there can be no justice and no law except the law of the jungle, the war of 'all against all'. But Hobbes also argued, with perfect consistency, that a monarch, ruling alone, is the only effective sovereign. As soon as you introduce limitations to the power of the monarch — a parliament for example — the problems that the idea of a sovereign was introduced to solve break out all over again.

The problem is encapsulated in the famous example of the Prisoners' Dilemma. Of all the many game-theoretic strategies that have been explored, Hobbes' solution is the only one that guarantees that an agreement or contract will be honoured by both parties — because they are answerable, not just to one another but to a third party who has the unfettered power to punish infractions with lethal force. The third party, once appointed, cannot be unappointed. That's what ensures no backsliding on the deal.
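
To make the game-theoretic point concrete, here is a minimal sketch in Python (the payoff numbers are my own illustrative choice, not drawn from Hobbes or any particular text). Without an enforcer, defection is each party's best reply to anything the other does, so the agreement unravels; subtract a sufficiently harsh third-party penalty for defection and keeping the agreement becomes the better option against either move:

    COOPERATE, DEFECT = "cooperate", "defect"

    # Payoffs to (row player, column player); higher is better. Illustrative only.
    payoffs = {
        (COOPERATE, COOPERATE): (3, 3),
        (COOPERATE, DEFECT):    (0, 5),
        (DEFECT,    COOPERATE): (5, 0),
        (DEFECT,    DEFECT):    (1, 1),
    }

    def best_reply(opponent_move, penalty=0):
        """Row player's best move against a fixed opponent move.
        `penalty` models a sovereign who punishes defection."""
        def value(my_move):
            v = payoffs[(my_move, opponent_move)][0]
            return v - penalty if my_move == DEFECT else v
        return max((COOPERATE, DEFECT), key=value)

    # No sovereign: defection is the best reply to either move.
    print(best_reply(COOPERATE), best_reply(DEFECT))                        # defect defect
    # A sovereign whose punishment outweighs the temptation to defect:
    print(best_reply(COOPERATE, penalty=4), best_reply(DEFECT, penalty=4))  # cooperate cooperate

This is only the one-shot game, of course; Hobbes' further point, as described above, is that the third party must be permanent and unanswerable, otherwise the penalty itself becomes something to bargain over.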

No-one accepts this today in the political arena. Why not? Logically, Hobbes' argument is unassailable. To absolutely guarantee peace, that is, the humble acquiescence of every subject to the law of the land, nothing less than the absolute power of a dictator is required. The problem is, kings and dictators have an awkward tendency to behave in ways which are not necessarily aimed at the good of their subjects. (But that's OK, because they will face the judgement of God.)

Having made the experiment, human nations have settled for less. We have a political system — I'm talking about liberal democracy although you could say similar things about other political systems — which works for the most part in maintaining the peace of the nation. Bad things still happen. There are political stalemates when we need urgent political action; the police force struggles to stay on top of the crime rate; civil disobedience and strikes throw their spanner in the works.

'Thank goodness that they do,' would be a reasonable response. Can you imagine what kind of state it would be, where the decree of the ruler was absolute, where every crime and misdemeanour was instantly punished? Vid screens in every room just like in 1984. You drop a piece of chewing gum on the pavement and Whooof! off you go in a puff of smoke. (Although I know a few people who would agree to that.)

So my argument would be, if we are prepared to compromise the logic of Hobbes' response to the prisoners' dilemma for the sake of practicality, then what this means, in effect, is an admission that the idea of a 'sovereign' is a fiction. It may be, as many believe, an indispensable fiction, but it is a fiction nonetheless. I recognize the law of the land, by and large, but there are cases where my conscience, or just urgent practical need, overrides respect for the law. One drives through the occasional red light.

The United Nations is a building in New York. It is also a fiction. It doesn't exist except in the minds of the political leaders who founded it and the delegates who attend it. Belief that the UN can work is necessary in order to make it work. And it has worked, by and large; at least one can argue that world affairs would have been in a far worse state without it.

There isn't a question of what may or may not 'legitimately' override national sovereignty from a philosophical standpoint. If a resolution is passed by the UN, then it is legitimate, because that's just what the member states have subscribed to. Of course, the real world being what it is, resolutions fail to be implemented, just as national laws fail to be observed. Punishments and sanctions only deter in proportion to their severity: that's a problem for national law as well as for international law.

But can't philosophers figure something out? Insofar as this is a problem for game theory, you need game theorists; insofar as this is a problem of practical politics, you need political scientists. Maybe somewhere in there is a role for utopian dreamers. (The League of Nations was once a utopian dream. Its failure led to the UN.)

The most intractable problems of our time require more than a number-crunching or logic-crunching response; they require originality, creativity. Something new, at any rate. I do wonder whether there is any meaningful role for political philosophy. You want 'philosophical legitimacy' for international law? You've got it. What we want is just to make international law more effective, without it hurting too much. Maybe that just shows the colour of my philosophical creed (for want of a better word, call it pragmatism with a small 'p').

Geoffrey Klempner


(43) Burak asked:

Greetings Sir or Madam,

I've been writing a philosophy book, despite my young age (23). Just as Montaigne said, I believe philosophy is not a mountain top we can never reach. Besides, I'm very confident that what I've found is a very interesting and explanatory point of view on what truth is, the greatest question of philosophy. It is maybe a brand new idea, or a very rare one, but we can position it between Nietzsche, Allen Poe and Confucius. What I'd like to talk about now with a philosopher is how one can use an idea to explain and detail many subjects about life, society, history or even politics. I've written and thought many things about politics and utopias, but I'm wondering how I can use my new idea efficiently on these concrete subjects. Plus, I'd like to know how a philosopher can protect his or her ideas from being stolen.

---

Burak, you give no indication that you have ever studied philosophy at a recognised university, so it is unlikely that anyone will be interested in your ideas or that a publisher will want to publish them.

This doesn't mean that you can't study philosophy on your own; you can, just by reading books. But you should not make the mistake of thinking that philosophy is just thinking about things and that anyone can do it if they want to.

In general philosophers publish their work in academic journals but these journals will expect you to have recognised qualifications. Also philosophical ideas have no monetary value and like mathematical proofs they are given away for free. You can't patent philosophical ideas. You can of course establish that you first thought of an idea by publishing books or articles.

Sorry to sound so negative but you have to understand that the world is not waiting for amateur philosophers to give it pearls of wisdom. There are lots of philosophers at universities all trying to get their books and articles published, and you have to compete with them or just study philosophy on your own for your own enjoyment. Perhaps there is a local university near you that has courses of lectures or discussions that are open to everyone.

Shaun Williamson


(44) Geoff asked:

What job prospects are there for philosophy graduates?

---

You can become a philosophy teacher, which is very difficult since there are few jobs and lots of people applying for them. Beyond that, philosophy is an arts degree like English, history or Latin, so it qualifies you for the same sort of jobs that any arts graduate might apply for.

Of course you may find that when you tell people that you studied philosophy they will confuse it with theology; this isn't something that history graduates have to deal with.

Before committing yourself to philosophy you should find out as much as possible about the subject since people often have mistaken ideas about the nature of philosophy courses at universities.

Only study philosophy because you have to do it; don't do it because you think that it might be interesting or lucrative. It certainly isn't lucrative and you may not find it interesting.

Shaun Williamson


(45) Lois asked:

There are situations where the pursuit of our own happiness and peace of mind conflicts with that of another. Must we always put the interests of others before our own? Is there any justification for pursuing one's own welfare at the expense of someone who stands in the way of our goal?

---

This question came in a while ago, and I wasn't going to answer it. Other Ask a Philosopher panel members have already had a go, and I couldn't really see that I had anything to add. (Lois didn't provide an email address so she'll have to wait — rather a long time, I'm afraid — until the next series of Questions and Answers is posted.)

But something happened to make me look at this question again. (It's not something I want to talk about here.) The thought occurred to me that pursuing this question from Lois can take you into a very dark place indeed.

But let's start off with the more obvious points that a moral philosopher would make.

I can think of two clear cases, which few would dispute, where in the one case it was perfectly reasonable to put oneself before another; while in the other case one has a clear obligation to put the other person before oneself.

Let's say you are one of two shortlisted candidates for a well paid executive position, waiting to be interviewed. This is the first time you have reached the short list after scores of unsuccessful job applications.

Your stomach churns as you realize how much depends on how you perform in this interview. A divorced mother of three. You are behind with your mortgage payments, and you and your children are threatened with eviction from the home they have lived in all of their lives. Your age is against you, and it was only pure luck that you managed to get this far in the selection process.

The other candidate catches your eye. 'How long do you think we're going to have to wait?' You mumble something in reply. But the other woman needs to talk so you listen. You listen with a growing sense of amazement to her story about her husband who cheated on her with his personal trainer, her subsequent divorce, her three young children and how far she is behind with her mortgage payments. She could be you. She has as much to gain, or to lose, as you have yourself.

What should you do? There's no question. You go for the job. In the interview you fight for your happiness and the happiness of your children. You fight for all your lives.

Our moral intuitions tell us — at least, my moral intuitions tell me — that in a situation of fair, or even not so fair competition such as the one I have described, there has to be a winner and a loser. You have every right to strive to win with all your might, even though as a necessary consequence the other must lose. Until human beings finally succeed in creating Utopia, that's the nature of the society we live in.

I've painted this in black and white colours, but it is not just an isolated, extreme example. There are many, many ways in which human beings have to fight for their happiness and peace of mind, knowing that there will inevitably be winners and losers in the game of life. Of course, you can do your best to help those less fortunate, give generously to charity and good causes. But if it was wrong to compete in the first place, then charity and good deeds would merely be a salve to ease one's guilty conscience.

In the example I have just given, it could be objected that I was unfairly raising the stakes as each candidate was naturally concerned for the well-being of her children. I don't think that's the crucial point, however. My original idea was to have two not-so young but single Philosophy PhDs competing for an academic post. (I can sympathize, but not that many would.) Exactly the same considerations apply. One is destined for a life in academia and the realization of all his or her dreams, the other will end up as a bank manager. And both believe this is the very last chance for either of them.

But what about a parent's duty to one's child? Isn't that the clearest case where one has an obligation to put the happiness of others before one's own? The very definition of a 'bad mother' or 'bad father' is a person who refuses to do this. Again, I'm relying on moral intuition, but I expect the majority of parents would agree. It's a cliché, but clichés are often true, that parenthood is a sustained and bloody exercise in self-sacrifice.

Well, I could go on to talk about all the cases in between, where we are pulled both ways, towards wanting to say that one has an obligation to put the other first, and saying that one is justified in putting oneself first. Or, I could delve into moral theory in order to account for these alleged intuitions: what would a utilitarian say? or a Kantian deontologist? or a virtue ethicist? or an evolutionary biologist?

But I leave that as an exercise.

What concerns me is a disturbing vibe that I get with this question. Our 'happiness and peace of mind' is at stake. What would one not do for the sake of one's happiness and peace of mind? As a parent, you can't be happy if your children are unhappy. And if there really is no prospect that one will ever attain happiness, wouldn't it be better just to end it all?

And to think that you could be happy, were it not for the one person standing in your way!

What you would say to the mother of three who fails to get the job is that it isn't the end of the world. OK, so you get evicted from your home. That's terrible. But people survive worse, and they end up making good lives for themselves. Or to the disappointed PhD, one would remind them that they still have their life ahead of them: there are other ways to pursue one's interest in philosophy besides paid employment in a university.

When do we not think this? When are we absolutely and utterly convinced that unless XYZ happens, our happiness and peace of mind will be gone forever, never to return? Love would be pretty high on the list. But not the only item. It could be a political cause that you have dedicated your whole life to. Or something as banal and unidealistic as the mistaken belief that you can only be happy having lots and lots of money.

Which brings us to that dark place, which popular films and TV dramas love to explore.

In Lois' question, there was a nice vagueness in the idea of doing something 'at the expense' of another. One naturally assumes that we are dealing with a tit-for-tat situation. What one stands to win, the other stands to lose. But there's no logical reason for this assumption. — That is the way a murderer thinks too.

Geoffrey Klempner


(46) Jerome asked:

Husserl's phenomenology begins with what he called the transcendental-phenomenological reduction, brought about by the encounter between the transcendental ego and phenomena. Through the transcendental-phenomenological reduction, one moves from thinking to reflection.

Explain.

---

In his book, The Logical Investigations (1900), Husserl argues that our common sense or everyday thinking — our 'natural attitude of thought' — does not allow us to intuit the essence of things, or the true essence of the self. Following Descartes, who argues that the one thing we can be certain exists is our own conscious awareness, Husserl concludes that if we want to construct our concept of reality on solid foundations, this view of consciousness is the place to start. Thus, in the same way that Descartes begins by doubting everything that he could not prove, so too does Husserl believe that the study of mind should begin by setting aside all that is not given in consciousness: all that does not belong to the mental state of the subject. The method Husserl introduces for this analysis or examination of things as they appear to our consciousness is called 'transcendental phenomenological reduction'. Essentially, the characterisations of transcendental phenomenological reduction amount to a phenomenological description of reflection as opposed to everyday, non-reflective thinking. It should be noted that, for Husserl, a phenomenon is anything, imagined or objectively existing, real or ideal, that presents itself in any way to individual consciousness. Husserl's ambition is to develop a method that will not falsify these phenomena, but will allow them to be described as they appear — as 'things themselves'.

When Husserl recommends the return to things themselves, what he is recommending is a return to an analysis of things as they appear to consciousness. Husserl thought that all sciences had evolved randomly and were made up of a combination of empirical fact and theoretical supposition. Theoretically this hotchpotch was unacceptable: what was required was a clear account of the nature of the theories which were deemed central to scientific investigation. What was needed was a new method which could clearly identify the metaphysical presuppositions inherent in the sciences. According to Husserl, it is in the 'natural attitude of thought' that we spend most of our time. In such an attitude our attention is turned to things as they are given to us: it is the view that the world exists outside and independent of consciousness. Phenomenological reduction, however, brackets this thesis, arguing that consciousness must be examined on its own terms. Husserl held that our normal way of thinking was based on a fallacy about the way we perceived reality, both in our daily lives and in our scientific pursuits. Not only did we presuppose a great deal of what was given to us, but we were prone to depend too much on common sense, or the notion that the general consensus must be right. Given this state of affairs, he held that a fundamental change in our thinking was both necessary and possible, and it was this conviction that led him to develop a method to move us from the 'natural attitude of thought' to the 'phenomenological attitude'. For Husserl, all genuine knowledge rested on inner evidence. Knowledge, in the strictest sense, means it is inwardly evident that something is the case. Human acts must be fulfilling intuitions. In order to grasp this 'inner evidence' it is necessary to put on hold ('bracket') all that is inessential so that the essence of phenomena can speak for itself.

The terms phenomenon and phenomenology derive from the Greek for 'appearance'. Phenomenon refers to a thing or event that appears to human consciousness. Phenomenology, thus, is the study of manifestations. Husserl believed that as far as our knowledge of the world goes, all we can know is phenomena. Husserl agreed with Descartes that the one thing we can be certain of is our own conscious awareness. In the same way that Descartes began by doubting everything that he could not prove, so too did Husserl believe that the study of mind should begin by setting aside all that is not given in consciousness. To do this we must begin by stripping our perceptions down to their simplest forms, shedding all our layers of habit and assumption. Husserl calls this kind of perception 'bracketing'. Since all we can know are things that appear to our consciousness, he said, let us ignore the questions that we cannot answer and deal with those we can answer. The human mind understands the world by bringing it under certain concepts, and each concept presents an essence. These essences are not discovered by scientific inquiry and experiment, but are revealed to consciousness, where they can be grasped by intuition. In order to grasp the true essence of things themselves we must clear the mind of all the debris that prevents intuition from forming. And it is only by 'bracketing' all those presuppositions and prejudices which clutter our minds that we can approach the true essence of the object: that we can study what is left as an object of pure inner awareness.

Meaning, says Husserl, is neither in the mind nor in the world alone; rather, it is discovered by the a priori modes of intentionality. These intentional modes fall into three categories — perception, imagination, and signification. What this means is that intentionality is like a screen between consciousness and the world onto which objects and acts are projected; without the screen objects and acts would not exist. Intentionality, then, is a conduit, a channel, between consciousness and phenomena. Consciousness itself cannot be grasped as itself because it is intentional: it is always directed towards that which is not consciousness: it is always looking away from itself. It is only by an analysis of intentionality that consciousness itself can be discovered. Thus, when we peel away the encrustations of preconditioning not only can we intuit the essence of things themselves but also the essence of consciousness — pure consciousness. To examine consciousness, we need to bracket out all objects and facts. What remains is 'the transcendental ego', which, for Husserl, is pure being — Absolute Being. It is important to realise that Husserl does not deny that the real world exists; rather that it is only realisable in virtue of the transcendental ego. Without pure consciousness, nothing is possible. Pure consciousness is before all acts and objects. It is only through pure consciousness that all other entities are known; and they are known as entities that appear in consciousness.

Tony Fahey

back

(47) Vivian asked:

What are the positive aspects of the analytic philosophy for a fruitful dialogue with Thomistic Philosophy?

---

If by 'analytic philosophy' you mean a dialogue resulting from forensic examination of such issues as Aquinas's 'Five Proofs of God's Existence' and his 'Christianising' of Aristotle, it can be said that the positive aspects are considerable. For example, in the case of his 'five proofs', notwithstanding the fact that the Catholic Church still presents these proofs as unquestionable evidence of the existence of God, on analysis each of the five has been found to be unsustainable. His ontological argument, since it depends on departing from Aristotle to embrace a Platonic view of the position of the soul/mind after the death of the body — a position that Aristotle himself could not accept (see Aristotle, Platonism and the Knowledge of God by Patrick Quinn) — can also be seen to have its shortcomings.

For Aquinas, philosophy was always the handmaiden of theology, and his philosophical approach was employed only to justify Christian dogma. Moreover, his position on heresy was in accord with the Church's teaching on this matter, and although the infallibility of the Pope was not enshrined in Church law until much later, Aquinas argued that anyone who rejected the word of the pontiff should be deemed a heretic — and punished as such. His stance as a theologian first and a philosopher a poor second can be seen where, in relation to his position on heresy, he said, quoting St Jerome (Gal. 5:9), that the Church should '[c]ut off the decayed flesh, expel the mangy sheep from the fold, lest the whole house, the whole paste, the whole body, the whole flock, burn, perish, rot, die'.

Given that it can be argued that Aquinas himself was of an analytical turn of mind, it should not be surprising that, on the approach of his demise, his secretary is said to have reported that the great man confessed to him that all he had written was of straw.

Thus, if exposing these weaknesses in Thomistic Philosophy can be considered a positive aspect of analytic philosophy, it can be said that a dialogue between analytic and Thomistic Philosophy can prove to be most fruitful.

Tony Fahey

back

(48) Nua asked:

What is quality and what is its measure?

---

Quality is an abstract term with lots of different uses so there is no general answer to your question.

Are we talking about the quality of carpets, of fine wines, or the quality of poetry? It is natural for humans to compare all sorts of things and to decide that some are of high quality and some are of low quality. The fact that we use the same word 'quality' in all these cases can lead to the superstition that 'quality' is a sort of ghostly property of things. It isn't.

We have very different ways of measuring the quality of different things so there is no one answer to your question. There is no one single measure of quality.

Shaun Williamson

back

(49) Beejay asked:

what are the weaknesses of quantum philosophy?

---

Beejay, the problem here is knowing where to begin. I could just say that its main weakness is that it is all spurious nonsense, but that is not likely to be helpful to you. The rapidly developing field of quantum philosophy seems to provide the opportunity for lots of academics, who seem to be mostly scientists, to waffle on about the philosophical implications of quantum mechanics and how it is important to the mind-body problem, etc. They have even begun to talk about the 'many minds' interpretation of quantum mechanics.

These people are not answering any scientific questions and they are not answering any philosophical questions either. It will take centuries to untangle all this nonsense but thankfully we will have moved on to string theory philosophy by then so we won't have to worry about it.

We would have to discuss in detail a particular piece of this nonsense to show just where it goes wrong and maybe life is too short for that.

Wittgenstein wrote 'Everything we need to know (in philosophy) is on the surface; nothing is hidden from us'. And of course by that he meant that the answers to philosophical problems in no way depend on the latest and greatest interpretation of quantum theory. Quantum mechanics has no implications for the mind-body problem. We don't need any more interpretations of quantum mechanics. Quantum mechanics is the interpretation. We may want easy ways to picture scientific theories but the picture is not the theory, and the picture has no implications for science or philosophy.

Shaun Williamson

back

(50) Amanda asked:

What is truth for pragmatism?

---

In brief, it is verifiability.

Pragmatism was a late 19th century American reaction against the perceived woolliness of Continental metaphysics. In line with the European reaction (logical positivism), propositions were held to be meaningful only if empirically verifiable (or tautological). Peirce introduced the term pragmatism. He accepted the correspondence theory of truth, but said that since only verification can decide whether a proposition is true, why not define truth as the passing of such tests. All would have been well if it had been recognized that this was a matter of how we know something to be true (an epistemological theory), not a matter of what makes something true, i.e. not a theory competing with correspondence and coherence theories. Unfortunately William James, enthusiastic about pragmatism, wrote as if verification produced (rather than confirmed) truth, and at times his writing suggested that a proposition was true because it was useful (rather than being useful because it was true). Russell and Moore heavily criticised James, who was horrified at being misunderstood and famously wrote an article to clarify things which made matters even more opaque.

A good, very readable, account of the controversy is the chapter 'Truth: why I am not a Pragmatist' in 'The Whys of a Philosophical Scrivener' by the famous maths popularizer, Martin Gardner (OUP 1983)

The whole episode is an example of what Berkeley in 1710 described as a principal cause of difficulty among philosophers, namely 'that we have first raised a dust and then complain we cannot see'. The dust has long settled.
We can all agree with the core pragmatic beliefs: 1. Verification means that we know something is true rather than merely suspecting or believing it. 2. Until then it is reasonable to assume something true if it works (routine procedure in science, e.g. we think the laws of aerodynamics are true because planes stay in the air).

Craig Skinner

back

(51) Shanna asked:

What is the relationship between common sense moral intuitions and moral philosophy?

---

I don't know what common sense moral intuitions are. It seems to me that this idea is an invention of philosophers.

Most people know nothing about philosophy and nothing about moral philosophy. Morality is an activity of humans and assessing our own behaviour and that of others in terms of morality is something that humans have always done. Debating and disputing about what moral standards we should adopt and about what is good and what is evil is also something that humans have always done.

It is the contention of philosophers that morality stands in need of a philosophical justification. However, philosophers have always failed to provide an agreed justification for morality, some claiming that moral statements cannot be justified while others think they have the right to revise our ideas of good.

The fact is that morality is a human activity and will continue no matter what philosophers think. The idea that we are all just following our crude common sense moral intuitions and that only philosophy can tell us what real morality is, is a nonsensical idea. What philosophers should investigate is why the illusion that morality needs a justification seems so real and yet such a justification is so unobtainable.

Morality exists in the world outside of philosophy and will always do so.

Shaun Williamson

back

(52) Mustafa asked:

My son is always in a hurry while answering the exam paper. How can I deal with it?

---

Mustafa, the only way to deal with this is sensitively. If you make too much of it you will just make your son even more neurotic about exams. Exams are a stressful thing for most children and you need to help him to be less stressed about it. Make sure that he has a watch so that he knows how much time he has left to answer the questions.

Just tell him to do his best and if he has any time left over at the end of the exam to check his answers to see that they are correct. There is no magic answer and you can't take the exams for him.

Shaun Williamson

back

(53) Shanna asked:

What is the relationship between common sense moral intuitions and Moral philosophy?

---

Whilst in philosophical discourse generalisations are best avoided, it seems fair to say that the premise upon which most, if not all, moral codes are based is the principle that we should do unto others only that which we would have others do unto us. The issue, for me, that Shanna's question raises is whether this principle derives from nature or from nurture: that is, whether its moral values derive from worldly experience or whether they are values given to us as a priori intuitions, ideas or concepts.

It should be said from the outset that this oft-debated, yet never quite resolved, question occupies different schools of philosophical thought, and one's conclusion depends on which of these schools of thought one finds most convincing. Amongst these differing or opposing approaches is that advanced by John Locke (1632-1704) who, echoing Aristotle, held that there is nothing in the mind that is not first in the senses. According to this view, the mind, at birth, is a tabula rasa, a blank slate upon which experience will write its moral and other codes of behaviour. For Locke, there are no a priori, innate ideas or concepts of the world before we have experience of it. Against this view was the Enlightenment belief that man was inherently good, and that evil was the result of the pollution of innocence by corrupt social institutions: organised religion and politics. Another, not unrelated, debate takes place between Empiricism and Rationalism. Empiricists, of whom David Hume, like Locke, may be considered one of the most notable adherents, argue that all our ideas and concepts derive from experience, whilst Rationalists, such as Descartes, take the view that there are, within the mind, certain a priori ideas that do not depend on empirical experience. Somewhere in between these opposing views is Kant's argument that 'though all our knowledge begins with experience, it by no means follows that all arises out of it'. It seems that Immanuel Kant (1724-1804), a committed rationalist, disturbed by the argument, set out by David Hume (1711-1776) in his An Enquiry Concerning Human Understanding (1748), that we know the mind only as we know matter: by perception, declared that he was awakened from his dogmatic slumber by his contemporary's argument that experience is the basis for knowledge. Whilst the aforementioned quotation is taken from Kant's Critique of Pure Reason, and refers to his view that before worldly experience there are within the mind the a priori forms of intuition, space and time, and the concept of cause and effect, which enable the mind to perceive things given to it in experience not as things in themselves, as noumena, but as phenomena, as things as they appear to human consciousness, it might equally be taken as an expression of his moral philosophy as set out in two of his other major works, the Foundations of the Metaphysics of Morals (1785) and the Critique of Practical Reason (1788), two works in which Kant deals with a common-sense conception of morality based on what he calls the categorical imperative. By 'categorical' Kant means that it applies in all situations, and by 'imperative' he means that it is commanding and thus absolutely authoritative.

Although Kant offers several formulations of the categorical imperative, the two that are most often quoted are the one which states that one should always act in such a way that one is able at the same time to will that the maxim of one's action be in accordance with a universal law of nature, and the other (and the one most relevant to the issue at hand) which states that one should treat humanity, whether in one's own person or that of anybody else, never merely as a means but always also as an end. What Kant is saying is that inherent in human reason is the capacity to determine that which is right and that which is wrong. Thus, in the same way that Kant argues that there are, within the mind, before experience, the a priori forms of intuition, space and time, and the concept of cause and effect, so too does the mind contain the innate ability to discern, through practical reason, the difference between right and wrong. If one accepts Kant's categorical imperative, one can say that common sense moral intuitions are the foundation stone of moral philosophy.

However, whilst the arguments set out in the Critique of Pure Reason, the Critique of Practical Reason and the Foundations of the Metaphysics of Morals may appear laudable enough in their own right, it should be said that both his attempt to forge a synthesis between analytic and synthetic propositions in the former and his attempt to lay the foundations of moral philosophy in the latter are found wanting. In the former, in attempting to show that there are propositions which appear to be synthetic, that is, drawn from experience, but are in fact a priori (hence the term 'synthetic a priori propositions'), he succeeded only in showing that mathematical formulations, such as 7+5 = 12, fit into this category. In the latter, whilst he shows that the religious argument that we should treat others as ourselves can be shown to be in accord with human reason, experience shows us that the moral conclusion of the 'categorical imperative' is not one that is universally held. That is, there is no universal consensus on human rights; on the right to free speech; on the right to life, or on the issue of abortion in general; there is no consensus on the issue of euthanasia, or on the right of same-sex couples to marry or to form civil partnerships; and there is no consensus on the right to health care, to education or to bear arms.

Thus, we find that, notwithstanding Kant's 'Copernican Revolution', the human mind is not privileged with knowledge of a transcendent deity, freedom, or 'things in themselves'. In fact we can say that there are no inalienable or universal rights applicable at all times and in all places; there is no Socratic daimon whispering moral imperatives into the corporeal ear; nor is there a Cartesian homunculus with a morality compass steering the soul through the turbulent waters of life. Moral codes are not given a priori from some transcendent or metaphysical realm; rather, they derive from worldly experience. If there is a relationship between common sense and moral values it is a relationship that manifests itself as the instinct for survival: an instinct that drives us to devise 'moral' laws that allow us to survive and thrive in an alien world. As Thomas Hobbes (1588-1679), in his magnum opus, Leviathan, says, it is through self-interest that man enters into a social contract with his fellow beings. It is entering into this compact that allows man to move from a 'state of nature', a state in which the life of the individual is 'solitary, poor, nasty, brutish, and short', to a 'state of peace'. It is common sense and the instinct for self-preservation that encourage men in the 'state of nature' to hand over the reins of power to a sovereign who can in turn impose and enforce certain moral codes of behaviour that can guarantee that each can exist in a society without fear or danger from any other man. Moral laws, then, do not depend on innate moral intuitions, for if they did there would be no need for a sovereign power to impose such laws on the populace, or to employ forces to ensure that these codes of practice are not broken.

The Italian philosopher Giambattista Vico (1668-1744) agrees with Hobbes that moral order derives from common sense. However, rather than the common sense of the individual, Vico argues that it is common sense in the form of the communal sense — the sensus communis — of the entire community. For Vico, moral codes of behaviour were first introduced when early men, more beast than human, in an effort to appease the anger of an (imagined) anthropomorphic deity, felt compelled to regulate their lives by introducing the institutions of religion, marriage, and property (an institution first introduced as the right of the bodies of the dead to be interred rather than left, as had previously been the case, to rot or decay above ground). It is because moral imperatives derive from the collective common sense of the community that they cannot be said to hold across all time, but change as needs demand. Moral guardians, or, as Vico calls them, 'theological poets', for all their claims, are not divinely inspired people with privileged access to the wishes or demands of a transcendent deity; rather, they are conduits through which the collective will of the people finds a voice.

In closing, it seems that whilst nature decrees that intuitions, in the forms of space and time, have a role to play in how we perceive our world, somewhat paradoxically, it does not decree that intuitions have a role in the formulation of ethical values.

Tony Fahey

back

(54) Alistair asked:

I'm working through the exercises in a book on logic ('Logic' by Wilfrid Hodges) as part of an effort to study philosophy more formally. I have long been fascinated by the project that began with Frege and culminated in Godel, partly from a historical standpoint. Also, I intend to study Russell and Wittgenstein, so a basic understanding of logic seems essential.

However, my main interest is philosophy. I am not particularly interested in going too deeply into logic itself, into, for example, its applications in computer science and linguistics. So my question is, what use is it to philosophy (rather than computer science etc.), and what advantage for philosophy does modern post Frege logic have over Aristotle's logic?

Is it that modern logic is thought to subsume or replace Aristotle? Hodges' book takes in propositional calculus, semantic tableaux and predicate logic, but seems to make little mention of syllogisms or how to detect common logical fallacies (or not in a form I recognize, at least). I would have thought that these things were a more useful foundation for philosophy than formal languages. And even if, technically speaking, modern logic does indeed cover everything in Aristotle, with a lot more besides, is it even appropriate to apply mathematics to an activity that takes place through language?

My concern is that, in trying to learn philosophy, I will be wasting my time if I get seduced by all those exotic symbols. I doubt that it is going to help me make sense of the Critique of Pure Reason. Does it make sense to teach this kind of logic as part of the teaching of philosophy? After all, it came to be regarded as a kind of philosophy only because of Russell, whose project to place knowledge on firm foundations is now widely held to have been a failure, as shown by Godel (in logic) and Wittgenstein (in philosophy).

Why does mathematical logic have pride of place, when practical philosophy mostly does very well with older concepts such as syllogism, circular argument, begging the question, reductio absurdum, and so on?

---

Alistair, you have some very wrong ideas about philosophy. Philosophy is not a subject that progresses so that we can say Godel and Wittgenstein showed that Russell was wrong and everyone accepts that. In philosophy there are no generally accepted truths and everything is always open to debate.

You have now got to a stage where you are faced with having to do something difficult and your response to this is to say 'Can't I just skip this bit and concentrate on the things that I find easier to grasp'.

You mention 'practical philosophy' but there is no such thing as practical philosophy, there is only philosophy. However (there is always a however) you are trying to study philosophy on your own, so going too deeply into formal logic may not be the best place to start. However, at some stage in your studies you will need to have a grasp of what propositional calculus is and what predicate calculus is. You will also need a firm understanding of what a valid argument is, what a tautology is and what a contradiction is, of how logic can be an axiomatic system, and of what concepts such as completeness and consistency mean when applied to an axiomatic system.
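By way of a rough illustration of a few of these notions (a minimal sketch only; the classify helper below is invented for the example and is not drawn from Hodges or any particular textbook), a tautology, a contradiction and a valid argument form can all be checked mechanically with a brute-force truth table:

```python
from itertools import product

def classify(formula, n_vars):
    """Evaluate a propositional formula under every assignment of True/False
    to its n_vars variables and report its logical status."""
    results = [formula(*values) for values in product([True, False], repeat=n_vars)]
    if all(results):
        return "tautology"        # true under every assignment
    if not any(results):
        return "contradiction"    # false under every assignment
    return "contingent"           # true under some assignments, false under others

# The law of excluded middle, p or not-p, is a tautology.
print(classify(lambda p: p or not p, 1))                          # tautology
# p and not-p is a contradiction.
print(classify(lambda p: p and not p, 1))                         # contradiction
# Modus ponens written as a single conditional, ((p -> q) and p) -> q,
# is a tautology: one way of saying the argument form is valid.
print(classify(lambda p, q: not ((not p or q) and p) or q, 2))    # tautology
```

The third example packs modus ponens into a single conditional: an argument form of propositional logic is valid just in case the conditional formed from its premises and conclusion is a tautology.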

It is an unfortunate fact that to do philosophy well you need to know everything about everything but of course none of us do know this. I don't know the logic book you mention so I can't say if it is a good book for a beginner or not. If at some stage you decide to continue your studies of logic then feel free to ask as many questions about it as you need to.

Shaun Williamson

Alistair, a long question, but a good one. First, a brief answer:

1. There is indeed no need for fluency in the formal languages of logic in order to study and understand philosophy. 2. Critical thinking in philosophy and in everyday life is indeed better served by informal logic. 3. I think that the infection of much 20th-century analytic philosophical writing by logical symbolism is past its peak, and philosophical logic is emerging with new vigour from decades of debility due to that infection.

To amplify each point:

1. Even the teachers of analytic philosophy recognize that expertise in formal logic is unnecessary. Thus, the 2010 study guide for the BA (Philosophy), Uni of London, says in its blurb about the compulsory (Philosophical) Logic module 'Formal logic does not figure as such in the examination..., but some knowledge of elementary formal logic is necessary for the subject as a whole'. It then goes on to recommend a 'gentle introduction' to formal logic in Guttenplan's book. But Hodges will serve as well, and you have not wasted time in reading it and doing its exercises by way of your gentle introduction. As a student of philosophy your focus will be analysis of and reflection on concepts that arise out of, or are built into, logic and reasoning — validity, identity, necessity, truth, reference, definite descriptions, conditionals. In addition you may wish to reflect on the reasons for and value of nonstandard logics which deny bivalence or deny Aristotle's laws of thought, such as the law of excluded middle (LEM) or even the law of non-contradiction (LNC). Also, a basic understanding of non-bivalent, including fuzzy, logic is needed to understand the concept of vagueness. I can say from personal experience that a decent grasp of philosophy, including excellent marks in Uni exams in Phil of Maths and Phil of Science, is possible with virtually no knowledge of formal logic.

2. The last 30 years or so have seen the development of 'informal logic', allied to the teaching of Critical Thinking in schools and universities. The online Stanford Encyclopedia of Philosophy entry on 'Informal Logic' (2007) is a good source.

3. The love affair between analytic philosophy and logical symbolism blossomed with the publication in 1905 of the Theory of Descriptions in Russell's 'On Denoting' (that 'paradigm of philosophy' as Ramsey called it in 1931). Russell was widely seen as ushering in a new age of rigour — many old philosophical problems would simply be shown up as confusions of thought; woolly Continental metaphysics was exposed; Meinong's supposed metaphysical excesses were driven out of ontology. And indeed it was a shot in the arm to philosophy, and a paradigm in the sense later articulated by Kuhn. Our view is more nuanced these days: there's much to be said for Meinong, and the Frege/Russell strictly logical approach to language (semantics, or what the words mean) was soon seen to be inadequate, failing, among other things, to grasp the pragmatics (what the speaker intends) so key to natural language. But, at any rate, Russell couched his theory in the symbolism of his (and Whitehead's) Principia Mathematica, starting the trend of discussing such matters in terms of this symbolism when they can be understood without it (of course it is true that some people find it easier to grasp symbolically, and indeed I think I am one of these people).

Finally, and incidentally, I think the main Russell/Frege project was to reduce maths entirely to logic. They failed, but only because it is impossible, and they nevertheless reduced arithmetic entirely to logic supplemented only by what Frege rather generously referred to as Hume's principle (the idea of equinumerosity as one-to-one correspondence).

All the best with your studies.

Craig Skinner

back

(55) Roxane asked:

Real happiness is helping others. Who among philosophers in the next would you best attribute that principle?

---

Whilst there are many philosophers, such as Hannah Arendt and Edith Stein, who could be said to fit into this category, the first that comes to my mind is Emmanuel Levinas.

For Martin Heidegger, philosophy essentially seeks to place man in his context within the world, and only incidentally tells us what are or ought to be the relations between one person and another in society. For Emmanuel Levinas, however, one's ethical relation to the other takes precedence over one's relation to oneself. For Levinas, the other is absolutely other: beyond comprehension — beyond complete understanding. Face to face with the other, the self is obliged to put responsibility for the other before oneself; for this reason the relation puts the self in the position of hostage: the self becomes slave to the other. Thus, Levinas expounds, or advocates, an ethics of obligation and self-sacrifice to the other.

Levinas was born in Lithuania to Jewish parents. He moved to France in 1923. His philosophy is directly related to his experiences during World War II. His family died in the Holocaust and, as a French citizen and soldier, he became a prisoner of war in Germany. While he was in the prison camp, where he was forced to perform labour, his wife and daughter were kept hidden in a French monastery until his release.

One of Levinas's main ambitions is to attempt to describe a relation to another person that cannot be reduced to understanding. He finds this in what he calls the 'face-to-face' relation. What Levinas means by the face-to-face relation, paradoxically, is not a relation of perception or vision but a linguistic connection. The face is not something we see but something we communicate with. When I am communicating with another person I am not reflecting on the other, but actively engaging in a relation in which I am focussed on the person in front of me. I am not contemplating, I am conversing. Levinas's point is that unless my social interactions with others are underpinned by ethical relations I am in danger of failing to acknowledge the humanity of the other.

Ethics, for Levinas, is the critical questioning of the liberty, spontaneity and cognitive enterprise of the ego to reduce all otherness to itself. What this means is that it questions the authenticity of the ego, the self, and its tendency to reduce others to understandable objects. The term Levinas uses for the ego, or the self, is 'Same'. The 'same' refers not only to subjective thoughts, but also to the objects of those thoughts: not only to the domain or world of the ego or self, but the domain of others. Levinas's ego is not the Cartesian ego: not some homunculus existing in a solipsist vacuum separated from the body and unsure of the validity of the reality of others, but an ego whose distance from others fades into insignificance.

For Levinas, as I have said, the ego — the 'self' — is not some Cartesian homunculus, but an embodied being of flesh and blood, a being capable of hunger, which eats and enjoys eating. 'Only a being who eats,' says Levinas, 'can be for the other'. What he means by this is that only a being that knows hunger and enjoys eating can understand what it means to give its bread to others from out of its own mouth. Levinas's ethics, then, is not some deontological obligation to universalise maxims, but an appeal to allow one's subjectivity to remain open to what Simon Critchley calls 'the pangs of both hunger and eros [love]'. Subjectivity, he says, is not Descartes' 'ego cogito' (I am, I think), rather it is the declaration that 'Here I am!' And as an existential being I am obliged to answer to the call of the other. Ethics, my responsibility to the other, says Levinas, begins and ends with me.

Levinas' 'big idea' is that the relation to the other cannot be reduced to understanding and that this relation is ethical. For Levinas, it is our empathy or sympathy for the other that takes precedence over our own needs. The argument that may be put against this is: 'how can we really know what the other is experiencing or feeling?' That is, if the other tells me he/she is in pain, sad, or in need of something, how can I know with certainty that the other is being truthful? The answer, of course, is that I cannot. However, in fairness to Levinas, he never claims that his 'big idea' will lead to a full understanding of the other. What he is concerned with is reminding us of our moral obligations to the other — that is, that by quite ordinary acts of civility, hospitality, and kindness, we can make the world of the other a better place to be in.

Tony Fahey

back

(56) Marti asked:

Who was the famous philosopher who said there was no such thing as a selfless act, please. The illustration of a man saving a drowning boy. No one would have known had the man allowed the boy to drown. The philosopher argued even this was a selfish act because the man could not have lived with himself had he allowed the boy to drown.

---

I think such a statement would be attributable to Ayn Rand. Her philosophical position of 'Objectivism' argues that all acts are selfish. One volume of her collected articles [including those of her one-time collaborator Nathaniel Branden], in which the example of saving a drowning person is discussed, is entitled The Virtue of Selfishness.

Her arguments are flawed in my view. For an act to be selfish, it must harm someone else. Acts can be self-interested and not be selfish. There can also be individual self-interested actions which, taken collectively, satisfy a common good. Self-interest is a much more flexible term which does not have the negative consequences associated with selfishness.

With regard to the example given of the man saving the drowning boy: to describe it as selfish is wrong, abiding by the above definition. Could it be self-interested if one of the motives of the action was to consider how the man would feel if he had not rescued the boy? I don't think it's feasible to say that the man stood on the riverbank calculating the pros and cons of saving or not saving the boy. Similarly, the mother does not push her child out of the way of an oncoming car after calculating whether she could live with herself or not in acting or not acting.

If the man did have self-interested, ulterior motives in saving the boy (that he would impress his mother, that it would gain him promotion, or other such consequences), then the nature of the act, and the approbation afforded or not, would have to be decided on a case-by-case basis.

Martin Jenkins

back

(57) Eric asked:

My name is Eric and I am 17. What is philosophy exactly? My dad (he said he took it in college and didn't like it) said it's smart people asking stupid questions but I have looked at some philosophy questions and I find them very very interesting. But I have noticed all the questions I've looked at are almost always unanswerable or a matter of personal opinion. So is it basically trying to explain what is unexplainable. And I have also observed almost every philosophical question can only be answered by another question. So is philosophy the study of human opinion thoughts or neither.

---

Philosophy, as any student of Philosophy will tell you, means 'love of wisdom'. In its truest sense it is a desire to challenge, to expand and to extend the frontiers of one's own understanding. It is the study of the documented wisdom — the 'big ideas' — of thinkers throughout the history of humankind. However, even in our most respected institutions, Philosophy is often presented as theology, psychology, spirituality or religion. Indeed, many exponents of these respective disciplines seem to have no difficulty in identifying themselves as 'philosophers' when in fact they are 'dogmatists' (sic). What can be said, however, is that Philosophy is all of the above and none. 'All', in the sense that it will certainly engage with the views advanced by the exponents of these disciplines. 'None', in the sense that Philosophy can never be constrained by views that do not allow themselves to be examined, challenged, deconstructed and demystified in the realisation that 'wisdom' or 'truth' is not something that can be caught and grasped as one particular ism.

For those really interested in Philosophy, it is important to draw a distinction between 'a philosophy' and 'Philosophy' itself. There are abroad today many colleges, institutions, societies, schools of philosophy, groups, cults and sects promoting the view that they 'teach' Philosophy, where in fact what they are doing is promoting a particular worldview that they claim is superior to other worldviews or 'philosophies'. What has to be said is that when a body claims that its philosophy has the monopoly on other worldviews it cannot be placed under the rubric of Philosophy — it is dogma. It is for this reason that those institutions that promote a particular religious ethos cannot, by their very nature, be said to teach Philosophy in any real sense: they are constrained by their own 'philosophical' prejudices from treating other worldviews impartially — particularly where these other approaches run contrary to their own. Moreover, by indoctrinating their students into a mindset that holds that it is their way or no way, these institutions show that their interest is not primarily in that which is best for the student, but that which is best in ensuring their own perpetuity. This approach (of using others as a means to one's own ends), as Kant reminds us, is repugnant to Philosophy — the search for wisdom.

What this means is that Philosophy cannot condone any body of knowledge that advocates a closed view on wisdom or truth — one cannot take an a la carte approach to Philosophy. As the Dalai Lama, in the prologue to his book The Universe in a Single Atom: The Convergence of Science and Spirituality, advises, where scientific discoveries are made that expose weaknesses in long-held traditional beliefs, these beliefs should be abandoned and the new discoveries embraced (would that all spiritual leaders or 'philosophers' were so open-minded!). Philosophy, then, must operate on the premise that its conclusions should ever be open to what Karl Popper calls 'the law of falsification'. That is, where its conclusions are found to be questionable, it is imperative that these views are revisited, re-evaluated and, where necessary, either re-formulated or abandoned. Unfortunately, as history shows, many systems of belief either will not entertain such an approach, or, if or when they do, it is often so far in time removed from the initial discovery that much harm has occurred in the interim.

What should be realised is that the wisdom to which Philosophy aspires is not attained by the practice of uttering self-hypnotising mantras or prayers, nor by being initiated into some select group, sect or cult that promises that its 'road less travelled' is the one true road. Philosophy is not love of 'a truth' or 'some particular approach to wisdom', but a love of truth and wisdom. However, this wisdom or truth does not come pre-wrapped and packaged as one ism or another, rather it involves the courage and preparedness to engage with, to challenge and to expand the boundaries of one's own knowledge and experience — one's own wisdom.

Tony Fahey

back

(58) Eric asked:

My name is Eric and I am 17. What is philosophy exactly? My dad (he said he took it in college and didn't like it) said it's smart people asking stupid questions but I have looked at some philosophy questions and I find them very very interesting. But I have noticed all the questions I've looked at are almost always unanswerable or a matter of personal opinion. So is it basically trying to explain what is unexplainable. And I have also observed almost every philosophical question can only be answered by another question. So is philosophy the study of human opinion thoughts or neither.

---

Philosophical questions are questions which cannot be answered by any other subject so they are not scientific questions. Philosophy is the ruthless pursuit of the truth and part of this is to reach a complete answer to all philosophical problems. However this does not mean that philosophical questions have an answer. It may be that such questions are not real questions.

Philosophers should never be interested in opinions but only in the complete truth.

However there is no money in studying philosophy so don't do it unless you feel compelled to do so. Don't do it because it seems interesting, you might lose interest in it after the first twenty years. Only do it if you feel compelled to do so.

Your spelling is terrible by the way, try to use a spell checker with your email program.

Shaun Williamson

back

(59) Nick asked:

Hello, I'm happy I get the chance to ask a philosopher, as I don't meet too many in daily life. The question I'm asking now, is no other than the one about spiritual meaning of humans! I don't expect a brain to be able to understand itself, but one of my recent discoveries which is close to the field of psychology is the similarity between stories like Santa Clause and Religion. Both are passed by from generation to other without someone thinking to its consequences. When as a little child I found out that Santa close did not exist I had the feelings, many children have, of frustration. I believe in my heart was a feeling of being lied to, rather than not receiving a gift again on Christmas. Recently I genuinely applied same rationality about religion: having heard different opinions of different nations on this world, I wonder who Jesus really was, as I was born a Christian, and this is the main model character that I know in details. Should I believe Jesus resurrected from dead? How much of bible is made by or influenced by man? How can I know the truth while being surrounded by people who tell lies? I'll stop here, I hope you understand my concerns and wait to listen to your view on religion.

---

This is an interesting question which could signal the beginning of a journey that may occupy you for many years to come. A journey, that is, that begins, as you so rightly infer, when one starts to doubt the veracity of beliefs that heretofore one has accepted without question, and more significantly, beliefs that one has been encouraged to accept without question — quite often from people who do not know that these ideas are false because they have been indoctrinated into the same belief system.

The first thing that should be said is that, whilst you might prefer a more direct answer to your question — whilst others may offer a more direct response, it is my view, in this instance, that this issue might be best addressed by a different, and more circuitous, route, for there are some issues, particularly in philosophy, that one must deal with in one's own way, at one's own pace, in one's own time, and when one is better prepared to accept the conclusions of one's findings — a preparedness, it can be said, that comes only after a good deal of study, reflection, and ultimately a readiness to sacrifice: to let go of, long and deeply held beliefs that one has come to realise are no longer tenable.

Thus, rather than presenting my views on issues such as 'the spiritual meaning of humans', who Jesus really was; whether or not he rose from the dead; or how much of the Bible was influenced by man, or on religion itself, I would encourage you to seek answers to these questions through a combination of conscientious study and reflection — and it is in relation to these issues that the isfp, the International Society for Philosophers (in that, as its mission statement infers, it comes to the table with no other agenda than a love of philosophy, and the desire to provide a forum for all those, amateur and professional, who share this love), can play a vital role. For it is in studying the works of others concerned with the same issues; by discussing these issues with others of like mind; by reflecting on these issues with an open mind, and by being prepared to reappraise, to re-visit, re-evaluate, and where necessary, to let go of ideas and beliefs that you no longer find sustainable, that you will come to find your own answers to these often difficult and complex questions.

However, whilst, in this case, I think it best to let you work on these issues yourself, I believe I may be able to point to some areas of study that might help you begin your investigation into this interesting issue. I should begin by saying that the Santa Claus example you give is most appropriate in that, for some, religion is for adults what Santa is for kids. The frustration and disappointment that children feel on learning that Santa is a myth result from learning this truth prematurely — before the mind has time to reason it out for itself. Whilst it can be argued that it is a myth built on the identity of a person that once existed, this can be of little comfort to the traumatised child at the time — if you spot an analogy here, it is not unintentional.

Let us really begin by looking at the derivation of the terms 'religion' and 'philosophy'. Patrick Quinn informs us that religion derives from the Latin religio, meaning 'to bind', and signifies belief in or obedience and sensitivity to the sacred, which is conceived to consist of a supernatural power or set of powers regarded as divine and having control over human destiny, whilst philosophy is taken from the Greek philos (love) or philia (affinity for or attraction towards) and sophia (wisdom, knowledge) (see Philosophy of Religion A-Z, 2005, p.180).

Thus, immediately we see that where religion 'binds' one to a particular belief or set of beliefs given or imposed by some transcendent entity, implicit in the definition of philosophy is the view that the search for and acquisition of wisdom and knowledge is more in the hands of the individual. Religion, then, involves the belief in the existence of a transcendent entity that has the power to control and determine the course of all events in the cosmos. Being religious involves adhering unwaveringly to the laws, tenets, and injunctions of the system of belief to which one is aligned — whether unwittingly or not. Religion does not involve a love of wisdom and knowledge, nor does it encourage the questioning of beliefs or 'truths' handed down by religious tradition, rather it demands obedience to the set of beliefs it holds have been revealed to it by an omniscient, omnipotent and unseen god. The demarcation point for religious enquiry, where it exists, is that God exists and that all knowledge and truth necessary to human existence has been, or will be, given in revelation. Philosophy, for religion, is seen as a useful method, a tool, for showing that that which it holds to be true can be validated by reason. And this can be said to be the crux of the matter, for whilst philosophy is concerned with many of the issues that concern religion — the proof of the existence of God, a priori truths, and so on — it is not, and never should be, dogmatic.

As Christianity is the religion to which you refer in your question, and that with which I am most familiar, it is the one that will occupy this discussion. With this in mind, can I suggest that, in moving further towards a resolution of the issue(s) you raise, you could question the historical accuracy of the Old and New Testaments; you could look at the arguments both for and against the teachings of Augustine, who saw philosophy as a continuance of religion, and of Boethius, who saw philosophy, in the form of Athena, as offering him 'consolation' in the face of his impending death. You might consider the pros and cons of Aquinas's proofs of the existence of God, and of St Anselm, who, like Descartes, held that if one could conceive of a perfect being (God) then this perfect being must necessarily exist. You could look at how certain tenets became incorporated into Church law through the Council of Nicaea, at the many different forms of Christianity that existed before this event, at the role Arius played in the introduction of the Nicene Creed, and at the expressions of faith contained therein. You might look at the role the Inquisition and the Index of Prohibited Books played in the suppression of reasoned arguments against the teachings of the Church (as well as the connection between these institutions and the current Congregation for the Doctrine of the Faith), and at the treatment of such thinkers as Copernicus, Galileo, Bruno and many others who rejected the 'truths' imposed on them by the Church Fathers.

As with your question, I will stop here, for there is enough in the above to help you on your way in resolving the issues contained in your question.

I would like to finish by drawing on a popular Italian saying 'chi va piano, va lontano e sano', which translates something like 'one who goes slowly, goes far and well'. So Nick, or George, make haste slowly, and travel well.

Tony Fahey

back

(60) Lucy asked:

If pragmatic considerations show it is irrational not to believe in the principle of induction, do they also show it is irrational not to believe in God?

---

Mmm, it looks like Lucy is asking us to do her homework for her. This has all the hallmarks of an assignment or essay question. But unlike some we receive on Ask a Philosopher, this one is not that bad. How much help my answer is going to be is another question.

Two things ought to scream out at you when you see the phrase 'pragmatic justification of induction' (by the way, you'll find loads of pages if you search for this in Google):

The first point is, how on earth am I going to be persuaded by a pragmatic argument that belief in induction 'works in practice' or 'leads to practical benefits' if I'm not already committed to induction? In that respect, a pragmatic justification of induction is in exactly the same quandary as an inductive justification of induction. Just because induction works fine for you, or just because it has worked for me in the past, is no reason for me to believe that it will work for me now unless I have already accepted that inductive reasoning is reasonable.

The second point has to do with the — allegedly modest — idea of a merely 'pragmatic' belief. Suppose I accept that induction 'works' (or has worked for me in the past, or has worked for you); is that supposed to be a true statement, or only something which it is useful to believe? If I state that it is merely useful to believe the statement just made, is that a claim to truth, or am I merely saying that it is useful to believe that it is useful to believe... and so on.

This is all very well covered ground — as you will discover if you do an internet search. In any event, the idea of a 'pragmatic justification of induction' has at least two major points of uncertainty/ instability before we even go on to consider the even more explosive idea of an inductive proof of the existence of God.

(In my last post, I described myself as a 'pragmatist with a small 'p''. Perhaps one should make clear that the background to this question is most definitely Pragmatism with a big 'P'; I'm talking in particular about the philosophies of C.S. Peirce and William James.)

The Pragmatist may object at this point that I have willfully misinterpreted the pragmatic case for induction. We are not concerned with anything so abstract as the 'definition of truth' (although this more ambitious thesis is what James attempted in Pragmatism, 1907), but rather the question of how one ought to behave, or, equivalently, what makes behaviour 'rational' or 'irrational'. When I avoid putting my hand in a pot of boiling water in order to stir the spaghetti, I am not considering what would be a 'true statement' concerning the effect of a temperature of 100 degrees Centigrade on living human tissue. Rather, I am simply avoiding doing something which I know to be harmful. The knowledge in question is practical knowledge. It is something you just don't do, without having to think about it first.

We navigate our way through the world, avoiding myriads of dangers large and small, choosing intelligently without pausing to reflect on that choice. This is part of what it is to 'be rational'. You wouldn't call someone rational who only did the rational thing when prompted to think about it, but the rest of the time behaved in a more or less random way.

This also disposes of the objection that a pragmatic justification of induction presupposes inductive reasoning. The whole point of the pragmatic 'turn' is to halt the threatened regress of an inductive argument for preferring induction. At a certain point, thinking comes to an end and we just act. The capacity to learn from experience (which is basically all that induction amounts to) is an intrinsic part of the capacity to make intelligent choices, whether or not these choices are reflected upon.

I'm prepared to buy all this, just for the sake of Lucy's question. I should add, however, that I don't really like the idea that induction is something we just 'have' to believe, come what may. There are principles which it definitely pays to believe even though they are apparently counter-inductive. One is Sod's Law: If something can go wrong, it will go wrong. If you estimate the chance of something going wrong with your plan, your estimate — however rationally based, however carefully you have sifted all the relevant inductive considerations — will always be too optimistic. Another well attested counter-inductive principle (which I don't have a name for) is that Good Things Never Last. On the basis of induction, rationally it oughtn't to make a difference whether you are onto a 'Good Thing' or not, but in practice it just does.

But maybe that just shows what a pessimist I am. Maybe (to be really clever, if not cute about this) you could make an inductively based case for pessimism, on the grounds that it offers a necessary rational corrective to the natural human tendency to be over-optimistic.

However, this is merely delaying the real question: whether a useful analogy or, better still, an inference can be drawn between a pragmatic justification of induction and a pragmatic justification of theism.

On the face of it, there's a huge disanalogy, a massive non-sequitur. You say belief in God works for you. I say non-belief in God works for me. If you didn't believe in God, you say, your life just wouldn't be worth living. My response is that if I believed in God, my life would become hell. There would be no place far away enough or deep enough to hide.

Instead of the happy-clappy belief that 'God will always love me' or 'God is on my side', I prefer the honesty of good old-fashioned Catholicism. When you die, you can expect to spend 1000 years in Purgatory (according to one book I came across — it's a grimly fascinating subject for debate), going over every aspect of your life, inch by inch, until you are thoroughly cleansed and prepared for everlasting life in Heaven. Lovely.

The idea of being a 'God-fearing man' has this aspect of truth about it. As Geach (a Roman Catholic) says in his defence of Divine Command theory (see my post on Plato's Euthyphro) to defy God is the very definition of insanity. For my part, I couldn't live with that fear looming over me. The fire and brimstone preachers had the right idea: What the Hell are you smiling for?

However, you will say that I have just conceded the Pragmatist's case, by demonstrating that I am prepared to argue over the question of belief in God, on the ground of what is or is not the most weighty pragmatic consideration. How that argument is resolved is a mere point of detail. — I do not concede. I am expressing my personal feelings. Unlike the Pragmatist, I don't consider for one moment that my personal feelings constitute an argument let alone a 'rational' argument. So far as the existence of God is concerned, there is no case. There is no doubt where the onus of justification lies: it is with the theist, not the atheist.

For the sake of argument, however, let's put aside the last point. Suppose it were true that the question of the existence or non-existence of God is one to be settled by pragmatic considerations. To answer Lucy's question (finally!) there is still a huge disanalogy with the pragmatic justification of induction because (notwithstanding my somewhat tongue-in-cheek case for counter-inductive principles like Sod's Law) there really isn't a meaningful debate about whether or not we should accept induction. The genuine counter-inductivists died out long ago.

Geoffrey Klempner

back

(61) Earnest asked:

What is the subject and substance of Socrates' conversation with Euthyphro?

---

The problem of the Euthyphro is to determine the real character or essence of piety. It is a discussion between Socrates and Euthyphro set outside the Athenian courthouse. Euthyphro has come to lay charges against his father for murder. Socrates is shocked (as any Athenian of the time would be), but appeals to him (ironically) as an expert for an answer to the question of what piety is (because Euthyphro seems confident in his own judgment of religious and ethical matters). There must be some one character or property which belongs to all actions which are considered pious, Socrates insists. Like many of the interlocutors in Plato's earlier dialogues, Euthyphro confuses definition with enumeration of instances. He gives, for example, as his first definition of piety what he is doing now: prosecuting his father for murder. But Socrates is not looking for an instance of piety; he is looking to be provided with the fundamental characteristic which makes pious things pious.

Euthyphro's later attempt at a definition is: what all the gods love is pious and what they all hate is impious. At this point, Socrates asks: 'Is the pious loved by the gods because it is pious? Or is it pious because it is loved by the gods?' (10a) (In other words, do the gods love something because it is pious, or is something pious because the gods love it?) This has famously come to be known as Euthyphro's Dilemma, and has had an important impact on subsequent theological discussion (Is what is morally right commanded by God because it is antecedently intrinsically morally right, or is it morally right simply because it is commanded by God?) In any case, the import of the question seems to be somewhat lost on Euthyphro. Euthyphro goes on, with the help of Socrates, to offer up a couple more definitions of piety. In the end, however, his inability to follow the thought of Socrates brings him back full circle to his original position. Euthyphro then pretends he is late for an appointment and both they, and we, seem to be no nearer knowing what piety is than when the discussion began.

Kristian Urstad

back

(62) Lucy asked:

If pragmatic considerations show it is irrational not to believe in the principle of induction, do they also show it is irrational not to believe in God?

---

I don't know if it is irrational or not to believe in the principle of induction. Most people have no idea what the principle of induction is and would not be able to tell you if they believe in it.

However, if I want to switch on the light, should I try the light switch or should I just wave my arms in the air and say 'abracadabra'? In my experience this light switch has always worked before, while I have no experience of things happening just because I say abracadabra.

Induction deals with the things that happen in the world and the things that we do. It is difficult to see how belief in God fits into this. If there is an earthquake and I believe in God am I more likely to survive? That might be a practical consequence of belief in God. However, in our experience, belief in God has no practical consequences and I don't see how it can be compared to belief in light switches.

If you decide that you believe in God then you also have to decide what sort of God you believe in. Do you believe in a God who is love or do you believe in a God who wants vengeance? Do you believe in the Christian God or the Muslim God or the Jewish God? Compared to belief in God, belief in a light switch is a simple thing. If you want to switch on the light you try the light switch first; it's not guaranteed to work, but as a first option it makes the most sense, doesn't it?

Shaun Williamson

back

(63) Roy asked:

I have trouble understanding what people mean when they use a phrase with the word exception. To me it sounds like a contradiction. So my question has two parts:

A) Is using the term exception ever legitimate?

B) Does the term 'except' usually contradict the general rule that comes before it?

For example, All ice cream should be taxed, except vanilla.

This seems that the quantifier 'all' is false if a member is excluded.

For example, All students passed the final exam except Roy.

Seems to me this means only Roy failed the final exam and the quantifier 'all' makes the sentence false.

Please help me make sense of the term exception. Thanks for your help.

---

I am going to treat Roy's question as a problem for truth-conditional semantics. Grammarians, who professionally are required to have a little more respect for natural language 'as it is spoken', might respond differently.

The modern wave of truth-conditional semantics was launched by the work of Donald Davidson in the late 1960s, beginning with his seminal article 'Truth and Meaning' (1967). Davidson was merely continuing the project started by Frege with his revolutionary Begriffsschrift, and continued by the early Wittgenstein, Russell and Tarski.

Davidson reformulated the task for a semantics of natural language — based upon Frege's ground-breaking invention of first-order predicate calculus — which aimed to satisfy two requirements: (1) to explain how it is that a speaker, using their knowledge of a finite number of words or semantic units, is able to generate a potentially infinite number of meaningful sentences; (2) to make explicit the logical entailments between sentences which are only implicit in natural language.

Applied to the notion of 'except', what we need to explain is how it is possible for a speaker to use this term consistently in any number of sentences which they have never used or encountered before, and how they are able to recognize the logical implications of a sentence containing the word 'except'.

The logical analysis represents the speaker's 'implicit knowledge'. What exactly it means to attribute 'implicit knowledge' to a speaker is itself a problem in the philosophy of language, but as it affects truth-conditional semantics generally, I won't develop it here.

Now here comes the crunch: if you can do this, if you can give an analysis which satisfies Davidson's two requirements, then Davidson would say it really doesn't matter too much if the analysis which you offer of the idiom doesn't look at all like something that an ordinary speaker, unversed in the symbolism of first-order predicate calculus, would recognize.

This is all rather general. Let's apply this idea to Roy's case.

I can see why Roy thinks that it is odd to say something like, 'All the students passed, except Roy who failed.' If they all passed, then Roy passed. This follows logically from a basic rule of inference which any speaker competent with the term 'all' recognizes. But we just said that Roy failed. He didn't pass. Therefore Roy passed and Roy didn't pass: a logical contradiction.

Or is it?

Here is a first shot at translating the statement 'All the students passed, except Roy', into first-order predicate calculus:

(x)((x is a student & x is not Roy → x passed) & (x is a student & x is Roy → x failed))

'For all x, if x is a student and x isn't Roy, then x passed; if x is a student and x is Roy, then x failed.'

This seems OK. Let's try to apply it to the vanilla example:

(x)((x is ice cream & x is not vanilla → x is taxable) & (x is ice cream & x is vanilla → x is not taxable))

'For all x, if x is non-vanilla ice cream then x is taxable; if x is vanilla ice cream then x is not taxable.'

This is fine so far as it goes but it seems to leave out a rather important aspect of the meaning of 'except', which any competent speaker would recognize. When we say 'all... except...' we are pointing out an exception to a generalization, which otherwise holds. 'All trains into London St Pancras are running normally today, except from Chesterfield.' If the announcer had gone on to list as exceptions all the trains into London St Pancras bar one or two, then the statement would be regarded as false, or at best deliberately misleading.

Exceptions are in the minority. This is an important part of what we mean when we use the term 'except', and any logical analysis which fails to recognize this is inadequate. If all the students except for Roy had failed, then you wouldn't say (unless you were being cruel), 'All the students passed — except for Roy, Mary, Christopher, Bob, Susan...'.

Closely connected with the use of the term 'except' is the quantifier, 'most'. 'Most of the candidates passed the exam.' Or, 'All the candidates passed, except Roy and Susan.' (We sometimes loosely say, 'Most of the students passed, except Roy and Susan'. But this is confusing when you think about it.)

But how do we evaluate what counts as a 'majority' or 'most'? Is it more than 50%? Can the threshold change between different contexts? 'Most blood supplied for transfusions in the UK is free from contamination.' That had better be 99.999% or the Minister for Health has a potential scandal on his hands.

Various attempts have been made to give a truth-conditional semantics for 'most', although I don't know if any particular analysis is generally accepted. To allow a vague term into logic itself would have caused great affront to Frege, who saw natural language as necessarily deficient and lacking the precision of logic. The fact is that ordinary speakers exercise refined judgement in deciding exactly when and how to use terms like 'except' and 'most' and this ability is one that is inexplicable in terms of first-order predicate calculus. — So much the worse, some would say, for truth-conditional semantics.
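
To give a flavour of what one such attempt looks like, here is a rough sketch in the style of generalized quantifier theory (a sketch only, not a statement of any agreed analysis): treat 'most' as a relation between two sets, the set A of things the subject term applies to and the set B of things the predicate applies to. Then

\text{Most } A \text{ are } B \quad \text{is true iff} \quad |A \cap B| > |A \setminus B|

In words, 'Most of the candidates passed' is true just in case the candidates who passed outnumber the candidates who did not. Even this leaves untouched the contextual question raised above, namely whether a bare majority is ever enough.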

Geoffrey Klempner

back

(64) Soso asked:

Can you please give me a clear summary of what Averroes explains in his text of the Decisive Treatise?

---

Averroes or, to give him his Islamic name, Ibn Rushd, was born in Cordoba in Islamic Spain in 1126 (he died in 1198). His commentaries on Aristotle led him to be known as 'The Commentator' — a term bestowed on him by Aquinas. Responding to al-Ghazali (1058-1111), who, in his Incoherence of the Philosophers, criticises Aristotelianism from the point of view of Islamic orthodoxy, and charges philosophers with disbelief, Averroes produces commentaries designed to defend Aristotle and philosophy in as cogent a manner as possible. Amongst these commentaries is his Decisive Treatise.

Following al-Ghazali's criticism of Aristotle and philosophers, Islamic philosophy faced critical scrutiny from Islamic theologians. In the Decisive Treatise, Averroes begins his defence of the Stagirite and philosophers with the contention that Shari'a Law commands the study of philosophy. There is, he claims, a harmonious relationship between religious Law and philosophy in that, while capable of different explanations and interpretations, truth is one and indivisible. Many Quranic verses, he maintains, command human intellectual reflection on God and his creation. This is best done by demonstration, drawing inferences from accepted premises — which is just what philosophers do. Since the same obligation exists in religion, one who has the capacity of natural intelligence and religious integrity should be obliged to study philosophy. However, while he insists that religion and philosophy are in harmony, it should be noted that Averroes rates philosophy superior to theology: in the order of understanding of truth, the uneducated majority need to be presented with truth in the form of narratives and parables by theologians, whereas philosophers, in contrast, are privileged to understand the truth in the most abstract way possible.

Since not everybody is capable of finding truth through philosophy, Shari'a Law speaks of three ways for humans to discover truth and interpret scripture: the demonstrative, the dialectical, and the rhetorical. These, for Averroes, divide humanity into philosophers, theologians, and the common people. Each group, he says, should employ whichever interpretative method best suits its purpose. Ultimately, Averroes argues, since the end for both philosophers and theologians is not so different, the charge by al-Ghazali and other Islamic theologians that Aristotle and philosophers were irreligious is unsustainable.

Tony Fahey

back

(65) Earnest asked:

In The Trial and Death of Socrates, What is the subject and substance of Socrates' conversation with Euthyphro?

---

Whilst the answer to this question might be best found in one's own close reading of the Euthyphro, I would offer the view that the subject of this specific 'dialogue' of Socrates is piety, and the substance, if it can be called such, is contained in the discussion that arises between Socrates and Euthyphro when they meet before the court of the king-archon — the court of justice where Socrates is indicted by Meletus for corrupting the minds of the young, and Euthyphro is charging his father with the murder of a labourer who was himself a murderer. Whilst Euthyphro's family and acquaintances hold that his action is impious, Euthyphro argues that they are mistaken and that, due to ignorance, they do not understand the true nature of piety. In keeping with the discursive method for which he was renowned, Socrates asks Euthyphro to define his concept of piety; also in keeping with Socrates' other dialogues, at the end of the discussion no satisfactory definition of piety is agreed upon. That being said, as G.M.A. Grube says in his note on the Euthyphro in his Five Dialogues (Euthyphro, Apology, Crito, Meno, Phaedo), it should be pointed out that this particular dialogue does contain some passages of philosophical importance. These include that in which Socrates speaks of the one Form presented by all the actions that we call pious (see 5d), and that in which it is concluded that the gods love that which is pious because it is pious; it is not pious because the gods love it (see 10d).

Tony Fahey

back

(66) Vaidyanathan asked:

I am a newcomer to philosophy, and metaphysics in particular. I would like to know about the method of analysing and proving statements in metaphysics. Being a student of mathematics I am familiar with the axiomatic method. Is there any systematic method of proving statements in metaphysics?

---

First of all, I am glad to see someone interested in metaphysics. For the last century hardly any philosopher has had anything to do with metaphysics; this started with the logical positivists and is now almost universal. So why should anyone do metaphysics? It depends on how you answer the fundamental question: is all that we perceive around us reality, or is it images of reality? The common sense view is that the process of perception consists of real objects, external to the head of the perceiver, causing images of themselves inside that head, via the perceiver's sense organs; what we perceive around us is outside our heads, material, and public, while images are inside our heads, mental, and private; hence what we perceive is reality, not images of reality. There are several counter-arguments to this common sense view, but I will give only one here. It is: everything that we perceive is somewhat illusory, and the only explanation of illusions is that they are images of reality, misrepresentations of reality; this is because illusions involve contradictions, and no contradiction can be true, or real. For example, the half-immersed stick that appears bent is bent to the sight but straight to the touch if you slide your fingers down it. Other illusions are known to be such because they contradict well established belief. It is a fact that everything you perceive is somewhat illusory; if you doubt this, can you point to some perceived object that is not illusory, and also explain how you know it to be so?

This is what brings us to metaphysics. Because if all that we perceive is images of reality rather than reality itself, how can we know anything about reality? Metaphysics is an attempt to answer this question. Closely connected to metaphysics is epistemology, which is the enquiry into how we can do metaphysics: and the answer is that metaphysics is both rational and speculative; it is often said that metaphysics investigates the underlying causes of empirical phenomena. The underlying causes are imperceptible causes of the images of reality that we perceive around us, and causes necessitate their effects in the same way as the truth of logical antecedents necessitates the truth of their consequents — which is why epistemology has to be rational. And it is worth pointing out that theoretical science is both speculative and rational, and, according to physicists, it describes the underlying causes of empirical phenomena.

There remains, of course, the problem that if all that we perceive is images, how does it get outside our heads and become material and public? There is a solution to this problem which is logically very simple but psychologically very difficult. I cannot go into it here, but you might try working it out for yourself, starting with the point that if everything you perceive is images then your own physical body must be an image also: an image of an underlying real body.

If you would like to correspond with me on all this, visit my website: http://www.sharebooks.ca

Helier Robinson

back

(67) Roy asked:

I have trouble understanding what people mean when they use a phrase with the word exception. To me it sounds like a contradiction. So my question has two parts:

A) Is using the term exception ever legitimate?

B) Does the term 'except' usually contradict the general rule that comes before it?

For example, All ice cream should be taxed, except vanilla.

This seems that the quantifier 'all' is false if a member is excluded.

For example, All students passed the final exam except Roy.

Seems to me this means only Roy failed the final exam and the quantifier 'all' makes the sentence false.

Please help me make sense of the term exception. Thanks for your help.

---

No, there is no contradiction. The kind of rules you are concerned with deal with sets of things, and usually these sets are subsets of larger sets. If you rephrase 'All students passed the final exam except Roy' as 'All students except Roy passed the final exam' you see that the quantifier refers to a subset of all the students, namely all except Roy. You would have a contradiction if you claimed that all students passed the exam and, also, Roy, one of them, did not pass it; but that is quite a different meaning. The same applies to your ice cream example.
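
Put a little more formally (this is only a rough sketch of my own, writing S for the set of students and P(x) for 'x passed'), the consistent reading restricts the quantifier to the subset that excludes Roy:

\forall x \in S \setminus \{\mathrm{Roy}\}\; P(x), \qquad \text{together with} \qquad \neg P(\mathrm{Roy})

whereas the contradictory reading you have in mind would be \forall x \in S\; P(x) together with \neg P(\mathrm{Roy}). The first, not the second, is what the English sentence says.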

Helier Robinson

back

(68) Jane asked:

'I do not need an umbrella unless it is raining. It is not raining. Therefore, I do not need my umbrella.'

What are the sufficient and necessary conditions of this argument?

---

The key here is to understand that 'unless' should be read as 'if... not...' So the argument becomes: If it is not raining then I do not need an umbrella; it is not raining; therefore I do not need an umbrella — a valid form of argument called modus ponens, or affirmation of the antecedent. Sufficient and necessary conditions here apply to the conditional, 'If it is not raining then I do not need an umbrella'. The truth of the antecedent of a conditional is a sufficient condition for the truth of the consequent, and the truth of the consequent is a necessary condition for the truth of the antecedent.
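
Schematically (the letters are just labels I am introducing here, with R for 'it is raining' and U for 'I need my umbrella'), the argument runs:

\neg R \to \neg U, \qquad \neg R, \qquad \therefore\ \neg U

So the truth of \neg R (the antecedent) is a sufficient condition for the truth of \neg U (the consequent), and the truth of \neg U is a necessary condition for the truth of \neg R.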

Helier Robinson

back

(69) Roy asked:

I have trouble understanding what people mean when they use a phrase with the word exception. To me it sounds like a contradiction. So my question has two parts:

A) Is using the term exception ever legitimate?

B) Does the term 'except' usually contradict the general rule that comes before it?

For example, All ice cream should be taxed, except vanilla.

This seems that the quantifier 'all' is false if a member is excluded.

For example, All students passed the final exam except Roy.

Seems to me this means only Roy failed the final exam and the quantifier 'all' makes the sentence false.

Please help me make sense of the term exception. Thanks for your help.

---

Roy: (A) Yes, of course! The sentences you list are perfectly good sentences with a determinate meaning — for example, as you say, 'All students passed the final exam except Roy' means 'only Roy failed the final exam'.

(B) I see your concern, though. The trick is to *not* think of the 'except' as contradicting the 'all' part. It would be easy to show how to do this in formal logic, but I'll have a go in something not a million miles away from English ...

So, here's one way you could read 'All students passed the final exam except Roy':

'ALL the students passed the exam; AND one student (namely Roy) didn't pass the exam.'

As you point out, that is internally inconsistent — if the second conjunct ('Roy didn't pass the exam') is true, that makes the first conjunct ('All the students passed') false. Problem.

Solution: don't read the sentence in that way. Instead, read it as saying:

'Roy didn't pass the exam; AND every student WHO ISN'T ROY passed the exam.'

That's not internally inconsistent and means exactly what you say the original sentence means: Result!
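
For anyone who does want the formal version, here is a rough rendering in first-order notation; the gloss is mine, with S(x) for 'x is a student' and P(x) for 'x passed':

\neg P(\mathrm{Roy}) \wedge \forall x\,((S(x) \wedge x \neq \mathrm{Roy}) \to P(x))

which is a perfectly consistent conjunction.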

Helen Beebee
Director
British Philosophical Association

back

(70) Callum asked:

Recently some thoughts came into my head that worried me greatly. The widely accepted (at least I think it is but am hoping isn't) view of Determinism has worried me purely because it means everything I do was always going to happen (taking away value from my achievements) and therefore makes criminals not bad (not that I'm thinking of being a criminal).

Are there any credible philosophers or experiments that resist determinism or (if my words are ambiguous) believe we have choice to do otherwise e.g. a person walking in a shop has the possibility of stealing or not stealing. As this would put me at ease.

---

Well, Callum, that's the traditional Problem of Free Will. Philosophers still disagree amongst themselves about whether it can be solved, and if so, how exactly. In particular:

(a) Yes, there are plenty of credible philosophers who resist determinism and believe we have a choice to do otherwise: they're known as libertarians (NB not the same as being a 'libertarian' in the political sense). Personally I'm not a big fan, however, and prefer:

(b) Compatibilism. Compatibilists say that free will and determinism are compatible. (Some of them think determinism is in fact true; some of them think that determinism *has* to be true in order for us to have free will; and some of them think that it doesn't really matter whether determinism is true or not.) Famous compatibilists include David Hume, but a major contemporary proponent is Daniel Dennett; I'd recommend his book 'Elbow Room' (Bradford Books, 1984).

Helen Beebee
Director
British Philosophical Association

back

(71) Michelle asked:

Which is better, philosophy or mythology, or are both essential in our lives?

---

Michelle, there is no connection between philosophy and mythology. They don't deal with the same things. Philosophy is not essential to all people's lives, although it is essential to my life.

Mythology is a form of storytelling, and people have always told stories. It makes no sense to ask if storytelling is essential to our lives, because storytelling, like music and mathematics, is just something that humans do. If we didn't tell stories then we would be very different animals, in a way that is not easy to imagine.

However, none of this has anything to do with philosophy; so we have philosophy and we have mythology. In the same way we have poetry and we have washing machines, but no one ever asks whether poetry is better than a washing machine.

Shaun Williamson

back

(72) Callum asked:

Recently some thoughts came into my head that worried me greatly. The widely accepted (at least I think it is but am hoping isn't) view of Determinism has worried me purely because it means everything I do was always going to happen (taking away value from my achievements) and therefore makes criminals not bad (not that I'm thinking of being a criminal).

Are there any credible philosophers or experiments that resist determinism or (if my words are ambiguous) believe we have choice to do otherwise e.g. a person walking in a shop has the possibility of stealing or not stealing. As this would put me at ease.

---

Callum, thank you for your question; you are one of the few people who have posted here who understand what some of the implications of determinism are.

So welcome to the wonderful world of philosophical problems. Philosophy is not a science so philosophical questions cannot be answered by experiments. Philosophers don't do experiments they just think about things.

Philosophers disagree about free will vs. determinism, so it doesn't matter who is credible and who isn't. I have never known someone who claimed to be a determinist and who acted in their everyday life as though they really believed in determinism. In real life everyone acts as though they are free to choose. Of course humans are physical beings who are subject to the same laws of physics as any stone on the ground.

If you want to find the answer to your question, you would have to study philosophy but you should only do that if you feel you really need to find the answer. I am not a determinist but I don't believe in free will either.

Shaun Williamson

back

(73) Callum asked:

Recently some thoughts came into my head that worried me greatly. The widely accepted (at least I think it is but am hoping isn't) view of Determinism has worried me purely because it means everything I do was always going to happen (taking away value from my achievements) and therefore makes criminals not bad (not that I'm thinking of being a criminal).

Are there any credible philosophers or experiments that resist determinism or (if my words are ambiguous) believe we have choice to do otherwise e.g. a person walking in a shop has the possibility of stealing or not stealing. As this would put me at ease.

Stephanie asked:

What is the primary thing that separates humans from animals? I was taught it was the ability to reason.

---

Because the following response covers issues raised in both of the above questions, I have decided to tackle them both in one reply.

Do we live in a world in which, for us, our lives are mapped out for us in advance? Are all our actions determined by factors that are not, or never can be, of our own making? If so, what part does the faculty of reason play in such a world? And finally, is it really reason that separates humans from other animals, or is there some other faculty that marks us out as different from other creatures? These are just some of the issues that this response to the two above questions will attempt to address.

The central thesis of determinism is that everything that happens is fully determined by things that have preceded it. For such a thesis to be sustainable, each and every event must have a cause that would ensure its occurrence. Although some philosophers accept the notion of a 'probabilistic' cause (one on which it is merely probable that an effect will follow from a certain occurrence), a theory that argues that some events merely have a probabilistic cause does not qualify as a valid definition of determinism.

The issue of determinism is thus a moot one, and one that continues to cause much discussion amongst philosophers even today. The success of scientific theories, especially Newton's theory of gravity, led many, especially the Marquis de Laplace, to believe that everything in the universe, including human behaviour, was fully determined by what had gone before. The Newtonian universe, then, was completely deterministic with no room for chance. For Newtonians, probability was the consequence of ignorance in a deterministic universe where everything unfolded according to the laws of nature, and/or of God (see Quantum, by Manjit Kumar, p.218). Whilst there were those who resisted this approach, in general it remained the standard assumption of science until the early part of the 20th century. One of the earliest indications that this approach would have to be abandoned came with the introduction of quantum mechanics by the German physicist Werner Heisenberg. At the heart of this approach was the 'Uncertainty Principle', in connection with which Heisenberg showed that the electron is a particle, but a particle that can also be described in terms of waves. The uncertainty around which the theory is built is that, whilst we can know the path an electron takes as it moves through space, or we can know where it is at a given moment, we cannot know both (see A Short History of Nearly Everything, by Bill Bryson, p.158). In essence what this means is that, in practice, we can never predict ('determine') where the electron will be at any given moment; we can only say where it probably will be.
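
For reference, the textbook statement of Heisenberg's principle (this is the standard formulation, not anything specific to Bryson's account) is

\Delta x \, \Delta p \geq \frac{\hbar}{2}

where \Delta x is the uncertainty in the particle's position, \Delta p the uncertainty in its momentum, and \hbar the reduced Planck constant: the more precisely the one is fixed, the less precisely the other can be known.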

Stephen Hawking tells us that the uncertainty principle had profound implications for the way we see our world. To begin with, it marked the end of Laplace's concept of a deterministic universe, for we cannot predict the future with any certainty if we cannot measure the present state of the universe to any precise degree (see A Brief History of Time, p.57). In short, quantum physics introduces an unavoidable element of unpredictability or randomness into science, into the workings of the universe, and into human behaviour. According to Hawking, each event lends itself to up to 30 probabilities (ibid).

Now, while there is no doubt that reason is a faculty that plays its part in all of this, it can be argued that we cannot say for definite that reason is unique to humans alone, for other animals also display, albeit to a lesser degree, the ability to reason things out for themselves — as any dog lover will testify. If not specifically reason alone, then, what is it that separates us humans from our animal cousins? For the Italian philosopher Giambattista Vico it is the faculties of imagination and memory that lift us from the ordinary to the extraordinary. Indeed, so strongly does Vico feel on this issue that he argues that, since all ideas, concepts, ideologies and worldviews have their genesis in human imagination, imagination, together with memory, is a faculty that should be developed in the young before they are exposed to the discipline of philosophy. For Vico, to educate adolescents in philosophy before they have been grounded in the faculties of imagination and memory is to engender in them a sense of oddity and arrogance that manifests itself in adulthood and leaves them unfit for social intercourse (see On the Study Methods of Our Time, p.13).

There are two related points with which I would like to finish. The first is that while reason may play its part in allowing you to fulfil your dream, the source or genesis of the dream is in your imagination. The second is that it should be kept in mind that the only time there is is the present. The past is gone, the future is yet to come; the present is all there ever is. As St Augustine says, the past is really thinking, in the present, of things that have already happened, and the future is the expectation, in the present, of things that may happen. And it is in the present that our choices are made. Notwithstanding what has gone before, we have within our power the capability of changing that which heretofore has appeared to be our destiny. By drawing on our imagination, our memory, and our power of reason, we too, like the electron, can make that quantum leap that could not have been predicted or predetermined.

Tony Fahey

back

(74) Roy asked:

I have trouble understanding what people mean when they use a phrase with the word exception. To me it sounds like a contradiction. So my question has two parts:

A) Is using the term exception ever legitimate?

B) Does the term 'except' usually contradict the general rule that comes before it?

For example, All ice cream should be taxed, except vanilla.

This seems that the quantifier 'all' is false if a member is excluded.

For example, All students passed the final exam except Roy.

Seems to me this means only Roy failed the final exam and the quantifier 'all' makes the sentence false.

Please help me make sense of the term exception. Thanks for your help.

---

Language is a complex and flexible thing. 'Except' doesn't contradict what comes before it; it simply modifies it. Language is different from logic. Logic abstracts the features of language that are relevant to validity. So in logic 'a or b' is true if a is true or b is true or both a and b are true. In real life, when we say a or b we generally mean a or b but not both.

In the same way, when we say 'all men are mortal except Superman' then logically this has to be translated as 'some men are mortal' or 'not all men are mortal'. Why do we use constructions in ordinary language like 'all... except'? Well, they have a certain effect, just as words in French are masculine or feminine. Human language isn't logic, although it has logical features. Don't confuse the everyday 'all' with the logical 'all'.

Shaun Williamson

back

(75) Dawn asked:

Who described reality in terms of 'Monads'?

---

The philosopher who described reality in terms of 'Monads' was Gottfried Wilhelm von Leibniz [1646-1716].

Following the Aristotelian account of ontos (what is), substances were posited as being the constituent parts of reality, of all beings, animate and inanimate, in existence. In order to address the problematic of substances, their natures and associated problems such as how they maintain their identity over time following interactions with other substances, Leibniz argued that each substance was unique and created by God. Their identities were contained within themselves, impervious to external influences. That is, whatever they did was predetermined by the creator God. As such, the problem of how substances' identity and nature were identifiable and knowable was answered.

The nature of a substance did not arise from one substance (or monad) interacting with others, thereby bringing about accidental change. Activity arose from the predetermined nature [haecceitas] or 'thisness' of the monad. In other words, everything you or I as monads did, do or will do is predetermined by our nature as determined by God. There is no external influence upon one monad from another. Think of a clockwork device. It acts according to the unwinding of a mechanism. Similarly, monads 'unwind' due to their predetermined nature as created by God. As such, monads were termed 'windowless': they act from their internal haecceity alone and not from any external influence.

See Discourse on Metaphysics [1686] and The Monadology [1714].

Martin Jenkins

back

(76) John asked:

Is there a branch of knowledge which studies the repeated but seemingly unrelated appearance of the same text, thought, observation?

For example, an NPR show the other morning included a discussion of whether or not one can walk in the same river twice. The very next day, the same question was raised in a novel I was reading.

Another recent example was reading a line from Whitman of which I had not previously been aware and then hearing the same line quoted on a television show a couple of days later.

These pairings have been quite frequent and I wondered if there are studies of the phenomenon.

---

Arthur Koestler, The Roots of Coincidence.

Geoffrey Klempner

back

(77) Ronny asked:

Human Test Tubes?

If this website is anything to go by depression appears to influence a lot of people into looking to philosophy to provide some answers to their issues with life. It appears I am one of those people although I am not naive enough to expect a definitive answer to any of my questions. I simply feel the need to express a thought that has dogged me since being offered medication for my depression.

My depression was explained to me, when initially diagnosed, as being due to low levels of certain chemicals within my body and medication would go some way to help correct this imbalance. Coming from a medical background up to graduate level, I was well aware of the complexities of human physiology. However, having had depression explained to me in such a manner I began to question whether everything we are as human beings is not a result of a series of complex chemical reactions? Light passes into my eye where a chemical reaction converts this to a signal passed to my brain where further chemical reactions occur and I am present with an image. Sometimes the images we perceive can produce what we describe as an 'emotion'. Could emotions therefore be seen as the end point of a chemical cascade? Are 'feelings' also end points of chemical processes? I hear a sound which is converted, via a mechanism within the ear, to a chemical reaction to produce electrical signals within the brain. Further chemical reactions branch away from this and the end point can be a stimulation of further physiology and a 'feeling' is produced. Does repetition reinforce a certain chemical pathway so that we develop the same 'feeling' or 'emotion' to the same stimulus? Is that how we come to 'like' or 'dislike' something?

These questions made me wonder whether it is ever truly possible to therefore control 'feelings' or 'emotions'? Once that chemical cascade starts can we influence it? Then again, while writing this I am having 'thoughts' that I feel I am controlling and if I expand my premise to the process of 'thinking' as being a chemical process occurring within the brain, am I not influencing these chemical reactions?

Once again, I don't feel naive enough to think I am the only person ever to have considered whether the body is not one large test tube full of complex chemical reactions with mind numbing interactions that will never be truly understood.

However, what do we become if we view ourselves in this way? Is our feeling of self or the belief that we make our own decisions in the way we interact with the world the result of a series of chemical processes?

---

The first thing I want to say to Ronny is that I take the idea that depression and philosophy go together very seriously indeed.

I remember being told, many years ago, that if I continued with philosophy I would end up 'looking for the shortest rope'. That was by my uncle Jack. At the time, I thought Jack was probably wise enough to know that his own mental constitution wasn't suited to pondering the meaning of life. I can see his worried face even now. But I was different. I could handle it. I'd peeked into the abyss and it hadn't fazed me.

Then I recall that two of the lecturers who taught me when I was an undergraduate subsequently committed suicide. Maybe they thought they could handle seeing into the abyss, but they were wrong. — But that's just idle speculation, innit?

Actually, I rather like looking into the abyss. When I cast my eyes around this dingy world, the tawdry sideshows that human beings call 'culture', the abyss is the only thing with any real depth. Anxiety is the only real human emotion. (I think Freud said that.) But philosophy isn't just about plumbing the dizzy depths. It's about remembering and focusing. About being present. It can sometimes be a pleasurable activity (especially if you have a taste for Schadenfreude) but it's not something you do for pleasure.

So is Ronny right, that 'depression appears to influence a lot of people into looking to philosophy to provide some answers to their issues with life'? Or did my Uncle Jack see deeper into the truth about these things? — And what the hell has any of this got to do with taking pills?

My chemical of choice is alcohol. Problem is, for medical reasons (chronic sarcoidosis, or maybe Sjogren's syndrome — the doctors don't seem to know which) I can't drink a single drop. I get a super-hangover that lasts for days. You know that feeling, when you just need a drink? I'm talking about someone who isn't in any way addicted to alcohol. I'd settle for one bottle of beer a week. I can't even have that without causing myself a lot more pain than pleasure.

At least I still have my coffee. I've been told it's bad for my condition, but I'm not aware of any particularly adverse effects. It helps me concentrate. (What do they know, anyway?)

They also say you shouldn't drink alcohol if you have a tendency towards depression. At any rate, you shouldn't drink alone. But social drinking is the best cure I can think of. If alcohol had never existed, the history of Western Philosophy would have been entirely different. Or maybe it wouldn't have happened at all. Read Plato's Symposium, if you don't believe me.

Getting back to pills. Ever since the first 'magic bullet' (Salvarsan, Dr Ehrlich's 'miraculous' cure for syphilis), an increasing part of the chemicals industry has been dedicated to discovering new, ever more potent formulations to add to the human test tube (nice image). Psychiatric disorders are exactly on a par with physical illnesses and disorders from the empirical standpoint. If it works with sufficiently benign side effects, that's all you want to know.

From this perspective, it's really a red herring to consider whether depressive people are that way because of a chemical imbalance. Even if their depression wasn't caused by a chemical imbalance (we'll get to what 'cause' means in a minute) a chemical cure can still work just as well. To repeat: we're only concerned with 'what works'.

I'm a good materialist, that is to say, I accept the minimal commitment for being a materialist, that mental events are supervenient on physical events. Anything else is up for grabs (a huge topic in the philosophy of mind which I don't want to get into now). Any thought, any feeling, any emotion is reflected in chemical or electro-chemical changes in my body. The direction of causation is the hard bit to figure out, but Ronny has half-seen this ('if I expand my premise to the process of 'thinking' as being a chemical process occurring within the brain, am I not influencing these chemical reactions?').

The bottom line is that you can interact with someone as a person, that means communicating, one person to another (Freud's 'talking cure'); or you can interact with them as a test tube. And that works too, sometimes. Some would argue, it works a lot better, certainly a lot faster.

This is all very circuitous (I'm sorry for that) but you'll see where this is going in a minute.

The other week, one of my old Mac laptops (a Powerbook 1400) died. Instead of starting up in the normal way with the 'happy Mac' logo, I got a picture of a floppy disk with a flashing question mark, then a black screen. I knew the hard drive was ancient and had probably had it. But I wasn't giving up. So I gave the laptop a sharp slap just to the left of the touchpad, where the hard drive is located. This time, the laptop started up, and has been working fine ever since.

We do this with people too. Sometimes, a sharp slap is just what a person needs. But doctors aren't allowed to do this, so they give a chemical slap instead.

What I'm working up to say is that this whole way of thinking about people and their mental trials and tribulations is totally wrong. To see that it is wrong, you have to get away from boneheaded empiricism and the idea that all that matters is that you 'feel OK' again. Freud understood. He saw his aim as transforming distressing psychological illness into 'generalized unhappiness'. When you do that, you have become free, your actions are your own rather than merely effects of your neurosis.

Freud said that in order to write, he needed to be in a mood of mild depression. The fact is, all genuinely creative work is painful. Gaiety and joy are wonderful things, but they're not ultimately real. At best, they are refreshing interludes that help strengthen our resolve, and they come as gifts. There's nothing more shallow or annoying than permanently joyful people.

So get away from the idea that all you need is to 'feel better'. There are other things you need, perhaps need more. (Perhaps philosophy is one of those things; or maybe psychotherapy — at least you'd have one real human relationship.) Accept the pain, adapt yourself to it, work with it. If you can find some depth in your life, whether from philosophy or some other activity, that is of far greater value.

Geoffrey Klempner

back

(78) Jeremiah asked:

Good day,

How will I know if I am born on Earth a philosopher?

---

I am going to provide a quote from the movie 'Sister Act II'. The line is addressed by Sister Mary Clarence (Whoopi Goldberg) to a young student who wonders whether she ought to pursue her love of singing, or instead follow her mother's dictates and apply her nose to the grindstone.

'If you wake up every morning and the first thing you think about doing is singing, then you're supposed to be a singer, girl.'

I think you can make the translation to your own question.

Stuart Burns

back

(79) Dominic asked:

I'm just going to make this short and sweet so I won't confuse myself typing it.

Let's say there is a infinitely large universe, and inside it is an infinite number of other universes that are all the same in every way, but there's one that's somehow different.

Are all the universes the same, as in, say you are ' a infinite amount of light years' from the different one, in a infinitely large sea of universes that are all thee same in every way, or are any different, as in the other universe in the sea of universes that are all the same?

---

I am afraid that confusing yourself is the least of your worries. As it stands, your question is quite incoherent. Let me see if I can translate your question into something more comprehensible by using a couple of 'special' words. I think the confusion in your question is being generated by the equivocation that you introduce in the meaning of 'universe'. So let's create two new words: 'super-verse' is 'all that there is'; while 'mini-verse' is the totality of existence that is causally connected to any given point. And let's call that particular different mini-verse 'Q', just to keep things clear. So, let's try this translation —

'Let's say there is an infinitely large super-verse, and inside it is an infinite number of mini-verses that are all the same in every way, but there's one that's somehow different and we'll call that one Q.

Are (a) all the mini-verses the same, as in, say you are 'an infinite amount of light years' from Q in an infinitely large sea of other mini-verses that are all the same in every way; or (b) are any of those mini-verses different, as Q is different in the sea of mini-verses that are all the same?'

Now if this translation is correct (and I somehow doubt it), then surely you have answered your own question. It was part of the stipulation in paragraph one that all of the mini-verses with the exception of Q are all the same in every way. So obviously, the answer to your question is (a).

For the life of me, however, I cannot figure out what other interpretation to apply to your question. So if you would like to resubmit your question (perhaps employing the new labels I have applied for clarification), I would be happy to take another stab at the answer.

Stuart Burns

back

(80) Brian asked:

Can someone recommend a few good books for an educated layperson to read in order to gain some insight into the question/problem of free will? Thanks!

---

I am sure that other responders will provide you a list of their own favourites. My preference would be 'Consciousness Explained' and 'Elbow Room: The Varieties of Free Will Worth Having' — both by Daniel C. Dennett. Although a professional philosopher addressing a philosophically complex topic, he writes in a very readable style specifically targeted for the layperson, rather than other philosophical professionals.

If you want a broader introduction to the topic, I would recommend the 'Free Will' entry of the Stanford Encyclopedia of Philosophy — http://plato.stanford.edu/entries/freewill/. It also has an excellent bibliography at the end if you should be interested enough to want to dip into the philosophical literature on the subject.

Stuart Burns

back

(81) Eddie asked:

I am inspired by 47/98, i.e. the question/answer of Johnny/Shaun about the purpose of love. Based on Shaun's answer, 3 further questions soon emerge in my head:

(A) Can animals (other than human), whom most people consider having lives but relatively fewer people consider having souls as well, have love? If yes, is it the survival of the human race or the survival of their own races the purpose of their love for?

(B) Can plants, whom most people consider having lives but no souls, have love? If yes, is it the survival of the human race or the survival of their own races the purpose of their love for?

(C) Can robots, whom most people consider having no lives and souls at all, have love? If yes, is it the survival of the human race or the survival of robots the purpose of their love for? (The latter implies that for their survival robots do have lives in the first place.)

In raising the above questions, I have assumed by intuition that there should be certain relationships among life, soul and love. Maybe this is a more fundamental question to ask: Are any of these relationships, no matter what they exactly are, necessary?

---

In order to answer your questions, we need a better understanding of just what is meant by 'love'. On the one hand, we might mean a particular subjective emotional response to someone (something). On the other hand, we might mean a particular suite of behaviours exhibited by someone 'in love' with someone (something). (There are, of course, other reasonable candidates for the meaning of 'love', but these two will do for demonstration purposes.)

If we mean the former, then clearly animals, plants and robots do not have love. Because whatever they do have, they do not have the particular subjective emotional reactions that humans have. But if we mean the latter, then animals and robots can have love because they can exhibit behaviours that we would recognize as being sufficiently similar to the particular suite of behaviours that we defined as love. On the basis of behaviour, plants could not have love, because plants cannot exhibit the necessary behaviours.

In all three cases, the purpose of whatever reproductive behaviours are being displayed (and hence the purpose of 'love', if it exists) is the survival of the species behaving. (Or more precisely, the survival and flourishing of the genes of the individual organism doing the reproducing. 'Survival of the species' is just convenient short-hand for evolutionary genetics.)

As to your final question about the hypothetical relationships between life, soul, and love, a more comprehensive answer would require that you more clearly define the terms you are using, and what kind of relationship you are proposing. But in very brief terms, if I understand by these terms 'life', 'mind' and 'reproductive behaviour', then I would suggest that the existence of neither life nor mind is necessary, but given life, reproductive behaviour is necessary.

Stuart Burns

back

(82) Jeremiah asked:

Good day,

How will I know if I am born on Earth a philosopher?

---

You will feel compelled to study the works of other philosophers until you feel you completely understand them. This will be hard work and it will make your brain hurt but you won't give up. You will want to find answers to all the philosophical problems that have no answers.

Shaun Williamson

back

(83) Derrick asked:

With the rapid implementation of advanced automation, robotics and soon nanotechnologies will there still be a place for the human masses?

We have long since passed the point of sustainability, we pollute our ever shrinking supply of fresh water, deforest at accelerating rates and erode our agricultural land and every human disaster is serviced by emergency aid and the result is further breeding to add to the rescue mission next time.

For how long will the haves continue to support the have-nots? Will there still be a place for humanity's masses in the coming ages, or are we in the process of eliminating ourselves?

---

It's unusual for me to be answering another question so quickly after posting a tentative answer (on human test tubes), but Ronny's question on Monday has put me in a mood which I'm having some difficulty shaking off.

In my answer to Ronny I said that I 'rather like looking into the abyss'. That is such a gob-smacking thing to say, let alone mean. Did I mean it? Or was I just showing off? I feel as if I meant it. My mood is — quite buoyant.

How much can I do without? Work is piling up on my desk today, but I don't sense any strong ethical impulse to get on with it. Diogenes' question (remember, Diogenes who lived in a barrel?) haunts me. I don't need any of this.

OK, well that's enough about me. What about the human race? What do we need? How much can we do without? Why do we need the masses?

Obviously, the world economy still requires a massive resource of cheap labour, but (as Marx foresaw) advances in technology will eventually make manual labour redundant. Imagine a workforce of obedient robots who need nothing apart from a few drops of oil and a regular recharge. Well, that's pretty obvious.

What are the 'masses'? Jose Ortega y Gasset gives a pretty potent definition in his book The Revolt of the Masses (1929). The main point to note is that one shouldn't make the mistake of identifying the masses with the 'have nots'. Ortega's typical 'mass man' is the self-satisfied bourgeois.

Get rid of them all, is the answer. Get rid of the have nots, for sure. But also get rid of the bourgeoisie. Who else? Anyone with an IQ under (hmmm) 135. That's a bit generous, I know; not enough to get into Mensa, but that's OK because we're eliminating Mensa members anyway (too smug and self-satisfied by half).

To be serious for one moment (as I'm trying to be, because it's a serious question): Here's a useful thought experiment. Imagine that human beings are the only intelligent life in the universe. I know that we're repeatedly told that the probability of alien intelligence is overwhelming — despite the complete lack of any concrete evidence — but it isn't a fact, it isn't something we know.

So, imagine we're all alone. Does that make you feel more important? Does it make you any less willing to let a few billions die? Not me. What about the survival of the human race? Surely, one would care about that. But why? Survive, for what purpose?

I don't know. That's the honest truth. I just don't know.

I can't think in such general terms. When I try, I lose all my bearings. There are persons whose survival, and happiness, I very much care about apart from my own survival and well being. Instead of starting at the 'big end' (the entire human race) and eliminating the ones whose survival doesn't seem to matter, maybe the thing to do is start at the other end, the small end, by writing a list of all those I do care about, all those who I would allow into the Ark, so to speak.

As each human being comes into focus, looks me in the eye, I feel as if I would have no choice but to let them in.

The solution to 'the world's problems' has been a topic of debate for a long while, certainly since Malthus wrote his Essay on the Principle of Population. Undoubtedly, technology must play an important part. But, as Derrick has so clearly seen, if we rely only on science and technology then there may very well come a time when human beings, or at any rate a large proportion of the human race, become simply redundant.

This isn't the place for a mealy-mouthed lecture on ethics. I parade my moral virtue for no man. So I will simply say this. A heap of sand is made of individual grains. The masses are made of individual persons, and each person has a face. Whatever your ethical or political views may be, that is one fact which you should not allow yourself to forget.

Geoffrey Klempner

Ronny, there are several questions here, some of them very complex. You are right in thinking that any sort of emotional or mental disturbance can turn people towards philosophy, often quite inappropriately, since they imagine that philosophy answers questions about the meaning of life. Philosophy does deal with questions about the meaning of life, but not in the way that people might imagine.

So just as someone with a broken leg would do better by seeking medical attention rather than seeking out a philosopher, so people with emotional or mental problems, who may feel that life is meaningless, should not seek out a philosopher to convince them that this isn't true.

In general philosophers who think that life is meaningless don't feel that life is meaningless. These philosophers may be quite cheerful and happy. On the other hand people suffering from depression may feel that life is meaningless although they may not have real reasons to think that this is true.

Then you wonder if all our behaviour can be reduced to brain chemistry and brain processes. Certainly it is true that brain chemistry is involved in everything we do, but in general things cannot be reduced to brain chemistry. To give you a crude example of this, consider the sentence 'He stole the money from the cash box'. This could never be reduced to just his brain chemistry, because 'stealing' is a social construct which presupposes a society with property rights and property laws etc., and these complexities cannot be reduced to anyone's brain chemistry.

However, this also leads us to a real philosophical question of determinism vs free will, but this is too complex a question to answer in an email. What is true is that we recognise that people suffering from mental disturbance may not be as able to control their thoughts and feelings as other people. However, where we draw these lines is a difficult decision, and the fact that we sometimes excuse people from responsibility for their thoughts and feelings doesn't imply that we must always excuse everyone.

You are right to think that the body is one large test tube. Humans are physical beings completely made of chemicals. However the implications of that are not as clear as you might think. So for example we could say 'Chemicals are completely lacking in intelligence therefore humans must be completely lacking in intelligence' but that isn't true.

Shaun Williamson

Part of your problem is the old mind-body problem, and part of it is the error of reductionism.

The mind-body problem goes back at least as far as Descartes, who was both a devout Catholic and very keen on the new science arising from Copernicus and Galileo. The church was very opposed to the new science, and I believe the reason that Descartes divided reality into two substances, which he called thought and extension (mind and matter today), was that thought belonged to religion and extension to science; since thought and extension could not interact, there could then be no quarrel between religion and science. The problem that then arose was how mind and body could interact, as with the mind willing muscles to move and bodily injury causing pain. This is the mind-body problem.

The error of reductionism is a 'nothing but' error: the error of explaining the properties of higher-level systems exclusively in terms of lower-level systems. This is an error because systems structured into higher-level systems have emergent properties that cannot be explained by their subsystems alone. Two major examples are the emergence of life out of chemical systems (molecules) and the emergence of mind out of brain. To claim that life is nothing but chemistry, or that mind is nothing but brain activity, is to commit this error. Another way of looking at this is in terms of the whole being greater than the sum of its parts: emergent properties are the excess of the whole over the sum of the parts, and it is an error to try to explain the excess exclusively in terms of those parts.

It is a fact that there can be causal interactions between system levels. For example, poisons can destroy life, pharmaceuticals can enhance it, and living organisms can make chemical changes to what they breathe and eat; this is two-way causation between living organisms and chemistry. And this brings us to your problem. Drugs can reduce depression; drugs are chemicals; depression is mental; so chemicals can causally influence mind. But this does not mean that depression or its absence is nothing but these chemicals. And equally so for most, perhaps all, other mental phenomena. So is our feeling of self, or the belief that we make our own decisions in the way we interact with the world, the result of a series of chemical processes? No, it is partly so, but not wholly so.

Helier Robinson

Ronny, you've stumbled across a version of the 'problem of free will' there. As you say, you're not the only person to have considered that question; philosophers have been mulling it over for a very long time, and it's safe to say that no consensus has yet been reached!

Here are three things you might think (though there are other options available). First, there are some philosophers (e.g. Galen Strawson) — sometimes called 'illusionists' — who think, for roughly the reasons you give, that we cannot have free will. Having free will would require that we are somehow 'originating causes' of our actions (including mental actions like deciding); our decisions etc. would somehow have to come out of nowhere and not be caused by chemical reactions or neurons firing or whatever.

Second, there are some philosophers (in particular P. F. Strawson) who think that, while it may indeed be true in some sense that our decisions, emotions, etc. are 'just' a matter of chemical reactions etc., this is not a way of conceiving of ourselves that we should, or even can, adopt. In order to make sense of our lives — and in particular, our relationships with other people and our practices of praising and blaming people for what they do — we cannot think of our actions as 'just' a matter of chemical reactions or whatever. So in a sense Strawson agrees with the illusionists; but while the illusionists think that, since clearly we *are* in fact just made of physical stuff and what we do is a matter of physical processes (e.g. chemical reactions), freedom of the will is an illusion, Strawson thinks that *no* argument could show that our self-conception as moral agents is mistaken. Even if we were somehow psychologically capable of giving up that self-conception, we simply wouldn't be able to understand ourselves and our fellow human beings if we did.

Third, there are some philosophers (e.g. Daniel Dennett) who think that there just isn't really a problem here at all. We just have different ways of describing ourselves, and they are perfectly compatible with one another. So, for example, think of a computer chess programme. On one level, when your computer 'plays chess' with you, there are just a bunch of electronic signals zipping around the computer, which result in different patterns of pixels on the screen. But if we just describe the programme in this way, we'll be missing out on important facts: that the computer has just taken your queen, say. And that the computer has just taken your queen is just as much a fact about the world as are all the complicated facts about electronic signals, circuitry, etc., even though, in some sense, the former is nothing more than the latter — there's nothing mysterious going on here. Similarly, even if, say, my decisions and emotional reactions are, on one level, just a bunch of chemical reactions, that doesn't make it illegitimate to say that I really am experiencing emotions or making decisions. Nor does it make it illegitimate to say that I have *control* over those decisions or emotions, any more than (to use an example of Dennett's) it is illegitimate to say that the thermostat controls the temperature of your house.

So in answer to your questions:

'Is our feeling of self or the belief that we make our own decisions in the way we interact with the world the result of a series of chemical processes?' Yes, probably.

'What do we become if we view ourselves in this way?' Well, the issue is whether, given the answer to the first question, we are obliged to view ourselves in this way at the expense of another way (i.e. as moral and rational beings who make decisions, experience emotions, etc.). As will be clear by now, philosophers disagree on that question!

Helen Beebee
Director
British Philosophical Association


(84) Dominic asked:

I'm just going to make this short and sweet so I won't confuse myself typing it.

Let's say there is an infinitely large universe, and inside it is an infinite number of other universes that are all the same in every way, but there's one that's somehow different.

Are all the universes the same, as in, say you are 'an infinite amount of light years' from the different one, in an infinitely large sea of universes that are all the same in every way, or are any different, as in the other universe in the sea of universes that are all the same?

---

Well Dominic, it may be short but I'm not so sure that it's sweet. You start by saying 'let's say there is an infinitely large universe'. However, I can only say let's not. Science teaches us that the universe is finite but expanding. Infinity is a useful concept in mathematics, but there are no real collections of an infinite number of physical things. Infinities will always remain theoretical: they do not exist and cannot exist physically.

Shaun Williamson


(85) Eddie asked:

I am inspired by 47/98, i.e. the question/answer of Johnny/Shaun about the purpose of love. Based on Shaun's answer, three further questions soon emerged in my head:

(A) Can animals (other than humans), which most people consider to have lives but relatively few consider to have souls as well, feel love? If so, is the purpose of their love the survival of the human race or the survival of their own species?

(B) Can plants, which most people consider to have lives but no souls, feel love? If so, is the purpose of their love the survival of the human race or the survival of their own species?

(C) Can robots, which most people consider to have neither lives nor souls, feel love? If so, is the purpose of their love the survival of the human race or the survival of robots? (The latter implies that robots have lives in the first place, since survival presupposes life.)

In raising the above questions, I have assumed by intuition that there should be certain relationships among life, soul and love. Maybe this is a more fundamental question to ask: are any of these relationships, whatever exactly they are, necessary?

---

I don't know anything about the soul, so I will have to leave that out of my answer. I was watching a television programme tonight about the devotion of an elephant grandmother to her daughters and grandchildren, so I would have to say that other animals can feel love and that their love is expressed in their care for other members of their own families. However, I don't know how far this can be compared to human love or how far it extends in the animal kingdom.

I can't see any way that you can attribute love or any other feeling to plants. Plants just don't have any behaviour that could be interpreted as consciousness or an expression of emotion or love.

We are so far away from the sort of robots that might exhibit feelings or love that the question is not even worth considering at present.

I have no idea what a soul is or how we could establish that it exists.

Shaun Williamson


(86) Dominic asked:

I'm just going to make this short and sweet so I won't confuse myself typing it.

Let's say there is an infinitely large universe, and inside it is an infinite number of other universes that are all the same in every way, but there's one that's somehow different.

Are all the universes the same, as in, say you are 'an infinite amount of light years' from the different one, in an infinitely large sea of universes that are all the same in every way, or are any different, as in the other universe in the sea of universes that are all the same?

---

I can only offer you my own opinion here. I believe that infinity is no more than a word: it has no reference, and there are no actual infinities, any more than there are square circles. It is a word that people use when they do not know the limits of something, just as chance is a word used when they do not know the causes of something. If this is correct then your problem is no better than that of deciding how many angels can dance on the head of a pin.

Helier Robinson


(87) Bernice asked:

The German philosopher Immanuel Kant is said to have merged both the basic contradictory ideas of the rationalists (Plato, St. Augustine, Descartes, etc) and empiricists (Aristotle, Locke, Hume, etc.). He said, 'Thoughts (concepts) without content (sense data) are empty; intuitions (of sensations) without conceptions blind.'

What does Kant mean? In what way did Kant merge or synthesize rationalism and empiricism through this saying? Explain.

---

Bernice, the first thing that should be said about Kant is that he is reputed to be difficult to understand. It is for this reason that I apologize in advance for presenting a somewhat complicated response to your very interesting question. I do so, however, in the belief that my answer to your question, while perhaps straying somewhat beyond the bounds of the question itself, may offer a more comprehensive appraisal of how Kant can be seen to forge a synthesis between rationalism and empiricism.

Empiricist philosophy argues that there is a connection between the outside world and the human brain, a connection that is made through sense impressions and their impact on the brain: an impact which is scientifically investigable and understandable. According to the Empiricist view, human knowledge is something 'out there': something that is external to the mind. Human beings, say Empiricists, are not entombed within their minds; for them, 'mind' and 'world' are not inseparable. In his An Essay Concerning Human Understanding (1690), John Locke declared that the mind was a tabula rasa — a blank slate. Human beings, he maintained, are born with nothing other than the capacity for experience through the senses. The knowledge we acquire is not due to any innate power to reason, but to the accumulation and organization of experience. David Hume (1711-1776), one of Britain's most eminent Empiricists, followed Locke's argument. 'We know the mind', he said, 'only as we know matter: by perception'. Hume maintained that the mind is not a substance, an organ of ideas, but an abstract name for a series of ideas, memories, and feelings, which all have their source in experience.

Immanuel Kant (1724-1804) was impressed with the Empiricist argument that experience is the basis of all knowledge. However, he was unhappy with Hume's skeptical conclusion. It was while reading Hume that Kant 'awoke from his dogmatic slumbers' and realized how he could answer the destructive skepticism of Hume, which Kant believed had threatened to destroy metaphysics. While Kant agreed with Locke and Hume that there are no such things as innate ideas, he could not accept that all knowledge arises out of experience. 'Though all our knowledge begins with experience', he said, 'it by no means follows that all arises out of experience'.

The notion of idea advanced by Locke in his An Essay Concerning Human Understanding is central to Hume's epistemology: Hume's concern was how do we know anything for certain? As mentioned above, Hume's view was that all knowledge derives from experience. Experience, he said, consists of perceptions: impressions and ideas. Impressions differ from ideas in intensity, that is, in their 'degrees of force and vivacity': those perceptions which enter the mind with the greatest intensity are impressions. Ideas are more feeble copies of impressions, and every simple idea derives from a simple impression. However, it is also possible to have complex ideas. These are derived from impressions by way of simple ideas, but they do not necessarily conform to an impression. For example, I can have the idea of a Sphinx by combining an idea of a woman with an idea of a lion: I have put together my impression of a human with my impression of a lion. All experience, said Hume, is a sequence of perceptions. All notions, such as cause and effect, bodies and things, even the idea of God, are but mere suppositions: amalgams of impressions.

In 1781, in response to the claims of empiricism, Kant published his famous Critique of Pure Reason; his ambition was to show pure reason's possibility, and to exalt it above the impure knowledge which comes through the channels of sense. So when Kant states that it by no means follows that all knowledge arises out of experience, he means that pure reason yields knowledge that does not come through sensory perceptions: knowledge that is independent of all sensory experience, and that belongs to us by the inherent nature and structure of the mind. Knowledge, said Kant, is not all derived from the senses, as Hume believed he had shown, but is derived from both sense and reason. Rationalists, such as Descartes, believed that the basis of all knowledge lay in the mind; Empiricists, such as Locke and Hume, held that all knowledge of the world proceeded from the senses. Kant believed that both sense and reason are involved in our conception of the world. According to empiricism, knowledge arises after, or succeeds, contact with sensation (it is a posteriori), and habit arises as a consequence of it. Rationalism proposes that knowledge is analytic: it attempts to anticipate experience by constructing systems of logical deduction from basic axioms. This results in the possibility of a priori ideas of reason. By considering both Empiricism and Rationalism, Kant created a sophisticated model of knowledge which overcame the simplistic notion of the subject either anticipating or reacting to experience.

Hume maintained that it was only force of habit that made us see the causal connection behind all natural processes. Kant rejected this argument; the law of causality, he maintained, is eternal and absolute: it is an attribute of reason. Human reason, he said, perceives everything that happens as a matter of cause and effect; that is, Kant's philosophy states that the law of causality is inherent in the human mind. He agreed with Hume that we cannot know with certainty what the world is in itself. We can only know what the world is like 'for me'. We can never know things in themselves (noumena); we can only know them as they appear to us (phenomena). However, before we experience 'things' we can know how they will be perceived by the human mind. We know them a priori.

The mind, said Kant, contains modes of perception that contribute to our understanding of the world. These modes of perception are space and time. Space and time, for Kant, are not concepts, but forms of intuition. Everything we see, hear, touch, smell, feel etc., that is, everything that happens in the phenomenal world, occurs in space and time. But we do not know that space and time are part of the phenomenal world; all we know is that space and time are part of the way in which we human beings perceive our world. Space and time, said Kant, are irremovable spectacles through which we view the world; they are a priori forms of intuition, that is, they shape our sensory experience on the way to its being processed into thought. Space and time are inherent modes of perception that determine the way we think. It cannot be said that time and space exist in things themselves; rather, they condition the consciousness by which we, as humans, perceive and conceive the phenomenal world. Space and time belong to the human condition; they are first and foremost modes of perception, not attributes of the physical world. The mind, said Kant, is not a tabula rasa which absorbs sensations from the outside world. Kant held that it is not only the mind that conforms to things: things also conform to the mind. In the preface to the second edition of his Critique of Pure Reason, Kant called this the Copernican Revolution in the problem of human knowledge. That is, it was just as innovative and radically different from earlier thinking as Copernicus's claim that the earth revolved around the sun.

As shown above, the mind, for Kant, receives data of the phenomenal world through sensory perceptions. However, in order to understand this information these sensory perceptions must be processed by certain conditions inherent in the human mind. As well as the 'intuitions' space and time, Kant lists twelve categories which are meant to define every possible form of predication. These concepts (or categories) are organized into four types: quantity, quality, relation, and modality. In short, everything we, as human beings, experience we can be certain will be imposed within the a priori framework of space and time (intuitions), and subject to the law of causality. These conditions, says Kant, operate as a formal apparatus to bind together a priori judgements. These functions are the pure concepts of synthesis which belong to the understanding a priori. That is, before we experience anything from the outside world, the mind already possesses the intuitions space and time, and the law of cause and effect. However, these intuitions and categories, without sense data, are empty, and sensations without the intuitions space and time and cause and effect are blind.

Thus we come to realize that in Kant's view there are two sets of elements that contribute to our understanding of the world. The first set involves external conditions, which we cannot know before we experience them through the senses. The second set involves the conditions inherent in the mind. Empiricism argues that the mind is but a 'passive wax' which is pummeled and shaped by sensory impressions. David Hume had reduced the mind to little more than a sponge which absorbed impressions and formulated complex ideas, not by virtue of any innate power, but by force of repetition and habit. Kant refused to accept such a skeptical approach. Whilst accepting that our knowledge of the world enters the mind through sensory experience, he rejected the notion that all of it arises out of these experiences. If that were the case, the question arises: whence comes order? For Kant, the world is ordered, not in itself, but because the mind already contains certain innate powers which impose an order on the data received through sensory impressions. The human mind, says Kant, assimilates these impressions and makes judgements on these perceptions by virtue of the power inherent in the mind. These powers allow the mind to make sense of, and function in, the phenomenal world. Access to this world, then, says Kant, is only that which our intellectual and sensory powers, operating in tandem, permit. In other words, our capacity to understand the world in which we live depends on a synthesis between the intuitions space and time and the law of cause and effect, and empirical experience.

Tony Fahey


(88) Bernice asked:

The German philosopher Immanuel Kant is said to have merged both the basic contradictory ideas of the rationalists (Plato, St. Augustine, Descartes, etc) and empiricists (Aristotle, Locke, Hume, etc.). He said, 'Thoughts (concepts) without content (sense data) are empty; intuitions (of sensations) without conceptions blind.'

What does Kant mean? In what way did Kant merge or synthesize rationalism and empiricism through this saying? Explain.

---

What Kant means is that only synthetic a priori judgements can provide knowledge. A posteriori judgements, associated with empiricism, and a priori judgements, associated with metaphysics, cannot provide certainty, or what Kant called apodictic certainty.

Empiricism

If empiricism is the view that knowledge is derived from sensuous experience, then it is precarious. What is experienced and taken as being knowledge today could change tomorrow. It could change precisely because it is acquired purely from experience. We can have no guarantee from experience that experience will continue to provide us with the same knowledge it has in the past. As the arch-empiricist David Hume observed, that I have experienced the sun rising hundreds of times before provides no certain guarantee or law that it will rise tomorrow, or perhaps that there will even be a tomorrow. Further, experience doesn't provide us with concepts such as number or quantity. I can experience one sheep and another sheep, but nowhere in sensuous experience do I experience TWO, as in the statement 'I perceive two sheep'. Empiricism provides no conceptual guarantee or law-like certainty that what we experience is true and will continue as before.

Rationalism

Rationalism or metaphysics has concerned itself with deduction from concepts. Thus the concept of God entailed a being fully possessing reality in all logical ways. For example, the subject 'God' necessarily contains the predicate 'existence'; God, so defined, cannot not exist. From his definition, he necessarily exists a priori. Such predicates are contained in the subject: it is a matter of logical and analytical deduction. Such concepts 'mapped out' existence, and in thinking them by the pure light of reason, philosophers were supposedly thinking reality itself. In the 1787 Introduction to the Critique of Pure Reason [which I recommend you read to further answer your question] Kant attacks this metaphysical approach of using pure reason to acquire knowledge as not having succeeded in acquiring any. Instead each metaphysical philosopher builds a system which is then disputed by other metaphysical philosophers, without final settlement, ad infinitum. In other words, this approach is like completing metaphysical crossword puzzles without definitive advance.

So for Kant, both empiricism and rationalism fail to provide epistemic certainty. Kant's philosophical project is to devise a system which does. In Critical Idealism, he believed he had found just this.

Kant

When an intuition [sensation] is presented to the senses, it is not cognised in a raw manner, as advocated by empiricism. Firstly, it is presented in Space and Time. Further, it is synthesised with the 'Transcendental Categories', termed 'Transcendental' in that they are not derived from experience but transcend it, being inherent to human consciousness. This is the bit borrowed from metaphysics, in that the Categories take the place of a priori 'concepts', although they are not products of thought: they are the necessary conditions which allow the possibility of thought. These are Quantity [Unity, Plurality, Totality]; Quality [Reality, Negation, Limitation]; Relation [Substance, Causality, Interaction]; Modality [Possibility-Impossibility, Existence-Non-Existence, Necessity-Contingency]. Think of sealing wax and a stamp. Intuitions are the wax and the Categories the stamp. When pressed into the wax, the stamp produces a definite, intelligible sign or meaning. This is what Kant calls synthetic a priori judgement. The a priori aspect of the Categories is synthesised with the intuitions of experience. The product of the synthesis is certain knowledge, or the world of objects in space and time we perceive around us, including other people.

When, for example, I perceive a tree, the categories of Quantity, Quality, Relation and Modality have been synthesised with the intuition. I see a single tree [Unity, Totality]; it exists, and I can feel the intensive texture of its bark and leaves [Reality]; it is determinate [Limitation]; and all its parts are together in one space [Substance]. It might also display movement when the wind blows [Space and Time].

The 'I' accompanies all these synthetic a priori judgements in that I consciously perceive them [this is called the original synthetic unity of apperception]. It is, for example, like the act of eating a sandwich. In the act of eating [analogous to the production of synthetic a priori judgements], I appreciate all the different ingredients in the sandwich, all at once [analogous to the apperception which accompanies those synthetic a priori judgements].

In conclusion, Transcendental Categories without content [intuitions, sensations] are empty, just as intuitions without Transcendental Categories are blind.

Hope this helps, Bernice.

Martin Jenkins


(89) Elijah asked:

What is the meaning and implication of Protagoras's famous saying that man is the measure of all things?

---

Elijah, to get a full understanding of Protagoras's famous remark, I believe it is necessary to consider it within the context of the movement of which he is considered the founder: the Sophists. In the 5th century BC a movement of itinerant professional lecturers flourished in Greece. These men were known as Sophists: nomadic educators who sold their expertise to the highest bidder. For a young man of aristocratic birth, the natural, and only, career option was to enter into the political life of his city. And to qualify for this role it was imperative that he should be expert in the art of rhetoric: the art of speaking eloquently and persuasively. In the small city states of Greece, where each citizen (that is, each male over eighteen years of age) had a say in determining public policy, public preferment, and public security stratagems against enemies of the state, it was vital that a citizen should be skilled in the art best suited to carry his audience with him. Those judged best able to impart this skill were the Sophists.

Since there were no established seats of learning at that time, these teachers roamed from city to city, finding students, mostly sons of the rich, wherever they could, and supporting themselves by the fees they received. Whilst the basis of their work focused on rhetoric, the more skilled amongst them broadened their range to cover any knowledge available of the workings of the human mind, of literature, history, language, grammar, of the nature of virtue and justice, and the principles underlying the dialectic of argument. In short, all that was deemed necessary to provide the budding politician with the skills needed to speak well, convincingly and, generally, to succeed. Their underlying theory is probably best revealed in two remarks of its leading protagonists: Protagoras, considered the greatest of the Sophists, declared 'Man is the measure of all things', and Gorgias proclaimed, 'Nothing exists; and if it did, no one would know it; and if they knew it, they could not communicate it'. From these statements the Sophists developed the view that certain knowledge was unattainable and, therefore, man should not trouble himself to seek that which he can never find. Instead, following Protagoras's dictum, he should 'measure' matters according to his nature and his needs, since man is the measure of all things.

The habit of unrestricted enquiry and discussion which was crystallised by the Sophistic movement, the free play of the mind over all subjects that interest men, meant the overthrow of much that was sacred in the existing civilisation. However, the Greeks, in the main, did not respond well to having the foundations of their lives shaken; not even when these foundations were shown never to have been rationalised, never to have been examined critically, and to have arisen principally from unthinking custom. Thus, the term 'Sophist', and all associated with it, became one that was treated with a great deal of circumspection. One of those who registered his concern about teachers whose methods were aimed at instructing students in how to win an argument at any cost was Socrates.

Tony Fahey


(90) Derrick asked:

With the rapid implementation of advanced automation, robotics and soon nanotechnologies will there still be a place for the human masses?

We have long since passed the point of sustainability: we pollute our ever-shrinking supply of fresh water, deforest at accelerating rates and erode our agricultural land, and every human disaster is serviced by emergency aid, with the result that further breeding adds to the rescue mission next time.

For how long will the haves continue to support the have-nots? Will there still be a place for humanity's masses in the coming ages, or are we in the process of eliminating ourselves?

---

Hi Derrick, this is an issue which has long concerned me, and one that I too have raised from time to time in various papers and articles (see 'Philosophy, Science, Consciousness' in Philosophy Pathways Journal: Issue 152). Following your line of thinking, I would say that there is a strong argument in favour of the view that we may have passed the point where we can justify the need for our existence on this planet we call home.

Let me tell it the way I see it: when we consider the life of the planet in terms of a twenty-four-hour clock, it can be said that human beings have only been around for the last few minutes. Thus, it follows that, for the greater part of its existence, the earth has managed perfectly well without us. On this evidence it can be argued that human beings are contingent to the existence of the planet upon which they live: the world just doesn't need us. Even if we accept that, at the time of our appearance, nature had decreed that there was a need for such a species of animal, as you say, recent evidence of man's impact on the world supports the view that that need may well be long exhausted. The question that arises from this view is, should the above be the case, what would happen to all our wonderful ideas, concepts, paradigms and worldviews? The answer, of course, is that since they are products of the machinations of our minds, they would disappear with us. It has to be said that in such a world even robots would be superfluous.

I vividly remember, more than forty years ago, two photographs that appeared on the front page of a newspaper (I think it was The Evening Standard). The first was a shot from space of Buenos Aires; the second was a shot of a cancer cell. The first point being made was that there was almost no difference between the two photographs. The second was that the photos support the view that human beings are to nature what cancer is to the human body. Whilst these views present humankind in a rather negative light, they do serve to remind us that we are but bit players, with bit parts, in a world that presumably worked perfectly well without us, and will continue to do so long after we are gone.

Tony Fahey


(91) Mike asked:

OK, I'm not a philosopher, I'm just a truck driver with lots of time to think. My question is about the big bang. As I understand the theory, we think the universe is expanding outward, correct? Then my question is this: from what point is it expanding? Also, in which direction are we moving away, and can we see other universes that are travelling along this expansion behind us? And lastly, where are we positioned in this expansion?

I've watched a lot of shows and read a lot about the big bang but never read or heard these questions asked or answered.

My interpretation of expansion says that there was a starting point somewhere, but these shows and books always lead me to believe that the universe is expanding away from us, like we're the starting point or centre.

---

Mike, your question is really about cosmology (which is a science) rather than about philosophy. Cosmology is supposed to explain the large-scale universe: planets, stars, galaxies and so on.

According to our current knowledge the universe as a whole is expanding. This does not mean that everything is moving away from us. Some nearby galaxies are moving towards us and might collide with our galaxy (the Milky Way), though any collision won't happen for billions of years. On the whole, however, galaxies are moving apart. We know this from measuring the colour of the light from other galaxies and calculating what colour it should be compared with what colour it actually is: light from a receding galaxy is shifted towards the red, i.e. it is redder than it should be.

We can't tell where the centre of the universe is because we can only measure the relative movement of things.

Cosmology is a science where ideas are changing rapidly and there are still many things that we can't explain yet.

Shaun Williamson


(92) Dameon asked:

Is it possible to be a committed, biblical Christian and still be a competent and respected philosopher or scientist?

---

It all depends on what you mean by 'a biblical Christian'. If you mean someone who believes that every sentence in the Bible must be interpreted literally, then the answer is no. If you believe that the world is only about 4000 years old and that God created all the species of plants, animals and insects in the same seven-day period, then this contradicts all of science. It is not just a question of the Theory of Evolution. That sort of biblical Christian must also deny the truths of modern geology, cosmology, physics, palaeontology, astronomy and biology.

However, the vast majority of Christians in the world would regard themselves as biblical Christians, yet they do not treat the Bible as though it were meant to be a science or history textbook.

I was brought up as a committed Christian, fully aware of the importance of the Bible, but I was always taught that there was no contradiction between the Bible and science: while the Bible was the inspired word of God, it was not to be interpreted literally as a textbook of science or history.

Philosophers are committed to reason, and so is science, so while many philosophers and scientists are committed Christians, they are not simple-minded fundamentalist Christians, not if they want to be taken seriously. Believing that the world is only 4000 or 7000 or 100,000 years old and that all species were created at the same time is like believing that the earth is flat. People can feel sorry for your ignorance, but they can't respect your beliefs.

Shaun Williamson


(93) Dylan asked:

Dear philosopher,

I reckon I'd like to inquire: why be a philosopher?

No sarcasm intended, I honestly am interested. What is the purpose of philosophical pursuits, and how are they applicable in the modern world? What, other than being a professor of philosophy, do philosophers do for a career?

---

Philosophy is a general arts degree like history or English. The only difference with philosophy is that prospective employers often don't know what philosophy is and confuse it with theology.

I know many good musicians who will never be rich and famous, but they play music because they have to. For me it was the same with philosophy: I did it because I had to. You should only do philosophy if you feel compelled to do it. It won't make you rich, and teaching jobs in philosophy are few and not easy to get.

The purpose of philosophy is the ruthless pursuit of the truth, nothing more and nothing less.

Shaun Williamson


(94) Wesley asked:

Has anyone written on the concept of a Post-Existential life?

I have entered the final years of my life. The life I am living now can be changed only fractionally by decisions and actions I make now. That is, it is as if all my previous decisions have painted me into this corner of this room in this house, here.

If authentic acts/decisions are those in accordance with one's freedom, my Authenticism is absolutely limited by the limits of my freedom to act/decide, which have become limited by all previous decisions and by Existence itself. My actions have brought me to where I am. I have decided on a course of moral and social Being. I have made decisions that now limit my health. All these limit my Freedoms and thus my Choices. I can no longer act in such ways that bring further Freedoms of Decision. All my existential life has led to this painted corner.

Granted, I have the wide freedom limited by health and financial circumstances to act in opposition to all prior decisions, 'Out of Valid Character' so to speak. To be wicked, criminal, to defile what I have held dear, to do the opposite of what I have chosen as the correct response in previous choices presented by my Freedom. But to do so seems Inauthentic in the extreme. And even so, my opportunity to act Out of Character is highly limited.

Thus, my life could be said to be Inauthentic in that I have little freedom to act, but can this be? Does one live an Authentic Life only to face death necessarily in Inauthenticity?

Rather, I see this as Authenticity leading to infinitely smaller and smaller Freedoms of Action the closer I approach and enter death. Thus, Authenticism leads to lesser and lesser, fading, then extinguished Freedom of Action. Neither Authenticism nor Inauthenticism. But even this seems unacceptable.

I would appreciate comments. Thank you.

---

Well, I have never been a fan of existentialism, since its moral blindness has always seemed to me to be immoral. Sartre, who devised this idea of an authentic/inauthentic existence, became a Marxist, although not of the dim Stalin-worshipping sort.

However, it is true in existentialist terms that the choices you make limit your future choices. If your choices are authentic or sincere then you should have no regrets about this. Age does limit your choices, but there is no reason to suppose that that makes your life inauthentic in itself. You can only choose from the choices that are available to you.

Existentialism never pretended that human choices are infinite. You can only choose to fly if you have wings.

Shaun Williamson

I understand, Wesley, where this is coming from. However, I will argue that if you accept the truth in existentialism, then there can be no such thing as a 'post-existential' life.

One needs to draw a distinction, however, between 'being an existentialist' (which as it happens I am not) and 'accepting the truth in existentialism' (which I do). You'll see the reason for this distinction in a minute.

Last week, as an exercise, I gave myself a mock interview. If one is being po-faced about this, one could say that it was part of an ongoing project of seeking to 'know thyself' as Socrates advocated. The serious point is that this is knowledge which one is perpetually on the way towards and never finally achieves. Indeed, to think you had achieved it, and that there was nothing more to know would be an act of bad faith.

Of course, the whole thing was rigged. This was intended for an audience. Even so, it was surprising to me, some of the answers that slipped out. (Maybe it had something to do with playing Hendrix's Electric Ladyland album in the background as I was writing — which has a way, as great works of art do, of getting under the skin, loosening and unravelling the congealed layers of the psyche. Hendrix once said he wanted to write music that had the power to heal; he came closer to this than most of his generation.)

One question which I posed myself is whether or not I am a stoic. I said, somewhat cagily, that I 'wouldn't describe myself' as a stoic. What I meant was, I'm not of the breed of Epictetus or Marcus Aurelius, or those who follow in their footsteps. I don't believe that all that suffices for a life of ethical virtue is 'knowledge of the Good' or some such Platonic notion.

And yet, on reflection, I realize that I accept the truth in stoicism. That is to say, I believe that there is something to know, which provides an objective basis or rationale for ethical conduct; only that 'something' falls short of what Socrates or Plato aimed for. (One of my ex-students reminded me that I once actually told him I was a stoic, which is interesting as I have no recollection of saying this.)

Iris Murdoch in her brilliant short monograph The Sovereignty of Good (1970) makes a big play of the shortcomings of existentialist ethics, and the need to rediscover a Platonic notion of an objectively existing Good. I have no quarrel with that. What I'm saying is that fully responsible or 'authentic' action requires that we accept the heavy burden of responsibility for the values we choose to live by. You cannot distil those values from knowledge of the Good. There is nothing to know other than what we can discover through patient, factual investigation (here I am with Hume and the early Wittgenstein). But to be willing to conduct such an investigation — when faced with bewildering ethical choices and dilemmas — is a responsibility, and to a large extent an ethical responsibility.

'If it doesn't impact on me then why should I care,' is the ultimate question posed to ethics. A true existentialist would say that I choose to care and take responsibility for that choice. I don't think, realistically, that this is a choice. (Hence, I am not an existentialist.) It is about being a person, or being human: to look at the face of the other and never be moved, or successfully resist any temptation to be moved, is to put oneself outside human life altogether. I won't try to give a metaphysical spin on this. I am stating this as if it were a plain fact.

Now to the question: what happens to this 'burden of responsibility for the values we choose to live by' as one approaches death? All the big choices have been made, and one has accepted, taken responsibility for, those choices. I sometimes wonder what my life would have been like if I had not 'chosen relationship'. But I did, and I live with the consequences of that choice. I do sometimes feel, as Wesley does, a keen sense of being 'painted into a corner'. As a widower, with three daughters who still need a parent's practical and moral support, I don't have the range of choices I would otherwise have had.

But this picture is completely wrong, if one interprets it as implying that there are no 'big' choices left, only little or insignificant ones. Of course, one can just walk over the wet paint and make a mess of things. I fully appreciate why Wesley would not consider that a valid option. However, to stay in one's narrow corner is an existential choice. Maybe you've made some bad decisions in your life and now you're living with the painful consequences. You can stay and face the music, or flee. And you have chosen to stay.

But I am going to assume that this is not the case for you. By and large, you are reasonably happy about the decisions that you have made.

The first point to make is a purely practical one: we don't know, for sure, what lies ahead for us. Not everyone gets to enjoy a tranquil old age. Tragedies and disasters have a way of disrupting one's cosy retirement plans. I won't enumerate all the ways in which this can happen. Imagine that this is 1936 and you are a Jew living in Vienna. Or it is 1945 and you and your family live in the vicinity of Hiroshima.

Or let's move things on a bit and take an extreme case. You are close to death. Physically, you are incapable of any movement apart from blinking in response to questions put to you. And someone asks, 'Do you forgive X for what they did?' And let's suppose, for the sake of this example, that what X did was really unforgivable, monstrous. But you still have that choice. Is it a small choice, or is it possibly one of the biggest choices you have ever made?

Or to strike an even more sombre note: Camus in The Myth of Sisyphus (1942) poses, as a philosophical question, what reasons there are not to commit suicide. There is no time in the length of a human life when that option no longer exists as a potential life choice.

One of the points I make early on in the Pathways Moral Philosophy program is that most of us, most of the time, never face really big ethical decisions. Our courage, for example, may never be fully tested. You might well ask whether one can be an existentialist when you live a life of comfort and ease — regardless of your age — where there are no scary or momentous choices, only pleasant ones.

In H.G. Wells' brilliant parable The Time Machine, the Eloi live like this. We can only see the Eloi as irresponsible children, unwilling to face the grim reality of their situation — easy meat for the Morlocks. But how many persons do, in fact, live such a life of irresponsibility? That is, after all, the point about the self-satisfied bourgeoisie. 'You've never had it so good,' as Prime Minister Macmillan said. — But that was to a generation who had lived through the Second World War.

The biggest challenge for existentialists, or for those who 'see the truth' in existentialism is how to live when no important ethical choices ever seem to intrude on one's happy existence. I'm not saying that it's necessarily a bad thing that one is happy and contented. Ultimately, we can't choose the external circumstances in which we find ourselves, the events which intrude on our lives. This lack of momentous choices is a problem at any age, not just in old age.

Yet at the same time there is a part of me which wants to rebel in fury at the idea that anyone has the right to be contented. I don't just mean that the world is in a mess, in so many ways, and that you should be striving to the utmost and to the end of your days to do something about it. That's just one way. Equally strenuous and demanding would be the decision to go back to college and study philosophy, say. Or, for someone in my situation, to look for another life partner. But to be a bit cynical about this — aren't these just so many strategies against boredom? Why this great effort? What difference does it make? You're going to die, anyway. — That's the question Camus asks.

Which brings me back to the one thing which I cannot get past. The one indubitable nugget of metaphysical fact: my existence. This is what existentialism is ultimately about. I am not 'some' person. I do not do what 'one' does. The choice — and there is always a choice — is here for me, now. That is what it means to say that 'I exist', in the sense in which this is an active verb rather than a merely tautological statement.

Geoffrey Klempner


(95) Brian asked:

Can someone recommend a few good books for an educated layperson to read in order to gain some insight into the question/problem of free will?

---

Just about every philosopher, ancient and modern, has had something to say about free will, and the literature is huge. I suggest one recent book in which leading proponents of four major views each set out a defence of their position and then respond to each other: Four Views on Free Will by J.M. Fischer, R. Kane, D. Pereboom and M. Vargas (Blackwell Publishing, 2007). Described by reviewers as 'a gem', 'a wonderfully accessible introduction to the free will debate' and 'ideal for advanced undergraduate courses on free will', I found it excellent, and, of course, there are plenty of pointers to further reading for anybody still undecided who wishes to continue on the age-old merry-go-round. Good luck.

Craig Skinner