International Society for Philosophers

Wisdom begins with wonder

PHILOSOPHY PATHWAYS                   ISSN 2043-0728

Issue number 133 8th February 2008

CONTENTS

I. 'Evaluative Judgement, Motivation and the Moral Standard' by Richard H. Corrigan

II. 'A Rope Stretched over an Abyss: Ethics, Law and Neuroscience' by Pierre Pouget

III. 'Some Remarks on the Nature of Philosophy' by Hubertus Fremerey

-=-

EDITOR'S NOTE

For this issue Richard Corrigan writes on the nature of moral judgement and moral responsibility. His analysis is based on the distinction between a desire, conceived as something merely given, and a value or evaluation which a moral agent accepts and embraces as part of his or her system of values. The possibility of moral evaluation rests on the ability of an agent to freely accept or reject a given value.

The question of freedom, however, has become increasingly vexed as a result of recent discoveries in neuroscience, for example, evidence for the claim that when we make a conscious 'decision' the brain has already triggered the action which we 'decided' to do. Pierre Pouget surveys the ethical and legal issues arising from neuroscientific research, putting it in the context of a debate which stretches back as far as the time of Ancient Egyptian and Greek medicine.

Hubertus Fremerey takes a broader, synoptic view of the question of the relation between science and philosophy, and the light which study of the history of science sheds on the way philosophy has developed in the Western world by contrast with the predominant current of Eastern philosophy. Science has been the greatest spur to Western philosophy; but it is also its greatest challenge.

Forthcoming in issue 135 there will be a review by Matthew Del Nevo of 'Thinking Allowed', a DVD produced by Gallions Primary School in London. The DVD explains the concept of P4C (Philosophy for Children) and shows the impact that it has had on the children at Gallions Primary School. For more information, contact Paul Jackson at PJackson@gallions.newham.sch.uk. Articles on any aspect of P4C will be very welcome.

I am always happy to discuss ideas for possible articles for Philosophy Pathways. Please send any thoughts, comments or suggestions to klempner@fastmail.net.

Geoffrey Klempner

-=-

I. 'EVALUATIVE JUDGEMENT, MOTIVATION AND THE MORAL STANDARD' BY RICHARD H. CORRIGAN

In this paper I will discuss the concepts of evaluative and motivational systems, the moral standard, moral normativity and moral imperatives, and explain their relevance to moral responsibility. It is my ultimate intention to comprehensively delineate the indispensability of evaluative judgement for moral action.

To be able to judge one choice to be more beneficial than another, the agent must be able to attribute a degree of value (in terms of expected benefit) to each of the options. After Watson, I will call the sum of the factors and capacities that allow him to do so the agent's evaluation system (Watson, 1982). This system is reflective of the agent's hierarchy of values, which embodies the values that have motivational efficacy for the agent, as they are judged to be of benefit by him.[1]

The agent's reasons for having a particular hierarchy of values will reflect the benefits that he believes are to be gained from acting in accordance with them. However, his particular hierarchy of values will also reflect the degree of cost that he is willing to bear in order to live in accordance with the values that it embodies. Suppose, for example, that the agent believes that there are benefits to living according to Christian values -- for instance, the feeling of satisfaction gained from piety and righteousness, the possibility of eternal salvation and so forth. However, he also realises that in order to live according to Christianity he must give up the pleasures of living as a hedonist. He judges the cost of doing so to outweigh the gain. He thereby more strongly identifies with hedonism and allows its continued integration in his evaluation system. The cost of modifying value systems may also be seen in terms of feelings of dislocation from community, betrayal of culture and so forth -- basically anything that contributes to the agent's belief that it is better to retain his current hierarchy of values.

Is there a set of values that the agent should adopt purely in virtue of his participation in a given culture? The evaluation system of a culture is not a completely closed and consistent system in the logical sense. However, there must be some common values, as this is in part what constitutes a culture. Yet it is obvious that different groups within a culture have different values. Therefore, the individual in society is caught in what I will call a 'state of compromise'; he is caught between competing values. I am not necessarily rejecting the suggestion that society strives for a harmonious integration of evaluation systems; I am rather suggesting that this is very difficult to accomplish when there is not complete consensus. Personal evaluation systems are perspectival -- they reflect the individual's interpretation of, and reaction to, his society, culture and personal experiences.

Because of the existence of the state of compromise, evaluation systems are not fixed constants. They have a plasticity that allows development and change over time. I am not defending the idea that the agent can modify his hierarchy of values on a whim; values may be deeply ingrained. I am rather suggesting that particular states of affairs may cause the agent to reassess his hierarchy of values or to modify them. These may be social or personal, but must provide sufficient stimulus for the individual to reconsider his particular evaluation system. However, when I argue that the agent must have the ability to make evaluative judgements in order to have the capacity for moral responsibility, and that the ability to do so is dependent on his having an evaluation system, I mean that at the time of judging how to act he must be capable of evaluative analysis on the basis of his current evaluation system.

This is distinct from his motivational system. A motivational system is a set of factors that moves an agent to action (I shall not attempt to give an account of each of these factors). It will include his desires and may include his evaluation system (unless he is abdicating his ability to evaluate his actions and/or desires). The agent's motivational system does not necessarily coincide with his evaluation system each time he chooses to act. When he acts according to a first-order (pre-reflective) desire with which he would not identify, he is not acting according to his evaluation system. Nevertheless, he still has a motivational system (part of which is constituted by his first-order desire).

There are inevitably grey areas where the merit of a particular action/desire/goal has not been clearly established, and these may only be evaluated as they arise. There are areas in which the agent's initial evaluation may be subject to revision, or in which special circumstances require him to suspend a general conviction, but all of this can be accommodated by his evaluation system. It is only in light of evaluation that a second-order desire becomes the agent's will. When he wants to have a particular desire there is reflective consideration involved; there are reasons why he believes the desire to be of greatest benefit.

In order for an agent to have the capacity for moral responsibility, it must be possible for his evaluation system and motivational system to completely coincide. If this were not possible, the individual would be incapable of performing the actions that he believes, upon reflection, are of the greatest moral value. To be capable of accurately judging which desire or action has the greatest moral value, the agent must have an understanding of the correct moral standard[2] against which it should be assessed. It is in light of the moral standard that an agent's acts can be accurately morally assessed: it is the gauge that allows accurate assessment of the moral value of an act, desire, intention and so forth.

If the agent does not have the capacity to reach an understanding of the moral standard (through whatever route, possible candidates being habituation, revelation, intuition, reflective consideration, inherent knowledge, and so forth), then there is no way that he can consistently assess the moral worth of different desires/actions, and accurately judge them to be morally superior or deficient. An agent's hierarchy of values reflects what action/desire he believes is of greatest benefit. In terms of the moral standard, what is of greatest benefit will be that which has the greatest moral worth.

This understanding does not have to be explicit. The agent does not have to be able to give an exact account of the structure of the moral standard. An implicit understanding of it is sufficient for moral responsibility. He does, however, have to be capable of using it when making moral judgements.

To be able to conform to a moral standard, the agent must be sensitive to the normative requirements that it entails. The moral standard helps to establish moral norms for those who adopt it as part of their evaluation system (Copp, 1995 esp. pp. 21, 82 and 103). Moral norms are rules and prescriptions, either general or specific, for what it is morally correct to do (Gibbard, 1985 esp. p. 12). In light of the moral standard it is rational to adopt the moral norms that it entails. In order to justify blaming someone for not conforming to a moral norm, it must be possible to rationalise one's moral censure -- there must be a reason why one feels morally indignant.

Moral norms are what allow the formation of 'ought' and 'ought-not' type moral imperatives (for example, 'you ought to be generous', 'you ought not to steal'). These imperatives are, in part, justified by appeal to the values embodied in the moral standard. One of the reasons that one can provide for blaming someone for doing something that one believes morally deficient is that they should have known what they were doing was wrong, and this should have supplied sufficient motivation for them not to do it. In other words, the moral norm was something that they should have complied with, as it was possible and rational to do so.

The ability to adopt the moral standard, and live according to it, does not necessarily mean that the agent can harmonize all of his first-order desires with that standard. It is rather that he is capable of accepting the hierarchy of values that it embodies, and of forming second-order desires (desires that the agent wants to have) that are in accordance with it (Pettit and Smith, 1996 p. 443). The ability to live according to it requires the capacity to make the desire to conform to the moral norm one's will and to act accordingly. In order to have the capacity to become the agent's will, his desire to conform to the moral standard must be the desire with the greatest latent strength. It must be the desire that can be stronger than all other desires and thereby become the agent's will, if he chooses to identify with it. In order for the desire to act morally to become his will, the agent must believe that there is greater benefit in acting in accordance with moral norms than acting contrary to them. This is what will give the desire effective strength and make it the agent's will.

Making the desire to conform to the moral norm one's will is not necessarily synonymous with being moral. The agent could act in accordance with the moral standard without believing in the moral values that it embodies (for example, he could do so from fear). Thus, for the agent to be moral, as opposed to just having the ability to act in accordance with moral norms, requires that he have the desire to be moral and not just the desire to act in a way that will most likely be perceived by others to be moral, to want that desire, and be able to make it his will. This will involve the capacity to make the moral standard the dominant hierarchy of values in his evaluation system, and to identify with the values that constitute it. It is not sufficient that the agent be aware that there is such a thing as a moral standard, and that it can be used as the yardstick against which the value of individual actions and desires can be measured. He must have the actual capacity to use it as such a measure. He must be able to come to the belief that being moral is of greatest benefit.

Let us now consider an example in which Jones has a desire to strike Black (a first-order desire). I will assume that Jones has identified with and adopted the moral standard. He knows that he has two options: he can either act in accordance with his first-order desire, or he can choose not to do so. He judges it to be morally superior not to strike Black. This conclusion is reached because the moral standard is part of his evaluation system and his knowledge of it informs him that it is morally deficient (lacking moral value) to strike people because of a trivial affront. He does not want to have the desire to strike others, as he recognises that harbouring it leads to morally deficient thoughts/actions and so forth, which do not conform to the moral norm. Because of his identification with the moral standard, he believes it to be of greatest benefit not to strike Black. He therefore does not form the second-order desire to do so. His first-order desire is thereby held in check (it lacks strength because it is not judged to be most beneficial). In this case, his evaluation and motivational systems coincide.

If he had acted according to his first-order desire, then not only would he have had to take ownership of both his desire and action, he would also have had to accept moral responsibility for them (as he had knowledge of the moral standard, it could have formed part of his evaluation system and could have been an effective part of his motivational system, if he had so chosen).

Certain first-order desires may have moral content in themselves, but all first-order desires are pre-reflective. Therefore, the agent does not necessarily have any control over whether they arise or not (although it may be possible for him to avoid circumstances in which he knows that a certain desire could, or would, emerge). The agent cannot be morally responsible for having a desire that he is powerless to avoid. Therefore, it is at the level of identification with those desires that the agent's moral responsibility begins to manifest itself. Negligent or intentional failure to integrate the moral standard into one's evaluation system is no excuse for failing to act morally, providing one could do so (and that it was rational for one to do so). The failure to make a judgement about the moral status of one's desires or actions is morally reprehensible in itself, providing one can do so.

It is possible to be morally responsible for having a morally deficient second-order desire that one does not act on because of a lack of courage or determination. For example, one may be a racist, desire to inflict harm on different ethnic groups, want to have this desire and yet act on the stronger desire to stay out of trouble. Given that racist violence is morally deficient, it is my claim that identifying with the desire to engage in it is also morally deficient (this is assuming that racism and racist violence are judged/known to be morally deficient in light of the moral standard, and that the racist has access to the moral standard and can adopt it).

Conclusion

I have shown in this paper that evaluative judgement plays a role in the agent's ability to take moral ownership of the actions that he performs and the desires with which he identifies. I have contended that if the agent is capable of making such judgements and fails to do so, intentionally or due to negligence, he is still responsible for the actions that issue from the unevaluated desires, and must assume responsibility for leaving them unevaluated. However, I have also attempted to show that the capacity for evaluative judgement, in itself, is not sufficient for moral responsibility. The ability to make moral judgements is not synonymous with the ability to act morally. The agent must also be able to identify with the desire to be moral and to make that desire his will. He must have access to the moral standard and have the capacity to integrate it into his hierarchy of values. He must also have the ability to come to the belief that it is most beneficial to act in accordance with the moral norms embodied in the moral standard. If this is the case then the agent has the capacity to be a moral person, and any failures on his part are the product of his own weakness or wilfulness. If one has the capacities that I have outlined in this paper, then one must take ownership of one's morally deficient intentions, desires and actions. One is a suitable candidate for morally reactive attitudes and for the application of the categories of praise and blame.

Footnotes

1. It should be noted that, for the agent, a certain action's/desire's value may be context specific (that is, what is judged to be most valuable in one specific set of circumstances may be judged to be of diminished value in another).

2. When referring to 'the moral standard' from here onwards I mean the correct moral standard unless otherwise stated.

References

Copp, D. (1995). Morality, Normativity and Society. New York, Oxford University Press.

Gibbard, A. (1985). Moral Judgment and the Acceptance of Norms. Ethics, 96, 5-21.

Pettit, P. and Smith, M. (1996). Freedom in Belief and Desire. The Journal of Philosophy, 93, 429-449.

Watson, G. (1982). 'Free Agency'. In Watson, G. (ed.) Free Will. Oxford, Oxford University Press.

(c) Richard Corrigan 2008

E-mail: richardcorrigan@philosophyandtheology.com

Richard H. Corrigan (Ph.D), University College Dublin. Editor of the Philosophical Frontiers Journal http:---

-=-

II. 'A ROPE STRETCHED OVER AN ABYSS: ETHICS, LAW AND NEUROSCIENCE' BY PIERRE POUGET

Introduction

The dream of a complete noesis of the natural world is probably one of the most profound aspirations of the human species, and even if the word 'scientist' was only introduced in 1840, the desire to understand the rules governing our physical world was expressed at the earliest stages of human civilization. As the Franciscan friar Roger Bacon emphasized in the middle of the 13th century, 'The strongest argument proves nothing so long as the conclusions are unverified by experience'.[1]

Yet the scientific approach to medicine has always faced difficulties in undertaking such experiments.

An important step during medical training is the dissection of human or animal cadavers. Over the course of history this practice has often been viewed as both morally and legally unacceptable. In the city of Alexandria in ancient Egypt, Herophilos and others explored the nervous system, the circulatory system and the anatomy of the eye; yet human dissection was forbidden there, as it was later throughout Greece and Rome. This legal interdiction had important consequences: despite the major discoveries made in ancient Egypt, Galen (and, earlier, Hippocrates in Greece) faced a massive handicap in studying human physiology and anatomy, and some of their work was seriously compromised as a result.

The controversy only ended nearly 1,400 years later, when Vesalius introduced into Europe the scientific examination of human anatomy. The Belgian anatomist, who spent several years under the protection of the imperial court of Charles V, where he could exercise his talents as a surgeon, definitively marked a new conception of medicine. This research was difficult: Vesalius was harassed by the Church all his life, and most of his work was possible only because of his position as physician to Emperor Charles V.

Two centuries later, in England, Vesalius was succeeded by William Harvey, whose 1628 book Exercitatio Anatomica de Motu Cordis et Sanguinis in Animalibus demonstrated that the heart is the center of the circulatory process, that the same blood flows through both veins and arteries, and that the blood makes a complete circuit throughout the body.[2] To achieve these observations, Harvey performed numerous animal dissections and vivisections. For the quality and organization of his research, Harvey is sometimes considered the inventor of the modern laboratory. Following Harvey, during most of the 19th century and even the early 20th century, the increasing use of animals as subjects of scientific research was widely accepted and approved.

In 1831, a British physiologist whose name is associated with the theory of the reflex arc mediated by the spinal cord took a critical view of the use of animals as subjects of scientific research. Marshall Hall was at that time still working on the reflex function, but in the same year he proposed five principles that he believed should govern all animal experimentation. Since those recommendations are today formally instituted in the British Animals (Scientific Procedures) Act and the U.S. Animal Welfare Act, it is worth briefly stating them:

     1. An experiment should never be performed if the necessary
         information can be obtained by observations.
    
     2. No experiment should be performed without a clearly
         defined and obtainable objective.
    
     3. Scientists should be well-informed about the work of
         their predecessors and peers in order to avoid unnecessary
         repetition of an experiment.
    
     4. Justifiable experiments should be carried out with the
         least possible infliction of suffering (often through the
         use of lower, less sentient animals).
    
     5. Every experiment should be performed under circumstances
         that would provide the clearest possible results, thereby
         diminishing the need for repetition of experiments.

During the same period, Hall also proposed the founding of a scientific society to oversee publication of research results and recommended that 'the results of experimentation be laid before the public in the simplest, plainest terms'.[3] In general, Hall was criticised by those who disapproved of animal experimentation, both within and without the medical community.

Ethics, Law and Neuro-ethics: influences on Neurosciences

Marshall Hall's recommendations are today an important part of what society at large considers an ethics of scientific practice. In terms of definition, the notion of ethics refers to the second-order, reflective consideration of our moral beliefs and practices, in contrast to morality, which refers to the first-order beliefs and practices about right and wrong by means of which we guide our behavior. In other words, ethics may be defined as the explicit, philosophical reflection on moral beliefs and practices. Generally speaking, the difference between ethics and morality is similar to the difference between psychology and mind. Ethics is a conscious stepping back and reflecting on morality, just as psychology is a scientific reflection on mind.

In academic terms, ethics is a branch of philosophy concerned with morals and human conduct. With the recent development of neurosciences the notion of 'neuro-ethics' arose in order to address moral and social issues concerning the conduct of research in the neurosciences and biological psychology including their clinical applications.

Typical issues in neuro-ethics include the ethics of conducting research into novel interventions in the brain itself, but also the question of the ethical and social implications of the transformed 'models of man' arising from the findings of neuroscience. Neuro-ethics is also concerned with the meaning and application of brain imaging in the courts or in schools, as well as the ethical and social aspects of the clinical and public health treatment of psychiatric and neurological disorders in the light of modern research. Finally, the implications of modern neuroscience for our understanding of the basis of morality and social behaviour have given rise to the problem of the transformation of our concepts of free will and responsibility. For the purposes of this article we will only discuss the critical implications of this last point.

Since neuro-ethics refers to a specific aspect of ethics, it is essential to specify which conception of ethics we will rely on in this article. Most authors of ethical codes distinguish such codes from civil or criminal laws. In fact, codes of ethics are commonly compiled primarily by members of the professions to whom they apply, whereas laws are written by elected officials. An important difference between ethics and law resides in the aspirational quality of ethics, contrasted with the minimal standards set by the law. It is incorrect to assume automatically that 'if it's legal, it's ethical.'

A law-abiding physician is not necessarily highly skilled or compassionate. The aspirational nature of ethical codes involves questions such as: 'What virtues and acts must characterize the best of our profession?' Sometimes one might have to choose between ethics and law. While ethics generally sets the bar for conduct at a different level than law, it would be incorrect to say that the law is unconcerned with values. The adage 'you can't legislate morality' is false, in that every statute mandates some act or restraint in order to preserve a moral value. In most countries, for example, the law prohibits carrying firearms in public in order to protect human life.

By contrast, what is morally true cannot always be made subject to law. Nor should it be. A society in which every good act is mandated by law would be a tyranny. In everyday life, meaningful moral action requires reflection, choice, and even the possibility of failure. Ethical codes, accordingly, should not be seen as requiring perfection, but as statements of those purposes toward which one aims throughout the course of one's professional life. Generally speaking, however, values inform both ethics and law: ethics focusing on particular professional values, and law setting minimal standards of conduct to preserve the common good. So while ethics can govern the practice of a particular group of neuroscientists, neuroscience is also governed by law.

As we mentioned earlier, 'law' is the usual term for a rule, made by a government or other political body, that governs a society. The term covers both the individual rules used to regulate the way in which a society behaves (such as laws against driving without a valid driving licence) and the whole system of such rules. In neuroscience, law did in fact precede the introduction of ethics. For instance, the first law written specifically to regulate animal experimentation was Great Britain's Cruelty to Animals Act of 1876.

The 1876 law, which implicitly approved animal experimentation at the same time as it set up a system of licensing and certification, was replaced by the Animals (Scientific Procedures) Act of 1986, which specifically states that 'The Secretary of State shall not grant a project license until he is satisfied that the applicant has given adequate consideration to the feasibility of achieving the purpose of the programme to be specified in the license by means not involving the use of protected animals' (Animal Welfare, UFAW, Vol. 1, No. 2, 1992). In the United States, the 1966 Animal Welfare Act, amended in 1970, 1976, 1986, 1989, and 1991, set standards for laboratory animals, including rats, mice, and birds. On January 8, 1992 the U.S. District Court in Washington, DC ruled that the U.S. Department of Agriculture had been violating the Animal Welfare Act by not enforcing its provisions as they relate to these animals.

On a liberal view, society must not interfere with scientific questions or with how scientists behave towards one another. In other words, investigation in science should not be limited by law. But society certainly does have an interest in protecting vulnerable subjects. On this view, neuro-ethics can be defined as the abyssal ocean between the two continents represented by law and neuroscience.

As shown by the changes in the law in the middle of the 20th century in most contemporary societies, classical work on research ethics was concerned mainly with invasive medical and physiological research, and only secondarily with the ethics of some psychological, social-psychological and anthropological research. Those concerns were certainly driven by the potential upheaval that such invasive research might bring about.

To illustrate this idea, let us suppose that a precise set of neural imaging correlates of lying has been identified. For some purposes in psychological research it could then be more or less immediately apparent to the researcher when the imaging subject is lying, even if the topic of the research is something quite different. Since part of the ethics of research involves seeking only the information required for the research, this could be considered a breach of privacy. There are also long-standing questions about the ethics of using a technology to detect lying, in the courts, in interrogation, or for other purposes, and about whether evidence obtained in this way would or should be admissible or usable in police inquiries. From the point of view of neuroscientists and many social scientists this is inevitably a biased perspective: it is clear that a social science take on the neurosciences will be more descriptive and explanatory than normative. Even within this perspective, however, it would be essential for a social science to understand the findings of neuroscience.

Neuroscience and its influences on Ethics, Law and Neuro-ethics

The concepts of 'moral' and 'responsibility' are explicitly present in many of the earliest surviving Greek texts (the Homeric epics). In those texts, both human and superhuman agents are often regarded as fair targets of praise and blame on the basis of how they have behaved. Sometimes an agent's behaviour is excused because of the presence of some factor that has undermined his control.[4]

Reflection on these factors gave rise to fatalism, the view that one's future, or some aspect of it, is predetermined in such a way as to make one's particular deliberations, choices and actions irrelevant to whether that particular future is realized or not. If some particular outcome is destined, then it seems that the agent concerned could not be morally responsible for that outcome. Likewise, if fatalism is accepted with respect to all human futures, then it would seem that no human agent could be morally responsible for anything. Though this form of fatalism has sometimes exerted significant historical influence, most philosophers have rejected it on the grounds that there is no good reason to think that our futures are destined in the sense that they will unfold no matter what particular deliberations we engage in, choices we make, or actions we perform.[5]

Among the authors of the earliest surviving Greek texts, Aristotle seems to have been the first to construct an explicit theory of moral responsibility. In the course of his discussion of the human virtues and their corresponding vices in the Nicomachean Ethics, Aristotle explores their foundations.[6] For Aristotle, only a certain kind of agent qualifies as a moral agent and is thus properly subject to ascriptions of responsibility, namely, one who possesses a capacity for decision.

This consideration remains essential even today: in the contemporary Anglo-American legal tradition, one of the requirements for criminal punishment is a showing that the accused meets a test of being able to act responsibly. Failure to meet that requirement is a possible defense to a criminal prosecution. There are two basic components to the test: a cognitive requirement and a volitional requirement. The cognitive component focuses on whether the offender had the capacity to understand the wrongful and/or unlawful nature of the criminal act. The volitional component asks whether or not the offender had the ability to control whether he committed the criminal act.

Generally, only people suffering from extreme and obvious deficits are able successfully to invoke the defense, and often not even then.[7] This concept of personal responsibility owes much to Enlightenment individualism, but we should remember that it was a late development of our legal system, and that it remains unpopular in many parts of the world today. It is important to keep in mind that the intuitive psychological picture of human action that we possess is itself a product of that Enlightenment.

Neuroscientific studies of decision-making and impulse control have major implications for the legal system. The relevant topics vary, and include the prediction of behavior, neuropsychiatric instruments that can be used to help in skills determinations, improvements in lie detection, and even the detection of brain death. Being able to enhance specific skills may raise the possibility of mandated enhancement, such as requiring people to take an antidepressant drug to make them less angry or irritable. Electrode stimulation of the medio-frontal part of the cortex can temporarily modify the behavior of a macaque monkey,[8][9] and we can imagine procedures to modify the brain in order to treat addictions.[10]

In many respects, the potential for discrimination based on neuroscientific tests and procedures raises serious issues regarding the exceptional treatment of individuals. Questions of privacy and confidentiality are also problematic, given the extensive information gathered in a single imaging procedure. Even if such information is admissible, are there other reasons for a court not to use the information that future neuroscientific findings may be able to provide? Should a court allow testimony that a person has a superior memory or inferior activity in the brain regions responsible for control? Should we introduce the possibility of refusing neuroscientific tests?

If people's actions are caused by factors for which they are not responsible, how can they be held responsible for actions that occur as a result of those factors? Neurophysiological experiments show that before a subject is even consciously aware of a decision to perform an act, the brain is already active. The brain, as a physical organ, initiates the ongoing action before the subject is conscious and aware of it.[11] So then, can there be free choice in a deterministic scientific world of explanation?

When a violent act occurs, the quest is not simply to understand it as an activity of neurons but to assess responsibility. However, the point at which the ability to inhibit an act becomes impaired is not clear. Of course, social rules are not based on neuroscientific findings, and responsibility is a social construct. But recent neuroscientific findings raise issues for the legal system that cannot be ignored. Old and recent discussions of these issues leave the impression of three disparate approaches, each with its own conceptions and projections. First, at a philosophical level, the debates about 'free will' and its relation to determinism have not been fully resolved. Secondly, the law, and more generally the entire legal system, assesses the responsibility of people as intentional agents governed by reason and goals. Finally, neuroscience, in its extraordinary development, now permits us to raise questions about the functioning of the brain and its mysterious relationship to the mind.

The Case of Mr. Puppet

In a recent paper, Greene and Cohen (2004) used the case of a certain 'Mr. Puppet' to illustrate the profound implications of neuroscience for responsibility. 'Mr. Puppet' is a criminal designed by a group of scientists through tight genetic and environmental control.[12] Having been arrested, Mr. Puppet is to be tried for his unacceptable social behavior. The leader of the group of scientists is called to the stand by the defense, and here is what Greene and Cohen had him say:

     It is very simple, really. I designed him. I carefully
     selected every gene in his body and carefully scripted
     every significant event in his life so that he would become
     precisely what he is today. I selected his mother knowing
     that she would let him cry for hours and hours before
     picking him up. I carefully selected each of his relatives,
     teachers, friends, enemies, etc., and told them exactly what
     to say to him and how to treat him. Things generally went as
     planned, but not always. For example, the angry letters
     written to his dead father were not supposed to appear
     until he was fourteen, but by the end of his thirteenth
     year he had already written four of them. In retrospect I
     think this was because of a handful of substitutions I made
     to his eighth chromosome. At any rate, my plans for him
     succeeded, as they have for 95% of the people I've
     designed. I assure you that the accused deserves none of
     the credit.

Could a change in a chromosome determine the timing of a nasty letter? Nothing in the genome contains all the information that would specify any particular action. The fact is that, even though such genetic and environmental control is impossible to fully apprehend and achieve, Greene and Cohen's example illustrates how difficult it is to consider Mr. Puppet responsible for his actions. Because those 'forces beyond his control played a dominant role in causing him to commit the crimes, it is hard to think of him as anything more than a pawn.'

Law and liberty

As illustrated in the preceding passage from Greene and Cohen, the notion of free will implies an ability to exert control based on volition. Neuroscience will not change this fact. Over the course of history, in order to address this problem, a distinction has been made between responsibility understood as attributability and responsibility understood as accountability.

The central idea is that in order to judge whether or not an agent is responsible for an action in the sense of attributability, one must examine whether that particular action reveals important information about the nature of the agent (Watson, 1996).[13] In other words, to regard an agent as responsible in the attributability sense is simply to believe that the merit or fault identified properly belongs to the agent. On the other hand, while responsibility can be shared among a group of subjects, accountability cannot; the idea being that there is no 'shared accountability'. Discussion of the place and role of the reactive attitudes in human life continues to be a central theme in accounts of the concept of responsibility. What is certain is that neuroscience's engagement with the concept of moral responsibility and its application will be an essential element in the future of our society.

Concluding discussion

The refinement and development of experimental techniques in the neurosciences now permits, without too much difficulty, observation of the level of activity in clearly defined regions of the human or animal brain during the performance of various tasks. By recording the activity of single neurons, scalp potentials or variations in blood flow, one can today, quite literally, observe one's own brain thinking.

At the level of the cells, we are familiar today with the nature of the nerve impulse and the various electrical phenomena which confer upon the neuron its properties of excitability and its information-processing capabilities. At the molecular level, in addition to identifying the structure of the channels, neurotransmitters and their receptors, the identification of an abundance of molecules which interact in cascades enables us to understand the functional and molecular substrates of fundamental phenomena such as pleasure, suffering, dependence, memory and the formation of cognitive maps in the brain.

Genetics has also been particularly fertile in the area of the neurosciences, revealing families of genes for the enzymes, receptors and linking proteins which support the different neuronal functions. The discovery of regulatory genes, which are responsible for the development of the brain in response to the influences of its environment, holds out considerable prospects for understanding the phylogenesis of the brain.

Should we consider the neurosciences a poisoned chalice through which the worst forms of ideology may be expressed? History suggests that even if the temptation to use scientific discoveries for atrocious ends may arise at a particular time, such uses never become part of the foundations of human society. The purpose of this article has been to examine the complex relationship that neuroscience has with ethics and law, and to present with optimism the wonderful but critical challenge that our society will have to face in resolving how ethics and law should influence neuroscience.

As discussed by Greene and Cohen (2004; see also e.g. Churchland, 1981; Bisiach, 1988), in the next decades our society will undoubtedly have to reconcile the discoveries of neuroscience, and the determinism of brain function, with current conceptions and interpretations of the law and the notion of responsibility.[14][15] It is also important to underline that, at the same time, reactions from our society may force our legal system to adapt in order to restrict what is considered acceptable as an experimental study in neuroscience. Already today, an increasing number of regulations and laws influence the way research in the neurosciences is carried out.

Once again, although many people believe that, in principle, human behavior is the physical result of a causally determined chain of biophysical events, most people put that aside when making moral judgments or casting their vote.

Footnotes

1. Bacon, R. (1265). Opus Majus. Translated into English by Robert Belle Burke (1928).

2. Harvey, W. (1628). Exercitatio Anatomica de Motu Cordis et Sanguinis in Animalibus. Translated by Robert Willis (1993). New York: P.F. Collier & Son Company.

3. Rupke, N.A. (ed.) (1987). Vivisection in Historical Perspective. Beckenham: Croom Helm.

4. Homer. The Odyssey, translated by Edward McCrorie (2004). Baltimore: Johns Hopkins University Press.

5. Sartre, J.-P. (1948). Existentialism Is a Humanism. Yale University Press, 1953.

6. Aristotle. Nicomachean Ethics. Translated by W.D. Ross. eBooks@Adelaide, 2006.

7. Lewis DO, Pincus JH, Feldman M, Jackson L, Bard B. Psychiatric, neurological, and psychoeducational characteristics of 15 death row inmates in the United States. Am J Psychiatry 1986;143:838-45.

8. Histed MH, Miller EK (2006). Microstimulation of frontal cortex can reorder a remembered spatial sequence. PLoS Biol. May; 4(5):e134.

9. Stuphorn V., Schall JD. Executive control of countermanding saccades by the supplementary eye field. Nat Neurosci. 2006 Jul;9(7):925-31.

10. Haber SN, Kim KS, Mailly P, Calzavara R. (2006). Reward-related cortical inputs define a large striatal region in primates that interface with associative cortical connections, providing a substrate for incentive-based learning. J Neurosci. Aug 9;26(32):8368-76.

11. Libet, B., Gleason, C. A., Wright, E. W., and Pearl, D. K. (1983). Time of conscious intention to act in relation to onset of cerebral activity (readiness-potential). The unconscious initiation of a freely voluntary act. Brain, 106:623-642.

12. Greene, J. D., Cohen J. D. (2004) For the law, neuroscience changes nothing and everything. Philosophical Transactions of the Royal Society of London B, 359, 1775-1785.

13. Watson, Gary, 1996, 'Two Faces of Responsibility.' Philosophical Topics 24: 227-248.

14. Bisiach, E. (1988). The (haunted) brain and consciousness. In (A. Marcel and E. Bisiach, eds) Consciousness in Contemporary Science. Oxford University Press.

15. Churchland, P. S. (1981). On the alleged backwards referral of experiences and its relevance to the mind-body problem. Philosophy of Science, 48:165-181.

(c) Pierre Pouget 2008

E-mail: pierre.pouget@vanderbilt.edu

Center for Integrative & Cognitive Neuroscience, Vanderbilt Vision Research Center, Department of Psychology, Vanderbilt University, Nashville, TN 37203

-=-

III. 'SOME REMARKS ON THE NATURE OF PHILOSOPHY' BY HUBERTUS FREMEREY

In this essay I comment on some inherent limitations of philosophy. It could be questioned whether this is philosophy at all. But thinking about the inherent limits of philosophy is meta-philosophy. I did not need Gödel to know that you cannot criticize a theory from within. If you are a true Marxist, you cannot concede that there is a meta-theory to Marxism, since in your Marxist world Marxism is the highest form of theory possible. Thus from a Marxist point of view, anyone claiming to be a 'meta-Marxist' has to be wrong. And in the opinion of the analytical philosopher there cannot be such a thing as 'meta-analytical' philosophy, since to be analytic is by definition the highest form of philosophical thinking. But we all know that this is wrong, because analytical philosophy misses many problems by denying their existence. As a result analytical thinking becomes blinding dogmatism. When you simply define what is real, then if something does not fit your criteria of 'reality', it cannot be real. What did Hamlet answer when Polonius asked him what he was reading? 'Words, words, words.'

The point of my essay is just this: when you define what philosophy is, the most important philosophy may simply evade you (since according to your predefined standards it is no philosophy at all), and then you may be in for a shock, because 'that which is no philosophy at all' has turned out to be a more vital and more effective philosophy than your own. Compare it to art: for many lovers of art, the painting of Cezanne and Gauguin and Picasso some hundred years ago was 'no art at all'. And surely not what we call 'primitive art' today, or pop-art or op-art or abstract art and so on. But our concept of art has changed. In the same way our concept of philosophy has changed. This has come not only from the confrontation with Hindu or Buddhist or Chinese or African philosophy, but just as much from the confrontation with phenomenology, hermeneutics, structuralism, feminism, language analysis and so on.

Kant saw certain limitations in philosophy, Hegel, Husserl, Heidegger, Wittgenstein and others saw different limitations, and this is the way philosophy proceeds: Not only by solving logical problems, but by expanding the limits (= definitions) of philosophy and seeing problems from new perspectives and in a different light, which has nothing to do with logical or methodological solutions of problems, but with a change of awareness. Problems are not just there to be solved. Problems come and go, depending on light and perspective and our understanding.

Kant was not the end of philosophy, neither was Hegel, and not even Heidegger or Derrida or Wittgenstein have defined the limits of philosophy. They all did what Socrates did: Instead of solving problems, they expanded our awareness of what philosophical problems can be. No analytical philosophy will ever tell you where philosophy ends. If you think otherwise you have a restricted idea of what philosophy is in the same way as the critics of Picasso had a restricted idea of what art is.

In this time of globalization, we see a new and rising interest in what is called 'intercultural philosophy'. So I take my illustrative example from a note on Indian philosophy. From the Wikipedia article on Indian philosophy (http:---) I take the following:

     Chatterjee and Datta give this definition, explaining that
     a cornerstone of Indian philosophy is a tradition of
     respect for multiple views:
    
     'Indian philosophy denotes the philosophical speculations
     of all Indian thinkers, ancient or modern, Hindus or
     non-Hindus, theists or atheists... Indian philosophy is
     marked... by a striking breadth of outlook which only
     testifies to its unflinching devotion to the search for
     truth. Though there were many different schools and their
     views differed sometimes very widely, yet each school took
     care to learn the views of all the others and did not come
     to any conclusions before considering thoroughly what
     others had to say and how their points could be met... If
     the openness of mind -- the willingness to listen to what
     others have to say -- has been one of the chief causes of
     the wealth and greatness of Indian philosophy in the past,
     it has a definite moral for the future.'[1]
    
     [1] Chatterjee, Satischandra; Datta, Dhirendramohan (1984).
     An Introduction to Indian Philosophy, Eighth Reprint
     Edition, Calcutta: University of Calcutta.

But, as is well known, Indian philosophy in all its breadth of understanding did not arrive at modern 'Western' rational science. And why not? Because there was no felt need to even ask for it.

This may sound strange. But think again: all children in the world will at some time ask their parents: 'Mom, Dad, what is the moon?' But the parents might answer, quite naturally: 'We don't know. God made it, he will know, and that suffices.' Indeed, for some, it does. Because how should we know what the moon is -- and why? To find out has been a very difficult task, and the driving force behind this achievement has not been the moon itself but the invention of the telescope in the times of Galileo and Kepler around 1600. If you have a telescope you will see many things you did not see before. But why should you build a telescope? Perhaps for navigation on seagoing ships? Or for observing the stars as an astrologer? Surely not for observing the moon in the first place.

And in the same way we may ask: why should the Hindus or the Chinese or the inhabitants of Africa have been interested in studying nature at all? Of course they all knew much about the plants and animals around them from observation and experience. But this is not methodical science. Only the Greeks tried to find out about nature from a strange sort of curiosity. They wanted to live in a world that was rational, consistent and explained. This is quite uncommon and not at all natural. The Jews never even tried to develop a natural science or the mathematics needed to support it. In the Jewish view, God would care for his creation, while man should care about his relation to God. This is a natural attitude. Neither Jesus nor the Buddha nor Confucius was interested in the natural sciences. Not even Socrates was. They all said that natural science is not needed to become a good human and to improve mutual understanding among humans. So why bother?

This is the main explanation of the seemingly strange fact that in all of Asia and Africa nobody became really interested in doing natural science in any methodically strict way. Science starts with 'what is nature?' and 'how do we find out?' Thus you should not be interested in the moon, you should be interested in the nature of nature. And you should not speculate like the astrologer and the alchemist. Instead you should observe and do experiments, and consider your observations and experiments critically. This is what the Greeks and later the 'Occidentals' or 'Franks' did. If you are interested in the study of nature, you will eventually find out about the moon, but the moon itself is of no help when embarking on such a grand endeavour.

In contrast, the Indians and the Chinese engaged in speculation and magical thinking. You can see the outcome in so many of today's 'kung fu' and 'mystical' movies (for instance 'Tiger and Dragon', see http:---). The concept of nature in these movies (and there are many of this sort) is a magical 'Daoist' one, not a scientific one. True natural science is a Western invention and had to be imported into all of Asia and Africa. (See, for instance, Joseph Needham, the great scholar of Chinese science, on this: http:--- and http:---.)

But this has absolutely nothing to do with any lack of brains in Asia or Africa. There simply was no felt need to study nature methodically, since doing so did not bear on the only questions that mattered: 'How do we become good humans, and how do we build a stable and well governed society?' Why, then, should we care about the true nature of the moon?

Lest I be looked upon as a 'racist Eurocentric' here, let me once more make the point very clear: if you dismiss mathematics as irrelevant and not worth studying, then even if you are a mathematical genius you will not arrive at great results in mathematics. And if you dismiss the study of nature as of no help in understanding and improving man and society, you will not become knowledgeable about the nature of nature, even if you are very bright and could easily have achieved great results in studying it. It is not a matter of intelligence whether you become a gardener or a technician. It's a matter of choice. But if you choose to become a gardener, you will not end up as a famous maker of cars and airplanes. You cannot expect to be good at what you are not interested in.

Only by a strange and improbable coincidence of very special conditions did Newton stumble upon his law of gravitation of 1687. He himself acknowledged that he would perhaps not have arrived at his results without the work of the mystic Behmen, who was a simple shoemaker and ignorant of mathematics. But once more: to get at the law of gravitation, you first have to be interested in it. What Newton wanted to achieve was not 'industrial society' (which was completely out of sight for him and his time) but a demonstration of the wisdom of God. He was interested in the wisdom of God -- in theo-sophy -- and so was Behmen. Thus modern natural science was derived from theology and theosophy.

But instead of speculating about the nature of God, Kepler, Galileo and Newton were all mathematicians and observers of nature. This explains why in Asia and Africa, and even in the eastern part of Europe, there was no Kepler, Galileo or Newton. What was lacking was an interest in the methodical observation of nature, and a culture of mathematics, which made the calculations of Kepler and Newton possible.

Thus a telescope, an interest in the methodical observation of nature, and a knowledge of advanced mathematics had to come together to start modern science. All three were more or less lacking in the Orient, and this explains why modern natural science could not start there: it had no cultural base from which to begin.

Seen in this light, even if 'the openness of mind... has been one of the chief causes of the wealth and greatness of Indian philosophy in the past', that philosophy did not arrive at modern thinking and very probably never would have, since before you can find out about nature you first need a reason and a motivation to find out. The Greeks had such motivation and reason, and the scientists of the later Western Renaissance the same, though of another sort. The Greeks expected the world to be rationally understandable, and the Western Christians wanted to see God's wisdom incorporated in his creation. In both cases the driving force was a metaphysical assumption. So the main difference between Occidental and Oriental thinking on this question was a difference of metaphysics.

This is an aside on the current contempt for metaphysics in Western analytical philosophy.

Not to be misunderstood: what I am fighting is the naive idea that it could suffice to sit down and think a bit and thereby become wise and all-knowing. To become knowledgeable about electrodynamics you cannot sit down and study the Upanishads or the wisdom of Lao Tse or Confucius. Nor can you lock yourself away in a Western university, merely looking at books. You have to do experiments and you have to do mathematics in the way Faraday and Maxwell did. You have to change not your books but your attitude towards reality.

And what about the modern liberal state and human and civil rights? Did they grow in India or in China or in Africa? No! They were born in the English and French Revolutions of 1649 and 1789, and in the American Declaration of Independence of 1776. India and China, with all their openness to new ideas, were in fact closed to any new ideas that did not fit their fundamental assumptions. And since the ideas of freedom and progress and the 'common wealth and happinesse' are metaphysical ideas, this is once more a comment on the current contempt for metaphysics.

One could speak of mere 'cultural differences', but 'cultural differences' sounds too much like 'costumes and customs'. I am speaking here of different approaches to reality, which is metaphysics. Cf. the well known 'Athens versus Jerusalem' theme.

I think we could afford some second thoughts on the state of philosophy and on its relation to the world we live in today. We should try to see the whole picture again and not be content with solving this or that analytical problem, as valuable as this may be. The destination of mankind is not an analytical but a metaphysical problem of the first order. I even expect some valuable contributions from the Asian and African traditions. But to become valuable counselors the philosophers of the East and of Africa have to understand 'the modern condition' first.

Modern man lives in a dynamic world of rapid changes and technical adaptations, not in the quasi-static world of 'the ways of our ancestors'. This is what modern philosophy has to come to terms with, from whatever region of the world it may originate. All else would be 'seeking your lost Western soul in Asia and Africa', as in the days of the 'Dharma bums' of the 1970s and of the 'New Age' movement thereafter. But the task of philosophy is not that of psychotherapy, even though both ask for truth and clarification. Instead we have to clarify the true meaning of philosophy again, which is to ask for reason in a maddening world.

(c) Hubertus Fremerey 2008

E-mail: hubertus@fremerey.net
