In Encyclopædia Britannica, Chicago, 1985, pp. 627-648
Ethics, also called moral philosophy, is the discipline concerned with what is morally good and bad, right and wrong. The term is also applied to any system or theory of moral values or principles.
How should we live? Shall we aim at happiness or at knowledge, virtue, or the creation of beautiful objects? If we choose happiness, will it be our own or the happiness of all? And what of the more particular questions that face us: Is it right to be dishonest in a good cause? Can we justify living in opulence while elsewhere in the world people are starving? If conscripted to fight in a war we do not support, should we disobey the law? What are our obligations to the other creatures with whom we share this planet and to the generations of humans who will come after us?
Ethics deals with such questions at all levels. Its subject consists of the fundamental issues of practical decision making, and its major concerns include the nature of ultimate value and the standards by which human actions can be judged right or wrong.
The terms ethics and morality are closely related. We now often refer to ethical judgments or ethical principles where it once would have been more common to speak of moral judgments or moral principles. These applications are an extension of the meaning of ethics. Strictly speaking, however, the term refers not to morality itself but to the field of study, or branch of inquiry, that has morality as its subject matter. In this sense, ethics is equivalent to moral philosophy.
Although ethics has always been viewed as a branch of philosophy, its all-embracing practical nature links it with many other areas of study, including anthropology, biology, economics, history, politics, sociology, and theology. Yet, ethics remains distinct from such disciplines because it is not a matter of factual knowledge in the way that the sciences and other branches of inquiry are. Rather, it has to do with determining the nature of normative theories and applying these sets of principles to practical moral problems.
The origins of ethics
When did ethics begin and how did it originate? If we are referring to ethics proper—i.e., the systematic study of what we ought to do—it is clear that ethics can only have come into existence when human beings started to reflect on the best way to live. This reflective stage emerged long after human societies had developed some kind of morality, usually in the form of customary standards of right and wrong conduct. The process of reflection tended to arise from such customs, even if in the end it may have found them wanting. Accordingly, ethics proper began not with the introduction of the first moral codes but with the first attempts to reflect upon and question them.
Virtually every human society has some form of myth to explain the origin of morality. In the Louvre in Paris there is a black Babylonian column with a relief showing the sun god Shamash presenting the code of laws to Hammurabi. The Old Testament account of God giving the Ten Commandments to Moses on Mt. Sinai might be considered another example. In Plato's Protagoras there is an avowedly mythical account of how Zeus took pity on the hapless humans, who, living in small groups and with inadequate teeth, weak claws, and lack of speed, were no match for the other beasts. To make up for these deficiencies, Zeus gave humans a moral sense and the capacity for law and justice, so that they could live in larger communities and cooperate with one another.
That morality should be invested with all the mystery and power of divine origin is not surprising. Nothing else could provide such strong reasons for accepting the moral law. By attributing a divine origin to morality, the priesthood became its interpreter and guardian, and thereby secured for itself a power that it would not readily relinquish. This link between morality and religion has been so firmly forged that it is still sometimes asserted that there can be no morality without religion. According to this view, ethics ceases to be an independent field of study. It becomes, instead, moral theology.
There is some difficulty, already known to Plato, with the view that morality was created by a divine power. In his dialogue Euthyphro, Plato considered the suggestion that it is divine approval that makes an action good. Plato pointed out that if this were the case, we could not say that the gods approve of the actions because the actions are good. Why then do the gods approve of these actions rather than others? Is their approval entirely arbitrary? Plato considered this impossible and so held that there must be some standards of right or wrong that are independent of the likes and dislikes of the gods. Modern philosophers have generally accepted Plato's argument because the alternative implies that if the gods had happened to approve of torturing children and to disapprove of helping one's neighbours, then torture would have been good and neighbourliness bad.
Problems of divine origin
A modern theist might say that since God is good, he could not possibly approve of torturing children nor disapprove of helping neighbours. In saying this, however, the theist would have tacitly admitted that there is a standard of goodness that is independent of God. Without an independent standard, it would be pointless to say that God is good; this could only mean that God is approved of by God. It seems therefore that, even for those who believe in the existence of God, it is impossible to give a satisfactory account of the origin of morality in terms of a divine creation. We need a different account.
There are other possible connections between religion and morality. It has been said that even if good and evil exist independently of God or the gods, only divine revelation can reliably inform us about good and evil. An obvious problem with this view is that those who receive divine revelations, or who consider themselves qualified to interpret them, do not always agree on what is good and what is evil. Without an accepted criterion for the authenticity of a revelation or an interpretation, we are no better off, so far as reaching moral agreement is concerned, than we would be if we were to decide on good and evil ourselves with no assistance from religion.
Traditionally, a more important link between religion and ethics was that religious teachings were thought to provide a reason for doing what is right. In its crudest form, the reason was that those who obey the moral law will be rewarded by an eternity of bliss while everyone else roasts in hell. In more sophisticated versions, the motivation provided by religion was less blatantly self-seeking and more of an inspirational kind. Whether in its crude or sophisticated version, or something in between, religion does provide an answer to one of the great questions of ethics: Why should I do what is right? As will be seen in the course of this article, however, the answer provided by religion is by no means the only answer. It will be considered after the alternatives have been examined.
Can we do better than the religious accounts of the origin of morality? Because, for obvious reasons, we have no historical record of a human society in the period before it had any standards of right and wrong, history cannot tell us the origins of morality. Nor can anthropology assist, because all the human societies that have been studied already had their own form of morality (except perhaps during the most extreme circumstances). Fortunately there is another mode of inquiry open to us. Human beings are social animals. Living in a social group is a characteristic we share with many other animal species, including our closest relatives, the apes. Presumably, the common ancestor of humans and apes also lived in a social group, so that we were social beings before we were human beings. Here, then, in the social behaviour of nonhuman animals and in the evolutionary theory that explains such behaviour, we may find the origins of human morality.
Social life, even for nonhuman animals, requires constraints on behaviour. No group can stay together if its members make frequent, no-holds-barred attacks on one another. Social animals either refrain altogether from attacking other members of the social group, or, if an attack does take place, the ensuing struggle does not become a fight to the death—it is over when the weaker animal shows submissive behaviour. It is not difficult to see analogies here with human moral codes. The parallels, however, go much further than this. Like humans, social animals may behave in ways that benefit other members of the group at some cost or risk to themselves. Male baboons threaten predators and cover the rear as the troop retreats. Wolves and wild dogs bring meat back to members of the pack not present at the kill. Gibbons and chimpanzees with food will, in response to a gesture, share their food with others of the group. Dolphins support sick or injured animals, swimming under them for hours at a time and pushing them to the surface so they can breathe.
It may be thought that the existence of such apparently altruistic behaviour is odd, for evolutionary theory states that those who do not struggle to survive and reproduce will be wiped out in the ruthless competition known as natural selection. Research in evolutionary theory applied to social behaviour, however, has shown that evolution need not be quite so ruthless after all. Some of this altruistic behaviour is explained by kin selection. The most obvious examples are those in which parents make sacrifices for their offspring. If wolves help their cubs to survive, it is more likely that genetic characteristics, including the characteristic of helping their own cubs, will spread through further generations of wolves.
Kinship and reciprocity
Less obviously, the principle also holds for assistance to other close relatives, even if they are not descendants. A child shares 50 percent of the genes of each of its parents, but full siblings too, on the average, have 50 percent of their genes in common. Thus a tendency to sacrifice one's life for two or more of one's siblings could spread from one generation to the next. Between cousins, where only 12 1/2 percent of the genes are shared, the sacrifice-to-benefit ratio would have to be correspondingly increased.
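The arithmetic behind this passage is what evolutionary biologists formalize as Hamilton's rule: an altruistic trait can spread when the benefit to the recipient, weighted by genetic relatedness, exceeds the cost to the altruist (rB > C). The sketch below illustrates this with the relatedness figures given in the text; the particular cost and benefit numbers are illustrative assumptions, not data from the article.

```python
# A minimal illustration of Hamilton's rule (r * B > C), the standard
# formalization of the kin-selection arithmetic described above.
# Relatedness coefficients come from the text; the cost/benefit
# figures are illustrative assumptions.

def altruism_can_spread(relatedness, benefit, cost):
    """An altruistic trait is favoured by selection when the benefit
    to the recipient, discounted by relatedness, exceeds the cost
    borne by the altruist."""
    return relatedness * benefit > cost

SIBLING = 0.5    # full siblings share 50 percent of their genes on average
COUSIN = 0.125   # first cousins share 12.5 percent

# Giving up one life (cost = 1) to save three siblings (benefit = 3):
print(altruism_can_spread(SIBLING, benefit=3, cost=1))   # 0.5 * 3 > 1 -> True
# Saving exactly two siblings is only break-even (0.5 * 2 == 1, not >).
# For cousins the same sacrifice needs a correspondingly larger payoff:
print(altruism_can_spread(COUSIN, benefit=3, cost=1))    # 0.125 * 3 < 1 -> False
print(altruism_can_spread(COUSIN, benefit=9, cost=1))    # 0.125 * 9 > 1 -> True
```

This is why the text says the sacrifice-to-benefit ratio must be "correspondingly increased" for cousins: at one-eighth relatedness, the benefit must exceed eight times the cost.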
When apparent altruism is not between kin, it may be based on reciprocity. A monkey will present its back to another monkey, who will pick out parasites; after a time the roles will be reversed. Reciprocity may also be a factor in food sharing among unrelated animals. Such reciprocity will pay off, in evolutionary terms, as long as the costs of helping are less than the benefits of being helped and as long as animals will not gain in the long run by “cheating”—that is to say, by receiving favours without returning them. It would seem that the best way to ensure that those who cheat do not prosper is for animals to be able to recognize cheats and refuse them the benefits of cooperation the next time around. This is only possible among intelligent animals living in small, stable groups over a long period of time. Evidence supports this conclusion: reciprocal behaviour has been observed in birds and mammals, the clearest cases occurring among wolves, wild dogs, dolphins, monkeys, and apes.
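The cheater-recognition strategy described above can be sketched as a simple rule: cooperate with any partner except one remembered as having taken a favour without returning it. The class and names below are hypothetical, purely for illustration; the point is only that the strategy requires memory of individual partners, which is why it is restricted to intelligent animals in small, stable groups.

```python
# An illustrative sketch (not from the text) of reciprocal altruism
# with cheater recognition: help a partner unless that partner is
# remembered as having accepted help without returning it.

class Reciprocator:
    def __init__(self):
        # Individual recognition: the strategy needs a memory of
        # specific partners, hence small, stable, long-lived groups.
        self.known_cheats = set()

    def will_help(self, partner_id):
        # Refuse the benefits of cooperation to remembered cheats.
        return partner_id not in self.known_cheats

    def record(self, partner_id, returned_favour):
        if not returned_favour:
            self.known_cheats.add(partner_id)

monkey = Reciprocator()
monkey.record("A", returned_favour=True)   # A groomed us in return
monkey.record("B", returned_favour=False)  # B took grooming, gave none back
print(monkey.will_help("A"))  # True
print(monkey.will_help("B"))  # False: cheating does not pay a second time
```

Under this rule, cheating yields at most a one-time gain per partner, so as long as the cost of helping is less than the benefit of being helped, reciprocators do better in the long run than cheats.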
In short, kin altruism and reciprocity do exist, at least in some nonhuman animals living in groups. Could these forms of behaviour be the basis of human ethics? There are good reasons for believing that they could. A surprising proportion of human morality can be derived from the twin bases of concern for kin and reciprocity. Kinship is a source of obligation in every human society. A mother's duty to look after her children seems so obvious that it scarcely needs to be mentioned. The duty of a married man to support and protect his family is almost as widespread. Duties to close relatives take priority over duties to more distant relatives, but in most societies even distant relatives are still treated better than strangers.
If kinship is the most basic and universal tie between human beings, the bond of reciprocity is not far behind. It would be difficult to find a society that did not recognize, at least under some circumstances, an obligation to return favours. In many cultures this is taken to extraordinary lengths, and there are elaborate rituals of gift giving. Often the repayment has to be superior to the original gift, and this escalation can reach such extremes as to threaten the economic security of the donor. The huge “potlatch” feasts of certain American Indian tribes are a well-known example of this type of situation. Many Melanesian societies also place great importance on giving and receiving very substantial amounts of valuable items.
Many features of human morality could have grown out of simple reciprocal practices such as the mutual removal of parasites from awkward places. Suppose I want to have the lice in my hair picked out and I am willing in return to remove lice from someone else's hair. I must, however, choose my partner carefully. If I help everyone indiscriminately, I will find myself delousing others without getting my own lice removed. To avoid this, I must learn to distinguish between those who return favours and those who do not. In making this distinction, I am separating reciprocators and nonreciprocators and, in the process, developing crude notions of fairness and of cheating. I will strengthen my links with those who reciprocate, and bonds of friendship and loyalty, with a consequent sense of obligation to assist, will result.
This is not all. The reciprocators are likely to react in a hostile and angry way to those who do not reciprocate. Perhaps they will regard reciprocity as good and “right” and cheating as bad and “wrong.” From here it is a small step to concluding that the worst of the nonreciprocators should be driven out of society or else punished in some way, so that they will not take advantage of others again. Thus a system of punishment and a notion of desert constitute the other side of reciprocal altruism.
Although kinship and reciprocity loom large in human morality, they do not cover the entire field. Typically, there are obligations to other members of the village, tribe, or nation even when these are strangers. There may also be a loyalty to the group as a whole that is distinct from loyalty to individual members of the group. It may be at this point that human culture intervenes. Each society has a clear interest in promoting devotion to the group and can be expected to develop cultural influences that exalt those who make sacrifices for the sake of the group and revile those who put their own interests too far ahead of the interests of the group. More tangible rewards and punishments may supplement the persuasive effect of social opinion. This is simply the start of a process of cultural development of moral codes.
Before considering the cultural variations in human morality and their significance for ethics, let us draw together this discussion of the origins of morality. Since we are dealing with a prehistoric period and morality leaves no fossils, any account of the origins of morality will necessarily remain to some extent speculative. It seems likely that morality is the gradual outgrowth of forms of altruism that exist in some social animals and that are the result of the usual evolutionary processes of natural selection. No myths are required to explain its existence.
Anthropology and ethics
It is commonly believed that there are no ethical universals—i.e., there is so much variation from one culture to another that no single principle or judgment is generally accepted. We have already seen that such is not the case. Of course, there are immense differences in the way in which the broad principles so far discussed are applied. The duty of children to their parents meant one thing in traditional Chinese society and means something quite different in contemporary Anglo-Saxon society. Yet, concern for kin and reciprocity to those who treat us well are considered good in virtually all human societies. Also, all societies have, for obvious reasons, some constraints on killing and wounding other members of the group.
Beyond that common ground, the variations in moral attitudes soon become more striking than the similarities. Man's fascination with such variations goes back a long way. The Greek historian Herodotus relates that Darius, king of Persia, once summoned Greeks before him and asked them how much he would have to pay them to eat their fathers' dead bodies. They refused to do it at any price. Then Darius brought in some Indians who by custom ate the bodies of their parents and asked them what would make them willing to burn their fathers' bodies. The Indians cried out that he should not mention so horrid an act. Herodotus drew the obvious moral: each nation thinks its own customs best.
Variations in morals were not systematically studied until the 19th century, when knowledge of the more remote parts of the globe began to increase. At the beginning of the 20th century, Edward Westermarck published The Origin and Development of the Moral Ideas (1906–08), two large volumes comparing differences among societies in such matters as the wrongness of killing (including killing in warfare, euthanasia, suicide, infanticide, abortion, human sacrifices, and duelling); whose duty it is to support children, the aged, or the poor; the forms of sexual relationship permitted; the status of women; the right to property and what constitutes theft; the holding of slaves; the duty to tell the truth; dietary restrictions; concern for nonhuman animals; duties to the dead; and duties to the gods. Westermarck had no difficulty in demonstrating tremendous diversity in all these issues. More recent, though less comprehensive, studies have confirmed that human societies can and do flourish while holding radically different views about all such matters.
As noted earlier, ethics itself is not primarily concerned with the description of moral systems in different societies. That task, which remains on the level of description, is one for anthropology or sociology. In contrast, ethics deals with the justification of moral principles. Nevertheless, ethics must take note of the variations in moral systems because it has often been claimed that this knowledge shows that morality is simply a matter of what is customary and is always relative to a particular society. According to this view, no ethical principles can be valid except in terms of the society in which they are held. Words such as good and bad just mean, it is claimed, “approved in my society” or “disapproved in my society,” and so to search for an objective, or rationally justifiable, ethic is to search for what is in fact an illusion.
One way of replying to this position would be to stress the fact that there are some features common to virtually all human moralities. It might be thought that these common features must be the universally valid and objective core of morality. This argument would, however, involve a fallacy. If the explanation for the common features is simply that they are advantageous in terms of evolutionary theory, that does not make them right. Evolution is a blind force incapable of conferring a moral imprimatur on human behaviour. It may be a fact that concern for kin is in accord with evolutionary theory, but to say that concern for kin is therefore right would be to attempt to deduce values from facts. As will be seen later, it is not possible to deduce values from facts in this manner. In any case, that something is universally approved does not make it right. If all human societies enslaved any tribe they could conquer, some freethinking moralists might still insist that slavery is wrong. They could not be said to be talking nonsense merely because they had few supporters. Similarly, then, universal support for principles of kinship and reciprocity cannot prove that these principles are in some way objectively justified.
This example illustrates the way in which ethics differs from a descriptive science. From the standpoint of ethics, whether human moral codes closely parallel one another or are extraordinarily diverse, the question of how an individual should act remains open. If you are thinking deeply about what you should do, your uncertainty will not be overcome by being told what your society thinks you should do in the circumstances in which you find yourself. Even if you are told that virtually all other human societies agree, you may choose not to go that way. If you are told that there is great variation among human societies over what people should do in your circumstances, you may wonder whether there can be any objective answer, but your dilemma has still not been resolved. In fact, this diversity does not rule out the possibility of an objective answer either: conceivably, most societies simply got it wrong. This, too, is something that will be taken up later in this article, for the possibility of an objective morality is one of the constant themes of ethics.
The first ethical precepts were certainly passed down by word of mouth by parents and elders, but as societies learned to use the written word, they began to set down their ethical beliefs. These records constitute the first historical evidence of the origins of ethics.
The Middle East
The earliest surviving writings that might be taken as ethics textbooks are a series of lists of precepts to be learned by boys of the ruling class of Egypt, prepared some 3,000 years before the Christian Era. In most cases, they consist of shrewd advice on how to live happily, avoid unnecessary troubles, and advance one's career by cultivating the favour of superiors. There are, however, several passages that recommend more broadly based ideals of conduct, such as the following: Rulers should treat their people justly and judge impartially between their subjects. They should aim to make their people prosperous. Those who have bread are urged to share it with the hungry. Humble and lowly people must be treated with kindness. One should not laugh at the blind or at dwarfs.
Why then should one follow these precepts? Did the ancient Egyptians believe that one should do what is good for its own sake? The precepts frequently state that it will profit a man to act justly, much as we say that “honesty is the best policy.” They also emphasize the importance of having a good name. Since these precepts are intended for the instruction of the ruling classes, however, we have to ask why helping the destitute should have contributed to an individual's good reputation among this class. To some degree the authors of the precepts must have thought that to make people prosperous and happy and to be kind to those who have least is not merely personally advantageous but good in itself.
The precepts are not works of ethics in the philosophical sense. No attempt is made to find any underlying principles of conduct that might provide a more systematic understanding of ethics. Justice, for example, is given a prominent place, but there is no elaboration of the notion of justice nor any discussion of how disagreements about what is just and unjust might be resolved. Furthermore, there is no probing of ethical dilemmas that may occur if the precepts should conflict with one another. The precepts are full of sound observations and practical wisdom, but they do not encourage theoretical speculation.
The same practical bent can be found in other early codes or lists of ethical injunctions. The great codification of Babylonian law by Hammurabi is often said to have been based on the principle of “an eye for an eye, a tooth for a tooth,” as if this were some fundamental principle of justice, elaborated and applied to all cases. In fact, the code reflects no such consistent principle. It frequently prescribes the death penalty for offenses that do not themselves cause death—e.g., for robbery or for accepting bribes. Moreover, even the eye-for-an-eye rule applies only if the eye of the original victim is that of a member of the patrician class; if it is the eye of a commoner, the punishment is a fine of a quantity of silver. Apparently such differences in punishment were not thought to require justification. At any rate, there are no surviving attempts to defend the principles of justice on which the code was based.
The Hebrew people were at different times captives of both the Egyptians and the Babylonians. It is therefore not surprising that the law of ancient Israel, which was put into its definitive form during the Babylonian Exile, shows the influence both of the ancient Egyptian precepts and of the Code of Hammurabi. The book of Exodus refers, for example, to the principle of “life for life, eye for eye, tooth for tooth.” Hebrew law does not differentiate, as the Babylonian law does, between patricians and commoners, but it does stipulate that in several respects foreigners may be treated in ways that it is not permissible to treat fellow Hebrews; for instance, Hebrew slaves, but not others, had to be freed without ransom in the seventh year. Yet, in other respects Israelite law and morality developed the humane concern shown in the Egyptian precepts for the poor and unfortunate: hired servants must be paid promptly, because they rely on their wages to satisfy their pressing needs; slaves must be allowed to rest on the seventh day; widows, orphans, and the blind and deaf must not be wronged, and the poor man should not be refused a loan. There was even a tithe providing for an incipient welfare state. The spirit of this humane concern was summed up by the injunction to “love thy neighbour as thyself,” a sweepingly generous form of the rule of reciprocity.
The famed Ten Commandments are thought to be a legacy of Semitic tribal law, in which important commands were taught one for each finger so that they could more easily be remembered. (Sets of five or 10 laws are common among preliterate societies.) The content of the Hebrew commandments differed from other laws of the region mainly in its emphasis on duties to God. In the more detailed laws laid down elsewhere, this emphasis continued with as much as half the legislation concerned with crimes against God and ceremonial and ritualistic matters, though there may be other explanations for some of these ostensibly religious requirements concerning the avoidance of certain foods and the need for ceremonial cleansings.
In addition to lengthy statements of the law, the surviving literature of ancient Israel includes both proverbs and the books of the prophets. The proverbs, like the precepts of the Egyptians, are brief statements without much concern for systematic presentation or overall coherence. They go further than the Egyptian precepts, however, in urging conduct that is just and upright and pleasing to God. There are correspondingly fewer references to what is needed for a successful career, although it is frequently stated that God rewards the just. In this connection the Book of Job is notable as an exploration of the problem raised for those who accept this motive for obeying the moral law: How are we to explain the fact that the best of people may suffer the worst misfortunes? The book offers no solution beyond faith in God, but the sharpened awareness of the problem it offers may have influenced some to adopt belief in reward and punishment in another realm as the only possible solution.
The literature of the prophets contains a good deal of social and ethical criticism, though more at the level of denunciation than discussion about what goodness really is or why there is so much wrongdoing. The Book of Isaiah is especially notable for its early portrayal of a utopia in which “the desert shall blossom as the rose . . . the wolf also shall dwell with the lamb . . . . They shall not hurt or destroy in all my holy mountain.”
India
Unlike the ethical teaching of ancient Egypt and Babylon, Indian ethics was philosophical from the start. In the oldest of the Indian writings, the Vedas, ethics is an integral aspect of philosophical and religious speculation about the nature of reality. These writings date from about 1500 BC. They have been described as the oldest philosophical literature in the world, and what they say about how people ought to live may therefore be the first philosophical ethics.
The Vedas are, in a sense, hymns, but the gods to which they refer are not persons but manifestations of ultimate truth and reality. In the Vedic philosophy, the basic principle of the universe, the ultimate reality on which the cosmos exists, is the principle of Ritam, which is the word from which the Western notion of right is derived. There is thus a belief in a right moral order somehow built into the universe itself. Hence, truth and right are linked; to penetrate through illusion and understand the ultimate truth of human existence is to understand what is right. To be an enlightened one is to know what is real and to live rightly, for these are not two separate things but one and the same.
The ethic that is thus traced to the very essence of the universe is not without its detailed practical applications. These were based on four ideals, or proper goals, of life: prosperity, the satisfaction of desires, moral duty, and spiritual perfection—i.e., liberation from a finite existence. From these ends follow certain virtues: honesty, rectitude, charity, nonviolence, modesty, and purity of heart. To be condemned, on the other hand, are falsehood, egoism, cruelty, adultery, theft, and injury to living things. Because the eternal moral law is part of the universe, to do what is praiseworthy is to act in harmony with the universe and accordingly will receive its proper reward; conversely, once the true nature of the self is understood, it becomes apparent that those who do what is wrong are acting self-destructively.
The basic principles underwent considerable modification over the ensuing centuries, especially in the Upanisads, a body of philosophical literature dating from 800 BC. The Indian caste system, with its intricate laws about what members of each caste may or may not do, is accepted by the Upanisads as part of the proper order of the universe. Ethics itself, however, is not regarded as a matter of conformity to laws. Instead, the desire to be ethical is an inner desire. It is part of the quest for spiritual perfection, which in turn is elevated to the highest of the four goals of life.
During the following centuries the ethical philosophy of this early period gradually became a rigid and dogmatic system that provoked several reactions. One, which is uncharacteristic of Indian thought in general, was the Carvaka, or materialist school, which mocked religious ceremonies, saying that they were invented by the Brahmans (the priestly caste) to ensure their livelihood. When the Brahmans defended animal sacrifices by claiming that the sacrificed beast goes straight to heaven, the members of the Carvaka asked why the Brahmans did not kill their aged parents to hasten their arrival in heaven. Against the postulation of an eventual spiritual liberation, Carvaka ethics urged each individual to seek his or her pleasure here and now.
Jainism, another reaction to the traditional Vedic outlook, went in exactly the opposite direction. The Jaina philosophy is based on spiritual liberation as the highest of all goals and nonviolence as the means to it. In true philosophical manner, the Jainas found in the principle of nonviolence a guide to all morality. First, apart from its obvious application in prohibiting violent acts against other humans, nonviolence is extended to all living things. The Jainas are vegetarian. They are often ridiculed by Westerners for the care they take to avoid injuring insects or other living things while walking or drinking water that may contain minute organisms; it is less well known that Jainas began to care for sick and injured animals thousands of years before animal shelters were thought of in Europe. The Jainas do not draw the distinction usually made in Western ethics between their responsibility for what they do and their responsibility for what they omit to do. Omitting to care for an injured animal would also be in their view a form of violence.
Other moral duties are also derived from the notion of nonviolence. To tell someone a lie, for example, is regarded as inflicting a mental injury on that person. Stealing, of course, is another form of injury, but because of the absence of a distinction between acts and omissions, even the possession of wealth is seen as depriving the poor and hungry of the means to satisfy their wants. Thus nonviolence leads to a principle of nonpossession of property. Jaina priests were expected to be strict ascetics and to avoid sexual intercourse. Ordinary Jainas, however, followed a slightly less severe code, which was intended to give effect to the major forms of nonviolence while still being compatible with a normal life.
The other great ethical system to develop as a reaction to the ossified form of the old Vedic philosophy was Buddhism. The person who became known as the Buddha, which means the “enlightened one,” was born about 563 BC, the son of a king. Until he was 29 years old, he lived the sheltered life of a typical prince, with every luxury he could desire. At that time, legend has it, he was jolted out of his idleness by the “Four Signs”: he saw in rapid succession a very feeble old man, a hideous leper, a funeral, and a venerable ascetic monk. He began to think about old age, disease, and death, and decided to follow the way of the monk. For six years he led an ascetic life of renunciation, but finally, while meditating under a tree, he concluded that the solution was not withdrawal from the world, but rather a practical life of compassion for all.
Buddhism is often thought to be a religion, and indeed over the centuries it has adopted in many places the trappings of religion. This is an irony of history, however, because the Buddha himself was a strong critic of religion. He rejected the authority of the Vedas and refused to set up any alternative creed. He saw religious ceremonies as a waste of time and theological beliefs as mere superstition. He refused to discuss abstract metaphysical problems such as the immortality of the soul. The Buddha told his followers to think for themselves and take responsibility for their own future. In place of religious beliefs and religious ceremonies, the Buddha advocated a life devoted to universal compassion and brotherhood. Through such a life one might reach the ultimate goal, Nirvana, a state in which all living things are free from pain and sorrow. There are similarities between this ethic of universal compassion and the ethics of the Jainas. Nevertheless, the Buddha was the first historical figure to develop such a boundless ethic.
In keeping with his own previous experience, the Buddha proposed a “middle path” between self-indulgence and self-renunciation. In fact, it is not so much a path between these two extremes as one that draws together the benefits of both. Through living a life of compassion and love for all, a person achieves the liberation from selfish cravings sought by the ascetic and a serenity and satisfaction that are more fulfilling than anything obtained by indulgence in pleasure.
It is sometimes thought that because the Buddhist goal is Nirvana, a state of freedom from pain and sorrow that can be reached by meditation, Buddhism teaches a withdrawal from the real world. Nirvana, however, is not to be sought for oneself alone; it is regarded as a unity of the individual self with the universal self in which all things take part. In the Mahayana school of Buddhism, the aspirant for Enlightenment even takes a vow not to accept final release until everything that exists in the universe has attained Nirvana.
The Buddha lived and taught in India, and so Buddhism is properly classified as an Indian ethical philosophy. Yet, Buddhism did not take hold in the land of its origin. Instead, it spread in different forms south into Sri Lanka and Southeast Asia, and north through Tibet to China, Korea, and Japan. In the process, Buddhism suffered the same fate as the Vedic philosophy against which it had rebelled: it became a religion, often rigid, with its own sects, ceremonies, and superstitions.
The two greatest moral philosophers of ancient China, Lao-tzu (flourished c. 6th century BC) and Confucius (551–479 BC), thought in very different ways. Lao-tzu is best known for his ideas about the Tao (literally “Way,” the Supreme Principle). The Tao is based on the traditional Chinese virtues of simplicity and sincerity. To follow the Tao is not a matter of keeping to any set list of duties or prohibitions, but rather of living in a simple and honest manner, being true to oneself, and avoiding the distractions of ordinary living. Lao-tzu's classic book on the Tao, Tao-te Ching, consists only of aphorisms and isolated paragraphs, making it difficult to draw an intelligible system of ethics from it. Perhaps this is because Lao-tzu was a type of moral skeptic: he rejected both righteousness and benevolence, apparently because he saw them as imposed on individuals from without rather than coming from their own inner nature. Like the Buddha, Lao-tzu found the things prized by the world—rank, luxury, and glamour—to be empty, worthless values when compared with the ultimate value of the peaceful inner life. He also emphasized gentleness, calm, and nonviolence. Nearly 600 years before Jesus, he said: “It is the way of the Tao . . . to recompense injury with kindness.” By returning good for good and also good for evil, Lao-tzu believed that all would become good; to return evil for evil would lead to chaos.
The lives of Lao-tzu and Confucius overlapped, and there is even an account of a meeting between them, which is said to have left the younger Confucius baffled. Confucius was the more down-to-earth thinker, absorbed in the practical task of social reform. When he was a provincial minister of justice, the province became renowned for the honesty of its people, their respect for the aged, and their care for the poor. Probably because of their practical nature, the teachings of Confucius had a far greater influence on China than did those of the more withdrawn Lao-tzu.
Confucius did not organize his recommendations into any coherent system. His teachings are offered in the form of sayings, aphorisms, and anecdotes, usually in reply to questions by disciples. They aim at guiding the audience in what is necessary to become a better person, a concept variously translated as “gentleman” or “the superior man.” In opposition to the prevailing feudal ideal of the aristocratic lord, Confucius presented the superior man as one who is humane and thoughtful, motivated by the desire to do what is good rather than by personal profit. Beyond this, however, the concept is not discussed in any detail; it is only shown by diverse examples, some of them trite: “A superior man's life leads upwards . . . . The superior man is broad and fair; the inferior man takes sides and is petty . . . . A superior man shapes the good in man; he does not shape the bad in him.”
One of the recorded sayings of Confucius is an answer to a request from a disciple for a single word that could serve as a guide to conduct for one's entire life. He replied: “Is not reciprocity such a word? What you do not want done to yourself, do not do to others.” This rule is repeated several times in the Confucian literature and might be considered the supreme principle of Confucian ethics. Other duties are not, however, presented as derivative from this supreme principle, nor is the principle used to determine what is to be done when more specific duties—e.g., duties to parents and duties to friends, both of which were given prominence in Confucian ethics—should clash.
Confucius did not explain why the superior man chose righteousness rather than personal profit. This question was taken up more than 100 years after his death by his follower Mencius, who asserted that humans are naturally inclined to do what is humane and right. Evil is not in human nature but is the result of poor upbringing or lack of education. But Confucius also had another distinguished follower, Hsün-tzu, who said that man's nature is to seek self-profit and to envy others. The rules of morality are designed to avoid the strife that would otherwise follow from this nature. The Confucian school was united in its ideal of the superior man but divided over whether such an ideal was to be obtained by allowing people to fulfill their natural desires or by educating them to control those desires.
Early Greece was the birthplace of Western philosophical ethics. The ideas of Socrates, Plato, and Aristotle, who flourished in the 5th and 4th centuries BC, will be discussed in the next section. The sudden blooming of philosophy during that period had its roots in the ethical thought of earlier centuries. In the poetic literature of the 7th and 6th centuries BC, there were, as in the early development of ethics in other cultures, ethical precepts but no real attempts to formulate a coherent overall ethical position. The Greeks were later to refer to the most prominent of these poets and early philosophers as the seven sages, and they are frequently quoted with respect by Plato and Aristotle. Knowledge of the thought of this period is limited, for often only fragments of original writings, along with later accounts of dubious accuracy, remain.
Pythagoras (c. 580–c. 500 BC), whose name is familiar because of the geometrical theorem that bears his name, is one such early Greek thinker about whom little is known. He appears to have written nothing at all, but he was the founder of a school of thought that touched on all aspects of life and that may have been a kind of philosophical and religious order. In ancient times the school was best known for its advocacy of vegetarianism, which, like that of the Jainas, was associated with the belief that after the death of the body, the human soul may take up residence in the body of an animal. Pythagoreans continued to espouse this view for many centuries, and classical passages in the works of such writers as Ovid and Porphyry opposing bloodshed and animal slaughter can be traced back to Pythagoras.
Ironically, an important stimulus for the development of moral philosophy came from a group of teachers to whom the later Greek philosophers—Socrates, Plato, and Aristotle—were consistently hostile: the Sophists. This term was used in the 5th century BC to refer to a class of professional teachers of rhetoric and argument. The Sophists promised their pupils success in political debate and increased influence in the affairs of the city. They were accused of being mercenaries who taught their students to win arguments by fair means or foul. Aristotle said that Protagoras, perhaps the most famous of them, claimed to teach how “to make the weaker argument the stronger.”
The Sophists, however, were more than mere teachers of rhetorical tricks. They saw their role as imparting the cultural and intellectual qualities necessary for success, and their involvement with argument about practical affairs led them to develop views about ethics. The recurrent theme in the views of the better known Sophists, such as Protagoras, Antiphon, and Thrasymachus, is that what is commonly called good and bad or just and unjust does not reflect any objective fact of nature but is rather a matter of social convention. It is to Protagoras that we owe the celebrated epigram summing up this theme, “Man is the measure of all things.” Plato represents him as saying “Whatever things seem just and fine to each city, are just and fine for that city, so long as it thinks them so.” Protagoras, like Herodotus, was an early social relativist, but he drew a moderate conclusion from his relativism. He argued that while the particular content of the moral rules may vary, there must be rules of some kind if life is to be tolerable. Thus Protagoras stated that the foundations of an ethical system needed nothing from the gods or from any special metaphysical realm beyond the ordinary world of the senses.
The Sophist Thrasymachus appears to have taken a more radical approach—if Plato's portrayal of his views is historically accurate. He argued that the concept of justice means nothing more than obedience to the laws of society, and, since these laws are made by the strongest political group in their own interests, justice represents nothing but the interests of the stronger. This position is often represented by the slogan “Might is right.” Thrasymachus was probably not saying, however, that whatever the mightiest do really is right; he is more likely to have been denying that the distinction between right and wrong has any objective basis. Presumably he would then encourage his pupils to follow their own interests as best they could. He is thus an early representative of Skepticism about morals and perhaps of a form of egoism, the view that the rational thing to do is follow one's own interests.
It is not surprising that with ideas of this sort in circulation other thinkers should react by probing more deeply into ethics to see if the potentially destructive conclusions of some of the Sophists could be resisted. This reaction produced works that have served ever since as the cornerstone for the entire edifice of Western ethics.
Western ethics from Socrates to the 20th century
“The unexamined life is not worth living,” Socrates once observed. This thought typifies his questioning, philosophical approach to ethics. Socrates, who lived from about 470 BC until he was put to death in 399 BC, must be regarded as one of the greatest teachers of ethics. Yet, unlike other figures of comparable importance such as the Buddha or Confucius, he did not tell his audience how they should live. What Socrates taught was a method of inquiry. When the Sophists or their pupils boasted that they knew what justice, piety, temperance, or law was, Socrates would ask them to give an account of it and then show that the account offered was entirely inadequate. For instance, against the received wisdom that justice consists in keeping promises and paying debts, Socrates put forth the example of a person faced with an unusual situation: a friend from whom he borrowed a weapon has since become insane but wants the weapon back. Conventional morality gives no clear answer to this dilemma; therefore, the original definition of justice has to be reformulated. So the Socratic dialogue gets under way.
Because his method of inquiry threatened conventional beliefs, Socrates' enemies contrived to have him put to death on a charge of corrupting the youth of Athens. For those who saw adherence to the conventional moral code as more desirable than the cultivation of an inquiring mind, the charge was appropriate. By conventional standards, Socrates was indeed corrupting the youth of Athens, but he himself saw the destruction of beliefs that could not stand up to criticism as a necessary preliminary to the search for true knowledge. Here, he differed from the Sophists with their moral relativism, for he thought that virtue is something that can be known and that the good person is the one who knows of what virtue, or justice, consists.
It is therefore not entirely accurate to see Socrates as contributing a method of inquiry but no positive views of his own. He believed in goodness as something that can be known, even though he did not himself profess to know it. He also thought that those who know what good is are in fact good. This latter belief seems peculiar today, because we make a sharp distinction between what is good and what is in a person's own interests. Accordingly, it does not seem surprising if people know what they ought morally to do but then proceed to do what is in their own interests instead. How to provide such people with reasons for doing what is right has been a major problem for Western ethics. Socrates did not see a problem here at all; in his view anyone who does not act well must simply be ignorant of the nature of goodness. Socrates could say this because in ancient Greece the distinction between goodness and self-interest was not made, or at least not in the clear-cut manner that it is today. The Greeks believed that virtue is good both for the individual and for the community. To be sure, they recognized that to live virtuously might not be the best way to prosper financially, but then they did not assume, as we are prone to do, that material wealth is a major factor in whether a person's life goes well or ill.
Socrates' greatest disciple, Plato (428/427–348/347 BC), accepted the key Socratic beliefs in the objectivity of goodness and in the link between knowing what is good and doing it. He also took over the Socratic method of conducting philosophy, developing the case for his own positions by exposing errors and confusions in the arguments of his opponents. He did this by writing his works as dialogues in which Socrates is portrayed as engaging in argument with others, usually Sophists. The early dialogues are generally accepted as reasonably accurate accounts of Socrates' views, but the later ones, written many years after the death of Socrates, use the latter as a mouthpiece for ideas and arguments that were Plato's rather than those of the historical Socrates.
In the most famous of Plato's dialogues, Politeia (The Republic), the imaginary Socrates is challenged by the following example: Suppose a person obtained the legendary ring of Gyges, which has the magical property of rendering the wearer invisible. Would that person still have any reason to behave justly? Behind this challenge lies the suggestion, made by the Sophists and still heard today, that the only reason for acting justly is that one cannot get away with acting unjustly. Plato's response to this challenge is a long argument developing a position that appears to go beyond anything the historical Socrates asserted. Plato maintained that true knowledge consists not in knowing particular things but in knowing something general that is common to all the particular cases. This is obviously derived from the way in which Socrates would press his opponents to go beyond merely describing particular good, or temperate, or just acts, and to give instead a general account of goodness, or temperance, or justice. The implication is that we do not know what goodness is unless we can give this general account. But the question then arises, what is it that we know when we know this general idea of goodness? Plato's answer seems to be that what we know is some general form or idea of goodness, which is shared by every particular thing that is good. Yet, if we are truly to be able to know this form or idea of goodness, it seems to follow that it must really exist. Plato accepts this implication. His theory of forms is the view that when we know what goodness is, we have knowledge of something that is the common element in virtue of which all good things are good and, at the same time, is some existing thing, the pure form of goodness.
It has been said that all of Western philosophy consists of footnotes to Plato. Certainly the central issue around which all of Western ethics has revolved can be traced back to the debate between the Sophists, on the one hand, with their claims that goodness and justice are relative to the customs of each society or, worse still, merely a disguise for the interests of the stronger, and, on the other, Plato's defense of the possibility of knowledge of an objective form or idea of goodness.
But even if we know what goodness or justice is, why should we act justly if we can profit by doing the opposite? This remaining part of the challenge posed by the legendary ring of Gyges is still to be answered, for even if we accept that goodness is objective, it does not follow that we all have sufficient reason to do what is good. Whether goodness leads to happiness is, as has been seen from the preceding discussion of early ethics in other cultures, a perennial topic for all who think about ethics. Plato's answer is that justice consists in harmony between the three elements of the soul: intellect, emotion, and desire. The unjust person lives in an unsatisfactory state of internal discord, trying always to overcome the discomfort of unsatisfied desire but never achieving anything better than the mere absence of want. The soul of the good person, on the other hand, is harmoniously ordered under the governance of reason, and the good person finds truly satisfying enjoyment in the pursuit of knowledge. Plato remarks that the highest pleasure, in fact, comes from intellectual speculation. He also gives an argument for the belief that the human soul is immortal; therefore, even if just individuals seem to be living in poverty or illness, the gods will not neglect them in the next life, and there they will have the greatest rewards of all. In summary, then, Plato asserts that we should act justly because in doing so we are “at one with ourselves and with the gods.”
Today, this may seem like a strange account of justice and a farfetched view of what it takes to achieve human happiness. Plato does not recommend justice for its own sake, independently of any personal gains one might obtain from being a just person. This is characteristic of Greek ethics, with its refusal to recognize that there could be an irresolvable conflict between one's own interest and the good of the community. Not until Immanuel Kant, in the 18th century, does a philosopher forcefully assert the importance of doing what is right simply because it is right quite apart from self-interested motivation. To be sure, Plato must not be interpreted as holding that the motivation for each and every just act is some personal gain; on the contrary, the person who takes up justice will do what is just because it is just. Nevertheless, Plato accepts the assumption of his opponents that one could not recommend taking up justice in the first place unless doing so could be shown to be advantageous for oneself as well as for others.
In spite of the fact that many people now think differently about this connection between morality and self-interest, Plato's attempt to argue that those who are just are in the long run happier than those who are unjust has had an enormous influence on Western ethics. Like Plato's views on the objectivity of goodness, the claim that justice and personal happiness are linked has helped to frame the agenda for a debate that continues even today.
Plato founded a school of philosophy in Athens known as the Academy. Here Aristotle (384–322 BC), Plato's younger contemporary and only rival in terms of influence on the course of Western philosophy, came to study. Aristotle was often fiercely critical of Plato, and his writing is very different in style and content, but the time they spent together is reflected in a considerable amount of common ground. Thus Aristotle holds with Plato that the life of virtue is rewarding for the virtuous, as well as beneficial for the community. Aristotle also agrees that the highest and most satisfying form of human existence is that in which man exercises his rational faculties to the fullest extent. One major difference is that Aristotle does not accept Plato's theory of common essences, or universal ideas, existing independently of particular things. Thus he does not argue that the path to goodness is through knowledge of the universal form or idea of “the good.”
Aristotle's ethics are based on his view of the universe. He saw it as a hierarchy in which everything has a function. The highest form of existence is the life of the rational being, and the function of lower beings is to serve this form of life. This led him to defend slavery—because he thought barbarians were less rational than Greeks and by nature suited to be “living tools”—and the killing of nonhuman animals for food or clothing. From this also came a view of human nature and an ethical theory derived from it. All living things, Aristotle held, have inherent potentialities and it is their nature to develop that potential to the full. This is the form of life properly suited to them and constitutes their goal. What, however, is the potentiality of human beings? For Aristotle this question turns out to be equivalent to asking what it is that is distinctive about human beings, and this, of course, is the capacity to reason. The ultimate goal of humans, therefore, is to develop their reasoning powers. When they do this, they are living well, in accordance with their true nature, and they will find this the most rewarding existence possible.
Aristotle thus ends up agreeing with Plato that the life of the intellect is the highest form of life, though, having a greater sense of realism than Plato, he tempered this view with the suggestion that the best feasible life for humans must also include the goods of material prosperity and close friendships. Aristotle's argument for regarding the life of the intellect so highly, however, is different from Plato's, and the difference is significant because Aristotle committed a fallacy that has often been repeated: the fallacy of assuming that whatever capacity distinguishes humans from other beings is, for that very reason, the highest and best of their capacities. Perhaps the ability to reason is the best of our capacities, but we cannot be compelled to draw this conclusion from the fact that it is what is most distinctive of the human species.
A broader and still more pervasive fallacy underlies Aristotle's ethics. It is the idea that an investigation of human nature can reveal what we ought to do. For Aristotle, an examination of a knife would reveal that its distinctive quality is to cut, and from this we could conclude that a good knife would be a knife that cuts well. In the same way, an examination of human nature should reveal the distinctive quality of human beings, and from this we should be able to conclude what it is to be a good human being. This line of thought makes sense if we think, as Aristotle did, that the universe as a whole has a purpose and that we exist as part of such a goal-directed scheme of things, but its error becomes glaring once we reject this view and come to see our existence as the result of a blind process of evolution. Then we know that the standards of quality for knives are a result of the fact that knives are made with a specific purpose in mind and that a good knife is one that fills this purpose well. Human beings, however, were not made with any particular purpose in mind. Their nature is the result of random forces of natural selection and thus cannot, without further moral premises, determine how they ought to live.
It is to Aristotle that we owe the notion of the final end, or, as it was later called by medieval scholars, the summum bonum—the overall good for human beings. This can be found, Aristotle wrote, by asking why we do the things that we do. If we ask why we chop wood, the answer may be to build a fire; and if we ask why we build a fire, it may be to keep warm; but, if we ask why we keep warm, the answer is likely to be simply that it is pleasant to be warm and unpleasant to be cold. We can ask the same kind of questions about other activities; the answer always points, Aristotle thought, to what he called eudaimonia. This Greek word is usually translated as “happiness,” but this is only accurate if we understand that term in its broadest sense to mean living a fulfilling, satisfying life. Happiness in the narrower sense of joy or pleasure would certainly be a concomitant of such a life, but it is not happiness in this narrower sense that is the goal.
In searching for the overall good, Aristotle separates what may be called instrumental goods from intrinsic goods. The former are good only because they lead to something else that is good; the latter are good in themselves. The distinction is neglected in the early lists of ethical precepts that were surveyed above, but it is of the first importance if a firmly grounded answer to questions about how one ought to live is to be obtained.
Aristotle is also responsible for much later thinking about the virtues one should cultivate. In his most important ethical treatise, the Ethica Nicomachea (Nicomachean Ethics), he sorts through the virtues as they were popularly understood in his day, specifying in each case what is truly virtuous and what is mistakenly thought to be so. Here, he uses the idea of the Golden Mean, which is essentially the same idea as the Buddha's middle path between self-indulgence and self-renunciation. Thus courage, for example, is the mean between two extremes: one can have a deficiency of it, which is cowardice, or one can have an excess of it, which is foolhardiness. The virtue of friendliness, to give another example, is the mean between obsequiousness and surliness.
Aristotle does not intend the idea of the mean to be applied mechanically in every instance: he says that in the case of the virtue of temperance, or self-restraint, it is easy to find the excess of self-indulgence in the physical pleasures, but the opposite error, insufficient concern for such pleasures, scarcely exists. (The Buddha, with his experience of the ascetic life of renunciation, would not have agreed.) This caution in the application of the idea is just as well, for while it may be a useful device for moral education, the notion of a mean cannot help us to discover new truths about virtue. We can only arrive at the mean if we already have a notion as to what is an excess and what is a defect of the trait in question, but this is not something to be discovered by a morally neutral inspection of the trait itself. We need a prior conception of the virtue in order to decide what is excessive and what is defective. To attempt to use the doctrine of the mean to define the particular virtues would be to travel in a circle.
Aristotle's list of the virtues differs from later Christian lists. Courage, temperance, and liberality are common to both, but Aristotle also includes a virtue whose name literally means “greatness of soul.” This is the characteristic of holding a high opinion of oneself. The corresponding vice of excess is unjustified vanity, but the vice of deficiency is humility, which for Christians is a virtue.
Aristotle's discussion of the virtue of justice has been the starting point for almost all Western accounts. He distinguishes between justice in the distribution of wealth or other goods and justice in reparation, as, for example, in punishing someone for a wrong he has done. The key element of justice, according to Aristotle, is treating like cases alike—an idea that has set later thinkers the task of working out which similarities (need, desert, talent) are relevant. As with the notion of virtue as a mean, Aristotle's conception of justice provides a framework that needs to be filled in before it can be put to use.
Aristotle distinguished between theoretical and practical wisdom. His concept of practical wisdom is significant, for it goes beyond merely choosing the means best suited to whatever ends or goals one may have. The practically wise person also has the right ends. This implies that one's ends are not purely a matter of brute desires or feelings; the right ends are something that can be known. It also gives rise to the problem that faced Socrates: How is it that people can know the difference between good and bad and still choose what is bad? As noted earlier, Socrates simply denied that this could happen, saying that those who did not choose the good must, appearances notwithstanding, be ignorant of what it is. Aristotle said that this view of Socrates was “plainly at variance with the observed facts” and, instead, offered a detailed account of the ways in which one can possess knowledge and yet not act on it because of lack of control or weakness of will.
Later Greek and Roman ethics
In ethics, as in many other fields, the later Greek and Roman periods do not display the same penetrating insight as the Classic period of 5th- and 4th-century Greek civilization. Nevertheless, the two dominant schools of thought, Stoicism and Epicureanism, represent important approaches to the question of how one ought to live.
Stoicism had its origins in the views of Socrates and Plato, as modified by Zeno and then by Chrysippus in the 3rd century BC. It gradually gained influence in Rome, chiefly through the teachings of Cicero (106–43 BC) and then later in the 1st century AD through those of Seneca. Remarkably, its chief proponents included both a slave, Epictetus, and an emperor, Marcus Aurelius. This is a fine illustration of the Stoic message that what is important is the pursuit of wisdom and virtue, a pursuit that is open to all human beings owing to their common capacity for reason and that can be carried out no matter what the external circumstances of their lives.
Today, the word stoic conjures up one who remains unmoved by the sorrows and afflictions that distress the rest of humanity. This is an accurate representation of a stoic ideal, but it must be placed in the context of a systematic approach to life. Plato held that human passions and physical desires are in need of regulation by reason (see above Plato). The Stoics went further: they rejected passions altogether as a basis for deciding what is good or bad. Physical desires cannot simply be abolished, but when we become wise we appreciate the difference between wanting something and judging it to be good. Our desires make us want something, but only our reason can judge the goodness of what is wanted. If we are wise, we will identify with our reason, not with our desires; hence, we will not place our hopes on the attainment of our physical desires nor our anxieties on our failure to attain them. Wise Stoics will feel physical pain as others do, but in their minds they will know that physical pain leaves the true reasoning self untouched. The only thing that is truly good is to live in a state of wisdom and virtue. In aiming at such a life, we are not subject to the same play of fortune that afflicts us when we aim at physical pleasure or material wealth, for wisdom and virtue are matters of the intellect and under our own control. Moreover, if matters become too grim, there is always a way of ending the pain of the physical world. The Stoics were not reluctant to counsel suicide as a means of avoiding otherwise inescapable pain.
Perhaps the most important legacy of Stoicism, however, is its conviction that all human beings share the capacity to reason. This led the Stoics to a fundamental sense of equality, which went beyond the limited Greek conception of equal citizenship. Thus Seneca claimed that the wise man will esteem the community of rational beings far above any particular community in which the accident of birth has placed him, and Marcus Aurelius said that common reason makes all individuals fellow citizens. The belief that human reasoning capacities are common to all was also important, because from it the Stoics drew the implication that there is a universal moral law, which all people are capable of appreciating. The Stoics thus strengthened the tradition that sees the universality of reason as the basis on which ethical relativism is to be rejected.
While the modern use of the term stoic accurately represents at least a part of the Stoic philosophy, anyone taking the present-day meaning of epicure as a guide to the philosophy of Epicurus (341–270 BC) would go astray. True, the Epicureans regarded pleasure as the sole ultimate good and pain as the sole evil; and they did regard the more refined pleasures as superior, simply in terms of the quantity and durability of the pleasure they provided, to the coarser pleasures. To portray them as searching for these more refined pleasures by dining at the best restaurants and drinking the finest wines, however, is the reverse of the truth. By refined pleasures, Epicurus meant pleasures of the mind, as opposed to the coarse pleasures of the body. He taught that the highest pleasure obtainable is the pleasure of tranquillity, which is to be obtained by the removal of unsatisfied wants. The way to do this is to eliminate all but the simplest wants; these are then easily satisfied even by those who are not wealthy.
Epicurus developed his position systematically. To determine whether something is good, he would ask if it increased pleasure or reduced pain. If it did, it was good as a means; if it did not, it was not good at all. Thus justice was good but merely as an expedient arrangement to prevent mutual harm. Why not then commit injustice when we can get away with it? Only because, Epicurus says, the perpetual dread of discovery will cause painful anxiety. Epicurus also exalted friendship, and the Epicureans were famous for the warmth of their personal relationships; but, again, they proclaimed that friendship is good only because of its tendency to create pleasure.
Both Stoic and Epicurean ethics can be seen as precursors of later trends in Western ethics: the Stoics of the modern belief in equality and the Epicureans of a Utilitarian ethic based on pleasure. The development of these ethical positions, however, was dramatically affected by the spreading from the East of a new religion that had its roots in a Jewish conception of ethics as obedience to a divine authority. With the conversion of Emperor Constantine I to Christianity by AD 313, the older schools of philosophy lost their sway over the thinking of the Roman Empire.
Christian ethics from the New Testament to the Scholastics
Matthew reports Jesus as having said, in the Sermon on the Mount, that he came not to destroy the law and the prophets but to fulfill them. Indeed, when Jesus is regarded as a teacher of ethics, it is clear that he was more a reformer of the Hebrew tradition than a radical innovator. The Hebrew tradition had a tendency to place great emphasis on compliance with the letter of the law; the Gospel accounts of Jesus portray him as preaching against this “righteousness of the scribes and Pharisees,” championing the spirit rather than the letter of the law. This spirit he characterized as one of love, for God and for one's neighbour. But since he was not proposing that the old teachings be discarded, he saw no need to develop a comprehensive ethical system. Christianity thus never really broke with the Jewish conception of morality as a matter of divine law to be discovered by reading and interpreting the word of God as revealed in the Scriptures.
This conception of morality had important consequences for the future development of Western ethics. The Greeks and Romans, and indeed thinkers such as Confucius too, did not have the Western conception of a distinctively moral realm of conduct. For them, everything that one did was a matter of practical reasoning, in which one could do well or poorly. In the more legalistic Judeo-Christian view, however, it is one thing to lack practical wisdom in, say, household budgeting, and a quite different and much more serious matter to fall short of what the moral law requires. This distinction between the moral and the nonmoral realms now affects every question in Western ethics, including the very way the questions themselves are framed.
Another consequence of the retention of the basically legalistic stance of Jewish ethics was that from the beginning Christian ethics had to deal with the question of how to judge the person who breaks the law from good motives or keeps it from bad motives. The latter half of this question was particularly acute because the Gospels describe Jesus as repeatedly warning of a coming resurrection of the dead at which time all would be judged and punished or rewarded according to their sins and virtues in this life. The punishments and rewards were weighty enough to motivate anyone who took this message seriously; and it was given added emphasis by the fact that it was not going to be long in coming. (Jesus said that it would take place during the lifetime of some of those listening to him.) This is, therefore, an ethic that invokes external sanctions as a reason for doing what is right, in contrast to Plato or Aristotle for whom happiness is an internal element of a virtuous life. At the same time, it is an ethic that places love above mere literal compliance with the law. These two aspects do not sit easily together. Can one love God and neighbour in order to be rewarded with eternal happiness in another life?
The fact that Jesus and Paul, too, believed in the imminence of the Second Coming led them to suggest ways of living that were scarcely feasible on any other assumption: taking no thought for the morrow; turning the other cheek; and giving away all one has. Even Paul's preference for celibacy rather than marriage and his grudging acceptance of the latter on the basis that “It is better to marry than to burn” makes some sense once we grasp that he was proposing ethical standards for what he thought would be the last generation on earth. When the expected event did not occur and Christianity became the official religion of the vast and embattled Roman Empire, Christian leaders were faced with the awkward task of reinterpreting these injunctions in a manner more suited for a continuing society.
The new Christian ethical standards did lead to some changes in Roman morality. Perhaps the most vital was a new sense of the equal moral status of all human beings. As previously noted, the Stoics had been the first to elaborate this conception, grounding equality on the common capacity to reason. For Christians, humans are equal because they are all potentially immortal and equally precious in the sight of God. This caused Christians to condemn a wide variety of practices that had been accepted by both Greek and Roman moralists. Many of these related to the taking of innocent human life: from the earliest days Christian leaders condemned abortion, infanticide, and suicide. Even killing in war was at first regarded as wrong, and soldiers converted to Christianity had refused to continue to bear arms. Once the empire became Christian, however, this was one of the inconvenient ideas that had to yield. In spite of what Jesus had said about turning the other cheek, the church leaders declared that killing in a “just war” was not a sin. The Christian condemnation of killing in gladiatorial games, on the other hand, had a more permanent effect. Finally, but perhaps most importantly, while Christian emperors continued to uphold the legality of slavery, the Christian church accepted slaves as equals, admitted them to its ceremonies, and regarded the granting of freedom to slaves as a virtuous, if not obligatory, act. This moral pressure led over several hundred years to the gradual disappearance of slavery in Europe.
The Christian contribution to improving the position of slaves can also be linked with the distinctively Christian list of virtues. Some of the virtues described by Aristotle, as, for example, greatness of soul, are quite contrary in spirit to Christian virtues such as humility. In general, it can be said that the Greeks and Romans prized independence, self-reliance, magnanimity, and worldly success. By contrast, Christians saw virtue in meekness, obedience, patience, and resignation. As the Greeks and Romans conceived virtue, a virtuous slave was almost a contradiction in terms, but for Christians there was nothing in the state of slavery that was incompatible with the highest moral character.
Christianity began with a set of scriptures incorporating many ethical injunctions but with no ethical philosophy. The first serious attempt to provide such a philosophy was made by St. Augustine of Hippo (354–430). Augustine was acquainted with a version of Plato's philosophy, and he developed the Platonic idea of the rational soul into a Christian view wherein humans are essentially souls, using their bodies as means to achieve their spiritual ends. The ultimate object remains happiness, as in Greek ethics, but Augustine saw happiness as consisting in a union of the soul with God after the body has died. It was through Augustine, therefore, that Christianity received the Platonic theme of the relative inferiority of bodily pleasures. There was, to be sure, a fundamental difference: whereas Plato saw this inferiority in terms of a comparison with the pleasures of philosophical contemplation in this world, Christians compared them unfavourably with the pleasures of spiritual existence in the next world. Moreover, Christians came to see bodily pleasures not merely as inferior but also as a positive threat to the achievement of spiritual bliss.
It was also important that Augustine could not accept the view, common to so many Greek and Roman philosophers, that philosophical reasoning was the path to wisdom and happiness. For a Christian, of course, the path had to be through love of God and faith in Jesus as the Saviour. The result was to be, for many centuries, a rejection of the use of unfettered reasoning powers in ethics.
Augustine was aware of the tension caused by the dual Christian motivations of love of God and neighbour, on the one hand, and reward and punishment in the afterlife, on the other. He came down firmly on the side of love, insisting that those who keep the moral law through fear of punishment are not really keeping it at all. But it is not ordinary human love, either, that suffices as a motivation for true Christian living. Augustine believed all men bear the burden of Adam's original sin, and so are incapable of redeeming themselves by their own efforts. Only the unmerited grace of God makes possible obedience to the “first and greatest commandment” of loving God; without such grace, one cannot fulfill the moral law. This view made a clear-cut distinction between Christians and pagan moralists, no matter how humble and pure the latter might be; only the former could be saved because only they could receive the blessing of divine grace. But this gain, as Augustine saw it, was purchased at the cost of denying that man is free to choose good or evil. Only Adam had this choice: he chose for all humanity, and he chose evil.
Aquinas and the moral philosophy of the Scholastics
At this point we may pass over more than 800 years in silence, for there were no major developments in ethics in the West until the rise of Scholasticism in the 12th and 13th centuries. Among the first of the significant works written during this time was a treatise on ethics by the French philosopher and theologian Peter Abelard (1079–1142). His importance in ethical theory lies in his emphasis on intentions. Abelard maintained, for example, that the sin of sexual wrongdoing consists not in the act of illicit sexual intercourse nor even in the desire for it, but in mentally consenting to that desire. In this he was far more modern than Augustine, with his doctrine of grace, and also more thoughtful than those who even today assert that the mere desire for what is wrong is as wrong as the act itself. Abelard saw that there is a problem in holding anyone morally responsible for the existence of mere physical desires. His ingenious solution was taken up by later medieval writers, and traces of it can still be found in modern discussions of moral responsibility.
Aristotle's ethical writings were not known to scholars in western Europe during Abelard's time. Latin translations became available only in the first half of the 13th century, and the rediscovery of Aristotle dominated later medieval philosophy. Nowhere is his influence more marked than in the thought of St. Thomas Aquinas (1225–74), often regarded as the greatest of the Scholastic philosophers and undoubtedly the most influential, since his teachings became the semiofficial philosophy of the Roman Catholic Church. Such is the respect in which Aquinas held Aristotle that he referred to him simply as The Philosopher, and it is not too far from the truth to say that the chief aim of Aquinas' work was to reconcile Aristotle's views with Christian doctrine.
Aquinas took from Aristotle the notion of a final end, or summum bonum, at which all action is ultimately directed; and, like Aristotle, he saw this end as necessarily linked with happiness. This conception was Christianized, however, by the idea that happiness is to be found in the love of God. Thus a person seeks to know God but cannot fully succeed in doing so during life on earth. The reward of heaven, where one can know God, is available only to those who merit it, though even then it is given by God's grace rather than obtained by right. Short of heaven, a person can experience only a more limited form of happiness to be gained through a life of virtue and friendship, much as Aristotle had recommended.
The blend of Aristotle's teachings and Christianity is also evident in Aquinas' views about right and wrong, and how we come to know the difference between them. Aquinas is often described as advocating a “natural law” ethic, but this term is easily misunderstood. The natural law to which Aquinas referred does not require a legislator any more than do the laws of nature that govern the motions of the planets. An even more common mistake is to imagine that this conception of natural law relies on contrasting what is natural with what is artificial. Aquinas' theory of the basis of right and wrong developed rather as an alternative to the view that morality is determined simply by the arbitrary will of God. Instead of conceiving of right and wrong in this manner as something fundamentally unrelated to human goals and purposes, Aquinas saw morality as deriving from human nature and the activities that are objectively suited to it.
It is a consequence of this natural law ethic that the difference between right and wrong can be appreciated by the use of reason and reflection on experience. Christian revelation may supplement this knowledge in some respects, but even such pagan philosophers as Aristotle could understand the essentials of virtuous living. We are, however, likely to err when we apply these general principles to the particular cases that confront us in everyday life. Corrupt customs and poor moral education may obscure the messages of natural reason. Hence, societies must enact laws of their own to supplement natural law and, where necessary, to coerce those who, because of their own imperfections, are liable to do what is wrong and socially destructive.
It follows, too, that virtue and human flourishing are linked. When we do what is right, we do what is objectively suited to our true nature. Thus the promise of heaven is no mere external sanction, rewarding actions that would otherwise be indifferent to us or even against our best interests. On the contrary, Aquinas wrote that “God is not offended by us except by what we do against our own good.” Reward and punishment in the afterlife reinforce a moral law that all humans, Christian or pagan, have adequate prior reasons for following.
In arguing for his views, Aquinas was always concerned to show that he had the authority of the Scriptures or the Church Fathers on his side, but the substance of his ethical system is to a remarkable degree based on reason rather than revelation. This is strong testimony to the power of Aristotle's example. Nonetheless, Aquinas absorbed the weaknesses as well as the strengths of the Aristotelian system. His attempt to base right and wrong on human nature, in particular, invites the objection that we cannot presuppose our nature to be good. Aquinas might reply that it is good because God made it so, but this merely shifts back one step the issue of the basis of good and bad: Did God make it good in accordance with some independent standard of goodness, or would any human nature made by God be good? If we give the former answer, we need an account of the independent standard of goodness. Because this cannot—if we are to avoid circular argument—be based on human nature, it is not clear what account Aquinas could offer. If we maintain, however, that any human nature made by God would be good, we must accept that if God had made our nature such that we flourish and achieve happiness by torturing the weak and helpless among us, that would have been what we should do in order to live virtuously.
Something resembling this second option—but without the intermediate step of an appeal to human nature—was the position taken by the last of the great Scholastic philosophers, William of Ockham (c. 1285–1349?). Ockham boldly broke with much that had been taken for granted by his immediate predecessors. Fundamental to this was his rejection of the central Aristotelian idea that all things have a final end, or goal, toward which they naturally tend. He, therefore, also spurned Aquinas' attempt to base morality on human nature, and with it the idea that happiness is man's goal and closely linked with goodness. This led him to a position in stark contrast to almost all previous Western ethics. Ockham denied all standards of good and evil that are independent of God's will. What God wills is good; what God condemns is evil. That is all there is to say about the matter. This position is sometimes called a divine approbation theory, because it defines “good” as whatever is approved by God. As indicated earlier, when discussing attempts to link morality with religion, it follows from such a position that it is meaningless to describe God himself as good. It also follows that if God had willed us to torture children, it would be good to do so. As for the actual content of God's will, according to Ockham, that is not a subject for philosophy but rather a matter for revelation and faith.
The rigour and consistency of Ockham's philosophy made it for a time one of the leading schools of Scholastic thought, but eventually it was the philosophy of Aquinas that prevailed in the Roman Catholic Church. After the Reformation, however, Ockham's view exerted influence on Protestant theologians. Meanwhile, it hastened the decline of Scholastic moral philosophy because it effectively removed ethics from the sphere of reason.
Renaissance and Reformation
The revival of Classical learning and culture that began in 15th-century Italy and then slowly spread throughout Europe did not give immediate birth to any major new ethical theories. Its significance for ethics lies, rather, in a change of focus. For the first time since the conversion of the Roman Empire to Christianity, man, not God, became the chief object of interest, and the theme was not religion but humanism—the powers, freedom, and accomplishments of human beings. This does not mean that there was a sudden conversion to atheism. Renaissance thinkers remained Christian and still considered human beings as somehow midway between the beasts and the angels. Yet, even this middle position meant that humans were special. It meant, too, a new conception of human dignity and of the importance of the individual.
Although the Renaissance did not produce any outstanding moral philosophers, there is one writer whose work is of some importance in the history of ethics: the Italian author and statesman Niccolò Machiavelli. His book Il principe (1513; The Prince) offered advice to rulers as to what they must do to achieve their aims and secure their power. Its significance for ethics lies precisely in the fact that Machiavelli's advice ignores the usual ethical rules: “It is necessary for a prince, who wishes to maintain himself, to learn how not to be good, and to use this knowledge and not use it, according to the necessities of the case.” There had not been so frank a rejection of morality since the Greek Sophists. So startling is the cynicism of Machiavelli's advice that it has been suggested that Il principe was an attempt to satirize the conduct of the princely rulers of Renaissance Italy. It may be more accurate, however, to view Machiavelli as an early political scientist, concerned only with setting out what human beings are like and how power is maintained, with no intention of passing moral judgment on the state of affairs described. In any case, Il principe gained instant notoriety, and Machiavelli's name became synonymous with political cynicism and deviousness. In spite of the chorus of condemnation, the work has led to a sharper appreciation of the difference between the lofty ethical systems of the philosophers and the practical realities of political life.
The first Protestants
It was left to the 17th-century English philosopher and political theorist Thomas Hobbes to take up the challenge of constructing an ethical system on the basis of so unflattering a view of human nature (see below). Between Machiavelli and Hobbes, however, there occurred the traumatic breakup of Western Christianity known as the Reformation. Reacting against the worldly immorality apparent in the Renaissance church, Martin Luther, John Calvin, and other leaders of the new Protestantism sought to return to the pure early Christianity of the Scriptures, especially the teachings of Paul, and of the Church Fathers, with Augustine foremost among them. They were contemptuous of Aristotle (Luther called him a “buffoon”) and of non-Christian philosophers in general. Luther's standard of right and wrong was what God commands. Like William of Ockham, Luther insisted that the commands of God cannot be justified by any independent standard of goodness: good simply means what God commands. Luther did not believe these commands would be designed to satisfy human desires because he was convinced that desires are totally corrupt. In fact, he thought that human nature was totally corrupt. In any case, Luther insisted that one does not earn salvation by good works: one is justified by faith in Christ and receives salvation through divine grace.
It is apparent that if these premises are accepted, there is little scope for human reason in ethics. As a result, no moral philosophy has ever had the kind of close association with any Protestant church that, say, the philosophy of Aquinas has had with Roman Catholicism. Yet, because Protestants emphasized the capacity of the individual to read and understand the Gospels without obtaining the authoritative interpretation of the church, the ultimate outcome of the Reformation was a greater freedom to read and write independently of the church hierarchy. This made possible a new era of ethical thought.
From this time, too, distinctively national traditions of moral philosophy began to emerge; the British tradition, in particular, developed largely independently of ethics on the Continent. Accordingly, the present discussion will follow this tradition through the 19th century before returning to consider the different line of development in continental Europe.
The British tradition: from Hobbes to the Utilitarians
Thomas Hobbes (1588–1679) is an outstanding example of the independence of mind that became possible in Protestant countries after the Reformation. God does, to be sure, play an honourable role in Hobbes's philosophy, but it is a dispensable role. The philosophical edifice stands on its own foundations; God merely crowns the apex. Hobbes was the equal of the Greek philosophers in his readiness to develop an ethical position based only on the facts of human nature and the circumstances in which humans live; and he surpassed even Plato and Aristotle in the extent to which he sought to do this by systematic deduction from clearly set out premises.
Hobbes started with a severe view of human nature: all of a person's voluntary acts are aimed at his own pleasure or self-preservation. This position is known as psychological hedonism, because it asserts that the fundamental psychological motivation is the desire for pleasure. Like later psychological hedonists, Hobbes was confronted with the objection that people often seem to act altruistically. There is a story that Hobbes was seen giving alms to a beggar outside St. Paul's Cathedral. A clergyman sought to score a point by asking Hobbes if he would have given the money, had Christ not urged giving to the poor. Hobbes replied that he gave the money because it pleased him to see the poor man pleased. The reply reveals the dilemma that always faces those who propose startling new explanations for all human actions: either the theory is flagrantly at odds with how people really behave or else it must be broadened to such an extent that it loses much of what made it so shocking in the first place.
Hobbes's account of “good” is equally devoid of religious or metaphysical premises. He defined good as “any object of desire,” and insisted that the term must be used in relation to a person—nothing is simply good of itself independently of the person who desires it. Hobbes may therefore be considered a subjectivist. If one were to say, for example, of the incident just described, “What Hobbes did was good,” this statement would not be objectively true or false. It would be good for the poor man, and, if Hobbes's reply was accurate, it would also be good for Hobbes. But if a second poor person, for instance, was jealous of the success of the first, that person could quite properly say that what Hobbes did was bad.
Remarkably, this unpromising picture of self-interested individuals who have no notion of good apart from their own desires serves as the foundation of Hobbes's account of justice and morality in his masterpiece, Leviathan (1651). Starting with the premises that humans are self-interested and the world does not provide for all their needs, Hobbes argued that in the state of nature, without civil society, there will be competition between men for wealth, security, and glory. The ensuing struggle is Hobbes's famous “war of all against all,” in which there can be no industry, commerce, or civilization, and the life of man is “solitary, poor, nasty, brutish and short.” The struggle occurs because each individual rationally pursues his or her own interests, but the outcome is in no one's interest.
How can this disastrous situation be ended? Not by an appeal to morality or justice; in the state of nature these ideas have no meaning. Yet, we want to survive and we can reason. Our reason leads us to seek peace if it is attainable but to continue to use all the means of war if it is not. How is peace to be obtained? Only by a social contract. We must all agree to give up our rights to attack others in return for their giving up their rights to attack us. By reasoning in order to increase our prospects for survival, we have found the solution.
We know that a social contract will solve our problems. Our reason therefore leads us to desire such an arrangement. But how is it to come about? My reason cannot tell me to accept it while others do not. Nor is Hobbes under the illusion that the mere making of a promise or contract will carry any weight. Since we are self-interested, we will keep our promises only if it is in our interest to do so. A promise that cannot be enforced is worthless. Therefore, in making the social contract, we must establish some means of enforcing it. To do this we must all hand our powers over to some other person or group of persons who will punish anyone who breaches the contract. This person or group of persons Hobbes calls the sovereign. It may be a single person, or an elected legislature, or almost any other form of government; the essence of sovereignty consists only in having sufficient power to keep the peace by punishing those who would break it. When such a sovereign—the Leviathan of his title—exists, justice becomes meaningful, for agreements and promises can now be relied upon. At the same time, each individual has adequate reason to be just, for the sovereign will ensure that those who do not keep their agreements are suitably punished.
Hobbes witnessed the turbulence and near anarchy of the English Civil Wars (1642–51) and was keenly aware of the dangers caused by disputed sovereignty. His solution was to insist that sovereignty must not be divided. Because the sovereign is appointed to enforce the social contract, which is fundamental to peace and to everything else one desires, it can be rational to resist the sovereign only if the sovereign directly threatens one's life. Hobbes was, in effect, a supporter of absolute sovereignty, and this has been the focus of much political discussion of his ideas. His significance for ethics, however, lies rather in his success in dealing with the subject independently of theology and of those quasi-theological or quasi-Aristotelian accounts that see the world as designed for the benefit of human beings. With this achievement, he brought ethics into the modern era.
Early intuitionists: Cudworth, More, and Clarke
There was, of course, immediate opposition to Hobbes's views. Ralph Cudworth (1617–88), one of a group known as the Cambridge Platonists, defended a position in some respects similar to that of Plato. That is to say, Cudworth believed that the distinction between good and evil does not lie in human desires but is something objective that can be known by reason, just as the truths of mathematics can be known by reason. Cudworth was thus a forerunner of what has since come to be called intuitionism, the view that there are objective moral truths that can be known by a kind of rational intuition. This view was to attract the support of a line of distinguished thinkers into the 20th century, when it became for a time the dominant view in British academic philosophy.
Henry More (1614–87), another leading member of the Cambridge Platonists, attempted to give effect to the comparison between mathematics and morality by listing moral axioms that can be seen as self-evidently true, just as the axioms of geometry are seen to be self-evident. In marked contrast to Hobbes, More included an axiom of benevolence: “If it be good that one man should be supplied with the means of living well and happily, it is mathematically certain that it is doubly good that two should be so supplied, and so on.” Here, More was attempting to build on something that Hobbes himself accepted—namely, our own desire to be supplied with the means of living well. More, however, wanted to enlist reason to lead us beyond this narrow egoism to a universal benevolence. There are traces of this line of thought in the Stoics, but it was More who introduced it into British ethical thinking, wherein it is still very much alive.
Samuel Clarke (1675–1729), the next major intuitionist, accepted More's axiom of benevolence in slightly different words. He was also responsible for a principle of equity, which, though derived from the Golden Rule so widespread in ancient ethics, was formulated with a new precision: “Whatever I judge reasonable or unreasonable for another to do for me, that by the same judgment I declare reasonable or unreasonable that I in the like case should do for him.” As for the means by which these moral truths are known, Clarke accepted Cudworth's and More's analogy with truths of mathematics and added the idea that what human reason discerns is a certain “fitness or unfitness” about the relationship between circumstances and actions. The right action in a given set of circumstances is the fitting one; the wrong action is unfitting. This is something known intuitively; it is self-evident.
Clarke's notion of fitness is obscure, but intuitionism faces a still more serious problem that has always been a barrier to its acceptance. Suppose we accept the ability of reason to discern that it would be wrong to deceive a person in order to profit from the deception. Why should our discerning this truth provide us with a motive sufficient to override our desire to profit? The intuitionist position divorces our moral knowledge from the forces that motivate us. The former is a matter of reason, the latter of desire.
The punitive power of Hobbes's sovereign is, of course, one way to provide sufficient motivation for obedience to the social contract and to the laws decreed by the sovereign as necessary for the peaceful functioning of society. The intuitionists, however, wanted to show that morality is objective and holds in all circumstances, whether there is a sovereign or not. Reward and punishment in the afterlife, administered by an all-powerful God, would provide a more universal motive; and some intuitionists, such as Clarke, did make use of this divine sanction. Other thinkers, however, wanted to show that it is reasonable to do what is good independently of the threats of any external power, human or divine. This desire lay behind the development of the major alternative to intuitionism in 17th- and 18th-century British moral philosophy: moral sense theory. The debate between the intuitionist and moral sense schools of thought aired for the first time what is still the central question of moral philosophy: Is morality based on reason or on feelings?
Shaftesbury and the moral sense school
The term moral sense was first used by the 3rd Earl of Shaftesbury (1671–1713), whose writings reflect the optimistic tone both of the school of thought he founded and of so much of the philosophy of the 18th-century Enlightenment. Shaftesbury believed that Hobbes had erred by presenting a one-sided picture of human nature. Selfishness is not the only natural passion. We also have natural feelings directed to others: benevolence, generosity, sympathy, gratitude, and so on. These feelings give us an “affection for virtue,” which leads us to promote the public interest. Shaftesbury called this affection the moral sense, and he thought it created a natural harmony between virtue and self-interest. Shaftesbury was, of course, realistic enough to acknowledge that we also have contrary desires and that not all of us are virtuous all of the time. Virtue could, however, be recommended because—and here Shaftesbury picked up a theme of Greek ethics—the pleasures of virtue are superior to the pleasures of vice.
Butler on self-interest and conscience
Joseph Butler (1692–1752), a bishop of the Church of England, developed Shaftesbury's position in two ways. He strengthened the case for a harmony between morality and enlightened self-interest by claiming that happiness occurs as a by-product of the satisfaction of desires for things other than happiness itself. Those who aim directly at happiness do not find it; those who have their goals elsewhere are more likely to achieve happiness as well. Butler was not doubting the reasonableness of pursuing one's own happiness as an ultimate aim. He went so far as to say that “ . . . when we sit down in a cool hour, we can neither justify to ourselves this or any other pursuit, till we are convinced that it will be for our happiness, or at least not contrary to it.” He held, however, that direct and simple egoism is a self-defeating strategy. Egoists will do better for themselves by adopting immediate goals other than their own interests and living their everyday life in accordance with these more immediate goals.
Butler's second addition to Shaftesbury's account was the idea of conscience. This he saw as a second natural guide to conduct, alongside enlightened self-interest. Butler believed that there is no inconsistency between the two; he admitted, however, that skeptics may doubt “the happy tendency of virtue” and for them conscience can serve as an authoritative guide. Just what reason these skeptics have to follow conscience, if they believe its guidance to be contrary to their own happiness, is something that Butler did not adequately explain. Nevertheless, his introduction of conscience as an independent source of moral reasoning reflects an important difference between ancient and modern ethical thinking. The Greek and Roman philosophers would have had no difficulty in accepting everything Butler said about the pursuit of happiness, but they would not have understood his idea of another independent source of rational guidance. Although Butler insisted that the two operate in harmony, this was for him a fortunate fact about the world and not a necessary principle of reason. Thus his recognition of conscience opened the way for later formulations of a universal principle of conduct at odds with the path indicated by even the most enlightened self-interested reasoning.
The climax of moral sense theory: Hutcheson and Hume
The moral sense school reached its fullest development in the works of two Scottish philosophers, Francis Hutcheson (1694–1746) and David Hume (1711–76). Hutcheson was concerned with showing, against the intuitionists, that moral judgment cannot be based on reason and therefore must be a matter of whether an action is “amiable or disagreeable” to one's moral sense. Like Butler's notion of conscience, Hutcheson's moral sense does not find pleasing only, or even predominantly, those actions that are in one's own interest. On the contrary, Hutcheson conceived moral sense as based on a disinterested benevolence. This led him to state, as the ultimate criterion of the goodness of an action, a principle that was to serve as the basis for the Utilitarian reformers: “that action is best which procures the greatest happiness for the greatest numbers . . . .”
Hume, like Hutcheson, held that reason cannot be the basis of morality. His chief ground for this conclusion was that morality is essentially practical: there is no point in judging something good if the judgment does not incline us to act accordingly. Reason alone, however, Hume regarded as “the slave of the passions.” Reason can show us how best to achieve our ends, but it cannot determine our ultimate desires and is incapable of moving us to action except in accordance with some prior want or desire. Hence, reason cannot give rise to moral judgments.
This is an important argument that is still employed in the debate between those who believe that morality is based on reason and those who base it instead on emotion or feelings. Hume's conclusion certainly follows from his premises. Can either premise be denied? We have seen that intuitionists such as Cudworth and Clarke maintained that reason can lead to action. Reason, they would have said, leads us to see a particular action as fitting in given circumstances and therefore to do it. Hume would have none of this. “'Tis not contrary to reason,” he provocatively asserted, “to prefer the destruction of the whole world to the scratching of my finger.” To show that he was not embracing the view that only egoism is rational, Hume continued: “'Tis not contrary to reason to choose my total ruin, to prevent the least uneasiness of an Indian or person wholly unknown to me.” His point was simply that to have these preferences is to have certain desires or feelings; they are not matters of reason at all. The intuitionists might insist that moral and mathematical reasoning are analogous, but this analogy was not helpful here. We can know a truth of geometry and not be motivated to act in any way.
What of Hume's other premise that morality is essentially practical and moral judgments must lead to action? This can be denied more easily. We could say that moral judgments merely tell us what is right or wrong. They do not lead to action unless we want to do what is right. Then Hume's argument would do nothing to undermine the claim that moral judgments are based on reason. But there is a price to pay. The terms right and wrong lose much of their force. We can no longer assert that those who know what is right but do what is wrong are in any way irrational. They are just people who do not happen to have the desire to do what is right. This desire—because it leads to action—must be acknowledged to be based on feeling rather than reason. Denying that morality is necessarily action-guiding means abandoning the idea, so important to those defending the objectivity of morality, that some things are objectively required of all rational beings.
Hume's forceful presentation of this argument against a rational basis for morality would have been enough to earn him a place in the history of ethics, but it is by no means his only achievement in this field. In A Treatise of Human Nature (1739–40) Hume points, almost as an afterthought, to the fact that writers on morality regularly start by making various observations about human nature or about the existence of a god—all statements of fact about what is the case—and then suddenly switch to statements about what ought or ought not be done. Hume says that he cannot conceive how this new relationship of “ought” can be deduced from the preceding statements that were related by “is”; and he suggests these authors should explain how this deduction is to be achieved. The point has since been called Hume's Law and taken as proof of the existence of a gulf between facts and values, or between “is” and “ought.” This places too much weight on Hume's brief and ironic comment, but there is no doubt that many writers, both before and after Hume, have argued as if values could easily be deduced from facts. They can usually be found to have smuggled values in somewhere. Attention to Hume's Law makes it easy for us to detect such logically illicit contraband.
Hume's positive account of morality is in line with that of the moral sense school: “The hypothesis which we embrace is plain. It maintains that morality is determined by sentiment. It defines virtue to be whatever mental action or quality gives to a spectator the pleasing sentiment of approbation; and vice the contrary.” In other words, Hume takes moral judgments to be based on a feeling. They do not reflect any objective state of the world. Having said that, however, it may still be asked whether this feeling is one that is common to all of us or one that varies from individual to individual. If Hume gives the former answer, moral judgments retain a kind of objectivity. While they do not reflect anything out there in the universe apart from human feelings, one's judgments may be true or false depending on whether they capture this universal human moral sentiment. If, on the other hand, the feeling varies from one individual to the next, moral judgments become entirely subjective. People's judgments would express their own feelings, and to reject someone else's judgment as wrong would merely be to say that one's own feelings were different.
Hume does not make entirely clear which of these two views he holds; but if he is to avoid breaching his own rule about not deducing an “ought” from an “is,” he cannot hold that a moral judgment follows logically from a description of the feelings that an action gives to a particular group of spectators. From the mere existence of a feeling we cannot draw the inference that we ought to obey it. For Hume to be consistent on this point—and with his central argument that moral judgments must move us to action—the moral judgment must be based not on the fact that all people, or most people, or even the speaker, have a certain feeling; it must rather be based on the actual experience of the feeling by whoever accepts the judgment. This still leaves it open whether the feeling is common to all or limited to the person accepting the judgment, but it shows that, in either case, the “truth” of a judgment for any individual depends on whether that individual actually has the appropriate feeling. Is this “truth” at all? As will be seen below, 20th-century philosophers with views broadly similar to Hume's have suggested that moral judgments have a special kind of meaning not susceptible of truth or falsity in the ordinary way.
The intuitionist response: Price and Reid
Powerful as they were, Hume's arguments did not end the debate between the moral sense theorists and the intuitionists. They did, however, lead Richard Price (1723–91), Thomas Reid (1710–96), and later intuitionists to abandon the idea that moral truths can be established by some process of demonstrative reasoning akin to that used in mathematics. Instead, these proponents of intuitionism took the line that our notions of right and wrong are simple, objective ideas, directly perceived by us and not further analyzable into anything such as “fitness.” We know of these ideas, not through any moral sense based on feelings, but rather through a faculty of reason or of the intellect that is capable of discerning truth. Since Hume, this has been the only plausible form of intuitionism. Yet, Price and Reid failed to explain adequately just what are the objective moral qualities that we perceive directly and how they connect with the actions we choose.
At this point the argument over whether morality is based on reason or feelings was temporarily exhausted, and the focus of British ethics shifted from such questions about the nature of morality as a whole to an inquiry into which actions are right and which are wrong. Today, the distinction between these two types of inquiry would be expressed by saying that whereas the 18th-century debate between intuitionism and the moral sense school dealt with questions of metaethics, 19th-century thinkers became chiefly concerned with questions of normative ethics. The positions we take in metaethics over whether ethics is objective or subjective, for example, do not tell us what we ought to do. That task is the province of normative ethics.
The impetus to the discussion of normative ethics was provided by the challenge of Utilitarianism. The essential principle of Utilitarianism was, as noted above, put forth by Hutcheson. Curiously, it gained further development from the widely read theologian William Paley (1743–1805), who provides a good example of the independence of metaethics and normative ethics. His position on the nature of morality was similar to that of Ockham and Luther—namely, he held that right and wrong are determined by the will of God. Yet, because he believed that God wills the happiness of his creatures, his normative ethics were Utilitarian: whatever increases happiness is right; whatever diminishes it is wrong.
Notwithstanding these predecessors, Jeremy Bentham (1748–1832) is properly considered the father of modern Utilitarianism. It was he who made the Utilitarian principle serve as the basis for a unified and comprehensive ethical system that applies, in theory at least, to every area of life. Never before had a complete, detailed system of ethics been so consistently constructed from a single fundamental ethical principle.
Bentham's ethics began with the proposition that nature has placed human beings under two masters: pleasure and pain. Anything that seems good must either be directly pleasurable, or thought to be a means to pleasure or to the avoidance of pain. Conversely, anything that seems bad must either be directly painful, or thought to be a means to pain or to the deprivation of pleasure. From this Bentham argued that the words right and wrong can only be meaningful if they are used in accordance with the Utilitarian principle, so that whatever increases the net surplus of pleasure over pain is right and whatever decreases it is wrong.
Bentham then set out how we are to weigh the consequences of an action, and thereby decide whether it is right or wrong. We must, he says, take account of the pleasures and pains of everyone affected by the action, and this is to be done on an equal basis: “Each to count for one, and none for more than one.” (At a time when Britain had a major trade in slaves, this was a radical suggestion; and Bentham went further still, explicitly extending consideration to nonhuman animals as well.) We must also consider how certain or uncertain the pleasures and pains are, their intensity, how long they last, and whether they tend to give rise to further feelings of the same or of the opposite kind.
Bentham did not allow for distinctions in the quality of pleasure or pain as such. Referring to a popular game, he affirmed that “quantity of pleasure being equal, pushpin is as good as poetry.” This led his opponents to characterize his philosophy as one fit for pigs. The charge is only half true. Bentham could have defended a taste for poetry on the grounds that whereas one tires of mere games, the pleasures of a true appreciation of poetry have no limit; thus the quantities of pleasure obtained by poetry are greater than those obtained by pushpin. All the same, one of the strengths of Bentham's position is its honest bluntness, which it owes to his refusal to be fazed by the contrary opinions either of conventional morality or of refined society. He never thought that the aim of Utilitarianism was to explain or justify ordinary moral views; it was, rather, to reform them.
John Stuart Mill (1806–73), Bentham's successor as the leader of the Utilitarians and the most influential British thinker of the 19th century, had some sympathy for the view that Bentham's position was too narrow and crude. His essay “Utilitarianism” (1861) introduced several modifications, all aimed at a broader view of what is worthwhile in human existence and at implications less shocking to established moral convictions. Although his position was based on the maximization of happiness (and this is said to consist in pleasure and the absence of pain), he distinguished between pleasures that are higher and those that are lower in quality. This enabled him to say that it is “better to be Socrates dissatisfied than a fool satisfied.” The fool, he argued, would only be of a different opinion because he did not know both sides of the question.
Mill sought to show that Utilitarianism is compatible with moral rules and principles relating to justice, honesty, and truthfulness by arguing that Utilitarians should not attempt to calculate before each action whether that specific action will maximize utility. Instead, they should be guided by the fact that an action falls under a general principle (such as the principle that we should keep our promises), and adherence to that general principle tends to increase happiness. Only under special circumstances is it necessary to consider whether an exception may have to be made.
Mill's easily readable prose ensured a wide audience for his exposition of Utilitarianism, but as a philosopher he was markedly inferior to the last of the 19th-century Utilitarians, Henry Sidgwick (1838–1900). Sidgwick's Methods of Ethics (1874) is the most detailed and subtle work of Utilitarian ethics yet produced. Especially noteworthy is his discussion of the various principles accepted by what he calls common sense morality—i.e., the morality accepted by most people without systematic thought. Price, Reid, and some adherents of their brand of intuitionism thought that such principles (e.g., those of truthfulness, justice, honesty, benevolence, purity, and gratitude) were self-evident, independent moral truths. Sidgwick was himself an intuitionist as far as the basis of ethics was concerned: he believed that the principle of Utilitarianism must ultimately be based on a self-evident axiom of rational benevolence. Nonetheless, he strongly rejected the view that all principles of common sense morality are themselves self-evident. He went on to demonstrate that the allegedly self-evident principles conflict with one another and are vague in their application. They could only be part of a coherent system of morality, he argued, if they were regarded as subordinate to the Utilitarian principle, which defined their application and resolved the conflicts between them.
Sidgwick was satisfied that he had reconciled common sense morality and Utilitarianism by showing that whatever was sound in the former could be accounted for by the latter. He was, however, troubled by his inability to achieve any such reconciliation between Utilitarianism and egoism, the third method of ethical reasoning dealt with in his book. True, Sidgwick regarded it as self-evident that “from the point of view of the universe” one's own good is of no greater value than the like good of any other person, but what could be said to the egoist who expresses no concern for the point of view of the universe, taking his stand instead on the fact that his own good mattered more to him than anyone else's? Bentham had apparently believed either that self-interest and the general happiness are not at odds or that it is the legislator's task to reward or punish actions so as to see that they are not. Mill also had written of the need for sanctions but was more concerned with the role of education in shaping human nature in such a way that one finds happiness in doing what benefits all. By contrast, Sidgwick was convinced that this could lead at best to a partial overlap between what is in one's own interest and what is in the interest of all. Hence, he searched for arguments with which to convince the egoist of the rationality of universal benevolence but failed to find any. The Methods of Ethics concludes with an honest admission of this failure and an expression of dismay at the fact that, as a result, “. . . it would seem necessary to abandon the idea of rationalizing [morality] completely.”
The continental tradition: from Spinoza to Nietzsche
If Hobbes is to be regarded as the first of a distinctively British philosophical tradition, the Dutch-Jewish philosopher Benedict Spinoza (1632–77) appropriately occupies the same position in continental Europe. Unlike Hobbes, Spinoza did not provoke a long-running philosophical debate. In fact, his philosophy was neglected for a century after his death and was in any case too much of a self-contained system to invite debate. Nevertheless, Spinoza held positions on crucial issues that were in sharp contrast to those taken by Hobbes, and these differences were to grow over the centuries during which British and continental European philosophy followed their own paths.
The first of these contrasts with Hobbes is Spinoza's attitude toward natural desires. As has been noted, Hobbes took self-interested desire for pleasure as an unchangeable fact about human nature and proceeded to build a moral and political system to cope with it. Spinoza did just the opposite. He saw natural desires as a form of bondage. We do not choose to have them of our own will. Our will cannot be free if it is subject to forces outside itself. Thus our real interests lie not in satisfying these desires but in transforming them by the application of reason. Spinoza thus stands in opposition not only to Hobbes but also to the position later to be taken by Hume, for Spinoza saw reason not as the slave of the passions but as their master.
The second important contrast is that while individual humans and their separate interests are always assumed in Hobbes's philosophy, this separation is simply an illusion from Spinoza's viewpoint. Everything that exists is part of a single system, which is at the same time nature and God. (One possible interpretation of this is that Spinoza was a pantheist, believing that God exists in every aspect of the world and not apart from it.) We, too, are part of this system and are subject to its rationally necessary laws. Once we know this, we understand how irrational it would be to desire that things should be different from the way they are. This means that it is irrational to envy, to hate, and to feel guilt, for these emotions presuppose the possibility of things being different. So we cease to feel such emotions and find peace, happiness, and even freedom—in Spinoza's terms the only freedom there can be—in understanding the system of which we are a part.
A view of the world so different from our everyday conceptions as Spinoza's cannot be made to seem remotely plausible when presented in summary form. To many philosophers it remains implausible even when complete. Its value for ethics, however, lies not in its validity as a whole but in the introduction into continental European philosophy of a few key ideas: that our everyday nature may not be our true nature; that we are part of a larger unity; and that freedom is to be found in following reason.
The German philosopher and mathematician Gottfried Wilhelm Leibniz (1646–1716), the next great figure in the Rationalist tradition, gave scant attention to ethics, perhaps because of his belief that the world is governed by a perfect God and hence must be the best of all possible worlds. As a result of Voltaire's hilarious parody in Candide (1759), this position has achieved a certain notoriety. It is not generally recognized, however, that it does at least provide a consistent solution to a problem that has baffled thinking Christians for many centuries: How can there be evil in a world governed by an all-powerful, all-knowing, and all-good God? Leibniz's solution may not be plausible, but there may be no better one if the above premises are allowed to pass unchallenged.
It was the French philosopher and writer Jean-Jacques Rousseau (1712–78) who took the next step. His Discours sur l'origine et les fondements de l'inégalité parmi les hommes (1755; A Discourse upon the Origin and Foundation of the Inequality Among Mankind) depicted a state of nature very different from that described by Hobbes as well as from Christian conceptions of original sin. Rousseau's “noble savages” lived isolated, trouble-free lives, supplying their simple wants from the abundance that nature provided and even coming to each other's aid in times of need. Only when someone claimed possession of a piece of land did laws have to be introduced, and with them came civilization and all its corrupting influences. This is, of course, a message that resembles one of Spinoza's key points: The human nature we see before us in our fellow citizens is not the only possibility; somewhere, there is something better. If we can find a way to reach it, we will have found the solution to our ethical and social problems.
Rousseau revealed his route in his Contrat social (1762; A Treatise on the Social Compact, or Social Contract). It required rule by the “general will.” This may sound like democracy and, in a sense, it was democracy that Rousseau advocated; but his conception of rule by the general will is very different from the modern idea of democratic government. Today, we assume that in any society the interests of different citizens will be in conflict, and that as a result for every majority that succeeds in having its will implemented there will be a minority that fails to do so. For Rousseau, on the other hand, the general will is not the sum of all the individual wills in the community but the true common will of all the citizens. Even if a person dislikes and opposes a decision carried by the majority, that decision represents the general will, the common will in which he shares. For this to be possible, Rousseau must be assuming that there is some common good in which all human beings share and hence that their true interests coincide. As man passes from the state of nature to civil society, he has to “consult his reason rather than study his inclinations.” This is not, however, a sacrifice of his true interests, for in following reason he ceases to be a slave to “physical impulses” and so gains moral freedom.
This leads to a picture of civilized human beings as divided selves. The general will represents the rational will of every member of the community. If an individual opposes the decision of the general will, his opposition must stem from his physical impulses and not from his true, autonomous will. For obvious reasons, this idea was to find favour with such autocratic leaders of the French Revolution as Robespierre. It also had a much less sinister influence on one of the outstanding philosophers of modern times: Immanuel Kant of Germany.
Interestingly, Kant (1724–1804) acknowledged that he had despised the ignorant masses until he read Rousseau and came to appreciate the worth that exists in every human being. For other reasons too, Kant is part of the tradition deriving from both Spinoza and Rousseau. Like his predecessors, Kant insisted that actions resulting from desires cannot be free. Freedom is to be found only in rational action. Moreover, whatever is demanded by reason must be demanded of all rational beings; hence, rational action cannot be based on a single individual's personal desires, but must be action in accordance with something that he can will to be a universal law. This view roughly parallels Rousseau's idea of the general will as that which, as opposed to the individual will, a person shares with the whole community. Kant extended this community to all rational beings.
Kant's most distinctive contribution to ethics was his insistence that our actions possess moral worth only when we do our duty for its own sake. He first introduced this idea as something accepted by our common moral consciousness and only then tried to show that it is an essential element of any rational morality. In claiming that this idea is central to the common moral consciousness, Kant was expressing in heightened form a tendency of Judeo-Christian ethics and revealing how much the Western ethical consciousness had changed since the time of Socrates, Plato, and Aristotle.
Does our common moral consciousness really insist that there is no moral worth in any action done for any motive other than duty? Certainly we would be less inclined to praise the young man who plunges into the surf to rescue a drowning child if we learned that he did it because he expected a handsome reward from the child's millionaire father. This feeling lies behind Kant's disagreement with all those moral philosophers who have argued that we should do what is right because that is the path to happiness, either on earth or in heaven. But Kant went further than this. He was equally opposed to those who see benevolent or sympathetic feelings as the basis of morality. Here he may be reflecting the moral consciousness of 18th-century Protestant Germany, but it appears that even then the moral consciousness of Britain, as reflected in the writings of Shaftesbury, Hutcheson, Butler, and Hume, was very different. The moral consciousness of Western civilization in the last quarter of the 20th century also appears to be different from the one Kant was describing.
Kant's ethics is based on his distinction between hypothetical and categorical imperatives. He called any imperative based on desires a hypothetical imperative, meaning by this that it is a command of reason that applies only if we desire the goal. For example, “Be honest, so that people will think well of you!” is an imperative that applies only if you want people to think well of you. A similarly hypothetical analysis can be given of the imperatives suggested by, say, Shaftesbury's ethics: “Help those in distress, if you sympathize with their sufferings!” In contrast to such approaches to ethics, Kant said that the commands of morality must be categorical imperatives: they must apply to all rational beings, regardless of their wants and feelings. To most philosophers this poses an insuperable problem: a moral law that applied to all rational beings, irrespective of their personal wants and desires, could have no specific goals or aims because all such aims would have to be based on someone's wants or desires. It took Kant's peculiar genius to seize upon precisely this implication, which to others would have refuted his claims, and to use it to derive the nature of the moral law. Because nothing else but reason is left to determine the content of the moral law, the only form this law can take is the universal principle of reason. Thus the supreme formal principle of Kant's ethics is: “Act only on that maxim through which you can at the same time will that it should become a universal law.”
Kant still faced two major problems. First, he had to explain how we can be moved by reason alone to act in accordance with this supreme moral law; and, second, he had to show that this principle is able to provide practical guidance in our choices. If we were to couple Hume's theory that reason is always the slave of the passions with Kant's denial of moral worth to all actions motivated by desires, the outcome would be that no actions can have moral worth. To avoid such moral skepticism, Kant maintained that reason alone can lead to action. Unfortunately he was unable to say much in defense of this claim. Of course, the mere fact that we otherwise face so unpalatable a conclusion is in itself a powerful incentive to believe that somehow a categorical imperative must be possible, but this is not convincing to anyone not already wedded to Kant's view of moral worth. At one point Kant appeared to be taking a different line. He wrote that the moral law inevitably produces in us a feeling of reverence or awe. If he meant to say that this feeling then becomes the motivation for obedience, however, he was conceding Hume's point that reason alone is powerless to bring about action. It would also be difficult to accept that anything, even the moral law, can necessarily produce a certain kind of feeling in all rational beings regardless of their psychological constitution. Thus this approach does not succeed in clarifying Kant's position or rendering it plausible.
Kant gave closer attention to the problem of how his supreme formal principle of morality can provide guidance in concrete situations. One of his examples is as follows. Suppose that I plan to get some money by promising to pay it back, although I have no intention of keeping my promise. The maxim of such an action might be “Make false promises when it suits you to do so.” Could such a maxim be a universal law? Of course not. If promises were so easily broken, no one would rely on them, and the practice of promising would cease. For this reason, I know that the moral law does not allow me to carry out my plan.
Not all situations are so easily decided. Another of Kant's examples deals with aiding those in distress. I see someone in distress, whom I could easily help, but I prefer not to do so. Can I will as a universal law the maxim that a person should refuse assistance to those in distress? Unlike the case of promising, there is no strict inconsistency in this maxim being a universal law. Kant, however, says that I cannot will it to be such because I may someday be in distress myself, and I would then want assistance from others. This type of example is less convincing than the previous one. If I value self-sufficiency so highly that I would rather remain in distress than escape from it through the intervention of another, Kant's principle no longer tells me that I have a duty to assist those in distress. In effect, Kant's supreme principle of practical reason can only tell us what to do in those special cases in which turning the maxim of our action into a universal law yields a contradiction. Outside this limited range, the moral law that was to apply to all rational beings regardless of their wants and desires cannot guide us except by appealing to our desires.
Kant does offer alternative formulations of the categorical imperative, and one of these has been seen as providing more substantial guidance than the formulation so far considered. This formulation is: “So act that you treat humanity in your own person and in the person of everyone else always at the same time as an end and never merely as means.” The connection between this formulation and the first one is not entirely clear, but the idea seems to be that when I choose for myself I treat myself as an end. If, therefore, in accordance with the principle of universal law, I must choose so that all could choose similarly, I must respect everyone else as an end. Even if this is valid, the application of the principle raises further questions. What is it to treat someone merely as a means? Using a person as a slave is an obvious example; Kant, like Bentham, was making a stand against this kind of inequality while it still flourished as an institution in some parts of the world. But to condemn slavery we have only to give equal weight to the interests of the slaves. Does Kant's principle take us any further than Utilitarianism? Modern Kantians hold that it does because they interpret it as denying the legitimacy of sacrificing the rights of one human being in order to benefit others.
One thing that can be said confidently is that Kant was firmly opposed to the Utilitarian principle of judging every action by its consequences. His ethics is a deontology. In other words, the rightness of an action depends on whether it accords with a rule irrespective of its consequences. In one essay Kant went so far as to say that it would be wrong to tell a lie even to a would-be murderer who came to your door seeking to kill an innocent person hidden in your house. This kind of situation illustrates how difficult it is to remain a strict deontologist when principles may clash. Apparently Kant believed that his principle of universal law required that one never tell lies, but it could also be argued that his principle of treating everyone as an end would necessitate doing everything possible to save the life of an innocent person. Another possibility would be to formulate the maxim of the action with sufficient precision to define the circumstances under which it would be permissible to tell lies—e.g., we could all agree to a universal law that permitted lies to people intending to commit murder. Kant did not explore such solutions.
Kant's philosophy deeply affected subsequent German thought, but there were several aspects of it that troubled later thinkers. One of these was his portrayal of human nature as irreconcilably split between reason and emotion. In Briefe über die ästhetische Erziehung des Menschen (1795; Letters on the Aesthetic Education of Man), the dramatist and literary theorist Friedrich Schiller suggested that while this might apply to modern human beings, it was not the case in ancient Greece where reason and feeling seemed to have been in harmony. (There is, as suggested earlier, some basis for this claim insofar as the Greek moral consciousness did not make the modern distinction between morality and self-interest.) Schiller's suggestion may have been the spark that led Georg Wilhelm Friedrich Hegel (1770–1831) to develop the first philosophical system that has historical change as its core.
As Hegel presents it, all of history is the progress of mind or spirit along a logically necessary path that leads to freedom. Human beings are manifestations of this universal mind, although at first they do not realize this. Freedom cannot be achieved until human beings do realize it, and so feel at home in the universe. There are echoes of Spinoza in Hegel's idea of mind as something universal and also in his conception of freedom as based on knowledge. What is original, however, is the way in which all of history is presented as leading to the goal of freedom. Thus Hegel accepts Schiller's view that for the ancient Greeks, reason and feeling were in harmony, but he sees this as a naive harmony that could exist only as long as the Greeks did not see themselves as free individuals with a conscience independent of the views of the community. For freedom to develop, it was necessary for this harmony to break down. This occurred as a result of the Reformation, with its insistence on the right of individual conscience. But the rise of individual conscience left human beings divided between conscience and self-interest, between reason and feeling. We have seen how many philosophers tried unsuccessfully to bridge this gulf until Kant's insistence that we must do our duty for duty's sake made the division an apparently inevitable part of moral life. For Hegel, however, it can be overcome by a synthesis of the harmonious communal nature of Greek life with the modern freedom of individual conscience.
In Naturrecht und Staatswissenschaft im Grundrisse, alternatively entitled Grundlinien der Philosophie des Rechts (1821; The Philosophy of Right), Hegel described how this synthesis could be achieved in an organic community. The key to his solution is the recognition that human nature is not fixed but is shaped by the society in which one lives. The organic community would foster those desires that most benefit the community. It would imbue its members with the sense that their own identity consists in being a part of the community, so that they would no more think of going off in pursuit of their own private interests than one's left arm would think of going off without the rest of the body. Nor should it be forgotten that such organic relationships are reciprocal: the organic community will no more disregard the interests of its members than an individual would disregard an injury to his or her arm. Harmony would thus prevail but not the naive harmony of ancient Greece. The citizens of Hegel's organic community do not obey its laws and customs simply because they are there. With the independence of mind characteristic of modern times, they can only give their allegiance to institutions that they recognize as conforming to rational principles. The modern organic state, unlike the ancient Greek city-state, is self-consciously based on rationally selected principles.
Hegel provided a new approach to the ancient problem of reconciling morality and self-interest. Others had accepted the problem as part of the inevitable nature of things and looked for ways around it. Hegel looked at it historically and saw it as a problem only in a certain type of society. Instead of solving the problem as it existed, he looked to the emergence of a new form of society in which it would disappear. In this way Hegel claimed to have overcome one great problem that was insoluble for Kant.
Hegel also believed that he had the solution to the other key weakness in Kant's ethics—namely, the difficulty of giving content to the supreme formal moral principle. In Hegel's organic community, the content of our moral duty would be given to us by our position in society. We would know that our duty was to be a good parent, a good citizen, a good teacher, merchant, or soldier, as the case might be. It is an ethic that has been called “my station and its duties.” It might be thought that this is a limited, conservative conception of what we ought to do with our lives, especially when compared with Kant's principle of universal law, which does not base what we ought to do on what our particular station in society happens to be. Hegel would have replied that because the organic community is based on universally valid principles of reason, it complies with Kant's principle of universal law. Moreover, without the specific content provided by the concrete institutions and practices of a society, that principle would remain an empty formula.
Hegel's philosophy has both a conservative and a radical side. The conservative aspect is reflected in the ethic of “my station and its duties,” and even more strongly in the significant resemblance between Hegel's detailed description of the organic society and the actual institutions of the Prussian state in which he lived and taught for the last decade of his life. This resemblance, however, was in no way a necessary implication of Hegel's philosophy as a whole. After Hegel's death, a group of his more radical followers known as the Young Hegelians hailed the manner in which he had demonstrated the need for a new form of society to overcome the separation between self and community but scorned the implication that the state in which they were living could be this solution to all the problems of history. Among this group was a young student named Karl Marx.
Marx (1818–83) has often been presented by his followers as a scientist rather than a moralist. He did not deal directly with the ethical issues that occupied the philosophers so far discussed. His Materialist conception of history is, rather, an attempt to explain all ideas, whether political, religious, or ethical, as the product of the particular economic stage that society has reached. Thus a feudal society will regard loyalty and obedience to one's lord as the chief virtues. A capitalist economy, on the other hand, requires a mobile labour force and expanding markets, so that freedom, especially the freedom to sell one's labour, is its key ethical conception. Because Marx saw ethics as a mere by-product of the economic basis of society, he frequently took a dismissive stance toward it. Echoing the Sophist Thrasymachus, Marx said that the “ideas of the ruling class are in every epoch the ruling ideas.” With his coauthor Friedrich Engels, he was even more scornful in the Manifest der Kommunistischen Partei (1848; The Communist Manifesto), in which morality, law, and religion are referred to as “so many bourgeois prejudices behind which lurk in ambush just as many bourgeois interests.”
A sweeping rejection of ethics, however, is difficult to reconcile with the highly moralistic tone of Marx's condemnation of the miseries the capitalist system inflicts upon the working class and with his obvious commitment to hastening the arrival of the Communist society that will end such iniquities. After Marx died, Engels tried to explain this apparent inconsistency by saying that as long as society was divided into classes, morality would serve the interests of the ruling class. A classless society, on the other hand, would be based on a truly human morality that served the interests of all human beings. This does make Marx's position consistent by setting him up as a critic, not of ethics as such, but rather of the class-based moralities that would prevail until the Communist revolution.
By studying Marx's earlier writings—those produced when he was a Young Hegelian—one obtains a slightly different, though not incompatible, impression of the place of ethics in Marx's thought. There seems no doubt that the young Marx, like Hegel, saw human freedom as the ultimate goal. He also held, as did Hegel, that freedom could only be obtained in a society in which the dichotomy between private interest and the general interest had disappeared. Under the influence of socialist ideas, however, he formed the view that merely knowing what was wrong with the world would not achieve anything. Only the abolition of private property could lead to the transformation of human nature and so bring about the reconciliation of the individual and the community. Theory, Marx concluded, had gone as far as it could; even the theoretical problems of ethics, as illustrated in Kant's division between reason and feeling, would remain insoluble unless one moved from theory to practice. This is what Marx meant in the famous thesis that is engraved on his tombstone: “The philosophers have only interpreted the world, in various ways; the point is to change it.” The goal of changing the world stemmed from Marx's attempt to overcome one of the central problems of ethics; the means now passed beyond philosophy.
Friedrich Nietzsche (1844–1900) was a literary and social critic, not a systematic philosopher. In ethics, the chief target of his criticism is the Judeo-Christian tradition. He describes Jewish ethics as a “slave morality” based on envy. Christian ethics is, in his opinion, even worse because it makes a virtue of meekness, poverty, and humility, telling one to turn the other cheek rather than to struggle. It is the ethics of the weak, who hate and fear strength, pride, and self-affirmation. Such an ethics undermines the human drives that have led to the greatest and most noble human achievements.
Nietzsche thought the era of traditional religion to be over: “God is dead,” perhaps his most widely repeated aphorism, was his paradoxical way of putting it. Yet, what was to be put in its place? Nietzsche took from Aristotle the concept of greatness of soul, the unchristian virtue that included nobility and a justified pride in one's achievements. He suggested a reevaluation of values that would lead to a new ideal: the Übermensch, a term usually translated as “Superman” and given connotations that suggest that Nietzsche would have regarded Hitler as an ideal type. Nietzsche's praise of “the will to power” is taken as further evidence that he would have approved of Hitler. This interpretation owes much to Nietzsche's racist sister, who after his death compiled a volume of his unpublished writings, arranging them to make it appear that he was a forerunner of Nazi thinking. This is at best a partial truth. Nietzsche was almost as contemptuous of pan-German racism and anti-Semitism as he was of the ethics of Judaism and Christianity. What Nietzsche meant by Übermensch was a person who could rise above the limitations of ordinary morality; and by “the will to power” it seems that Nietzsche had in mind self-affirmation and not necessarily the use of power to oppress others.
Nevertheless, Nietzsche left himself wide open to those who wanted his philosophical imprimatur for their crimes against humanity. His belief in the importance of the Übermensch made him talk of ordinary people as “the herd,” who did not really matter. In Jenseits von Gut und Böse (1886; Beyond Good and Evil), he wrote with approval of “the distinguished type of morality,” according to which “one has duties only toward one's equals; toward beings of a lower rank, toward everything foreign to one, one may act as one sees fit, ‘as one's heart dictates' ”—in any event, beyond good and evil. The point is that the Übermensch is above all ordinary moral standards: “The distinguished type of human being feels himself as value-determining; he does not need to be ratified; he judges ‘that which is harmful to me is harmful as such'; he knows that he is the something which gives value to objects; he creates values.” In this Nietzsche was a forerunner of Existentialism rather than Nazism, but then Existentialism, precisely because it gives no basis for choosing other than authenticity, is not incompatible with Nazism.
Nietzsche's position on ethical matters represents a stark contrast to that of Henry Sidgwick, the last major figure of 19th-century British ethics treated in this article. Sidgwick believed in objective standards for ethical judgments and thought that the subject of ethics had over the centuries made progress toward these standards. He saw his own work as building carefully on that progress. Nietzsche, on the other hand, would have us sweep away everything since Greek ethics and not keep much of that either. The superior types would then be able to freely create their own values as they saw fit.
20th-century Western ethics
The brief historical survey of Western ethics from Socrates to the 20th century provided above has shown three constant themes. Since the Sophists, there have been (1) disagreements over whether ethical judgments are truths about the world or only reflections of the wishes of those who make them; (2) frequent attempts to show, in the face of considerable skepticism, either that it is in one's own interests to do what is good or that, even though this is not necessarily in one's own interests, it is the rational thing to do; and (3) repeated debates over just what goodness and the standard of right and wrong might be. The 20th century has seen new twists to these old themes and an increased attention to the application of ethics to practical problems. Each of these major questions is considered below in terms of metaethics, normative ethics, and applied ethics.
As previously noted, metaethics deals not with substantive ethical theories or moral judgments but rather with questions about the nature of these theories and judgments. Among 20th-century philosophers in English-speaking countries, those defending the objectivity of ethical judgments have most often been intuitionists or naturalists; those taking a different view have been emotivists or prescriptivists.
Moore and the naturalistic fallacy
At first it was the intuitionists who dominated the scene. In 1903 the Cambridge philosopher G.E. Moore presented in Principia Ethica his “open question argument” against what he called the naturalistic fallacy. The argument can in fact be found in Sidgwick and to some extent in the 18th-century intuitionists, but Moore's statement of it somehow caught the imagination of philosophers for the first half of the 1900s. Moore's aim was to prove that “good” is the name of a simple, unanalyzable quality. His chief target was the attempt to define good in terms of some natural quality of the world whether it be “pleasure” (he had John Stuart Mill in mind), or “more evolved” (here he refers to Herbert Spencer, who had tried to build an ethical system around Darwin's theory of evolution), or simply the idea of what is natural itself, as in appeals to a law of nature—hence the label naturalistic fallacy (i.e., the fallacy of treating good as if it were the name of a natural property). But the label is not apt because Moore's argument applied, as he acknowledged, to any attempt to define good in terms of something else, including something metaphysical or supernatural such as “what God wills.”
The so-called open question argument itself is simple enough. It consists of taking the proposed definition of good and turning it into a question. For instance, if the proposed definition is “Good means whatever leads to the greatest happiness of the greatest number,” then Moore would ask: “Is whatever leads to the greatest happiness of the greatest number good?” Moore is not concerned whether we answer yes or no. His point is that if the question is at all meaningful—if a negative answer is not plainly self-contradictory—then the definition cannot be right, for a definition is supposed to preserve the meaning of the term defined. If it does, a question of the type Moore asks would be absurd for all who understand the meaning of the term. Compare, for example, “Do all squares have four equal sides?”
Moore's argument does show that definitions of the kind he criticized do not capture all that we ordinarily mean by the term good. It would still be open to a would-be naturalist to admit that the definition does not capture everything that we ordinarily mean by the term, and add that all this shows is that ordinary usage is muddled and in need of revision. (We shall see that J.L. Mackie was later to make this part of his defense of subjectivism.) As for Mill, it is questionable whether he really intended to offer a definition of the term good; he seems to have been more interested in offering a criterion by which we could ascertain which actions are good. As Moore acknowledged, the open question argument does not do anything to show that pleasure, for example, is not the sole criterion of the goodness of an action. It shows only that this cannot be known to be true by definition, and so, if it is to be known at all, it must be known by some other means.
In spite of these doubts, Moore's argument was widely accepted at the time as showing that all attempts to derive ethical conclusions from anything not itself ethical in nature are bound to fail. The point was soon seen to be related to that made by Hume in his remarks on writers who move from “is” to “ought.” Moore, however, would have considered Hume's own account of morality to be naturalistic because of its definition of virtue in terms of the sentiments of the spectator. The upshot was that for 30 years after the publication of Principia Ethica intuitionism was the dominant metaethical position in British philosophy. In addition to Moore, its supporters included H.A. Prichard and Sir W.D. Ross.
The 20th-century intuitionists were not far removed philosophically from their 18th-century predecessors—those such as Richard Price who had learned from Hume's criticism and did not attempt to reason their way to ethical conclusions but claimed rather that ethical knowledge is gained through an immediate apprehension of its truth. In other words, a true ethical judgment is self-evident as long as we are reflecting clearly and calmly and our judgment is not distorted by self-interest or faulty moral upbringing. Ross, for example, took “the convictions of thoughtful, well-educated people” as “the data of ethics,” observing that while some may be illusory, they should only be rejected when they conflict with others that are better able to stand up to “the test of reflection.”
The intuitionists differed on the nature of the moral truths that are apprehended in this way. For Moore it was self-evident that certain things are valuable: e.g., the pleasures of friendship and the enjoyment of beauty. On the other hand, Ross thought we know it to be our duty to do acts of a certain type. These differences will be dealt with in the discussion of normative ethics. They are, however, significant to metaethical intuitionism because they reveal the lack of agreement, even among the intuitionists themselves, about moral judgments that each claims to be self-evident.
This disagreement was one of the reasons for the eventual rejection of intuitionism, which, when it came, was as complete as its acceptance had been in earlier decades. But there was also a more powerful philosophical motive working against intuitionism. During the 1930s, Logical Positivism, developed by the Vienna Circle and popularized in Britain by A.J. Ayer in his manifesto Language, Truth and Logic (1936), became influential in British philosophy. According to the Logical Positivists, all true statements fall into two categories: logical truths and statements of fact. Moral judgments cannot fit comfortably into either category. They cannot be logical truths, for these are mere tautologies that can tell us nothing more than what is already contained in the definitions of the terms. Nor can they be statements of fact because these must, according to the Logical Positivists, be at least in principle verifiable; there is no way of verifying the truths that the intuitionists claimed to apprehend. The truths of mathematics, on which intuitionists had continued to rely as the one clear parallel case of a truth known by its self-evidence, were explained now as logical truths. In this view, mathematics tells us nothing about the world; it is simply a logical system, true by the definitions of the terms involved, which may be useful in our dealings with the world. Thus the intuitionists lost the one useful analogy to which they could appeal in support of the existence of a body of self-evident truths known by reason alone. It seemed to follow that moral judgments could not be truths at all.
In his above-cited Language, Truth and Logic, Ayer offered an alternative account: moral judgments are not statements at all. When we say, “You acted wrongly in stealing that money,” we are not expressing any fact beyond that stated by “You stole that money.” It is, however, as if we had stated this fact with a special tone of abhorrence, for in saying that something is wrong, we are expressing our feelings of disapproval toward it.
This view was more fully developed by Charles Stevenson in Ethics and Language (1944). As the titles of books of this period suggest, philosophers were now paying more attention to language and to the different ways in which it could be used. Stevenson distinguished the facts a sentence may convey from the emotive impact it is intended to have. Moral judgments are significant, he urged, because of their emotive impact. In saying that something is wrong, we are not merely expressing our disapproval of it, as Ayer suggested. We are encouraging those to whom we speak to share our attitude. This is why we bother to argue about our moral views, while on matters of taste we may simply agree to differ. It is important to us that others share our attitudes on war, equality, or killing; we do not care if they prefer to take their tea with lemon and we do not.
The emotivists were immediately accused of being subjectivists. In one sense of the term subjectivist, the emotivists could firmly reject this charge. Unlike other subjectivists in the past, they did not hold that those who say, for example, “Stealing is wrong,” are making a statement of fact about their own feelings or attitudes toward stealing. This view—more properly known as subjective naturalism because it makes the truth of moral judgments depend on a natural, albeit subjective, fact about the world—could be refuted by Moore's open question argument. It makes sense to ask: “I know that I have a feeling of approval toward this, but is it good?” It was the emotivists' view, however, that moral judgments make no statements of fact at all. The emotivists could not be defeated by the open question argument because they agreed that no definition of “good” in terms of facts, natural or unnatural, could capture the emotive element of its meaning. Yet, this reply fails to confront the real misgivings behind the charge of subjectivism: the concern that there are no possible standards of right and wrong other than one's own subjective feelings. In this sense, the emotivists were subjectivists.
About this time a different form of subjectivism was becoming fashionable on the Continent and to some extent in the United States. Existentialism was as much a literary as a philosophical movement. Its leading figure, Jean-Paul Sartre, propounded his ideas in novels and plays as well as in his major philosophical treatise, L'Être et le néant (1943; Being and Nothingness). For Sartre, because there is no God, human beings have not been designed for any particular purpose. The Existentialists express this by stating that our existence precedes our essence. In saying this, they make clear their rejection of the Aristotelian notion that just as we can recognize a good knife once we know that the essence of a knife is to cut, so we can recognize a good human being once we understand the essence of human nature. Because we have not been designed for any specific end, we are free to choose our own essence, which means to choose how we will live. To say that we are compelled by our situation, our nature, or our role in life to act in a certain way is to exhibit “bad faith.” This seems to be the only term of disapproval the Existentialists are prepared to use. As long as we choose “authentically,” there are no moral standards by which our conduct can be criticized.
This, at least, is the view most widely held by the Existentialists. In one work, a short piece entitled L'Existentialisme est un humanisme (1946; “Existentialism Is a Humanism”; Eng. trans., Existentialism and Humanism), Sartre backs away from so radical a subjectivism by suggesting a version of Kant's idea that we must be prepared to apply our judgments universally. He does not reconcile this view with conflicting statements elsewhere in his writings, and it is doubtful whether it can be regarded as a statement of his true ethical views. It may reflect, however, a widespread postwar reaction to the spreading knowledge of what happened at Auschwitz and other Nazi death camps. One leading German prewar Existentialist, Martin Heidegger, had actually become a Nazi. Was this “authentic choice” just as good as Sartre's own choice to join the French Résistance? Is there really no firm ground from which such a choice could be rejected? This seemed to be the upshot of the pure Existentialist position, just as it was an implication of the ethical emotivism that was dominant among English-speaking philosophers. It is scarcely surprising that many philosophers should search for a metaethical view that did not commit them to this conclusion. The means used by Sartre in L'Existentialisme est un humanisme were also to have their parallel, though in a much more sophisticated form, in British moral philosophy.
In The Language of Morals (1952), R.M. Hare supported some of the elements of emotivism but rejected others. He agreed that in making moral judgments we are not primarily seeking to describe anything; but neither, he said, are we simply expressing our attitudes. Instead, he suggested that moral judgments prescribe; that is, they are a form of imperative sentence. Hume's rule about not deriving an “ought” from an “is” can best be explained, according to Hare, in terms of the impossibility of deriving any prescription from a set of descriptive sentences. Even the description “There is an enraged bull bearing down on you” does not necessarily entail the prescription “Run!” because I may have been searching for ways of killing myself in such a way that my children can still benefit from my life insurance. Only I can choose whether the prescription fits what I want. Herein lies moral freedom: because the choice of prescription is individual, no one can tell another what he or she must think right.
Hare's espousal of the view that moral judgments are prescriptions led commentators on his first book to classify him with the emotivists as one who did not believe in the possibility of using reason to arrive at ethical conclusions. That this was a mistake became apparent with the publication of his second book, Freedom and Reason (1963). The aim of the book was to show that the moral freedom guaranteed by prescriptivism is, notwithstanding its element of choice, compatible with a substantial amount of reasoning about moral judgments. Such reasoning is possible, Hare wrote, because moral judgments must be “universalizable.” This notion owed something to the ancient Golden Rule and even more to Kant's first formulation of the categorical imperative. In Hare's treatment, however, these ideas were refined so as to eliminate their obvious defects. Moreover, for Hare universalizability is not a substantive moral principle but a logical feature of the moral terms. This means that anyone who uses such terms as right and ought is logically committed to universalizability.
To say that a moral judgment must be universalizable means, for Hare, that if I judge a particular action—say, a man's embezzlement of a million dollars from his employer—to be wrong, I must also judge any relevantly similar action to be wrong. Of course, everything will depend on what is allowed to count as a relevant difference. Hare's answer is that all features may count, except those that contain ineliminable uses of words such as I or my, or singular terms such as proper names. In other words, the fact that he embezzled a million dollars in order to be able to take holidays in Tahiti, whereas I embezzled the same sum so as to channel it from my wealthy employer to those starving in Africa, may be a relevant difference; the fact that the man's crime benefitted him, whereas my crime benefitted me, cannot be so.
This notion of universalizability can also be used to test whether a difference that is alleged to be relevant—for instance, skin colour or even the position of a freckle on one's nose—really is relevant. Hare emphasized that the same judgment must be made in all conceivable cases. Thus if a Nazi were to claim that he may kill a person because that person is Jewish, he must be prepared to prescribe that if, somehow, it should turn out that he is himself of Jewish origin, he should also be killed. Nothing turns on the likelihood of such a discovery; the same prescription has to be made in all hypothetically, as well as actually, similar cases. Since only an unusually fanatical Nazi would be prepared to do this, universalizability is a powerful means of reasoning against certain moral judgments, including those made by the Nazis. At the same time, since there could be fanatical Nazis who are prepared to die for the purity of the Aryan race, the argument of Freedom and Reason allows that the role played by reason in ethics does have definite limits. Hare's position at this stage, therefore, appeared to be a compromise between the extreme subjectivism of the emotivists and some more objectivist view of ethics. As so often happens with those who try to take the middle ground, Hare was soon to receive criticism from both sides.
For a time, Moore's presentation of the naturalistic fallacy halted attempts to define “good” in terms of natural qualities such as happiness. The effect was, however, both local and temporary. In the United States, Ralph Barton Perry was untroubled by Moore's arguments. His General Theory of Value (1926) gave an account of value that was objectivist and much less mysterious than the intuitionist accounts, which were at that time dominating British philosophy. Perry suggested that there is no such thing as value until a being desires something, and nothing can have intrinsic value considered apart from all desiring beings. A novel, for example, has no value at all unless there is a being who desires to read it or perhaps use it for some other purpose, such as starting a fire on a cold night. Thus Perry is a naturalist, for he defines value in terms of the natural quality of being desired or, as he puts it, being an object of an interest. His naturalism is objectivist, in spite of this dependence of value on desires, because value is defined as any object of any interest. Accordingly, even if I do not desire, say, this encyclopaedia for any purpose at all, I cannot deny that it has some value so long as there is some being who does desire it. Moreover, Perry believed it followed from his theory that the greatest moral value is to be found in whatever leads to the harmonious integration of interests.
In Britain, Moore's impact was for a long time too great for any form of naturalism to be taken seriously. It was only as a response to Hare's intimation that any principle could be a moral principle so long as it satisfied the formal requirement of universalizability that philosophers such as Philippa Foot, Elizabeth Anscombe, and Geoffrey Warnock began to suggest that perhaps a moral principle must also have a particular kind of content—i.e., it must deal, for instance, with some aspect of wants, welfare, or flourishing.
The problem with these suggestions, Hare soon pointed out, is that if we define morality in such a way that moral principles are restricted to those that maximize well-being, then if there is a person who is not interested in maximizing well-being, moral principles, as we have defined them, will have no prescriptive force for that person. This reply elicited two responses—namely, those of Anscombe and Foot.
Anscombe went back to Aristotle, suggesting that we need a theory of human flourishing that will provide an account of what any person must do in order to flourish, and so will lead to a morality that every one of us has reason to follow. No such theory was forthcoming, however, until 1980 when John Finnis offered a theory of basic human goods in his Natural Law and Natural Rights. The book was acclaimed by Roman Catholic moral theologians and philosophers, but natural law ethics continues to have few followers outside these circles.
Foot initially attempted to defend a similarly Aristotelian view in which virtue and self-interest are necessarily linked, but she came to the conclusion that this link could not be made. This led her to abandon the assumption that we all have adequate reasons for doing what is right. Like Hume, she suggested that it depends on what we desire and especially on how much we care about others. She observed that morality is a system of hypothetical, not categorical, imperatives.
A much cruder form of naturalism surfaced from a different direction with the publication of Edward O. Wilson's Sociobiology: The New Synthesis (1975). Wilson, a biologist rather than a philosopher, claimed that new developments in the application of evolutionary theory to social behaviour would allow ethics to be “removed from the hands of philosophers” and “biologicized.” It was not the first time that a scientist, frustrated by the apparent lack of progress in ethics as compared to the sciences, had proposed some way of transforming ethics into a science. In a later book, On Human Nature (1978), Wilson suggested that biology justifies specific values (including the survival of the gene pool) and, because man is a mammal rather than a social insect, universal human rights. Other sociobiologists have gone further still, reviving the claims of earlier “social Darwinists” to the effect that Darwin's theory of evolution shows why it is right that there should be social inequality.
As the above section on the origin of ethics suggests, evolutionary theory may indeed have something to reveal about the origins and nature of the systems of morality used by human societies. Wilson is, however, plainly guilty of breaching Hume's rule when he tries to draw from a theory of a factual nature ethical premises that tell us what we ought to do. It may be that, coupled with the premise that we wish our species to survive for as long as possible, evolutionary theory will suggest the direction we ought to take, but even that premise cannot be regarded as unquestionable. It is not impossible to imagine circumstances in which life is so grim that extinction is preferable. That choice cannot be dictated by science. It is even less plausible to suppose that more specific choices about social equality can be settled by evolutionary theory. At best, the theory would indicate the costs we might incur by moving to greater equality; it could not conceivably tell us whether incurring those costs is justifiable.
Recent developments in metaethics
In view of the heat of the debate between Hare and his naturalist opponents during the 1960s, the next development was surprising. At first in articles and then in the book Moral Thinking (1981), Hare offered a new understanding of what is involved in universalizability that relies on treating moral ideals in a similar fashion to ordinary desires or preferences. In Freedom and Reason the universalizability of moral judgments prevented me from giving greater weight to my own interests, simply on the grounds that they are mine, than I was prepared to give to anyone else's interests. In Moral Thinking Hare argued that to hold an ideal, whether it be a Nazi ideal such as the purity of the Aryan race or a more conventional ideal such as that justice must be done irrespective of the consequences, is really to have a special kind of preference. When I ask whether I can prescribe a moral judgment universally, I must take into account all the ideals and preferences held by all those who will be affected by the action I am judging; and in taking these into account, I cannot give any special weight to my own ideals merely because they are my own. The effect of this application of universalizability is that for a moral judgment to be universalizable it must ultimately be based on the maximum possible satisfaction of the preferences of all those affected by it. Thus Hare claimed that his reading of the formal property of universalizability inherent in moral language enables him to solve the ancient problem of showing how reason can, at least in principle, resolve ethical disagreement. Moral freedom, on the other hand, has been reduced to the freedom to be an amoralist and to avoid using moral language altogether.
Hare's position was immediately challenged by J.L. Mackie in Ethics: Inventing Right and Wrong (1977). In the course of a defense of moral subjectivism, Mackie argued that Hare had stretched the notion of universalizability far beyond anything that is really inherent in moral language. Moreover, even if such a notion were embodied in our way of thinking and talking about morality, Mackie insisted that we would always be free to reject such notions and to decide what to do without concerning ourselves with whether our judgments are universalizable in Hare's, or indeed in any, sense. According to Mackie, our ordinary use of moral language presupposes that moral judgments are statements about something in the universe and, therefore, can be true or false. This is, however, a mistake. Drawing on Hume, Mackie says that there cannot be any matters of fact that make it rational for everyone to act in a certain way. If we do not reject morality altogether, we can only base our moral judgments on our own desires and feelings.
There are a number of contemporary British philosophers who do not accept either Hare's or Mackie's metaethical views. Those who hold forms of naturalism have already been mentioned. Others, including the Oxford philosophers David Wiggins and John McDowell, have employed modern semantic theories of the nature of truth to show that even if moral judgments do not correspond to any objective facts or self-evident truths, they may still be proper candidates for being true or false. This position has become known as moral realism. For some, it makes moral judgments true or false at the cost of taking objectivity out of the notion of truth.
Many modern writers on ethics, including Mackie and Hare, share a view of the nature of practical reason derived from Hume. Our reasons for acting morally, they hold, must depend on our desires because reason in action applies only to the best way of achieving what we desire. This view of practical reason virtually precludes any general answer to the question “Why should I be moral?” Until very recently, this question had received less attention in the 20th century than in earlier periods. In the early part of the century, such intuitionists as H.A. Prichard had rejected all attempts to offer extraneous reasons for being moral. Those who understood morality would, they said, see that it carried its own internal reasons for being followed. For those who could not see these reasons, the situation was reminiscent of the story of the emperor's new clothes.
The question fared no better with the emotivists. They defined morality so broadly that anything an individual desires can be considered to be moral. Thus there can be no conflict between morality and self-interest, and if anyone asks “Why should I be moral?” the emotivist response would be to say “Because whatever you most approve of doing is, by definition, your morality.” Here the question is effectively being rejected as senseless, but this reply does nothing to persuade the questioners to act in a benevolent or socially desirable way. It merely tells them that no matter how antisocial their actions may be, they can still be moral as the emotivists define the term.
For Hare, on the other hand, the question “Why should I be moral?” amounts to asking why I should act only on those judgments that I am prepared to universalize; and the answer he gives is that unless this is what I want to do, it is not always possible to give an adult a reason for doing so. At the same time, Hare does believe that if someone asks why children should be brought up to be morally good, the answer is that they are more likely to be happy if they develop habits of acting morally.
Other philosophers have put the question to one side, saying that it is a matter for psychologists rather than for philosophers. In earlier periods, of course, psychology was considered a branch of philosophy rather than a separate discipline, but in fact psychologists have also had little to say about the connection between morality and self-interest. In Motivation and Personality (1954) and other works, Abraham H. Maslow developed a psychological theory reminiscent of Shaftesbury in its optimism about the link between personal happiness and moral values, but Maslow's factual evidence was thin. Viktor Emil Frankl, a psychotherapist, has written several popular books defending a position essentially similar to that of Joseph Butler on the attainment of happiness. The gist of this view is known as the paradox of hedonism. In The Will to Meaning (1969), Frankl states that those who aim directly at happiness do not find it; those whose lives have meaning or purpose apart from their own happiness find happiness as well.
The U.S. philosopher Thomas Nagel has taken a different approach to the question of how we may be motivated to act altruistically. Nagel challenges the assumption that Hume was right about reason being subordinate to desires. In The Possibility of Altruism (1970), Nagel sought to show that if reason must always be based on desire, even our normal idea of prudence (that we should give the same weight to our future pains and pleasures as we give to our present ones) becomes incoherent. Once we accept the rationality of prudence, however, Nagel argued that a very similar line of argument can lead us to accept the rationality of altruism—i.e., the idea that the pains and pleasures of another individual are just as much a reason for one to act as are one's own pains and pleasures. This means that reason alone is capable of motivating moral action; hence, it is unnecessary to appeal to self-interest or benevolent feelings. Though not an intuitionist in the ordinary sense, Nagel has effectively reopened the 18th-century debate between the moral sense school and the intuitionists who believed that reason alone can motivate action.
The most influential work in ethics by a U.S. philosopher since the early 1960s, John Rawls's A Theory of Justice (1971), is for the most part centred on normative ethics, and so will be discussed in the next section; it has, however, had some impact in metaethics as well. To argue for his principles of justice, Rawls uses the idea of a hypothetical contract, in which the contracting parties are behind a “veil of ignorance” that prevents them from knowing any particular details about their own attributes. Thus one cannot try to benefit oneself by choosing principles of justice that favour the wealthy, the intelligent, males, or whites. The effect of this requirement is in many ways similar to Hare's idea of universalizability, but Rawls claims that it avoids, as Hare's does not, the trap of grouping together the interests of different individuals as if they all belonged to one person. Accordingly, the old social contract model that had largely been neglected since the time of Rousseau has had a new wave of popularity as a form of argument in ethics.
The other aspect of Rawls's thought to have metaethical significance is his so-called method of reflective equilibrium—the idea that a sound moral theory is one that matches reflective moral judgments. In A Theory of Justice Rawls uses this method to justify tinkering with the original model of the hypothetical contract until it produces results that are not too much at odds with ordinary ideas of justice. To his critics, this represents a reemergence of a conservative form of intuitionism, for it means that new moral theories are tested against ordinary moral intuitions. If a theory fails to match enough of these, it will be rejected no matter how strong its own foundations may be. In Rawls's defense it may be said that it is only our “reflective moral judgments” that serve as the testing ground—our ordinary moral intuitions may be rejected, perhaps simply because they are contrary to a well-grounded theory. If such be the case, the charge of conservatism may be misplaced, but in the process the notion of some independent standard by which the moral theory may be tested has been weakened, perhaps so far as to become virtually meaningless.
Perhaps the most impressive work of metaethics published in the United States in recent years is R.B. Brandt's A Theory of the Good and the Right (1979). Brandt returns to something like the naturalism of Ralph Barton Perry but with a distinctive late 20th-century American twist. He spends little time on the concept of good, believing that everything capable of being expressed by this word can be more clearly stated in terms of rational desires. To explicate this notion of a rational desire, Brandt appeals to cognitive psychotherapy. An ideal process of cognitive psychotherapy would eliminate many desires: those based on false beliefs, those one has only because one ignores the feelings or desires one is likely to have in the future, desires or aversions artificially instilled by others, desires rooted in early deprivation, and so on. The desires that an individual would still have, undiminished in strength after going through this process, are what Brandt is prepared to call rational desires.
In contrast to his view of the term good, Brandt does think that the notions of morally right and morally wrong are useful. He suggests that, in calling an action morally wrong, we should mean that it would be prohibited by any moral code that all fully rational people would support for the society in which they are to live. (Brandt then argues that fully rational people would support that moral code which would maximize happiness, but the justification of this claim is a task for normative ethics, not metaethics.)
Brandt's final chapter is an indication of the revival of interest in the question, as he phrases it, “Is it always rational to act morally?” His answer, echoing Shaftesbury in modern guise, is that such desires as benevolence would survive cognitive psychotherapy, and so a rational person would be benevolent. A rational person would also have other moral motives, including an aversion to dishonesty. These motives will occasionally conflict with self-interested desires, and there can be no guarantee that the moral motives will be the stronger. If they are not, and in spite of the fact that a rational person would support a code favouring honesty, Brandt is unable to say that it would be irrational to follow self-interest rather than morality. A fully rational person might support a certain kind of moral code and yet not act in accordance with it on every occasion.
As the century draws to a close, the issues that divided Plato and the Sophists are still dividing moral philosophers. Ironically, the one position that now has few defenders is Plato's view that “good” refers to an idea or property having an objective existence quite apart from anyone's attitudes or desires—on this point the Sophists appear to have won out at last. Yet, this still leaves ample room for disagreement about the extent to which reason can bring about agreed decisions on what we ought to do. There also remains the dispute about whether it is proper to refer to moral judgments as true and false. On the other central question of metaethics, the relationship between morality and self-interest, a complete reconciliation of the two continues to prove—at least for those not prepared to appeal to a belief in reward and punishment in another life—as elusive as it did for Sidgwick at the end of the 19th century.
Normative ethics seeks to set norms or standards for conduct. The term is commonly used in reference to the discussion of general theories about what one ought to do, a central part of Western ethics since ancient times. Normative ethics continued to hold the spotlight during the early years of the 20th century, with intuitionists such as W.D. Ross engaged in showing that an ethic based on a number of independent duties was superior to Utilitarianism. With the rise of Logical Positivism and emotivism, however, the logical status of normative ethics seemed doubtful: Was it not simply a matter of whatever one approved? Nor was the analysis of language, which dominated philosophy in English-speaking countries during the 1950s, any more congenial to normative ethics. If philosophy could do no more than analyze words and concepts, how could it offer guidance about what one ought to do? The subject was therefore largely neglected until the 1960s, when emotivism and linguistic analysis were both on the retreat and moral philosophers once again began to think about how individuals ought to live.
A crucial question of normative ethics is whether actions are to be judged right or wrong solely on the basis of their consequences. Traditionally, those theories that judge actions by their consequences have been known as teleological theories, while those that judge actions according to whether they fall under a rule have been referred to as deontological theories. Although the latter term continues to be used, the former has been replaced to a large extent by the more straightforward term consequentialist. The debate over this issue has led to the development of different forms of consequentialist theories and to a number of rival views.
Varieties of consequentialism
The simplest form of consequentialism is classical Utilitarianism, which holds that every action is to be judged good or bad according to whether its consequences do more than any alternative action to increase—or, if that is impossible, to limit any unavoidable decrease in—the net balance of pleasure over pain in the universe. This is often called hedonistic Utilitarianism.
G.E. Moore's normative position offers an example of a different form of consequentialism. In the final chapters of the aforementioned Principia Ethica and also in Ethics (1912), Moore argued that the consequences of actions are decisive for their morality, but he did not accept the classical Utilitarian view that pleasure and pain are the only consequences that matter. Moore asked his readers to picture a world filled with every imaginable beauty but devoid of any being who can experience pleasure or pain. Then the reader is to imagine another world, as ugly as can be but equally lacking in any being who experiences pleasure or pain. Would it not be better, Moore asked, that the beautiful world rather than the ugly world exist? He was clear in his own mind that the answer was affirmative, and he took this as evidence that beauty is good in itself, apart from the pleasure it brings. He also held that friendship and other close personal relationships have a similar intrinsic value independent of their pleasantness. Moore thus judged actions by their consequences but not solely by the amount of pleasure they produced. Such a position was once called ideal Utilitarianism because it was a form of Utilitarianism based on certain ideals. Today, however, it is more frequently referred to by the general label consequentialism, which includes, but is not limited to, Utilitarianism.
R.M. Hare is another example of a consequentialist. His interpretation of universalizability leads him to the view that for a judgment to be universalizable, it must prescribe what is most in accord with the preferences of all those affected by the action. This form of consequentialism is frequently called preference Utilitarianism because it attempts to maximize the satisfaction of preferences, just as classical Utilitarianism endeavours to maximize pleasure or happiness. Part of the attraction of such a view lies in the way in which it avoids making judgments about what is intrinsically good, finding its content instead in the desires that people, or sentient beings generally, do have. Another advantage is that it overcomes the objection, which so deeply troubled Mill, that the production of simple, mindless pleasure becomes the supreme goal of all human activity. Against these advantages we must put the fact that most preference Utilitarians want to base their judgments, not on the desires that people actually have, but rather on those they would have if they were fully informed and thinking clearly. It then becomes essential to discover what people would want under these conditions, and, because most people most of the time are less than fully informed and clear in their thoughts, the task is not an easy one.
It may also be noted in passing that Hare claims to derive his version of Utilitarianism from universalizability, which in turn he draws from moral language and moral concepts. Moore, on the other hand, had simply found it self-evident that certain things were intrinsically good. Another Utilitarian, the Australian philosopher J.J.C. Smart, has defended hedonistic Utilitarianism by asserting that he has a favourable attitude to making the surplus of happiness over misery as large as possible. As these differences suggest, consequentialism can be held on the basis of widely differing metaethical views.
Consequentialists may also be separated into those who ask of each individual action whether it will have the best consequences, and those who ask this question only of rules or broad principles and then judge individual actions by whether they fall under a good rule or principle. The distinction having arisen in the specific context of Utilitarian ethics, the former are known as act-Utilitarians and the latter as rule-Utilitarians.
Rule-Utilitarianism developed as a means of making the implications of Utilitarianism less shocking to ordinary moral consciousness. (The germ of this approach is seen in Mill's defense of Utilitarianism.) There might be occasions, for example, when stealing from one's wealthy employer in order to give to the poor would have good consequences. Yet, surely it would be wrong to do so. The rule-Utilitarian solution is to point out that a general rule against stealing is justified on Utilitarian grounds, because otherwise there could be no security of property. Once the general rule has been justified, individual acts of stealing can then be condemned whatever their consequences because they violate a justifiable rule.
This suggests an obvious question, one already raised by the above account of Kant's ethics: How specific may the rule be? Although a rule prohibiting stealing may have better consequences than no rule at all against stealing, would not the best consequences of all follow from a rule that permitted stealing only in those special cases in which it is clear that stealing will have better consequences than not stealing? But what then is the difference between act- and rule-Utilitarianism? In Forms and Limits of Utilitarianism (1965), David Lyons argued that if the rule were formulated with sufficient precision to take into account all its causally relevant consequences, rule-Utilitarianism would collapse into act-Utilitarianism. If rule-Utilitarianism is to be maintained as a distinct position, then there must be some restriction on how specific the rule can be so that at least some relevant consequences are not taken into account.
To ignore relevant consequences is to break with the very essence of consequentialism; rule-Utilitarianism is therefore not a true form of Utilitarianism at all. That, at least, is the view taken by Smart, who has derided rule-Utilitarianism as “rule-worship” and consistently defended act-Utilitarianism. Of course, when time and circumstances make it awkward to calculate the precise consequences of an action, Smart's act-Utilitarian will resort to rough and ready “rules of thumb” for guidance; but these rules of thumb have no independent status apart from their usefulness in predicting likely consequences, and if ever we are clear that we will produce better consequences by acting contrary to the rule of thumb, we should do so. If this leads us to do things that are contrary to the rules of conventional morality, then, Smart says, so much the worse for conventional morality.
Today, straightforward rule-Utilitarianism has few supporters. On the other hand, a number of more complex positions have been proposed, bridging in some way the distance between rule-Utilitarianism and act-Utilitarianism.
In Moral Thinking Hare distinguished two levels of thought about what we ought to do. At the critical level we may reason about the principles that should govern our action and consider what would be for the best in a variety of hypothetical cases. The correct answer here, Hare believed, is always that the best action will be the one that has the best consequences. This principle of critical thinking is not, however, well-suited for everyday moral decision making. It requires calculations that are difficult to carry out under the most ideal circumstances and virtually impossible to carry out properly when we are hurried or liable to be swayed by our emotions or our interests. Everyday moral decisions are the proper domain of the intuitive level of moral thought. At this intuitive level we do not enter into fine calculations of consequences; instead, we act in accordance with fundamental moral principles that we have learned and accepted as determining, for practical purposes, whether an act is right or wrong. Just what these moral principles should be is a task for critical thinking. They must be the principles that, when applied intuitively by most people, will produce the best consequences overall, and they must also be sufficiently clear and brief to be made part of the moral education of children. Hare therefore can avoid the dilemma of the rule-Utilitarian while still preserving the advantages of that position. Given that ordinary moral beliefs reflect the experience of many generations, Hare believed that judgments made at the intuitive level will probably not be too different from judgments made by conventional morality. At the same time, Hare's restriction on the complexity of the intuitive principles is fully consequentialist in spirit.
Some recently published work has gone further still in this direction. Following earlier discussions of the difficulties consequentialists may have in trusting one another—since the word of a Utilitarian is only as good as the consequences of keeping the promise appear to him to be—Donald Regan has explored the problems of cooperation among Utilitarians in his Utilitarianism and Co-operation (1980) and has proposed a further variation designed to make cooperation feasible and thus to achieve the best consequences on the whole. In Reasons and Persons (1984), Derek Parfit argued that to aim always at producing the best consequences would be indirectly self-defeating; we would be cutting ourselves off from some of the greatest goods of human life, including those close personal relationships that demand that we sacrifice the ideal of impartial benevolence to all in order that we may give preference to those we love. We therefore need, Parfit suggested, not simply a theory of what we should all do, but a theory of what motives we should all have. Parfit, like Hare, plausibly contended that recognizing this distinction will bring the practical application of consequentialist theories closer to conventional moral judgments.
An ethic of prima facie duties
In the first third of the 20th century, it was the intuitionists, especially W.D. Ross, who provided the major alternative to Utilitarianism. Because of this situation, the position described below is sometimes called intuitionism, but it seems less likely to cause confusion if we reserve that label for the quite distinct metaethical position held by Ross—and incidentally by Sidgwick as well—and refer to the normative position by the more descriptive label, an “ethic of prima facie duties.”
Ross's normative ethic consists of a list of duties, each of which is to be given independent weight: fidelity, reparation, gratitude, justice, beneficence, nonmaleficence, and self-improvement. If an act falls under one and only one of these duties, it ought to be carried out. Often, of course, an act will fall under two or more duties: I may owe a debt of gratitude to someone who once helped me, but beneficence will be better served if I help others in greater need. This is why the duties are, Ross said, prima facie rather than absolute; each duty can be overridden if it conflicts with a more stringent duty.
An ethic structured in this manner may match our ordinary moral judgments more closely than a consequentialist ethic, but it suffers from two serious drawbacks. First, how can we be sure that just those duties listed by Ross are independent sources of moral obligation? Ross could only respond that if we examine them closely we will find that these, and these alone, are self-evident. But others, even other intuitionists, have found that what was self-evident to Ross was not self-evident to them. Second, if we grant Ross his list of independent prima facie moral duties, we still need to know how to decide, in a particular situation, when a less stringent duty is overridden by a more stringent one. Here, too, Ross had no better answer than an unsatisfactory appeal to intuition.
Rawls's theory of justice
When philosophers again began to take an interest in normative ethics in the 1960s after an interval of some 30 years, no theory could rival the ability of Utilitarianism to provide a plausible and systematic basis for moral judgments in all circumstances. Yet, many people found themselves unable to accept Utilitarianism. One common ground for dissatisfaction was that Utilitarianism does not offer any principle of justice beyond the basic idea that everyone's happiness—or preferences, depending on the form of Utilitarianism—counts equally. Such a principle is quite compatible with sacrificing the welfare of some to the greater welfare of others. This situation explains the enthusiastic welcome accorded to Rawls's A Theory of Justice when it appeared in 1971. Rawls offered an alternative to Utilitarianism that came close to matching its rival's ability to provide a systematic theory of what one ought to do and, at the same time, led to conclusions about justice very different from those of the Utilitarians.
Rawls asserted that if people had to choose principles of justice from behind a “veil of ignorance” that restricted what they could know of their own position in society, they would not seek to maximize overall utility. Instead, they would safeguard themselves against the worst possible outcome, first, by insisting on the maximum amount of liberty compatible with a like liberty for others; and, second, by requiring that wealth be distributed so as to make the worst-off members of the society as well-off as possible. This second principle is known as the “maximin” principle, because it seeks to maximize the welfare of those at the minimum level of society. Such a principle might be thought to lead directly to an insistence on the equal distribution of wealth, but Rawls pointed out that if we accept certain assumptions about the effect of incentives and the benefits that may flow to all from the productive labours of the most talented members of society, the maximin principle could allow considerable inequality.
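The contrast between the maximin principle and a total-welfare standard can be made concrete with a small illustration. The following Python sketch is not from Rawls; the distribution names and welfare figures are invented for the example. It shows how the two rules can select different arrangements, and how maximin can prefer an unequal distribution when incentives raise the position of the worst-off:

```python
# Hypothetical welfare levels for a three-person society under three
# possible economic arrangements (all numbers invented for illustration).
distributions = {
    "strict equality": [10, 10, 10],
    "incentives for the talented": [12, 15, 30],  # unequal, but everyone gains
    "utilitarian optimum": [5, 20, 40],           # largest total; worst-off suffers
}

def maximin_choice(dists):
    """Pick the arrangement whose worst-off member is best off (Rawls)."""
    return max(dists, key=lambda name: min(dists[name]))

def total_welfare_choice(dists):
    """Pick the arrangement with the greatest total welfare (Utilitarian)."""
    return max(dists, key=lambda name: sum(dists[name]))

print(maximin_choice(distributions))        # "incentives for the talented"
print(total_welfare_choice(distributions))  # "utilitarian optimum"
```

Under these invented figures, maximin prefers the unequal arrangement whose worst-off member has 12 rather than 10, while the total-welfare rule prefers the arrangement with the greatest sum even though its worst-off member has only 5.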
In the decade following its appearance, A Theory of Justice was subjected to unprecedented scrutiny by moral philosophers throughout the world. Two major issues emerged: Were the two principles of justice soundly derived from the original contract situation? And did the two principles amount, in themselves, to an acceptable theory of justice?
To the first question, the general verdict was negative. Without appealing to specific psychological assumptions about an aversion to risk—and Rawls disclaimed any such assumptions—there was no convincing way in which Rawls could exclude the possibility that the parties to the original contract would choose to maximize average utility, thus giving themselves the best possible chance of having a high level of welfare. True, each individual making such a choice would have to accept the possibility that he would end up with a very low level of welfare, but that might be a risk worth running for the sake of a chance at a very high level.
Even if the two principles cannot validly be derived from the original contract, they might be sufficiently attractive to stand on their own either as self-evident moral truths—if we are objectivists—or as principles to which we might have favourable attitudes. Maximin, in particular, has proved attractive in a variety of disciplines, including welfare economics, a field in which preference Utilitarianism once reigned unchallenged. But maximin has also had its critics, who have pointed out that the principle could require us to forgo very great benefits to the vast majority if, for some reason, this would require some loss (no matter how trivial) to the worst-off members of society.
One of Rawls's severest critics, Robert Nozick of the United States, rejected the assumption that lies behind not only the maximin principle but any principle that seeks to achieve a pattern of distribution by taking from one group in order to give to another. In attempting to bring about a certain pattern of distribution, Nozick said, these principles ignore the question of how the individuals from whom wealth will be taken acquired their wealth in the first place. If they have done so by wholly legitimate means without violating the rights of others, then Nozick held that no one, not even the state, can have the right to take their wealth from them without their consent.
Although appeals to rights have been common since the great 18th-century declarations of the rights of man, most ethical theorists have treated rights as something that must be derived from more basic ethical principles or else from accepted social and legal practices. Recently, however, there have been attempts to turn this tendency around and make rights the basis of the ethical theory. It is in the United States, no doubt because of its history and constitution, that the appeal to rights as a fundamental moral principle has been most common. Nozick's Anarchy, State and Utopia (1974) is one example of a rights-based theory, although it is mostly concerned with the application of the theory in the political sphere and says very little about other areas of normative ethics. Unlike Rawls, who for all his disagreement with Utilitarianism is still a consequentialist of sorts, Nozick is a deontologist. Our rights to life, liberty, and legitimately acquired property are absolute, and no act can be justified if it violates them. On the other hand, we have no duty to assist people in the preservation of their rights. If others go about their own affairs without infringing on the rights of others, I must not infringe on their rights; but if they are starving, I have no duty to share my food with them. We can appeal to the generosity of the rich, but we have absolutely no right to tax them against their will so as to provide relief for the poor. This doctrine has found favour with some Americans on the political right, but it has proved too harsh for most students of ethics.
To illustrate the variety of possible theories based on rights, we can take as another example the one propounded by Ronald Dworkin in Taking Rights Seriously (1977). Dworkin agreed with Nozick that rights are not to be overridden for the sake of improved welfare: rights are, he said, “trumps” over ordinary consequentialist considerations. Dworkin's view of rights, however, derives from a fundamental right to equal concern and respect. This makes it much broader than Nozick's theory, since respect for others may require us to assist them and not merely leave them to fend for themselves. Accordingly, Dworkin's view obliges the state to intervene in many areas to ensure that rights are respected.
In its emphasis on equal concern and respect, Dworkin's theory is part of a recent revival of interest in Kant's principle of respect for persons as the fundamental principle of ethics. This principle, like the principle of justice, is often said to be ignored by Utilitarians. Rawls invoked it when setting out the underlying rationale of his theory of justice. The concept, however, suffers from vagueness, and attempts to develop it into something more specific that could serve as the basis for a complete ethical theory have not—unless Rawls's theory is to count as one of them—offered a satisfactory basis for ethical decision making.
Natural law ethics
As far as secular moral philosophy is concerned, during most of the 20th century, natural law ethics has been considered a lifeless medieval relic, preserved only in Roman Catholic schools of moral theology. It is still true that the chief proponents of natural law are of that particular religious persuasion, but they have recently begun to defend their position by arguments that make no explicit appeal to their religious beliefs. Instead, they start their ethics with the claim that there are certain basic human goods that we should not act against. In the list offered by John Finnis in Natural Law and Natural Rights (1980), for example, these goods are life, knowledge, play, aesthetic experience, friendship, practical reasonableness, and religion. The identification of these goods is a matter of reflection, assisted by the findings of anthropologists. Each of the basic goods is regarded as equally fundamental; there is no hierarchy among them.
It would, of course, be possible to hold a consequentialist ethic that identified several basic human goods of equal importance and judged actions by their tendency to produce or maintain these goods. Thus, if life is a good, any action that led to a preventable loss of life would, other things being equal, be wrong. Natural law ethics, however, rejects this consequentialist approach. It makes the claim that it is impossible to measure the basic goods against each other. Instead of engaging in consequentialist calculations, the natural law ethic is built on the absolute prohibition of any action that aims directly against any basic good. The killing of the innocent, for instance, is always wrong, even if somehow killing one innocent person were to be the only way of saving thousands of innocent people. What is not adequately explained in this rejection of consequentialism is why the life of one innocent person—about whom, let us say, we know no more than that he is innocent—cannot be measured against the lives of a thousand innocent people about whom we have precisely the same information.
Natural law ethics does allow one means of softening the effect of its absolute prohibitions. This is the doctrine of double effect, traditionally applied by Roman Catholic writers to some cases of abortion. If a pregnant woman is found to have a cancerous uterus, the doctrine of double effect allows a doctor to remove the uterus notwithstanding the fact that such action will kill the fetus. This allowance is made not because the life of the mother is regarded as more valuable than the life of the fetus, but because in removing the uterus the doctor is held not to aim directly at the death of the fetus. Instead, its death is an unwanted and indirect side effect of the laudable act of removing a diseased organ. On the other hand, a different medical condition might mean that the only way of saving the mother's life is by directly killing the fetus. Some years ago, before the development of modern obstetric techniques, this was the case if the head of the fetus became lodged during delivery. Then the only way of saving the life of the woman was to crush the skull of the fetus. Such a procedure was prohibited, for in performing it the doctor would be directly killing the fetus. This ruling was applied even to those cases in which the death of the mother would certainly bring about the death of the fetus as well. The claim was that the doctor who killed the fetus directly was responsible for murder, but the deaths from natural causes of the mother and fetus were not considered to be the doctor's doing. The example is significant because it indicates the lengths to which proponents of natural law ethics are prepared to go in order to preserve the absolute nature of the prohibitions.
Ethical egoism

All of the normative theories considered so far have had a universal focus—i.e., if they have been consequentialist theories, the goods they sought to achieve were sought for all capable of benefitting from them; and if they were deontological theories, the deontological principles applied equally to whoever might do the act in question. Ethical egoism departs from this consensus, suggesting that we should each consider only the consequences of our actions for our own interests. The great advantage of such a position is that it avoids any possible conflict between morality and self-interest. If it is rational for us to pursue our own interest, then, if the ethical egoist is right, the rationality of morality is equally clear.
We can distinguish two forms of egoism. The individual egoist says, “Everyone should do what is in my interests.” This indeed is egoism, but it is incapable of being couched in a universalizable form, and so it is arguably not a form of ethical egoism. Nor is the individual egoist likely to be able to persuade others to follow a course of action that is so obviously designed to benefit only the person who is advocating it.
Universal egoism is based on the principle “Everyone should do what is in her or his own interests.” This principle is universalizable, since it contains no reference to any particular individual, and it is clearly an ethical principle. Others may be disposed to accept it because it appears to offer them the surest possible way of furthering their own interests. Accordingly, this form of egoism is from time to time seized upon by some popular writer who proclaims it the obvious answer to all our ills and has no difficulty finding agreement from a segment of the general public. The U.S. writer Ayn Rand is perhaps the best 20th-century example. Rand's version of egoism is expounded in the novel Atlas Shrugged (1957) by her hero, John Galt, and in The Virtue of Selfishness (1965), a collection of her essays. It is a confusing mixture of appeals to self-interest and suggestions that everyone will benefit from the liberation of the creative energy that will flow from unfettered self-interest. Overlaying all this is the idea that true self-interest cannot be served by stealing, cheating, or similarly antisocial conduct.
As this example illustrates, what starts out as a defense of ethical egoism very often turns into an indirect form of Utilitarianism; the claim is that we will all be better off if each of us does what is in his or her own interest. The ethical egoist is virtually compelled to make this claim because otherwise there is a paradox in the fact that the ethical egoist advocates ethical egoism at all. Such advocacy would be contrary to the very principle of ethical egoism, unless the egoist benefits from others' becoming ethical egoists. If we see our interests as threatened by others' pursuing their own interests, we will certainly not benefit by others' becoming egoists; we would do better to keep our own belief in egoism secret and advocate altruism.
Unfortunately for ethical egoism, the claim that we will all be better off if every one of us does what is in his or her own interest is incorrect. This is shown by what are known as “prisoner's dilemma” situations, which are playing an increasingly important role in discussions of ethical theory. The basic prisoner's dilemma is an imaginary situation in which two prisoners are accused of a crime. If one confesses and the other does not, the prisoner who confesses will be released immediately and the one who does not will spend the next 20 years in prison. If neither confesses, each will be held for a few months and then both will be released. And if both confess, they will each be jailed for 15 years. The prisoners cannot communicate with one another. If each of them does a purely self-interested calculation, the result will be that it is better to confess than not to confess no matter what the other prisoner does. Paradoxical as it might seem, two prisoners, each pursuing his own interest, will end up worse off than they would have had they not been egoists.
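The self-interested calculation described above can be checked mechanically. The following Python sketch encodes the sentences from the example (with “a few months” rendered as 0.25 years) and confirms that confessing is the better choice for each prisoner individually, even though mutual confession leaves both worse off:

```python
# Sentences (in years) for (my choice, other's choice); lower is better.
# "A few months" is approximated as 0.25 years.
sentence = {
    ("confess", "confess"): (15, 15),
    ("confess", "silent"): (0, 20),
    ("silent", "confess"): (20, 0),
    ("silent", "silent"): (0.25, 0.25),
}

def best_response(other_choice):
    """The choice minimizing my own sentence, given the other's choice."""
    return min(["confess", "silent"],
               key=lambda mine: sentence[(mine, other_choice)][0])

# Confessing dominates: it is the best response whatever the other does...
assert best_response("confess") == "confess"
assert best_response("silent") == "confess"
# ...yet mutual confession (15 years each) is worse for both than mutual
# silence (a few months each).
assert sentence[("confess", "confess")][0] > sentence[("silent", "silent")][0]
```

Whatever the other prisoner does, confessing yields the shorter sentence for each prisoner considered alone; but when both follow that reasoning, each serves 15 years instead of a few months.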
The example might seem bizarre, but analogous situations occur quite frequently on a larger scale. Consider the dilemma of the commuter. Suppose that each commuter finds his or her private car a little more convenient than the bus; but when each of them drives a car, the traffic becomes so congested that everyone would be better off if they all took the bus and the buses moved quickly without traffic holdups. Because private cars are somewhat more convenient than buses, however, and the overall volume of traffic is not appreciably affected by one more car on the road, it is in the interest of each to continue using a private car. At least on the collective level, therefore, egoism is self-defeating—a conclusion well brought out by Parfit in his aforementioned Reasons and Persons.
Applied ethics

The most striking development in the study of ethics since the mid-1960s has been the growth of interest among philosophers in practical, or applied, ethics; i.e., the application of normative theories to practical moral problems. This is not, admittedly, a totally new departure. From Plato onward, moral philosophers have concerned themselves with practical questions, including suicide, the exposure of infants, the treatment of women, and the proper behaviour of public officials. Christian philosophers, notably Augustine and Aquinas, examined with great care such matters as when a war was just, whether it could ever be right to tell a lie, or whether a Christian woman did wrong to commit suicide in order to save herself from rape. Hobbes had an eminently practical purpose in writing his Leviathan, and Hume wrote about the ethics of suicide. Practical concerns continued with the British Utilitarians, who saw reform as the aim of their philosophy: Bentham wrote on an incredible variety of topics, and Mill is celebrated for his essays on liberty and on the subjection of women.
Nevertheless, during the first six decades of the 20th century moral philosophers largely isolated themselves from practical ethics—something that now seems all but incredible, considering the traumatic events through which most of them lived. There were one or two notable exceptions. The philosopher Bertrand Russell was very much involved in practical issues, but his stature among his colleagues was based on his work in logic and metaphysics and had nothing to do with his writings on topics such as disarmament and sexual morality. Russell himself seems to have regarded his practical contributions as largely separate from his philosophical work and did not develop his ethical views in any systematic or rigorous fashion.
The prevailing view of the period was that moral philosophy is quite separate from “moralizing,” a task best left to preachers. What was not generally considered was whether moral philosophers could, without merely preaching, make an effective contribution to discussions of practical issues involving difficult ethical questions. The value of such work began to be widely recognized only during the 1960s, when first the U.S. civil rights movement and subsequently the Vietnam War and the rise of student activism started to draw philosophers into discussions of the moral issues of equality, justice, war, and civil disobedience. (Interestingly, there has been very little discussion of sexual morality—an indication that a subject once almost synonymous with the term morals has become marginal to our moral concerns.)
The founding, in 1971, of Philosophy and Public Affairs, a new journal devoted to the application of philosophy to public issues, provided both a forum and a new standard of rigour for these contributions. Applied ethics soon became part of the teaching of most philosophy departments of universities in English-speaking countries. Here it is not possible to do more than briefly mention some of the major areas of applied ethics and point to the issues that they raise.
Applications of equality
Since much of the early impetus for applied ethics came from the U.S. civil rights movement, such topics as equality, human rights, and justice have been prominent. We often make statements such as “All humans are equal” without thinking too deeply about the justification for such claims. Since the mid-1960s much has been written about how they can be justified. Discussions of this sort have led in several directions, often following social and political movements. The initial focus, especially in the United States, was on racial equality, and here, for once, there was a general consensus among philosophers on the unacceptability of discrimination against blacks. With so little disagreement about racial discrimination itself, the centre of attention soon moved to reverse discrimination: Is it acceptable to favour blacks for jobs and enrollment in universities and colleges because they have been discriminated against in the past and are generally so much worse off than whites? Or is this, too, a form of racial discrimination and unacceptable for that reason?
Inequality between the sexes has been another focus of discussion. Does equality here mean ending as far as possible all differences in the sex roles, or could we have equal status for different roles? There has been a lively debate—both between feminists and their opponents and, on a different level, among feminists themselves—about what a society without sexual inequality would be like. Here, too, the legitimacy of reverse discrimination has been a contentious issue. Feminist philosophers have also been involved in debates over abortion and new methods of reproduction. These topics will be covered separately below.
Many discussions of justice and equality are limited in scope to a single society. Even Rawls's theory of justice, for example, has nothing to say about the distribution of wealth between societies, a subject that could make acceptance of his maximin principle much more onerous. But philosophers have now begun to think about the moral implications of the inequality in wealth between the affluent nations (and their citizens) and those living in countries subject to famine. What are the obligations of those who have plenty when others are starving? It has not proved difficult to make a strong case for the view that affluent nations, as well as affluent individuals, ought to be doing much more to help the poor than they are generally now doing.
There is one issue related to equality in which philosophers have led, rather than followed, a social movement. In the early 1970s, a group of young Oxford-based philosophers began to question the assumption that while all humans are entitled to equal moral status, nonhuman animals automatically have an inferior position. The publication in 1972 of Animals, Men and Morals: An Inquiry into the Maltreatment of Non-humans, edited by Roslind and Stanley Godlovitch and John Harris, was followed three years later by Peter Singer's Animal Liberation and then by a flood of articles and books that established the issue as a part of applied ethics. At the same time, these writings provided the philosophical basis for the animal liberation movement, which has had an effect on attitudes and practices toward animals in many countries.
Environmental issues raise a host of difficult ethical questions, including the ancient one of the nature of intrinsic value. Although many philosophers in the past have agreed that human experiences have intrinsic value, and the Utilitarians at least have always accepted that the pleasures and pains of nonhuman animals are of some intrinsic significance, this does not show why it is so bad if dodos become extinct or a rain forest is cut down. Are these things to be regretted only because of the loss to humans or other sentient creatures? Or is there more to it than that? Some philosophers are now prepared to defend the view that trees, rivers, species (considered apart from the individual animals of which they consist), and perhaps ecological systems as a whole have a value independent of the instrumental value they may have for humans or other sentient creatures.
Our concern for the environment also raises the question of our obligations to future generations. How much do we owe to the future? On a social contract view of ethics, or for the ethical egoist, the answer would seem to be: nothing. For we can benefit them, but they are unable to reciprocate. Most other ethical theories, however, do give weight to the interests of coming generations. Utilitarians, for example, would not think that the fact that members of future generations do not yet exist is any reason for giving less consideration to their interests than we give to our own, provided only that we are certain that they will exist and will have interests that will be affected by what we do. In the case of, say, the storage of radioactive wastes, it seems clear that what we do will indeed affect the interests of generations to come.
The question becomes much more complex, however, when we consider that we can affect the size of future generations by the population policies we choose and the extent to which we encourage large or small families. Most environmentalists believe that the world is already dangerously overcrowded. This may well be so, but the notion of overpopulation conceals a philosophical issue that is ingeniously explored by Derek Parfit in Reasons and Persons (1984). What is optimum population? Is it that population size at which the average level of welfare will be as high as possible? Or is it the size at which the total amount of welfare—the average multiplied by the number of people—is as great as possible? Both answers lead to counterintuitive outcomes, and the question remains one of the most baffling mysteries in applied ethics.
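The conflict between the two answers can be shown with invented figures. In the following Python sketch, a small population at a high level of welfare is compared with a much larger population at a modest level; all the numbers are hypothetical, chosen only so that the two views disagree:

```python
# Two hypothetical populations, each listed as per-person welfare levels
# (invented numbers for illustration).
small_happy = [100] * 10    # 10 people at welfare level 100
large_modest = [15] * 100   # 100 people at welfare level 15

def average_welfare(pop):
    """Average welfare: the standard of the 'average' view."""
    return sum(pop) / len(pop)

def total_welfare(pop):
    """Total welfare (average times population): the 'total' view."""
    return sum(pop)

# The average view prefers the small, happy population...
assert average_welfare(small_happy) > average_welfare(large_modest)
# ...while the total view prefers the much larger, modestly-off one
# (1,500 units of welfare against 1,000).
assert total_welfare(large_modest) > total_welfare(small_happy)
```

With these figures the average view favours the small population and the total view favours the large one; pushed to its extreme, the total view yields what Parfit called the “repugnant conclusion,” that an enormous population with lives barely worth living could be better than a smaller, far happier one.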
War and peace
The Vietnam War ensured that discussions of the justice of war and of the legitimacy of conscription and civil disobedience were prominent in early writings in applied ethics. There was considerable support for civil disobedience against unjust aggression and against unjust laws even in a democracy.
With the cessation of hostilities in Vietnam and the end of conscription, interest in these questions declined. Concern about nuclear weapons in the early 1980s, however, has caused philosophers to debate whether nuclear deterrence can be an ethically acceptable strategy if it means using civilian populations as potential nuclear targets. Jonathan Schell's The Fate of the Earth (1982) raised several philosophical questions about what we ought to do in the face of the possible destruction of all life on our planet.
Abortion, euthanasia, and the value of human life
A number of ethical questions cluster around both ends of the human life span. Whether abortion is morally justifiable has popularly been seen as depending on our answer to the question “When does a human life begin?” Many philosophers believe this to be the wrong question to ask because it suggests that there might be a factual answer that we can somehow discover through advances in science. Instead, these philosophers think we need to ask what it is that makes killing a human being wrong and then consider whether these characteristics, whatever they might be, apply to the fetus in an abortion. There is no generally agreed upon answer, yet some philosophers have presented surprisingly strong arguments to the effect that not only the fetus but even the newborn infant has no right to life. This position has been defended by Jonathan Glover in Causing Death and Saving Lives (1977) and in more detail by Michael Tooley in Abortion and Infanticide (1983).
Such views have been hotly contested, especially by those who claim that all human life, irrespective of its characteristics, must be regarded as sacrosanct. The task for those who defend the sanctity of human life is to explain why human life, no matter what its characteristics, is specially worthy of protection. Explanation could no doubt be provided in terms of such traditional Christian doctrines as that all humans are made in the image of God or that all humans have an immortal soul. In the current debate, however, the opponents of abortion have eschewed religious arguments of this kind without finding a convincing secular alternative.
Somewhat similar issues are raised by euthanasia when it is nonvoluntary, as, for example, in the case of severely disabled newborn infants. Euthanasia, however, can be voluntary, and this has brought it support from some who hold that the state should not interfere with the free, informed choices of its citizens in matters that do not cause others harm. (The same argument is often invoked in defense of the pro-choice position in the abortion controversy; but it is on much weaker ground in this case because it presupposes what it needs to prove—namely, that the fetus does not count as an “other.”) Opposition to voluntary euthanasia has centred on practical matters such as the difficulty of adequate safeguards and on the argument that it would lead to a “slippery slope” that would take us to nonvoluntary euthanasia and eventually to the compulsory killing of those the state considers socially undesirable.
Philosophers have also canvassed the moral significance of the distinction between killing and allowing to die, which is reflected in the fact that many physicians will allow a patient with an incurable condition to die when life could still be prolonged, but they will not take active steps to end the patient's life. Consequentialist philosophers, among them both Glover and Tooley, have denied that this distinction possesses any intrinsic moral significance. For those who uphold a system of absolute rules, on the other hand, a distinction between acts and omissions is essential if they are to render plausible the claim that we must never breach a valid moral rule.
The issues of abortion and euthanasia are included in one of the fastest growing areas of applied ethics, that dealing with ethical issues raised by new developments in medicine and the biological sciences. This subject, known as bioethics, often involves interdisciplinary work, with physicians, lawyers, scientists, and theologians all taking part. Centres for research in bioethics have been established in Australia, Britain, Canada, and the United States. Many medical schools have added the discussion of ethical issues in medicine to their curricula. Governments have sought to deal with the most controversial issues by appointing special committees to provide ethical advice.
Several key themes run through the subjects covered by bioethics. One, related to abortion and euthanasia, is whether the quality of a human life can be a reason for ending it or for deciding not to take steps to prolong it. Since medical science can now keep alive severely disabled infants who a few years ago would have died soon after birth, pediatricians are regularly faced with this question. The issue received national publicity in Britain in 1981 when a respected pediatrician was charged with murder, following the death of an infant with Down's syndrome. Evidence at the trial indicated that the parents had not wanted the child to live and that the pediatrician had consequently prescribed a narcotic painkiller. The doctor was acquitted. The following year, in the United States, an even greater furor was caused by a doctor's decision to follow the wishes of the parents of a Down's syndrome infant and not carry out surgery without which the baby would die. The doctor's decision was upheld by the Supreme Court of Indiana, and the baby died before an appeal could be made to the U.S. Supreme Court. In spite of the controversy and efforts by government officials to ensure that handicapped infants are given all necessary lifesaving treatment, in neither Britain nor the United States is there any consensus about the decisions that should be made when severely disabled infants are born or by whom these decisions should be made.
Medical advances have raised other related questions. Even those who defend the doctrine of the sanctity of all human life do not believe that doctors have to use extraordinary means to prolong life, but the distinction between ordinary and extraordinary means, like that between acts and omissions, is itself under attack. Critics assert that the wishes of the patient or, if these cannot be ascertained, the quality of the patient's life provides a more relevant basis for a decision than the nature of the means to be used.
Another central theme is that of patient autonomy. This arises not only in the case of voluntary euthanasia but also in the area of human experimentation, which has come under close scrutiny following reported abuses. It is generally agreed that patients must give informed consent to any experimental procedures. But how much information, and in what detail, is the patient to be given? The problem is particularly acute in the case of randomized controlled trials, which scientists consider the most desirable way of testing the efficacy of a new procedure but which require that the patient agree to be randomly assigned one of two or more forms of treatment.
The allocation of medical resources became a life-and-death issue when hospitals obtained dialysis machines and had to choose which of their patients suffering from kidney disease would be able to use the scarce machines. Some argued for “first come, first served,” whereas others thought it obvious that younger patients or patients with dependents should have preference. Kidney machines are no longer as scarce, but the availability of various other exotic, expensive lifesaving techniques is limited; hence, the search for rational principles of distribution continues.
New issues arise as further advances are made in biology and medicine. In 1978 the birth of the first human being to be conceived outside the human body initiated a debate about the ethics of in vitro fertilization. This soon led to questions about the freezing of human embryos and what should be done with them if, as happened in 1984 with two embryos frozen by an Australian medical team, the parents should die. The next controversy in this area arose over commercial agencies offering infertile married couples a surrogate mother who would for a fee be impregnated with the sperm of the husband and then surrender the resulting baby to the couple. Several questions emerged: Should we allow women to rent their wombs to the highest bidder? If a woman who has agreed to act as a surrogate changes her mind and decides to keep the baby, should she be allowed to do so?
The culmination of such advances in human reproduction will be the mastery of genetic engineering. Then we will all face the question posed by the title of Jonathan Glover's probing book What Sort of People Should There Be? (1984). Perhaps this will be the most challenging issue for 21st-century ethics.
For an introduction to the major theories of ethics, the reader should consult Richard B. Brandt, Ethical Theory: The Problems of Normative and Critical Ethics (1959), an excellent comprehensive textbook. William K. Frankena, Ethics, 2nd ed. (1973), is a much briefer treatment. Another concise work is Bernard Williams, Ethics and the Limits of Philosophy (1985). There are several useful collections of classical and modern writings; among the better ones are Oliver A. Johnson, Ethics: Selections from Classical and Contemporary Writers, 5th ed. (1984); and James Rachels (ed.), Understanding Moral Philosophy (1976), which places greater emphasis on modern writers.
Origins of ethics
Joyce O. Hertzler, The Social Thought of the Ancient Civilizations (1936, reissued 1961), is a wide-ranging collection of materials. Edward Westermarck, The Origin and Development of the Moral Ideas, 2 vol., 2nd ed. (1912–17, reprinted 1971), is dated but still unsurpassed as a comprehensive account of anthropological data. Mary Midgley, Beast and Man: The Roots of Human Nature (1978, reissued 1980), is excellent on the links between biology and ethics; and Edward O. Wilson, Sociobiology: The New Synthesis (1975), and On Human Nature (1978), contain controversial speculations on the biological basis of social behaviour. Richard Dawkins, The Selfish Gene (1976, reprinted 1978), is another evolutionary account, fascinating but to be used with care.
History of Western ethics
Henry Sidgwick, Outlines of the History of Ethics for English Readers, 6th enlarged ed. (1931, reissued 1967), is a triumph of scholarship and brevity. William Edward Hartpole Lecky, History of European Morals from Augustus to Charlemagne, 2 vol., 3rd rev. ed. (1877, reprinted 1975), is fascinating and informative. Among more recent histories, Vernon J. Bourke, History of Ethics (1968, reissued in 2 vol., 1970), is remarkably comprehensive; while Alasdair MacIntyre, A Short History of Ethics (1966), is a readable personal view.
Surama Dasgupta, Development of Moral Philosophy in India (1961, reissued 1965), is a clear discussion of the various schools. Sarvepalli Radhakrishnan and Charles A. Moore (eds.), A Source Book in Indian Philosophy (1957, reprinted 1967), is a collection of key primary sources. For Buddhist texts, see Edward Conze et al. (eds.), Buddhist Texts Through the Ages (1954, reissued 1964).
Standard introductions to the works of classic Chinese authors mentioned in the article are E.R. Hughes (ed.), Chinese Philosophy in Classical Times (1942, reprinted 1966); and Fung Yu-Lan, A History of Chinese Philosophy, 2 vol., trans. from the Chinese (1952–53, reprinted 1983).
Ancient Greek and Roman ethics
Jonathan Barnes, The Presocratic Philosophers, rev. ed. (1982), treats Greek ethics before Socrates. The central texts of the Classic period of Greek ethics are Plato, Politeia (The Republic), Euthyphro, Protagoras, and Gorgias; and Aristotle, Ethica Nicomachea (Nicomachean Ethics). A concise introduction to the ethical thought of this period is provided by Pamela Huby, Greek Ethics (1967); and Christopher Rowe, An Introduction to Greek Ethics (1976). Significant writings of the Stoics include Marcus Tullius Cicero, De officiis (On Duties); Lucius Annaeus Seneca, Epistulae morales (Moral Letters); and Marcus Aurelius, D. imperatoris Marci Antonini Commentariorum qvos sibi ipsi scripsit libri XII (The Meditations of the Emperor Marcus Antoninus). From Epicurus only fragments remain; they have been collected in Cyril Bailey (ed.), Epicurus, the Extant Remains (1926, reprinted 1979). The most complete of the surviving works of the Epicureans is Lucretius, De rerum natura (On the Nature of Things).
Early and medieval Christian ethics
In addition to the Gospels and Paul's letters, important writings include St. Augustine, De civitate Dei (413–426; The City of God), and Enchiridion ad Laurentium de fide, spe, et caritate (421; Enchiridion to Laurentius on Faith, Hope and Love); Peter Abelard, Ethica (c. 1135; Ethics); and St. Thomas Aquinas, Summa theologiae (1265 or 1266–73). On the history of the transition from Roman ethics to Christianity, W.E.H. Lecky, op. cit., remains unsurpassed. D.J. O'Connor, Aquinas and Natural Law (1967), is a brief introduction to the most important of the Scholastic writers on ethics.
Ethics of the Renaissance and Reformation
Machiavelli's chief works are available in modern translations: Niccolò Machiavelli, The Prince, trans. and ed. by Peter Bondanella and Mark Musa (1984), and The Discourses, trans. by Leslie J. Walker (1975). For Luther's writings, see the comprehensive edition Martin Luther, Works, 55 vol., ed. by Jaroslav Pelikan et al. (1955–76). Calvin's major work is available in Jean Calvin, Institutes of the Christian Religion, trans. by Henry Beveridge, 2 vol. (1979).
The British tradition from Hobbes to the Utilitarians
The key works of this period include Thomas Hobbes, Leviathan (1651); Ralph Cudworth, Eternal and Immutable Morality (published posthumously, 1731); Henry More, Enchiridion Ethicum (1667); Samuel Clarke, Boyle lectures for 1705, published in his Works, 4 vol. (1738–42); 3rd Earl of Shaftesbury, “Inquiry Concerning Virtue or Merit,” published together with other essays in his Characteristicks of Men, Manners, Opinions, Times (1711); Joseph Butler, Fifteen Sermons (1726); Francis Hutcheson, Inquiry into the Original of Our Ideas of Beauty and Virtue (1725), and A System of Moral Philosophy, 2 vol. (1755); David Hume, A Treatise of Human Nature (1739–40), and An Enquiry Concerning the Principles of Morals (1751); Richard Price, A Review of the Principal Questions and Difficulties in Morals (1758); Thomas Reid, Essays on the Active Powers of the Human Mind (1788); William Paley, The Principles of Moral and Political Philosophy (1785); Jeremy Bentham, Introduction to the Principles of Morals and Legislation (1789); John Stuart Mill, Utilitarianism (1863); and Henry Sidgwick, The Methods of Ethics (1874). Selections of the major texts of this period are brought together in D.D. Raphael (ed.), British Moralists, 1650–1800, 2 vol. (1969); and in D.H. Monro (ed.), A Guide to the British Moralists (1972). Useful introductions to separate writers include J. Kemp, Ethical Naturalism (1970), on Hobbes and Hume; W.D. Hudson, Ethical Intuitionism (1967), on the intuitionists from Cudworth to Price and the debate with the moral sense school; and Anthony Quinton, Utilitarian Ethics (1973). C.D. Broad, Five Types of Ethical Theory (1930, reprinted 1971), includes clear accounts of the ethics of Butler, Hume, and Sidgwick. J.L. Mackie, Hume's Moral Theory (1980), brilliantly traces the relevance of Hume's work to current disputes about the nature of ethics.
The continental tradition from Spinoza to Nietzsche
The major texts are available in many English translations. See Baruch Spinoza, The Ethics and Selected Letters, trans. by Samuel Shirley, ed. by Seymour Feldman (1982); Jean-Jacques Rousseau, A Discourse on Inequality, trans. by Maurice Cranston (1984), and The Social Contract, annotated ed., trans. by Charles M. Sherover (1974); Immanuel Kant, Grounding for the Metaphysics of Morals, trans. by James W. Ellington (1981), and Critique of Practical Reason, and Other Writings in Moral Philosophy, ed. and trans. by Lewis White Beck (1949, reprinted 1976); G.W.F. Hegel, Phenomenology of Spirit, trans. by A.V. Miller (1977), and Hegel's Philosophy of Right, trans. by T.M. Knox (1967, reprinted 1980); Karl Marx, Economic and Philosophic Manuscripts of 1844, ed. by Dirk J. Struik (1964), Capital: A Critique of Political Economy, trans. by David Fernbach, 3 vol. (1981), and The Communist Manifesto of Marx and Engels, ed. by Harold J. Laski (1967, reprinted 1975); Friedrich Nietzsche, Beyond Good and Evil: Prelude to a Philosophy of the Future, trans. by R.J. Hollingdale (1973), and The Genealogy of Morals: A Polemic, trans. by Horace B. Samuel (1964). Among the easier introductory studies are H.B. Acton, Kant's Moral Philosophy (1970); and Peter Singer, Hegel (1983), and Marx (1980). C.D. Broad, op. cit., contains readable accounts of the ethics of both Spinoza and Kant.
20th-century Western ethics
The most influential writings in metaethics during the 20th century have been George Edward Moore, Principia Ethica (1903, reprinted 1976); W.D. Ross, The Right and the Good (1930, reprinted 1973); A.J. Ayer, Language, Truth, and Logic (1936, reissued 1974); Charles L. Stevenson, Ethics and Language (1944, reprinted 1979); R.M. Hare, The Language of Morals (1952, reprinted 1972), and Freedom and Reason (1963, reprinted 1977); and, in France, Jean-Paul Sartre, Being and Nothingness (1956, reissued 1978; originally published in French, 1943), and Existentialism and Humanism (1948, reprinted 1977; originally published in French, 1946). Ralph Barton Perry, General Theory of Value (1926, reprinted 1967), was highly regarded in the United States but comparatively neglected elsewhere. Wilfrid Sellars and John Hospers (eds.), Readings in Ethical Theory, 2nd ed. (1970), contains the most important pieces of writing on ethics from the first half of the 20th century. Widely discussed later works include Thomas Nagel, The Possibility of Altruism (1970, reissued 1978); G.J. Warnock, The Object of Morality (1971); J.L. Mackie, Ethics: Inventing Right and Wrong (1977); Richard B. Brandt, A Theory of the Good and the Right (1979); John Finnis, Natural Law and Natural Rights (1980); and R.M. Hare, Moral Thinking: Its Levels, Method, and Point (1981). A defense of naturalism can be found in two important articles by Philippa Foot, “Moral Beliefs” and “Moral Arguments,” both originally published in 1958 and later reprinted in her Virtues and Vices, and Other Essays in Moral Philosophy (1978, reprinted 1981). David Wiggins, Truth, Invention, and the Meaning of Life (1976), is a statement of what has come to be known as “moral realism.” Mary Warnock, Ethics Since 1900, 3rd ed. (1978); G.J. Warnock, Contemporary Moral Philosophy (1967); and W.D. Hudson, A Century of Moral Philosophy (1980), provide guidance through 20th-century metaethical disputes.
For Moore's ideal Utilitarianism, see G.E. Moore, Ethics, 2nd ed. (1966). The best short statement of an act-Utilitarian position is J.J.C. Smart's contribution to J.J.C. Smart and Bernard Williams, Utilitarianism: For and Against (1973). R.M. Hare, op. cit., is an extended argument for a form of preference Utilitarianism that allows some scope to moral principles while not departing from act-Utilitarianism at the level of critical thought. David Lyons, Forms and Limits of Utilitarianism (1965), probes the distinction between act- and rule-Utilitarianism. Richard B. Brandt, op. cit., includes a defense of a version of rule-Utilitarianism. Donald Regan, Utilitarianism and Co-operation (1980), is an ingenious discussion of how the need to cooperate can be incorporated into Utilitarian theory. Amartya Sen and Bernard Williams (eds.), Utilitarianism and Beyond (1982), is a collection of essays on the difficulties of the Utilitarian position. A major contribution to consequentialist theory is Derek Parfit, Reasons and Persons (1984), which includes penetrating arguments on the nature of consequentialist reasoning in ethics. The standard defense of an ethic of prima facie duties remains W.D. Ross, op. cit. H.J. McCloskey, Meta-Ethics and Normative Ethics (1969), is a restatement with some modifications. The most widely discussed alternative theory to Utilitarianism in recent years is set forth in John Rawls, A Theory of Justice (1971, reprinted 1981). Robert Nozick, Anarchy, State, and Utopia (1974), criticizes Rawls and presents a rights-based theory. Another work giving prominence to rights is Ronald Dworkin, Taking Rights Seriously (1977). Very different from the approach of both Nozick and Dworkin is the attempt to ground rights in natural law in John Finnis, op. cit.; a shorter and more accessible introduction to natural law ethics is his Fundamentals of Ethics (1983). Egoism as a theory of rationality is discussed by Derek Parfit, op. cit.; a useful collection of readings on this topic is David P. Gauthier (ed.), Morality and Rational Self-Interest (1970); see also Ronald D. Milo (ed.), Egoism and Altruism (1973).
Many of the best examples of applied ethics are to be found in journal articles, particularly in Philosophy and Public Affairs (quarterly). There are many anthologies of representative samples of such writings. Among the better ones are James Rachels (ed.), Moral Problems, 3rd ed. (1979); Jan Narveson (ed.), Moral Issues (1983); and Manuel Velasquez and Cynthia Rostankowski, Ethics, Theory and Practice (1985). There are also books and collections on specific topics. Marshall Cohen, Thomas Nagel, and Thomas Scanlon (eds.), Equality and Preferential Treatment (1977), is a collection of some of the best articles on equality and reverse discrimination; while Alan H. Goldman, Justice and Reverse Discrimination (1979), is a book-length treatment of the issues. Some of the more philosophically probing discussions of feminism are Janet Radcliffe Richards, The Sceptical Feminist (1980, reprinted with corrections, 1982); Mary Midgley and Judith Hughes, Women's Choices: Philosophical Problems Facing Feminism (1983); and Alison M. Jaggar, Feminist Politics and Human Nature (1983). The moral obligations of the wealthy toward the starving are discussed in the anthology World Hunger and Moral Obligation, ed. by William Aiken and Hugh LaFollette (1977).
The ethics of the treatment of animals has given rise to much philosophical discussion. Books arguing for radical change include Stanley Godlovitch, Roslind Godlovitch, and John Harris (eds.), Animals, Man, and Morals: An Enquiry into the Maltreatment of Non-Humans (1971); Peter Singer, Animal Liberation: A New Ethics for Our Treatment of Animals (1975); Stephen R.L. Clark, The Moral Status of Animals (1977, reissued 1984); and Tom Regan, The Case for Animal Rights (1983). R.G. Frey, Interests and Rights: The Case Against Animals (1980), and Rights, Killing, and Suffering: Moral Vegetarianism and Applied Ethics (1983), resist some of these arguments. Mary Midgley, Animals and Why They Matter (1983), takes a middle course.
Essays dealing with ethical issues raised by concern for the environment are collected in Robert Elliot and Arran Gare (eds.), Environmental Philosophy (1983); and K.S. Shrader-Frechette, Environmental Ethics (1981). Useful full-length studies include John Passmore, Man's Responsibility for Nature: Ecological Problems and Western Tradition, 2nd ed. (1980); and H.J. McCloskey, Ecological Ethics and Politics (1983). For specific problems of future generations, see R. Sikora and Brian Barry (eds.), Obligations to Future Generations (1979). A difficult but fascinating discussion of the problem of optimum population size in an ideal world can be found in Derek Parfit, op. cit.
Michael Walzer, Just and Unjust Wars (1977), is a fine study of the morality of war; Richard A. Wasserstrom (ed.), War and Morality (1970), is a valuable collection of essays. Nigel Blake and Kay Pole (eds.), Objections to Nuclear Defence (1984), and Dangers of Deterrence (1984), are collections of philosophical writings on nuclear war.
There is an immense amount of literature on abortion, though of varying philosophical depth. Michael Tooley, Abortion and Infanticide (1983), is a penetrating study. For contrasting views, see Germain G. Grisez, Abortion: The Myths, the Realities, and the Arguments (1970); and Baruch A. Brody, Abortion and the Sanctity of Human Life: A Philosophical View (1975). Another notable treatment is L.W. Sumner, Abortion and Moral Theory (1981). Joel Feinberg (ed.), The Problem of Abortion, 2nd ed. (1984), is a good collection of essays. For a discussion of sanctity of life issues in general, including both abortion and euthanasia, see Jonathan Glover, Causing Death and Saving Lives (1977); and Peter Singer, Practical Ethics (1979). The specific problem of the treatment of severely handicapped infants is discussed in Helga Kuhse and Peter Singer, Should the Baby Live? (1985).
For a comprehensive textbook on bioethics, see Tom L. Beauchamp and James F. Childress, Principles of Biomedical Ethics, 2nd ed. (1983). Anthologies of essays on diverse topics in bioethics include Samuel Gorovitz et al. (eds.), Moral Problems in Medicine, 2nd ed. (1983); and John Arras and Robert Hunt (comp.), Ethical Issues in Modern Medicine, 2nd ed. (1983). James F. Childress, Who Should Decide? (1982), deals with paternalism in medical care; while Peter Singer and Deane Wells, The Reproduction Revolution: New Ways of Making Babies (1984), focusses on the new reproductive technology. For the philosophical issues underlying genetic engineering and other methods of altering the human organism, see Jonathan Glover, What Sort of People Should There Be? (1984).