Source: Wikipedia, the free encyclopedia (retrieved 2015/12/01 17:32:21 JST)
Ethics or moral philosophy is the branch of philosophy that involves systematizing, defending, and recommending concepts of right and wrong conduct.[1] The term ethics derives from the Ancient Greek word ἠθικός ethikos, which is derived from the word ἦθος ethos ("habit, custom"). The branch of philosophy known as axiology comprises the sub-branches of ethics and aesthetics, each concerned with concepts of value.[2]
As a branch of philosophy, ethics investigates the questions “What is the best way for people to live?” and “What actions are right or wrong in particular circumstances?” In practice, ethics seeks to resolve questions of human morality by defining concepts such as good and evil, right and wrong, virtue and vice, justice and crime. As a field of intellectual enquiry, moral philosophy is also related to the fields of moral psychology, descriptive ethics, and value theory.
The three major areas of study within ethics are meta-ethics (concerning the theoretical meaning and reference of moral propositions, and how their truth values, if any, can be determined), normative ethics (concerning the practical means of determining a moral course of action), and applied ethics (concerning what a person is obligated, or permitted, to do in a specific situation or a particular domain of action).[1]
Rushworth Kidder states that "standard definitions of ethics have typically included such phrases as 'the science of the ideal human character' or 'the science of moral duty' ".[3] Richard William Paul and Linda Elder define ethics as "a set of concepts and principles that guide us in determining what behavior helps or harms sentient creatures".[4] The Cambridge Dictionary of Philosophy states that the word ethics is "commonly used interchangeably with 'morality' ... and sometimes it is used more narrowly to mean the moral principles of a particular tradition, group or individual."[5] Paul and Elder state that most people confuse ethics with behaving in accordance with social conventions, religious beliefs and the law, and do not treat ethics as a stand-alone concept.[4]
The word "ethics" in English refers to several things.[6] It can refer to philosophical ethics—a project that attempts to use reason in order to answer various kinds of ethical questions.[citation needed] It can also be used to describe a particular person's own, idiosyncratic principles or habits.[7] For example: "Joe has good ethics."
Meta-ethics asks how we come to understand and know about right and wrong, and what we mean when we talk about them.[8] An ethical question fixed on some particular practical question, such as "Should I eat this particular piece of chocolate cake?", cannot be a meta-ethical question. A meta-ethical question is abstract and relates to a wide range of more specific practical questions. For example, "Is it ever possible to have secure knowledge of what is right and wrong?" would be a meta-ethical question.
Meta-ethics has always accompanied philosophical ethics. For example, Aristotle implies that less precise knowledge is possible in ethics than in other spheres of inquiry, and he regards ethical knowledge as depending upon habit and acculturation in a way that makes it distinct from other kinds of knowledge. Meta-ethics is also important in G.E. Moore's Principia Ethica (1903), in which he first wrote about what he called the naturalistic fallacy. Moore was seen to reject naturalism in ethics with his Open Question Argument, which led thinkers to look again at second-order questions about ethics. Earlier, the Scottish philosopher David Hume had put forward a similar view on the difference between facts and values.
Studies of how we acquire ethical knowledge divide into cognitivism and non-cognitivism; this parallels the contrast between descriptivism and non-descriptivism. Non-cognitivism is the claim that when we judge something as right or wrong, this is neither true nor false. We may, for example, only be expressing our emotional feelings about these things.[9] Cognitivism can then be seen as the claim that when we talk about right and wrong, we are talking about matters of fact.
The ontology of ethics is about value-bearing things or properties, i.e. the kind of things or stuff referred to by ethical propositions. Non-descriptivists and non-cognitivists believe that ethics does not need a specific ontology, since ethical propositions do not refer. This is known as an anti-realist position. Realists on the other hand must explain what kind of entities, properties or states are relevant for ethics, how they have value, and why they guide and motivate our actions.[10]
Normative ethics is the study of ethical action. It is the branch of ethics that investigates the set of questions that arise when considering how one ought to act, morally speaking. Normative ethics is distinct from meta-ethics because it examines standards for the rightness and wrongness of actions, while meta-ethics studies the meaning of moral language and the metaphysics of moral facts.[8] Normative ethics is also distinct from descriptive ethics, as the latter is an empirical investigation of people's moral beliefs. To put it another way, descriptive ethics would be concerned with determining what proportion of people believe that killing is always wrong, while normative ethics is concerned with whether it is correct to hold such a belief. Hence, normative ethics is sometimes called prescriptive, rather than descriptive. However, on certain versions of the meta-ethical view called moral realism, moral facts are both descriptive and prescriptive at the same time.[11]
Traditionally, normative ethics (also known as moral theory) was the study of what makes actions right and wrong. These theories offered an overarching moral principle one could appeal to in resolving difficult moral decisions.
At the turn of the 20th century, moral theories became more complex: they were no longer concerned solely with rightness and wrongness, but took an interest in many different kinds of moral status. During the middle of the century, the study of normative ethics declined as meta-ethics grew in prominence. This focus on meta-ethics was in part caused by an intense linguistic focus in analytic philosophy and by the popularity of logical positivism.
In 1971 John Rawls published A Theory of Justice, noteworthy in its pursuit of moral arguments and eschewing of meta-ethics. This publication set the trend for renewed interest in normative ethics.
Virtue ethics describes the character of a moral agent as a driving force for ethical behavior, and is used to describe the ethics of Socrates, Aristotle, and other early Greek philosophers. Socrates (469–399 BC) was one of the first Greek philosophers to encourage both scholars and the common citizen to turn their attention from the outside world to the condition of humankind. In this view, knowledge bearing on human life was placed highest, while all other knowledge was secondary. Self-knowledge was considered necessary for success and inherently an essential good. A self-aware person will act completely within his capabilities to his pinnacle, while an ignorant person will flounder and encounter difficulty. To Socrates, a person must become aware of every fact (and its context) relevant to his existence, if he wishes to attain self-knowledge. He posited that people will naturally do what is good, if they know what is right. Evil or bad actions are the result of ignorance. If a criminal was truly aware of the intellectual and spiritual consequences of his actions, he would neither commit nor even consider committing those actions. Any person who knows what is truly right will automatically do it, according to Socrates. While he correlated knowledge with virtue, he similarly equated virtue with joy. The truly wise man will know what is right, do what is good, and therefore be happy.[12]:32–33
Aristotle (384–322 BC) posited an ethical system that may be termed "self-realizationism." In Aristotle's view, when a person acts in accordance with his nature and realizes his full potential, he will do good and be content. At birth, a baby is not a person, but a potential person. To become a "real" person, the child's inherent potential must be realized. Unhappiness and frustration are caused by the unrealized potential of a person, leading to failed goals and a poor life. Aristotle said, "Nature does nothing in vain." Therefore, it is imperative for people to act in accordance with their nature and develop their latent talents in order to be content and complete. Happiness was held to be the ultimate goal. All other things, such as civic life or wealth, are merely means to the end. Self-realization, the awareness of one's nature and the development of one's talents, is the surest path to happiness.[12]:33–35
Aristotle asserted that man had three natures: vegetable (physical/metabolism), animal (emotional/appetite) and rational (mental/conceptual). Physical nature can be assuaged through exercise and care, emotional nature through indulgence of instinct and urges, and mental through human reason and developed potential. Rational development was considered the most important, as essential to philosophical self-awareness and as uniquely human. Moderation was encouraged, with the extremes seen as degraded and immoral. For example, courage is the moderate virtue between the extremes of cowardice and recklessness. Man should not simply live, but live well with conduct governed by moderate virtue. This is regarded as difficult, as virtue denotes doing the right thing, to the right person, at the right time, to the proper extent, in the correct fashion, for the right reason.[12]:35–37
The Stoic philosopher Epictetus posited that the greatest good was contentment and serenity. Peace of mind, or Apatheia, was of the highest value; self-mastery over one's desires and emotions leads to spiritual peace. The "unconquerable will" is central to this philosophy. The individual's will should be independent and inviolate. Allowing another person to disturb one's mental equilibrium is, in essence, offering oneself up in slavery. If a person is free to anger you at will, you have no control over your internal world, and therefore no freedom. Freedom from material attachments is also necessary. If a thing breaks, the person should not be upset, but realize it was a thing that could break. Similarly, if someone should die, those close to them should hold to their serenity because the loved one was made of flesh and blood destined to death. Stoic philosophy says to accept things that cannot be changed, resigning oneself to existence and enduring in a rational fashion. Death is not feared. People do not "lose" their life, but instead "return", for they are returning to God (who initially gave what the person is as a person). Epictetus said difficult problems in life should not be avoided, but rather embraced. They are spiritual exercises needed for the health of the spirit, just as physical exercise is required for the health of the body. He also stated that sex and sexual desire are to be avoided as the greatest threat to the integrity and equilibrium of a man's mind. Abstinence is highly desirable. Epictetus said remaining abstinent in the face of temptation was a victory of which a man could be proud.[12]:38–41
Modern virtue ethics was popularized during the late 20th century in large part as a response to G. E. M. Anscombe's "Modern Moral Philosophy". Anscombe argues that consequentialist and deontological ethics are only feasible as universal theories if the two schools ground themselves in divine law. As a deeply devoted Christian herself, Anscombe proposed that either those who do not give ethical credence to notions of divine law should take up virtue ethics, which does not necessitate universal laws because agents themselves are investigated for virtue or vice and held up to "universal standards", or that those who wish to be utilitarian or consequentialist should ground their theories in religious conviction.[13] Alasdair MacIntyre, author of After Virtue, was a key contributor and proponent of modern virtue ethics, although MacIntyre supports a relativistic account of virtue based on cultural norms rather than objective standards.[13] Martha Nussbaum, a contemporary virtue ethicist, objects to MacIntyre's relativism, among that of others, and responds to relativist objections to form an objective account in her work "Non-Relative Virtues: An Aristotelian Approach".[14] Complete Conduct Principles for the 21st Century blended Eastern and Western virtue ethics, with some modifications to suit the 21st century, and forms a part of contemporary virtue ethics.[15]
Hedonism posits that the principal ethic is maximizing pleasure and minimizing pain. There are several schools of Hedonist thought ranging from those advocating the indulgence of even momentary desires to those teaching a pursuit of spiritual bliss. In their consideration of consequences, they range from those advocating self-gratification regardless of the pain and expense to others, to those stating that the most ethical pursuit maximizes pleasure and happiness for the most people.[12]:37
Founded by Aristippus of Cyrene, the Cyrenaics supported immediate gratification or pleasure. "Eat, drink and be merry, for tomorrow we die." Even fleeting desires should be indulged, for fear the opportunity should be forever lost. There was little to no concern with the future, the present dominating in the pursuit of immediate pleasure. Cyrenaic hedonism encouraged the pursuit of enjoyment and indulgence without hesitation, believing pleasure to be the only good.[12]:37
Epicurean ethics is a hedonist form of virtue ethics. Epicurus "presented a sustained argument that pleasure, correctly understood, will coincide with virtue".[16] He rejected the extremism of the Cyrenaics, believing some pleasures and indulgences to be detrimental to human beings. Epicureans observed that indiscriminate indulgence sometimes resulted in negative consequences. Some experiences were therefore rejected out of hand, and some unpleasant experiences endured in the present to ensure a better life in the future. To Epicurus the summum bonum, or greatest good, was prudence, exercised through moderation and caution. Excessive indulgence can be destructive to pleasure and can even lead to pain. For example, eating one food too often will cause a person to lose taste for it. Eating too much food at once will lead to discomfort and ill-health. Pain and fear were to be avoided. Living was essentially good, barring pain and illness. Death was not to be feared. Fear was considered the source of most unhappiness. Conquering the fear of death would naturally lead to a happier life. Epicurus reasoned if there was an afterlife and immortality, the fear of death was irrational. If there was no life after death, then the person would not be alive to suffer, fear or worry; he would be non-existent in death. It is irrational to fret over circumstances that do not exist, such as one's state in death in the absence of an afterlife.[12]:37–38
State consequentialism, also known as Mohist consequentialism,[17] is an ethical theory that evaluates the moral worth of an action based on how much it contributes to the basic goods of a state.[17] The Stanford Encyclopedia of Philosophy describes Mohist consequentialism, dating back to the 5th century BC, as "a remarkably sophisticated version based on a plurality of intrinsic goods taken as constitutive of human welfare."[18] Unlike utilitarianism, which views pleasure as a moral good, "the basic goods in Mohist consequentialist thinking are ... order, material wealth, and increase in population".[19] During Mozi's era, war and famines were common, and population growth was seen as a moral necessity for a harmonious society. The "material wealth" of Mohist consequentialism refers to basic needs like shelter and clothing, and the "order" of Mohist consequentialism refers to Mozi's stance against warfare and violence, which he viewed as pointless and a threat to social stability.[20]
Stanford sinologist David Shepherd Nivison, in The Cambridge History of Ancient China, writes that the moral goods of Mohism "are interrelated: more basic wealth, then more reproduction; more people, then more production and wealth ... if people have plenty, they would be good, filial, kind, and so on unproblematically."[19] The Mohists believed that morality is based on "promoting the benefit of all under heaven and eliminating harm to all under heaven." In contrast to Bentham's views, state consequentialism is not utilitarian because it is not hedonistic or individualistic. The importance of outcomes that are good for the community outweighs the importance of individual pleasure and pain.[21]
Consequentialism refers to moral theories that hold that the consequences of a particular action form the basis for any valid moral judgment about that action (or create a structure for judgment, see rule consequentialism). Thus, from a consequentialist standpoint, a morally right action is one that produces a good outcome, or consequence. This view is often expressed as the aphorism "The ends justify the means".
The term "consequentialism" was coined by G.E.M. Anscombe in her essay "Modern Moral Philosophy" in 1958, to describe what she saw as the central error of certain moral theories, such as those propounded by Mill and Sidgwick.[22] Since then, the term has become common in English-language ethical theory.
The defining feature of consequentialist moral theories is the weight given to the consequences in evaluating the rightness and wrongness of actions.[23] In consequentialist theories, the consequences of an action or rule generally outweigh other considerations. Apart from this basic outline, there is little else that can be unequivocally said about consequentialism as such. However, many consequentialist theories address common questions, such as which consequences count as good, who is the primary beneficiary of moral action, and how and by whom the consequences are to be judged.
One way to divide various consequentialisms is by the types of consequences that are taken to matter most, that is, which consequences count as good states of affairs. According to utilitarianism, a good action is one that results in an increase in a positive effect, and the best action is one that results in that effect for the greatest number. Closely related is eudaimonic consequentialism, according to which a full, flourishing life, which may or may not be the same as enjoying a great deal of pleasure, is the ultimate aim. Similarly, one might adopt an aesthetic consequentialism, in which the ultimate aim is to produce beauty. However, one might fix on non-psychological goods as the relevant effect. Thus, one might pursue an increase in material equality or political liberty instead of something like the more ephemeral "pleasure". Other theories adopt a package of several goods, all to be promoted equally. Whether a particular consequentialist theory focuses on a single good or many, conflicts and tensions between different good states of affairs are to be expected and must be adjudicated.
Utilitarianism is an ethical theory that argues the proper course of action is one that maximizes a positive effect, such as "happiness", "welfare", or the ability to live according to personal preferences.[24] Jeremy Bentham and John Stuart Mill are influential proponents of this school of thought. In A Fragment on Government Bentham says 'it is the greatest happiness of the greatest number that is the measure of right and wrong' and describes this as a fundamental axiom. In An Introduction to the Principles of Morals and Legislation he talks of 'the principle of utility' but later prefers "the greatest happiness principle".[25][26]
Utilitarianism is the paradigmatic example of a consequentialist moral theory. This form of utilitarianism holds that what matters is the aggregate positive effect of everyone and not only of any one person. John Stuart Mill, in his exposition of utilitarianism, proposed a hierarchy of pleasures, meaning that the pursuit of certain kinds of pleasure is more highly valued than the pursuit of other pleasures.[27] Other noteworthy proponents of utilitarianism are neuroscientist Sam Harris, author of The Moral Landscape, and moral philosopher Peter Singer, author of, amongst other works, Practical Ethics.
There are two types of utilitarianism, act utilitarianism and rule utilitarianism. In act utilitarianism the principle of utility is applied directly to each alternative act in a situation of choice. The right act is then defined as the one which brings about the best results (or the least amount of bad results). In rule utilitarianism the principle of utility is used to determine the validity of rules of conduct (moral principles). A rule like promise-keeping is established by looking at the consequences of a world in which people broke promises at will and a world in which promises were binding. Right and wrong are then defined as following or breaking those rules.[28]
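The contrast between act and rule utilitarianism described above can be made concrete with a short, purely illustrative sketch. The action names, the utility numbers, and the promise-keeping rule below are invented assumptions for the example and are not drawn from any particular utilitarian author.

```python
# Illustrative sketch only: all utilities and rules below are hypothetical.
from typing import Callable, Dict


def act_utilitarian_choice(actions: Dict[str, float]) -> str:
    """Apply the principle of utility directly to each alternative act:
    choose the act with the highest (assumed) aggregate utility."""
    return max(actions, key=actions.get)


def rule_utilitarian_choice(actions: Dict[str, float],
                            permitted: Callable[[str], bool]) -> str:
    """First exclude acts forbidden by rules that are themselves justified
    by their consequences (e.g. promise-keeping), then maximize utility
    among the remaining acts."""
    allowed = {act: u for act, u in actions.items() if permitted(act)}
    return max(allowed, key=allowed.get)


# Hypothetical choice situation: breaking a promise happens to score slightly
# higher here, but a rule utilitarian excludes it because a world in which
# promises are broken at will is judged to have worse consequences overall.
options = {"keep_promise": 7.0, "break_promise": 8.0}
keeps_rules = lambda act: act != "break_promise"

print(act_utilitarian_choice(options))                # break_promise
print(rule_utilitarian_choice(options, keeps_rules))  # keep_promise
```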
Deontological ethics or deontology (from Greek δέον, deon, "obligation, duty"; and -λογία, -logia) is an approach to ethics that determines goodness or rightness from examining acts, or the rules and duties that the person doing the act strove to fulfill.[29] This is in contrast to consequentialism, in which rightness is based on the consequences of an act, and not the act by itself. In deontology, an act may be considered right even if the act produces a bad consequence,[30] if it follows the rule that "one should do unto others as they would have done unto them",[31] and even if the person who does the act lacks virtue and had a bad intention in doing the act. According to deontology, we have a duty to act in a way that does those things that are inherently good as acts ("truth-telling" for example), or follow an objectively obligatory rule (as in rule utilitarianism). For deontologists, the ends or consequences of our actions are not important in and of themselves, and our intentions are not important in and of themselves.
Immanuel Kant's theory of ethics is considered deontological for several different reasons.[32][33] First, Kant argues that to act in the morally right way, people must act from duty (deon).[34] Second, Kant argued that it was not the consequences of actions that make them right or wrong but the motives (maxime) of the person who carries out the action.
Kant's argument that to act in the morally right way, one must act from duty, begins with an argument that the highest good must be both good in itself, and good without qualification.[35] Something is 'good in itself' when it is intrinsically good, and 'good without qualification' when the addition of that thing never makes a situation ethically worse. Kant then argues that those things that are usually thought to be good, such as intelligence, perseverance and pleasure, fail to be either intrinsically good or good without qualification. Pleasure, for example, appears to not be good without qualification, because when people take pleasure in watching someone suffer, they make the situation ethically worse. He concludes that there is only one thing that is truly good:
Nothing in the world—indeed nothing even beyond the world—can possibly be conceived which could be called good without qualification except a good will.[35]
Associated with the pragmatists, Charles Sanders Peirce, William James, and especially John Dewey, pragmatic ethics holds that moral correctness evolves similarly to scientific knowledge: socially over the course of many lifetimes. Thus, we should prioritize social reform over attempts to account for consequences, individual virtue or duty (although these may be worthwhile attempts, provided social reform is provided for).[36]
Role ethics is an ethical theory based on family roles.[37] Unlike virtue ethics, role ethics is not individualistic. Morality is derived from a person's relationship with their community.[38] Confucian ethics is an example of role ethics.[37] Confucian roles center around the concept of filial piety or xiao, a respect for family members.[39] According to Roger Ames and Henry Rosemont, "Confucian normativity is defined by living one's family roles to maximum effect." Morality is determined through a person's fulfillment of a role, such as that of a parent or a child. Confucian roles are not rational, and originate through the xin, or human emotions.[38]
Anarchist ethics is an ethical theory based on the studies of anarchist thinkers. The biggest contributor to anarchist ethics is the Russian zoologist, geographer, economist and political activist Peter Kropotkin. Anarchist ethics is a broad and somewhat vague field that can depend upon different historical situations and different anarchist thinkers, but as Kropotkin explains, "any “bourgeois” or “proletarian” ethics rests, after all, on the common basis, on the common ethnological foundation, which at times exerts a very strong influence on the principles of the class or group morality." Still, most schools of anarchist ethics are based on three fundamental ideas: "solidarity, equality and justice". Kropotkin argues that ethics is evolutionary and is inherited as a sort of social instinct through history, and he thereby rejects any religious or transcendental explanation of ethics.[40] Kropotkin suggests that the principle of equality which lies at the basis of anarchism is the same as the Golden Rule:
"This principle of treating others as one wishes to be treated oneself, what is it but the very same principle as equality, the fundamental principle of anarchism? And how can any one manage to believe himself an anarchist unless he practices it? We do not wish to be ruled. And by this very fact, do we not declare that we ourselves wish to rule nobody? We do not wish to be deceived, we wish always to be told nothing but the truth. And by this very fact, do we not de- clare that we ourselves do not wish to deceive anybody, that we promise to always tell the truth, nothing but the truth, the whole truth? We do not wish to have the fruits of our labor stolen from us. And by that very fact, do we not declare that we respect the fruits of others' labor? By what right indeed can we demand that we should be treated in one fashion, reserving it to ourselves to treat others in a fashion entirely different? Our sense of equality revolts at such an idea." [41]
The 20th century saw a remarkable expansion and evolution of critical theory, following on earlier Marxist Theory efforts to locate individuals within larger structural frameworks of ideology and action.
Antihumanists such as Louis Althusser and Michel Foucault and structuralists such as Roland Barthes challenged the possibilities of individual agency and the coherence of the notion of the 'individual' itself. As critical theory developed in the later 20th century, post-structuralism sought to problematize human relationships to knowledge and 'objective' reality. Jacques Derrida argued that access to meaning and the 'real' was always deferred, and sought to demonstrate via recourse to the linguistic realm that "there is nothing outside context" ("il n'y a pas de hors-texte" is often mistranslated as "there is nothing outside the text"); at the same time, Jean Baudrillard theorised that signs and symbols or simulacra mask reality (and eventually the absence of reality itself), particularly in the consumer world.
Post-structuralism and postmodernism argue that ethics must study the complex and relational conditions of actions. A simple alignment of ideas of right and particular acts is not possible. There will always be an ethical remainder that cannot be taken into account or often even recognized. Such theorists find narrative (or, following Nietzsche and Foucault, genealogy) to be a helpful tool for understanding ethics because narrative is always about particular lived experiences in all their complexity rather than the assignment of an idea or norm to separate and individuated actions.
Zygmunt Bauman says Postmodernity is best described as Modernity without illusion, the illusion being the belief that humanity can be repaired by some ethical principle. Postmodernity can be seen in this light as accepting the messy nature of humanity as unchangeable.
David Couzens Hoy states that Emmanuel Levinas's writings on the face of the Other and Derrida's meditations on the relevance of death to ethics are signs of the "ethical turn" in Continental philosophy that occurred in the 1980s and 1990s. Hoy describes post-critique ethics as the "obligations that present themselves as necessarily to be fulfilled but are neither forced on one or are enforceable" (2004, p. 103).
Hoy's post-critique model uses the term ethical resistance. Examples of this would be an individual's resistance to consumerism in a retreat to a simpler but perhaps harder lifestyle, or an individual's resistance to a terminal illness. Hoy describes Levinas's account as "not the attempt to use power against itself, or to mobilize sectors of the population to exert their political power; the ethical resistance is instead the resistance of the powerless"(2004, p. 8).
Hoy concludes that
The ethical resistance of the powerless others to our capacity to exert power over them is therefore what imposes unenforceable obligations on us. The obligations are unenforceable precisely because of the other's lack of power. That actions are at once obligatory and at the same time unenforceable is what put them in the category of the ethical. Obligations that were enforced would, by the virtue of the force behind them, not be freely undertaken and would not be in the realm of the ethical. (2004, p.184)
In present-day terms the powerless may include the unborn, the terminally sick, the aged, the insane, and non-human animals. It is in these areas that ethical action in Hoy's sense will apply. Until legislation or the state apparatus enforces a moral order that addresses the causes of resistance, these issues will remain in the ethical realm. For example, should animal experimentation become illegal in a society, it will no longer be an ethical issue on Hoy's definition. Likewise, one hundred and fifty years ago, not having a black slave in America would have been an ethical choice. This latter issue has since been absorbed into the fabric of an enforceable social order and is therefore no longer an ethical issue in Hoy's sense.
Applied ethics is a discipline of philosophy that attempts to apply ethical theory to real-life situations. The discipline has many specialized fields, such as engineering ethics, bioethics, geoethics, public service ethics and business ethics.
Applied ethics is used in some aspects of determining public policy, as well as by individuals facing difficult decisions. The sort of questions addressed by applied ethics include: "Is getting an abortion immoral?" "Is euthanasia immoral?" "Is affirmative action right or wrong?" "What are human rights, and how do we determine them?" "Do animals have rights as well?" and "Do individuals have the right of self-determination?"[8]
A more specific question could be: "If someone else can make better use of his or her life than I can, is it then moral to sacrifice myself for them if needed?" Without these questions, there is no clear fulcrum on which to balance law, politics, and the practice of arbitration (in fact, no common assumptions of all participants), so the ability to formulate the questions is prior to rights balancing. But not all questions studied in applied ethics concern public policy. For example, making ethical judgments regarding questions such as "Is lying always wrong?" and "If not, when is it permissible?" is prior to any etiquette.
People in general are more comfortable with dichotomies (two opposites). However, in ethics the issues are most often multifaceted, and the best proposed actions address many different areas concurrently. In ethical decisions, the answer is almost never a "yes or no" or "right or wrong" statement. Many factors must be adjusted so that the overall condition is improved, rather than to the benefit of any particular faction.
Bioethics is the study of controversial ethics brought about by advances in biology and medicine. Bioethicists are concerned with the ethical questions that arise in the relationships among life sciences, biotechnology, medicine, politics, law, and philosophy. It also includes the study of the more commonplace questions of values ("the ethics of the ordinary") that arise in primary care and other branches of medicine.
Bioethics also needs to address emerging biotechnologies that affect basic biology and future humans. These developments include cloning, gene therapy, human genetic engineering, astroethics and life in space,[42] and manipulation of basic biology through altered DNA, XNA and proteins, e.g. the "three-parent baby", in which a baby born from a genetically modified embryo has DNA from a mother, a father and a female donor.[43] Correspondingly, new bioethics also needs to address life at its core. For example, biotic ethics value organic gene/protein life itself and seek to propagate it.[44] With such life-centered principles, ethics may secure a cosmological future for life.[45]
Business ethics (also corporate ethics) is a form of applied ethics or professional ethics that examines ethical principles and moral or ethical problems that arise in a business environment, including fields like medical ethics. It applies to all aspects of business conduct and is relevant to the conduct of individuals and entire organizations.
Business ethics has both normative and descriptive dimensions. As a corporate practice and a career specialization, the field is primarily normative. Academics attempting to understand business behavior employ descriptive methods. The range and quantity of business ethical issues reflects the interaction of profit-maximizing behavior with non-economic concerns. Interest in business ethics accelerated dramatically during the 1980s and 1990s, both within major corporations and within academia. For example, today most major corporations promote their commitment to non-economic values under headings such as ethics codes and social responsibility charters. Adam Smith said, "People of the same trade seldom meet together, even for merriment and diversion, but the conversation ends in a conspiracy against the public, or in some contrivance to raise prices."[46] Governments use laws and regulations to point business behavior in what they perceive to be beneficial directions. Ethics implicitly regulates areas and details of behavior that lie beyond governmental control.[47] The emergence of large corporations with limited relationships and sensitivity to the communities in which they operate accelerated the development of formal ethics regimes.[48]
Relational ethics are related to an ethics of care.[49]:62–63 They are used in qualitative research, especially ethnography and autoethnography. Researchers who employ relational ethics value and respect the connection between themselves and the people they study, and "between researchers and the communities in which they live and work" (Ellis, 2007, p. 4).[50] Relational ethics also help researchers understand difficult issues such as conducting research on intimate others that have died and developing friendships with their participants.[51][52] Relational ethics in close personal relationships form a central concept of contextual therapy.
In Moral Machines: Teaching Robots Right from Wrong, Wendell Wallach and Colin Allen conclude that issues in machine ethics will likely drive advancement in understanding of human ethics by forcing us to address gaps in modern normative theory and by providing a platform for experimental investigation.[53] The effort to actually program a machine or artificial agent to behave as though instilled with a sense of ethics requires new specificity in our normative theories, especially regarding aspects customarily considered common-sense. For example, machines, unlike humans, can support a wide selection of learning algorithms, and controversy has arisen over the relative ethical merits of these options. This may reopen classic debates of normative ethics framed in new (highly technical) terms.
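As a rough, hypothetical illustration of the kind of specificity such programming demands (this is not a scheme proposed by Wallach and Allen), an artificial agent might combine a hard deontological constraint with a consequentialist tie-breaker. Every action name, welfare estimate, and constraint below is an assumption invented for the sketch.

```python
# Minimal sketch of an "artificial moral agent"; all data are hypothetical.
from dataclasses import dataclass
from typing import List


@dataclass
class Action:
    name: str
    expected_welfare: float  # assumed consequentialist estimate
    violates_duty: bool      # assumed deontological flag (e.g. deception)


def choose(actions: List[Action]) -> Action:
    """Never select a duty-violating action; among the permitted actions,
    maximize expected welfare. Writing down even this crude rule forces the
    normative trade-offs to be made explicit."""
    permitted = [a for a in actions if not a.violates_duty]
    if not permitted:
        raise ValueError("no permissible action available")
    return max(permitted, key=lambda a: a.expected_welfare)


candidates = [
    Action("deceive_user", expected_welfare=0.9, violates_duty=True),
    Action("answer_honestly", expected_welfare=0.7, violates_duty=False),
]
print(choose(candidates).name)  # answer_honestly
```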
Military ethics are concerned with questions regarding the application of force and the ethos of the soldier and are often understood as applied professional ethics.[54] Just war theory is generally seen to set the background terms of military ethics. However, individual countries and traditions have different fields of attention.[55]
Military ethics involves multiple subareas.
Political ethics (also known as political morality or public ethics) is the practice of making moral judgements about political action and political agents.[56]
Public sector ethics is a set of principles that guide public officials in their service to their constituents, including their decision-making on behalf of their constituents. Fundamental to the concept of public sector ethics is the notion that decisions and actions are based on what best serves the public's interests, as opposed to the official's personal interests (including financial interests) or self-serving political interests.[57]
Publication ethics is the set of principles that guide the writing and publishing process for all professional publications. In order to follow these principles, authors should verify that the publication does not contain plagiarism or publication bias.[58] As a way to avoid misconduct in research, these principles can also be applied to experiments that are referenced or analyzed in publications, by ensuring the data is recorded honestly and accurately.[59]
Plagiarism is the failure to give credit to another author's work or ideas when they are used in a publication.[60] It is the obligation of the editor of the journal to ensure the article does not contain any plagiarism before it is published.[61] If an article that has already been published is proven to contain plagiarism, the editor of the journal can have it retracted.[62]
Publication bias occurs when the publication is one-sided or "prejudiced against results".[63] In best practice, an author should try to include information from all parties involved in, or affected by, the topic. If an author is prejudiced against certain results, then it can "lead to erroneous conclusions being drawn".[64]
Misconduct in research can occur when information from an experiment is falsely recorded or altered.[65] Falsely recorded information occurs when the researcher "fakes" information or data, which was not used when conducting the actual experiment.[65] By faking the data, the researcher can alter the results from the experiment to better fit the hypothesis they originally predicted. When conducting medical research, it is important to honor the healthcare rights of a patient by protecting their anonymity in the publication.[58]
Moral psychology is a field of study that began as an issue in philosophy and that is now properly considered part of the discipline of psychology. Some use the term "moral psychology" relatively narrowly to refer to the study of moral development.[66] However, others tend to use the term more broadly to include any topics at the intersection of ethics and psychology (and philosophy of mind).[67] Such topics are ones that involve the mind and are relevant to moral issues. Some of the main topics of the field are moral responsibility, moral development, moral character (especially as related to virtue ethics), altruism, psychological egoism, moral luck, and moral disagreement.[68]
Evolutionary ethics concerns approaches to ethics (morality) based on the role of evolution in shaping human psychology and behavior. Such approaches may be based in scientific fields such as evolutionary psychology or sociobiology, with a focus on understanding and explaining observed ethical preferences and choices.[69]
Descriptive ethics is on the less philosophical end of the spectrum, since it seeks to gather particular information about how people live and draw general conclusions based on observed patterns. Abstract and theoretical questions that are more clearly philosophical, such as "Is ethical knowledge possible?", are not central to descriptive ethics. Descriptive ethics offers a value-free approach to ethics, which defines it as a social science rather than a humanity. Its examination of ethics does not start with a preconceived theory, but rather investigates observations of actual choices made by moral agents in practice. Some philosophers rely on descriptive ethics and choices made and unchallenged by a society or culture to derive categories, which typically vary by context. This can lead to situational ethics and situated ethics. These philosophers often view aesthetics, etiquette, and arbitration as more fundamental, percolating "bottom up" to imply the existence of, rather than explicitly prescribe, theories of value or of conduct. The study of descriptive ethics may include examinations of ethical codes, informal theories of etiquette, practices in law and arbitration, and the choices that moral agents actually make in practice.
"he advocated a form of state consequentialism, which sought to maximize three basic goods: the wealth, order, and population of the state
The goods that serve as criteria of morality are collective or public, in contrast, for instance, to individual happiness or well-being