Sunday, April 21, 2013

Act and Rule Utilitarianism

**This is from guest blogger Aaron.**


In his article, “Extreme and Restricted Utilitarianism,” J.J.C. Smart argues for what he calls extreme utilitarianism. Utilitarianism in general, according to Smart, holds that an action is made good by its consequences, and extreme utilitarianism holds that each individual action should be assessed on its own basis. Commonsense rules of morality will generally apply, but they are to be treated as defeasible rules of thumb: if in a certain instance it turns out that it would be better to break some rule, the rule should be broken. Restricted utilitarianism, on the other hand, holds that the rules are strict, not mere rules of thumb; they must always be upheld. What makes this view utilitarian is that the rules themselves are to be judged according to the consequences that would follow from adhering to them (Shafer-Landau, Ethical Theory, 475-76).
In both cases, the utilitarian view needs a criterion by which to assess value so that practical judgments can be made. While this aspect of utilitarianism is not the main focus of Smart’s paper, he mentions in the second-to-last paragraph that “human happiness and misery…[should be] the objects of our pro-attitudes and anti-attitudes” (480). He then asserts that he views ethics as a subset of answers to the question, “what actions are rational?” Presumably, the answer is supposed to be that the rational actions, in matters of ethics, are those that most promote human happiness. I will give two reasons why I find this account unconvincing.

The first reason to think that goodness is not the maximization of human happiness is that one can sometimes feel good for having done the right thing even though one has not maximized overall human happiness. Suppose, for instance, that there is a very depressed person attempting suicide, and suppose also that I simply cannot stand being around him. Furthermore, due to some shared social circles, if this person continues to live, I must continue to interact with him. Thus, we may suppose that if this person dies, overall human happiness will be maximized, even if we factor in any feelings of guilt I may have afterward for letting him die; and we may suppose that I am aware of all this. Given the circumstances, I cannot escape the feeling that if I were to save the depressed person, I would, to some extent, feel good about what I had done. However, this cannot be because human happiness will be maximized, since I already know it will not be. Rather, the good feeling must come from knowing that I have done the right thing. If you find this case convincing, then you must agree either that you and I are hardwired to be immoral (since, if morality is determined by human happiness, the morally good thing to do is to let the depressed person commit suicide), or that utilitarianism is wrong.

The second reason I reject Smart’s characterization of goodness as human happiness, and of ethical actions as those that are most rational, is that it is difficult to see what answer could be given to the question, “why should happiness be prioritized?” Sure, the utilitarian can say that people like being happy, but what does that prove? Action, it seems, requires a goal before it can be rational. Thus, we should ask why human happiness is the right goal for ethical action. Why not something else? Why shouldn’t we say, for instance, that human beings are sufficiently valuable to be worth making sacrifices for? Consider a starving child near death in a third-world country, and suppose you go to that country to aid in relief work. Suppose you are then faced with the choice either to help the child or not. She is close to death, and you know that if you save her she will lead a difficult life. Moreover, taking the time to save her now will be costly in terms of energy and will bring you a great deal of discomfort both now and in the future: now, because you must work to help her, and later, because you will worry over her well-being. Thus, you have every reason to believe that saving the child will not increase human happiness. Let us even suppose this assessment is correct! Should you leave the child to die of starvation, or even, to cut short her unhappiness, should you kill her (assuming the psychological damage you will suffer as a result is sufficiently small)? Surely, to borrow Smart’s term, this would be a monstrous act. It appears, therefore, that what is rational given the goal of maximizing human happiness may not be rational given the goal of avoiding monstrous acts.
 
Thus, it seems to me that extreme utilitarianism fails on at least two counts. The first is that, at least sometimes, it is the fact that one has done something good that causes happiness, not happiness that retroactively makes what one has done good. The second is that attempting to maximize human happiness allows one to justify truly monstrous actions.
