Last week I graded papers from my Philosophy and Public Issues class. Many students attacked Utilitarianism, the moral theory set out by philosophers like Jeremy Bentham and John Stuart Mill. You can check Mill's seminal work, Utilitarianism, here. The theory is that what one ought to do is determined by the amount of happiness or pleasure that one can bring about. More precisely, the right action is the one that maximizes total aggregate happiness.
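For readers who like things in symbols, one standard way to put the maximizing principle (my notation, not Bentham's or Mill's own) is: the right action a* satisfies

a^{*} = \arg\max_{a \in A} \sum_{i=1}^{n} u_i(a)

where A is the set of actions open to the agent and u_i(a) is the happiness (net of unhappiness) that action a produces for person i. Nothing in what follows hangs on this particular formalization; it's just shorthand for "maximize total aggregate happiness."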
The most common objection from my students had to do with a claimed impossibility of measuring happiness. In a similar vein, some students claimed that we cannot predict whether an action will result in pleasure or pain.
This doesn't strike me as a devastating criticism. It seems to me that the Utilitarian could admit that we may sometimes be "in the dark" about whether a given action will result in making people happy/sad. In addition, she could admit that we often don't know the extent to which our actions will make people happy/sad. As I mentioned in class when this objection was raised, the Utilitarian might simply respond by insisting that we should do the best we can when it comes to calculating/predicting overall happiness. Moreover, it seems to me that we're really not that bad at anticipating consequences. We often know which actions will result in overall positive consequences and which will result in overall negative consequences; and we're fairly good at predicting how much pleasure or pain a given action is likely to produce.
What would be helpful are examples that shed some light on this supposed problem. Unfortunately, the majority of the papers on this topic that I received didn't contain such examples (or, if they did, the examples weren't exactly convincing). Consider this example:
My wedding anniversary is a few months away and (being the obsessive-planner that I am) I'm thinking of a gift or something nice to do for my wife. I can't predict with absolute certainty what will make her most happy. Further, I can't predict what gift will result in the highest total aggregate happiness. But I have a good sense of what she likes and doesn't like; and what sorts of things are apt to produce greater aggregate happiness. I know that taking her on a trip to Vegas would not make her very happy. She'd prefer that we spend that kind of money on something else... indeed, just about anything else. (And Singer's arguments seem to compel me to avoid this option.) I've thought of at least ten things that she'll probably really appreciate and enjoy; and these things would also produce a great deal of happiness in others.
Now my task is to decide which of these should be my gift. Here's where the objection to Utilitarianism seems to rear its head. I have no way of determining the exact amount of happiness each of these options will produce, and so I supposedly have no way of weighing my options. But is this correct? I don't think so. First, I can rule out various possible gifts (e.g., the trip to Vegas). Second, I can predict which of the various options are most likely to result in the most overall happiness. Third, once I have a list of "good options," I can randomly pick from the ones at the top of the list (supposing that they're indistinguishable with respect to likely consequences). This reply strikes me as fairly plausible. It's probably what I'll end up doing. Note that I'm not paralyzed by the various options before me. I do have a clue about what would be best. What I ought to do is do as well as I can in choosing wisely.
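To make the procedure concrete, here is a minimal sketch in Python. The gift options and happiness scores are made up for illustration; nothing in the argument depends on these particular numbers. It mirrors the three steps above: rule out clearly bad options, rank the rest by a rough estimate of overall happiness, and pick randomly among the options tied at the top.

import random

# Hypothetical gift options with rough estimates of the overall happiness
# each would produce (made-up scores on a 0-10 scale, for illustration only).
options = {
    "trip to Vegas": 1,            # ruled out: she'd rather spend the money on almost anything else
    "weekend camping trip": 8,
    "dinner party with close friends": 8,
    "donation to a charity in her name": 7,
    "new kitchen gadget": 4,
}

def choose_gift(options, cutoff=5):
    # Step 1: rule out options whose estimated overall happiness is clearly low.
    viable = {gift: score for gift, score in options.items() if score >= cutoff}
    # Step 2: identify the highest estimate among the remaining options.
    best_score = max(viable.values())
    # Step 3: pick randomly among the options tied at the top, since their
    # likely consequences are (as far as I can tell) indistinguishable.
    top_picks = [gift for gift, score in viable.items() if score == best_score]
    return random.choice(top_picks)

print(choose_gift(options))

The point is only that coarse, defeasible estimates are enough to act on; no exact measurement of happiness is required.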
You might object that the case of an anniversary gift is not a moral decision, but is rather a practical decision. But this is beside the point of the present post. All I'm saying is that we often CAN predict how much happiness will result from a possible action. In fact, I think we are relatively good at it if we are careful and fair-minded. Of course, this is not to say that we can always predict how much happiness will result from an action. We are ignorant of quite a bit. But what we ought to do is try to do the best we can at making such predictions. Further, in cases where the consequences of various possible actions are indistinguishable, one can simply pick randomly between the ones that are most likely to result in the best consequences for all. Finally, it's worth noting that a Utilitarian could say the same sorts of things about cases that involve more obviously moral decisions.
There are a multitude of other objections to Utilitarianism (some of which I find especially troubling). But, as you can tell, I'm not at all persuaded by this one.
I wonder what readers of this blog make of all of this. Am I missing something? Are there examples that make a stronger case for the impossibility of calculating/comparing happiness?
11 comments:
I think your response to this objection is spot on. I also think the same response tells against other, supposedly more sophisticated, objections to utilitarianism which are also to do with our limited capacities to assess outcomes.
For instance, one putative objection is that the repercussions of our acts extend infinitely into the future, and since we can't predict the far future, utilitarian criteria are ineffective. But, as you would no doubt point out, we don't need to be able to see far into the future if we can do our best, and have a reasonable go at predicting what will happen in the near and relevant future.
I wonder if a version of your response could also be developed against the objection that utilitarianism is overly demanding?
Thanks for commenting, Toby.
I do think that the response I gave can be applied to other objections leveled against utilitarianism. I'm not too confident about it handling what might be called the "too demanding" objection. But this depends upon how exactly we unpack the objection. If you mean that utilitarianism is too demanding insofar as it requires me to make very complicated calculations, then the reply seems to do the trick. But there's a different objection having to do with utilitarianism being too demanding in the sense that it requires us to do things that are intuitively asking too much of us, morally speaking. Suppose I have to choose between saving my mother or saving ten other people. The utilitarian would say that I ought to save the ten people and let my mother die. Some object that this verdict demands too much of a person. This is, of course, a different kind of objection in that calculation is not the culprit. Rather, I just don't want to do what the utilitarian is saying I ought to do. This is the kind of "too demanding" objection that I think is not handled by the reply I give in the post.
I don't know how much the utilitarian should fret about the second version of the "too demanding" objection. She might simply say: Tough! It's the right thing to do! Sometimes doing what is moral is hard, unpleasant for you, etc. This might be the right answer for the case I give. But there are other cases, like kidnapping a person to harvest his or her organs, that bleed into issues about justice and fairness. Perhaps I'll need to write a post on these kinds of objections and cases soon; for now, I'll leave this thread for the objection at hand. As we all know, it's easy to pursue interesting tangents.
Thanks again for the thoughtful comment, Toby.
When it comes to predicting outcomes, I agree that we cannot precisely predict the exact amount of happiness an action will produce. But I think each person can look to their own emotions about how they will feel about the consequences in order to gauge how much happiness those consequences will cause. I know that our emotions don't determine what is or isn't moral, but if one is open-minded and looks at how we ourselves and how others will feel about the outcomes, we will be able to determine approximately how much happiness will be caused. Some decisions will be harder to judge because so many people are involved and all of their emotions come into play. Still, I think any reasonable person can determine, to a degree, how much happiness each person involved will get from an action.
Judging someone's happiness is not difficult. Like you said, you can weigh out the things that would either make someone pleased or displeased. For instance, if a cop were to walk up to your window, you know everything that would make the cop angry or happy. When you don't know what will make someone happy, all you can do is your best.
I would say that in most cases a decision can be properly made by taking in all the factors. By taking in all the factors, you can come to a consensus on how to measure happiness in different situations. There are not too many cases where the situation is so extreme that happiness cannot be measured. If we have a way of measuring our own happiness, then we have a way of measuring what makes others happy.
I think people who say that it's hard to measure happiness are thinking about serial killers or other people who have uncommon pleasures. They forget to think about the harm those people are causing. Because no human outweighs another human, there is no way the happiness someone gets could outweigh the pain they cause someone else.
I agree with the blog and with what Roland had to say about the topic. It is not hard to calculate what sort of pleasure or pain would come out of an action. Just by looking at all the contributing factors involved in the situation, one can figure out the proper path to take. It may be hard to know the exact amount of happiness that will occur after the action, but knowing whether the action will result in happiness or sadness is not hard to measure or predict.
Along with what Alyssa said, it is a very hard subject. How can I know how much happiness is going to come from my actions? It also takes a whole lot of thought and effort to figure out all the people involved in your act. Quite frankly, I am a lazy person in the sense that I would rather do something that brought instant gratification to myself or someone else I know. I don't take the time to weigh out the happiness of everybody involved. It is also very hard and time-consuming to think about the long-term effects and the happiness or sorrow that may come from one's actions. As I said, when I make a decision to do something I do not do all of this. And according to utilitarianism, that makes me a morally bad person? There is something very uneasy about that. It also gets really complicated. Say someone is in a gang or mob of some sort, and their mission is to kill a person. Yes, the person they kill and that person's family are going to experience pain, but every member of the mob is going to be happy because their leader would then be happy, and all of the members' families would be happy because they would probably be in the presence of a happier person. So does this mean that someone who is in a mob is morally a better person than me, because they brought about a greater difference of pain and happiness than I did through some small, trivial act I participated in? I see a lot of flaws in utilitarianism because of this. How can we weigh out the exact amount of pain and pleasure that our actions have brought to each and every person?
I feel that happiness is based on perceptions... perceptions have to do with each individual person, so how am I, or anyone for that matter, ever to know what someone will find happiness in, and to what extent?
Jesse... sorry for the slow reply. I agree with your assessment. I did mean the 'too demanding' objection to be about the over-demandingness of a utilitarian morality, i.e. that it seems to require us to be moral saints (http://en.wikipedia.org/wiki/Demandingness_objection). I see what you mean about your response not being directly relevant, but I think it's analogous. The utilitarian bites the bullet in both cases. She says, "Yes, my theory does demand that we make a great deal of calculation, consistently act in a saintly way, try to predict far into the future, etc. But it's not an objection to my theory that we find it very hard to do those things. That only implies that ethics is hard to live up to. Deal with it".
But a new worry occurs to me in that regard. Your response to the objection about predicting outcomes is putting your normative eggs in an empirical basket. If it's true that we can usually see far enough into the future, and assess consequences accurately enough, so that we can choose the right option, then all well and good. But is it true? That is an empirical question. Actually, I suspect that some of the most difficult ethical issues are characterised precisely by the fact that we find it very _difficult_ to discern what will bring about the best consequences. Yet these are surely the cases where we want our normative theory to lend a hand. (In the easy cases, we rarely need to make explicit reference to normative theory.) If utilitarianism demands that we make certain calculations which in practice are very difficult or impossible, that may not be evidence that utilitarianism is false, but it does suggest it's not very useful. And usefulness is surely a criterion by which we want to assess our moral theories.
Toby: Thanks for the follow-up comment.
I think your point is spot on about the empirical question. And it is certainly true that we are in the dark about a great deal--especially about consequences in the very distant future.
I'm not dissatisfied by what utilitarians often say to this point: Just do the best you can. If one accepts that one can only be morally obligated to do what one can do, then we cannot morally require someone to accurately predict the distant future. It thus seems that the objection involving our inability to predict the future fails to pack any real punch.
You say (rightly!) that we want our moral theory to help us out of a moral pickle. You then suggest that if we're not very good at predicting the exact nature of consequences, then utilitarianism is an unsatisfying/unhelpful theory. Perhaps the utilitarian could say that her theory is helpful in so far as it gets us to ask the right kinds of questions and to weigh certain features of possible actions in certain ways. It might not be a simple matter (in many cases) but if one is willing and able to put in the work, then the right answer can be found.
Are alternative theories on much better footing here? There is a similar objection to Aristotelian theories involving a failure to be "action guiding." Kantians seem to also suffer from a kind of difficulty in some cases.
Why not say something like this: Moral issues aren't easy and having a moral theory doesn't always simplify (in the sense of expediting) a decision. Maybe moral theories are useful in the sense that they provide us the tools for getting the right answer. It might take a great deal of work; but if you know where to look, then you're on your way.
I'm not really concerned with this one either. What really troubled me about the utilitarian approach is the fact that the minority would always get the short end of the stick, even if their happiness weighed more than the majority's.
Say that people are in an office, and everyone wanted to go to a ball game instead of working, except for one person who was dedicated to work. That one person could take on those extra shifts, but let's say they have a family at home deserving of their love. The person with the loving family waiting for them would get crapped on the most, and therefore would suffer the most, making them more deserving. This kind of thinking doesn't sit right with me. They should modify the theory to make it more plausible, so that everyone is equal and not just the majority.
I also don't like how Mill's theory doesn't account for the future. Like you were saying, things can change, and change is good. Things can change for the better: my child is happy. If I had aborted back then, for what seemed like good reasons at the time, she wouldn't be here today, happy. Things got better, and I thought about the future and the fact that I couldn't predict it. I'm glad I decided to have my daughter and give birth. =)
She is much happier now, and has tons of people who love her.
If Henri Bergson had anything to add, it might be that your choice ought to be novel: something she's always wanted but never had or done; unless, of course, she hates surprises... but if that were the case, you might have just asked her.