We talked about Utilitarianism in class yesterday and I thought it'd be worth continuing the discussion of one of the main objections leveled against it. Utilitarianism is a moral theory set out by Jeremy Bentham and John Stuart Mill. You can check Mill's seminal work, Utilitarianism, here. The theory is that what one ought to do is determined by the amount of happiness or pleasure that one can bring about. More precisely, the right action is the one that maximizes total aggregate happiness.
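A bit more formally (this is just my own gloss, not anything you'll find in Mill, and "happiness" here stands in for happiness net of pain): if $H_i(a)$ is the happiness that action $a$ produces for person $i$, then the right action is

$$a^{*} = \arg\max_{a} \sum_{i} H_i(a),$$

that is, the action whose summed happiness across everyone affected is greatest.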
The most common objection from my students had to do with a claimed impossibility of measuring happiness. In a similar vein, some students claimed that we cannot predict how much happiness or pain an action will produce.
As I said in class, these don't strike me as devastating criticisms. It seems to me that the Utilitarian could admit that we may sometimes be "in the dark" about whether a given action will result in making people happy/sad. In addition, she could admit that we often don't know the extent to which our actions will make people happy/sad. As we discussed, the Utilitarian might simply respond by insisting that we should do the best we can when it comes to calculating/predicting overall happiness. In addition, it seems to me that we're really not that bad at anticipating consequences. We often know which actions will result in overall positive consequences and which will result in overall negative consequences; and we're fairly good at predicting how much pleasure or pain a given action is likely to produce.
What would be helpful are examples that shed some light on this supposed problem.
My wife's birthday is a few months away and (being the obsessive planner that I am) I'm thinking of a gift or something nice to do for her. I can't predict with absolute certainty what will make her most happy. Further, I can't predict what gift will result in the highest total aggregate happiness. But I have a good sense of what she likes and doesn't like, and of what sorts of things are apt to produce greater aggregate happiness. I know that taking her on a trip to Vegas would not make her very happy. She'd prefer that we spend that kind of money on something else... indeed, just about anything else. I've thought of at least ten things that she'll probably really appreciate and which will bring her joy; and these things would also produce a great deal of happiness in others.
Now my task is to decide which of these should be my gift. Here's where the objection to Utilitarianism seems to rear its head. I have no way of determining the exact amount of happiness each of these options will produce, and so I supposedly have no way of weighing my options. But is this correct? Not quite. First, I can rule out various possible gifts (e.g., the trip to Vegas). Second, I can predict which of the various options are most likely to result in a great deal of overall happiness. Third, once I have a list of "good options," I might randomly pick from the ones at the top of the list (supposing that they're indistinguishable with respect to likely consequences). This reply strikes me as fairly plausible; it's probably what I'll end up doing (see the little sketch below). Note that I'm not paralyzed by the various options before me and, more importantly, if I follow this procedure, what I get for her birthday will very probably be a good gift. So I do have a clue about what would be best. What I ought to do is do as well as I can in choosing the gift that is most likely to produce the best consequences.
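For what it's worth, the procedure can be sketched in a few lines of code. The gift options and the numbers below are entirely made up, and no Utilitarian thinks we literally have such scores; the point is only that rough, comparative estimates are enough to avoid paralysis:

```python
import random

# Hypothetical options with rough expected-happiness estimates (0-10).
# The scores are guesses, not measurements; that's the whole point.
options = {
    "trip to Vegas": 1,        # she'd rather we spend the money on anything else
    "weekend hiking trip": 8,
    "dinner with old friends": 8,
    "new camera": 7,
    "concert tickets": 6,
}

RULE_OUT_THRESHOLD = 3  # step 1: discard options we already know would go badly

# Step 1: rule out the clearly bad options.
viable = {gift: score for gift, score in options.items() if score > RULE_OUT_THRESHOLD}

# Step 2: rank what's left by rough expected happiness.
best_score = max(viable.values())

# Step 3: among the options tied at the top (indistinguishable by likely
# consequences), just pick one at random.
top_options = [gift for gift, score in viable.items() if score == best_score]
choice = random.choice(top_options)

print(f"Top options: {top_options}")
print(f"Chosen gift: {choice}")
```

Nothing here requires exact measurement of happiness; it only requires that I can make coarse comparisons and a few confident exclusions.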
You might object that the case of a birthday gift is not a moral decision, but is rather a practical decision. But this is beside the point of the present post. All I'm saying is that we are often relatively good at predicting overall consequences if we are careful and fair-minded. Of course, this is not to say that we can always predict how much happiness will result from an action or that our predictions will necessarily be correct. We are ignorant of quite a bit. But, so the line goes, what we ought to do is try to do the best we can at making such predictions. Finally, it's worth stressing that a Utilitarian could say the same sorts of things about cases that involve more obviously moral decisions.
There are a multitude of other objections to Utilitarianism (some of which I find especially troubling). But, as you can tell, I'm not at all persuaded by this one. I wonder what readers of this blog make of all of this. Am I missing something? Are there examples that make a stronger case for the impossibility of calculating/comparing happiness?
1 comment:
To continue the discussion of buying gifts: from a Utilitarian perspective, the intention of buying a gift to make your wife happy is a good thing, and you can reasonably expect that she will be very happy and very thankful because you know her preferences.
One objection is that you still don't know what her reaction would be if you actually took her on the trip to Vegas. You know she would not be happy about Vegas, but why wouldn't she be happy? Is she worried about the money? About the strip clubs? About the gambling? Does she have bad memories from a previous visit? There must be something that makes her reject the trip, and you just need to find out what it is. If she doesn't like gambling, you simply stay away from the slot machines. If she has bad memories, you take her there and make better ones. Besides the gambling and the clubbing, there are also very beautiful sights, such as Seven Magic Mountains, Antelope Canyon, and the Hoover Dam. What I am saying is that you never know how things would turn out if you don't take her to Vegas, and the consequences might be much better than a gift.
Also, a concrete gift is something she can predict, but a trip to Vegas is something she would never guess, and the surprise may produce more happiness.