Last week I graded papers from my Philosophy and Public Issues class. Many students attacked Utilitarianism, a moral theory set out by philosophers like Jeremy Bentham and John Stuart Mill. (You can check Mill's seminal work, Utilitarianism, here.) The theory is that what one ought to do is determined by the amount of happiness or pleasure that one can bring about. More precisely, the right action is the one that maximizes total aggregate happiness.
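(A quick formalization, on my own gloss rather than anything in Bentham or Mill: if $A$ is the set of actions available to me and $h_i(a)$ is the happiness that action $a$ produces for person $i$, the theory says the right action is

$$a^{*} = \arg\max_{a \in A} \; \sum_{i} h_i(a),$$

i.e., the action whose happiness, summed over everyone affected, is greatest.)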
The most common objection from my students had to do with a claimed impossibility of measuring happiness. In a similar vein, some students claimed that we cannot predict whether an action will result in pleasure or pain.
This doesn't strike me as a devastating criticism. It seems to me that the Utilitarian could admit that we may sometimes be "in the dark" about whether a given action will result in making people happy/sad. In addition, she could admit that we often don't know the extent to which our actions will make people happy/sad. As I mentioned in class when this objection was raised, the Utilitarian might simply respond by insisting that we should do the best we can when it comes to calculating/predicting overall happiness. In addition, it seems to me that we're really not that bad at anticipating consequences. We often know which actions will result in overall positive consequences and which will result in overall negative consequences; and we're fairly good at predicting how much pleasure or pain a given action is likely to produce.
What would be helpful are examples that shed some light on this supposed problem. Unfortunately, the majority of the papers on this topic that I received didn't contain such examples (or, if they did, the examples weren't exactly convincing). Consider this example:
My wedding anniversary is a few months away and (being the obsessive planner that I am) I'm thinking of a gift or something nice to do for my wife. I can't predict with absolute certainty what will make her most happy. Further, I can't predict which gift will result in the highest total aggregate happiness. But I have a good sense of what she likes and doesn't like, and of what sorts of things are apt to produce greater aggregate happiness. I know that taking her on a trip to Vegas would not make her very happy. She'd prefer that we spend that kind of money on something else... indeed, just about anything else. (And Singer's arguments seem to compel me to avoid this option.) I've thought of at least ten things that she'll probably really appreciate and enjoy; and these things would also produce a great deal of happiness in others.
Now my task is to decide which of these should be my gift. Here's where the objection to Utilitarianism seems to rear its head. I have no way of determining the exact amount of happiness each of these options will produce, and so I supposedly have no way of weighing my options. But is this correct? I don't think so. First, I can rule out various possible gifts (e.g., the trip to Vegas). Second, I can predict which of the various options are most likely to result in the most overall happiness. Third, once I have a list of "good options," I can randomly pick from the ones at the top of the list (supposing that they're indistinguishable with respect to likely consequences). This reply strikes me as fairly plausible; it's probably what I'll end up doing. Note that I'm not paralyzed by the various options before me. I do have a clue about what would be best. What I ought to do is choose as wisely as I can.
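If it helps to see the structure of that reply laid bare, here is a minimal sketch in Python. The gift names, the scores, and the `tolerance` cutoff are all invented for illustration; nothing here pretends to be the utilitarian calculus itself. The three steps match the ones above: rule out the clear losers, rank the rest by estimated happiness, and pick randomly among the options whose estimates are effectively tied at the top.

```python
import random

def choose_gift(estimates, ruled_out=(), tolerance=0.5):
    """Pick a gift by the rough procedure described above: drop
    ruled-out options, rank the rest by estimated happiness, and
    choose randomly among those effectively tied for the top."""
    # Step 1: rule out options we already know would go badly (e.g., Vegas).
    candidates = {gift: score for gift, score in estimates.items()
                  if gift not in ruled_out}
    if not candidates:
        raise ValueError("every option was ruled out")
    # Step 2: find the best estimated happiness among what remains.
    best = max(candidates.values())
    # Step 3: treat options within `tolerance` of the best as
    # indistinguishable, and pick randomly among them.
    top_tier = [gift for gift, score in candidates.items()
                if best - score <= tolerance]
    return random.choice(top_tier)

# Hypothetical estimates of overall happiness produced (arbitrary units).
estimates = {
    "trip to Vegas": 2.0,
    "weekend hiking trip": 8.5,
    "dinner and a show": 8.2,
    "new camera": 6.0,
}
print(choose_gift(estimates, ruled_out=("trip to Vegas",)))
```

The one interesting choice here is the tolerance margin: it encodes the admission that estimates this rough can only separate options up to a point, which is exactly why a random pick among the leaders is a defensible final step rather than a failure of the theory.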
You might object that the case of an anniversary gift is not a moral decision but a practical one. That, however, is beside the point of the present post. All I'm saying is that we often CAN predict how much happiness will result from a possible action. In fact, I think we are relatively good at it if we are careful and fair-minded. Of course, this is not to say that we can always predict how much happiness will result from an action; we are ignorant of quite a bit. But we ought to do the best we can at making such predictions. Further, in cases where the consequences of various possible actions are indistinguishable, one can simply pick randomly among the ones most likely to result in the best consequences for all. Finally, it's worth noting that a Utilitarian could say the same sorts of things about cases that involve more obviously moral decisions.
There are a multitude of other objections to Utilitarianism (some of which I find especially troubling). But, as you can tell, I'm not at all persuaded by this one.
I wonder what readers of this blog make of all of this. Am I missing something? Are there examples that make a stronger case for the impossibility of calculating/comparing happiness?