In front of you sits a button. If you press this button, an innocent person will be subjected to the worst possible forms of torture constantly for the next year. (There will be no “resistance” built up over time – the torture will be just as bad on day 364 as it was on day 1.) If you don’t press it, N people will receive a slight pin-prick in their arm, just sharp enough to be noticeably unpleasant.
Clearly if N is something small, like, say, 10, you should choose the pin-pricks rather than the torture. But what about for very very large N? Is there any N so large that you’d switch to the torture option? And if so, what is it?
I’m going to take it as axiomatic that no value of N is high enough to make the year-long torture worth it. Even if you were faced with the choice between year-long torture for one person and a momentary pin-prick for a literal infinity of people, the right choice would be the pin-pricks.
I feel safe in taking this as axiomatic for three main reasons. Firstly, it’s the obvious answer for the average person who hasn’t spent much time thinking about normative ethics. (This is just my experience from posing the problem to laypeople.)
Secondly, even among those who choose the torture, most do so reluctantly, citing an allegiance to some particular moral framework. Many of them say that it’s technically what they should do, but that in reality they probably wouldn’t do it, or would at least be extremely hesitant to.
Thirdly, my own moral intuitions deliver a clear and unambiguous judgement on this case. I would 100% choose an infinity of people having the pin-pricks, and wouldn’t feel the tiniest bit of guilt about it. My intuition here is at the same level of obviousness as something like “Causing unhappiness is worse than causing happiness” or “Satisfying somebody’s preferences is a moral good, so long as those preferences don’t harm anybody else.”
With all that said, there are some major problems with this view. If you already know what these problems are, then the rest of this post will probably not be very interesting to you. But if you feel convinced that the torture option is unambiguously worse, and don’t see what the problem could be with saying that, then read on!
First of all, pain is on a spectrum. That spectrum isn’t continuous, but there is nonetheless a progression of pains from “slight pin-prick” to “year-long torture” in which each step is only barely noticeably worse than the previous one.
Second of all, at each step along this progression there’s a trade-off to be made. For instance, a pin-prick for one person is better than a very slightly worse pin-prick for one person. But a pin-prick for each of one million people is worse than a slightly more painful pin-prick for one person. So somewhere in between there’s a number of people N at which your choice flips from “pin-pricks for N people” to “slightly worse pin-prick for one person.”
Let’s formalize this a little bit. We’ll call our progression of pains p_1, p_2, p_3, …, p_n, where p_1 is the pain of a slight pin-prick and p_n is the pain of year-long torture. And we’ll use the symbol < to mean “is less bad than” (and > to mean “is worse than”). What we’ve just said is that for each k < n, (p_k for one person) < (p_(k+1) for one person) AND (p_k for one million people) > (p_(k+1) for one person). (The choice of one million really doesn’t matter here; all we need is that there’s some number of people at which the slighter pain becomes worse. One million is probably high enough to do the job.)
Now, if (p_k for N) > (p_(k+1) for 1), then surely (p_k for 2N) > (p_(k+1) for 2). The second is just the first one, but two times over! If the tradeoff was worth it the first time, then the exact same tradeoff should be worth it the second time. But now what this gives us is the following:
p_1 for 1,000,000^(n-1) > p_2 for 1,000,000^(n-2) > … > p_(n-1) for 1,000,000 > p_n for 1
In other words, if we are willing to trade off at each step along the progression, then we are forced, on pain of inconsistency, to trade off between the slight pin-prick and the year-long torture! I.e. there has to be some number (namely 1,000,000^(n-1)) at which we choose the torture for one over the pin-pricks for that many people.
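To make the arithmetic concrete, here’s a small Python sketch of the chain. The number of pain levels and the million-to-one ratio are just illustrative assumptions on my part, not claims about real pain scales; the point is only to show how repeatedly scaling the one-step tradeoffs produces one long chain.

```python
# Toy illustration of the chain argument (illustrative numbers only).
N_LEVELS = 5        # assume 5 just-noticeably-different pains: p_1 (pin-prick) ... p_5 (torture)
RATIO = 1_000_000   # "p_k for RATIO people is worse than p_(k+1) for 1 person"

# Each one-step tradeoff, scaled up by RATIO**(N_LEVELS - k - 1), becomes one link:
for k in range(1, N_LEVELS):
    left = RATIO ** (N_LEVELS - k)        # people suffering the milder pain p_k
    right = RATIO ** (N_LEVELS - k - 1)   # people suffering the sharper pain p_(k+1)
    print(f"p_{k} for {left:,} people  >  p_{k+1} for {right:,} people")

# Transitivity chains the links together:
print(f"So: p_1 for {RATIO ** (N_LEVELS - 1):,} people  >  p_{N_LEVELS} for 1 person")
```

With five levels the final multiplier is already 1,000,000^4 = 10^24; in general it’s 1,000,000^(n-1).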
Writing this all out:
- There’s a progression of pains p_1, p_2, …, p_(n-1), p_n (with p_1 a momentary pin-prick and p_n year-long torture) such that for each k < n, (p_k for one million people) > (p_(k+1) for one person).
- If (p for a) > (q for b), then (p for k⋅a) > (q for k⋅b).
- The relation > is transitive.
- Therefore, (p_1 for 1,000,000^(n-1)) > (p_n for 1).
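For anyone who wants the derivation written out symbolically, here is one way to do it in LaTeX. This just restates the three premises above (> is the post’s “is worse than”); the only thing made explicit is the multiplier at which premise 2 gets applied.

```latex
% Premise 1, for each k < n:  (p_k for 10^6 people) > (p_{k+1} for 1 person).
% Premise 2, applied with multiplier 10^{6(n-k-1)}, turns each instance into a link:
\[
  \bigl(p_k \text{ for } 10^{6(n-k)}\bigr) \;>\; \bigl(p_{k+1} \text{ for } 10^{6(n-k-1)}\bigr),
  \qquad k = 1, \dots, n-1.
\]
% Premise 3 (transitivity) chains the n-1 links end to end:
\[
  \bigl(p_1 \text{ for } 10^{6(n-1)}\bigr) \;>\; \bigl(p_n \text{ for } 1\bigr),
  \qquad\text{where } 10^{6(n-1)} = 1{,}000{,}000^{\,n-1}.
\]
% (\text requires the amsmath package.)
```

Each link is just an instance of premise 1 multiplied through by premise 2; premise 3 does the rest.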
If you accept the three premises, then you must accept the conclusion. And the problem is that all three premises seem very hard to deny!
For premise 1: Choosing to inflict a pain on a million people rather than a slightly worse pain on one person seems just as clearly immoral as choosing to inflict torture on one person rather than pin-pricks on an arbitrarily large number of people. Or, to make the point stronger: no sane moral system would say that inflicting a pain on one person is worse than inflicting a very slightly milder pain on an arbitrarily large number of people. We have to allow some tradeoff between barely different pains.
Premise 2: If you are offered a choice between two options and choose the first, and then, before you gain any more information, you are offered the exact same choice, with no connection between the consequences of the first choice and those of the second, then it seems to me that you’re bound by consistency to choose the first once more. And if you’re offered that choice k times, then you should take the first every time.
(Let me anticipate a potential “counter-example” to this. Suppose you have the choice to either get a million dollars for yourself or for charity. Then you are given that same choice a second time. It’s surely not inconsistent to choose the million for yourself the first time and the million for charity the second time. I agree, it’s not inconsistent! But this example is different from what we’re talking about in Premise 2, because the choice you make the second time is not the same as the choice you made the first time. Why? Because a million dollars to a millionaire is not the same as a million dollars to you or me. In other words, the goodness/badness of the consequences of the second choice is dependent on the first choice. In our torture/pin-prick example, this is not the case; the consequences of the second choice are enacted on an entirely different group of people, and are therefore independent of the first choice.)
Premise 3: Maybe this seems like the most abstract of the three premises, and hence potentially the easiest to deny. But the problem with denying premise 3 is that it introduces behavioral inconsistency. If you think that moral badness is not transitive, then you think there are A, B, and C such that you’d choose A over B and B over C, but not A over C. But if you choose A over B and B over C, then you have in effect chosen A over C, while denying that you would do so. In other words, a moral system that denies transitivity of badness cannot be a consistent guide to action, as it will tell you not to choose A over C, while also telling you to take actions that are exactly equivalent to choosing A over C.
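Here’s a toy sketch of that behavioral point. The preference table below is made up by me purely for illustration: an agent with intransitive pairwise preferences, taking each pairwise choice as it comes, ends up having in effect chosen A over C even though it would refuse that choice if offered directly.

```python
# A made-up intransitive preference table: A over B, B over C, but C over A.
prefers = {
    ("A", "B"): "A",
    ("B", "C"): "B",
    ("A", "C"): "C",   # denies that A should be chosen over C
}

def choose(x, y):
    """Return whichever of x and y the agent picks in a pairwise choice."""
    return prefers.get((x, y)) or prefers.get((y, x))

holding = "C"                 # start out with C
for offer in ["B", "A"]:      # offered B, then A, one pairwise choice at a time
    holding = choose(offer, holding)
    print(f"Offered {offer}: now holding {holding}")

print(f"Net effect: traded C away and ended up with {holding}")
print("Direct A-vs-C choice would have been:", choose("A", "C"))
```

The two pairwise choices are exactly equivalent to choosing A over C, which the table says the agent would never do; that’s the inconsistency.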
And so we’re left with the conclusion that pin-pricks for 1,000,000^(n-1) people are worse than torture for one person for a year.
Okay, but what’s the takeaway of all this? The takeaway is that no consistent moral system can agree with all of the following judgements:
- Barely distinguishable pains are morally tradeoffable.
- There’s a progression of pains from “momentary pin-prick” to “torture for a year”, where each step is just barely distinguishably worse than the last.
- Torturing one person for a year is worse than inflicting momentary pin-pricks on any number of people.
You must either reject one of these, or accept an inconsistent morality.
This seems like a big problem for anybody who’s really trying to take morality seriously. The trilemma tells us, in essence, that there is no perfect, consistent moral framework. No matter how long you reflect on morality and how hard you work at figuring out which moral principles you should endorse, you will always have to pick at least one of these three statements to reject. And whichever one you reject, you’ll end up with a moral theory that is unacceptable (i.e. a moral theory which commits you to immoral courses of action).
I mean, I’m not sure how much of a sacrifice it is to accept a moral system that delivers unintuitive results only when there are so many people involved in the problem that they can’t physically fit into the observable universe.
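To put a rough number on that: the step count below is purely my assumption (the post never fixes n), and the atom count is the usual order-of-magnitude estimate, but even conservative guesses make the required head count wildly unphysical.

```python
# Rough scale check with assumed numbers, just for intuition.
steps = 1_000                      # assumed number of just-noticeable pain levels
ratio = 1_000_000                  # the million-to-one tradeoff used in the argument
multiplier = ratio ** (steps - 1)  # 1,000,000^999 = 10^5994 people

atoms_in_observable_universe = 10 ** 80   # common order-of-magnitude estimate

print(f"required head count ~ 10^{6 * (steps - 1)}")
print("more people than atoms in the observable universe:",
      multiplier > atoms_in_observable_universe)
```

With those numbers the conclusion only ever bites at around 10^5994 people, against roughly 10^80 atoms in the observable universe.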