Why I Don’t Pull The Lever: A Consequentialist Case for Moral Absolutes

It’s a classic thought problem in philosophy: An out-of-control trolley is headed down a track where it will hit and kill five people. You are in control of a lever which, if pulled, will divert the trolley to a track where it will kill only one person. Should you pull the lever?

The overwhelming majority of people answer yes. This seems straightforward enough. Of course killing is wrong, but killing through inaction is still killing, and surely it is better to kill (or let die) one rather than five. It’s the variations on this problem that provoke controversy.

Let’s say you’re in control of a hospital. There is a certain medication, curitol, that is very difficult to obtain, and you only have 100 units of it. You have 6 patients with reallysick-itis, all of whom will soon die if they do not receive treatment. One patient has a more advanced case than the others: This patient can be saved, but only with a mega-dose of 100 units of curitol. The other 5 patients require only a standard 20 unit dose each to fully recover. So you have enough curitol to either cure the one or the five, but not both. Do you split the curitol among the five, and allow the patient with the advanced case to die?

This differs from the trolley problem in that it deals with scarcity, one of the core concepts of economics. This makes it seem a little more realistic. It also presents a new option: You can “refuse to choose” by withholding the curitol from all six of them and allowing them all to die. We can imagine frameworks that would view this third option as “pure” and therefore better, but this is clearly worse – most people would still agree that the most moral choice is to cure the five.

One last example. Six passengers climb into a lifeboat to escape a sinking ship. They notice water sloshing over the sides, and quickly realize the boat only has capacity for five. If they all stay on board, the boat will sink and they will all drown. Assume no one will voluntarily sacrifice themselves, but (for some reason) they all agree to follow whatever you, a neutral party, recommend. Do you tell them to throw someone overboard?

Still feeling confident about your answers? Now it’s time for part two: In the first two problems, there was only one potential martyr. In the lifeboat question, throwing any person overboard will save the others. Let’s say A is elderly and will only have a few years left anyway. B is a doctor and will be able to help out if anyone gets sick while adrift at sea. C is a convicted murderer, but is physically fit and can help set up camp if they get stranded on an island. D is a smoker and your best friend. E is a single parent and thinks that since D made the choice to take up smoking, that means D is OK with dying early and should not be spared at the expense of people who have made healthier choices. F is a foreigner, with ways different from our own. How do you choose who dies?

Feeling seasick yet? Good. No one should feel at ease with deciding whose lives are and are not worth sparing. But resources are scarce and tough choices have to be made, right? Nonetheless, I think there is a clear right answer:

Take turns swimming.


I didn’t come up with that the first time either. Once you’ve gotten used to the idea of killing-to-save, everything starts to look like a trolley problem. What if instead I had presented the lifeboat question first? What if I’d primed you with the idea that killing is always wrong? My hunch is that you would have been more likely to come up with an answer that involved no killing. Sometimes the question itself is wrong.

But what about the first two examples? What about when there really is a choice to be made between lives? Don’t we need a framework for evaluating those situations?

I claim that this, like “which person do you throw overboard”, is the wrong question to ask. In a real-life version of the trolley problem, there would be countless confounding variables. Maybe the five are more likely to realize the danger in time to move, since they can warn each other. Maybe the trolley’s brakes are only partially broken and it will stop in time, but not if we switch the track. We can theorize all day about how to estimate those parameters and make an educated guess as to the objectively optimal response. But by the time we’ve done that, the situation will probably have already played out.

We need heuristics for when we don’t have the time or the information to make perfect predictions. And even when we have time and information, it turns out humans are really bad at rational decision-making. Even if utilitarianism produces the best outcomes, it is folly to think we can be or create the idealized rational decision-makers required by that framework. I claim that a form of virtue ethics, arrived at through moral absolutist/Kantian/deontological reasoning, is more likely to produce good outcomes in a consequentialist sense.

Humans are creatures of habit. In a pinch, we’re going to go with our gut. I think we should focus on making sure the moral habits we fall back on are good. So in answer to the trolley problem, I say don’t pull the lever. Don’t even consider killing to save as an option. Scream and warn the workers. If they’re tied down, try to untie them. If I’m on the trolley, try to fix the brakes. Even if we fail in a particular scenario, we build the habit of treating people as ends and not means instead of the habit of treating life as disposable. The real world has more lifeboat problems than trolley problems. We must not become so good at choosing between evils that we miss the opportunities to choose good instead.


Note: I did not invent the lifeboat example or the swimming solution, but I don’t remember where I first heard it and I wasn’t able to find it on Google StartPage. If you know who originated it, please let me know and I’ll cite it properly!

3 Comments

  1. Josh

    Really well-written article. As I’m interpreting it, one of your main contributions is that utilitarian thinking can lead us into a sort of thought trap, whereby we make unnecessarily harmful choices (like choosing between lives, when we can actually act in such a way as to preserve the life of everyone who’s involved).

    I think the point you make with the (problem of the) trolley problem is that we sometimes naturally confuse what is true and what it is good to promote. More importantly, even if at times we can distinguish between these (which I find especially doubtful in emergency situations like these, when we’re super reactive and not wholly sensitive to our true convictions and beliefs), others to whom we promote our ideologies may not. And this may sometimes be conducive to wrong action. This fact–if it is a fact–about the interaction between our thoughts and behavior might be a reason then, even on utilitarian terms, to promote not a utilitarian position but rather some kind of virtue ethics or deontological position. In other words, if utilitarianism is true, it might be–somewhat paradoxically–required of us not to promote utilitarianism!

    Taking these points into consideration, and assuming utilitarianism to be true, it’s not wrong that we should pull the lever in the trolley problem (if we really build into it that there’s no possibility of saving everyone), but it may be wrong to say so (or, at least, to say so without qualification). That’s because, as you suggest, doing so can discourage people from engaging in the kind of creative problem-solving that can actually promote the consequences utilitarianism finds to be important when determining how to act. In a word, the point is that the more important ethical question to consider may be which moral framework will promote the best outcome than the question of which ethical theory is correct.

    You might think, however, that we need to answer the ethical theory question before answering the moral framework question–since only the correct ethical theory can tell us which outcome is best. But I think you make a convincing case that we often don’t need to. There are several convincing ethical theories (different forms of consequentialism, deontology, and virtue ethics)–and any one of them, in most practical/realistic situations, will tell us not to pull the proverbial lever, but rather to try to save as many lives as possible.

    As someone who recently took a philosophy class that devoted a whole section to issues raised by the trolley problem, I’m concerned that these ideas were wholly absent. So I very much appreciate your perspective!

    • lavenderhat

      Thanks for the comment! You raise a good point about true vs. good having something to do with what we should do vs. what we should promote, but the implications in your second paragraph sound potentially paternalistic and I’ve been trying to articulate why.

      The trolley problem always-already involves both the abstract idea of the hypothetical scenario it describes, and also the description itself and our relation to it. We cannot conceptualize the Platonic ideal of the trolley problem because our process of conceptualizing necessarily involves our subjective selves. So it seems to me that the point at which we go from “utilitarianism is true” to “promoting” it is not in the saying or teaching or external presentation, but in our own internal engagement with it. In other words, the critique of “promoting” utilitarianism should also apply to the ways in which we “promote” it to ourselves.

      To that end, I’m hesitant to issue or agree with claims like “it’s not wrong that we should pull the lever”. If “saying so” is wrong, then it is wrong to say so even to ourselves. When we ponder these problems, we are not just relating abstract concepts divorced from action, we are also engaged in the action of thinking and thereby strengthening some neural circuits at the expense of others. So to answer the trolley problem, we should not only weigh the ethics of our hypothetical actions in that scenario, but also the ethics of our concrete action choices in the moment (i.e. how to think and how to answer).

      So, we’re not just evaluating the abstract, we’re evaluating the speech/thought act of constructing an answer. The question then becomes: Is the psychological effect of answering the question a particular way more good than bad? In a world filled with trolley problems, straightforwardly answering the question might be worth the potential for immoral conditioning. But true trolley problems are rare if not non-existent. So I guess I’m saying I’m willing to sacrifice a little bit of training for the trolley-problem world in exchange for training in the worlds I’m more likely to encounter.

      I think you summed it up: “the more important ethical question to consider [is] which moral framework will promote the best outcome than the question of which ethical theory is correct.”

      • lavenderhat

        So, I’m saying no not so much to actually pulling the lever, but to the question “is this question worth answering”, which is always implicit in the asking. I’m saying no, the question is not worth answering, and the closest question that is worth answering is the more approximate question of “how should we think about trolley problems”. The “should” in that second question is then no longer just about the hypothetical, but about the subjects engaging with it. And my answer to that is that we should engage with the goal of building habits that promote good outcomes as we move through the world, which requires transforming it from an abstract question into a realistic one, and in the realistic question it would make sense to look for other options.

        Which is not to say I’m condoning actually pulling it either. Both choices are wrong. Both outcomes are wrong. The situation is wrong. Yes, we can ask “which choice is less wrong”, but I don’t just think we shouldn’t answer that, I think we can’t answer that in a way that we could ever deem “correct”. I think this is tangentially related to the Principle of Explosion: When the premise is false, there isn’t a meaningful notion of correct vs. incorrect conclusions from that premise. How much wood could a woodchuck chuck if a woodchuck could chuck wood? If I say the answer is fifteen chuckwads, what are you supposed to do with that? It’s not incorrect in the sense of “there exists a correct answer and that is not it”, and it’s also not correct in the sense that it results in a meaningfully true statement. Is “forty-seven chuckwads” more or less correct than “fifteen chuckwads”?

        Of course, the standard trolley problem seems more meaningful and realistic than the woodchuck problem. And I’m not saying hypotheticals are never useful. Hypotheticals are useful to the extent that our engagement with them helps us do better things in analogous real-world scenarios. Conclusions about hypotheticals can be correct on their own terms (tautologically – if we define “chuckwad” to mean “one fifteenth of the amount of wood a woodchuck would be able to chuck if it could chuck wood”), and they can be correct in the sense that they make accurate predictions about reality. “One should pull the lever” could be tautologically correct if we define our terms and assumptions right, and could just as easily be defined to be incorrect, but that is not interesting. For it to be correct in the non-tautological sense, there would need to exist real-world scenarios where its prediction – “comparing quantities of human life leads to moral outcomes” – is morally correct.

        We can estimate the expected good to come from trolley problems as (extent to which “pull the lever” is non-tautologically true) * (likelihood of encountering real-world problems where killing-to-save is the best option) * (likelihood that we are correctly able to identify those scenarios) * (extent to which we’re able to figure out what real-world option corresponds to “pulling the lever”), and weigh that against the damage done by the speech/thought act. I would estimate those likelihoods as being very low.
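        As a rough formalization of that weighing (the symbols here are my own shorthand for the factors listed above, nothing rigorous):

        ```latex
        % T   = extent to which "pull the lever" is non-tautologically true
        % p_1 = likelihood of encountering a real-world killing-to-save problem
        % p_2 = likelihood of correctly identifying such a scenario
        % p_3 = likelihood of mapping it to the right real-world "lever"
        % D   = damage done by the speech/thought act of answering
        \[
          \mathbb{E}[\text{good}] \approx T \cdot p_1 \cdot p_2 \cdot p_3 \;-\; D
        \]
        % If each p_i is very small, the product term is tiny and D dominates,
        % which is the argument for declining to answer.
        ```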

        Accordingly, my answer to the curitol scenario is not to play. Which is worse: colonialism or capitalism? Nobody who really cares about addressing either is going to give you a binary answer to that question, partly because it’s nonsensical (we can’t really imagine one without the other), but mostly because it’s pointless and diverts attention from more important questions.
