Consequentialism is the ethical view that what makes an action right or wrong is its consequences. One version of consequentialism, developed by Jeremy Bentham and J.S. Mill, is utilitarianism. Utilitarians reduce ethics to almost a kind of math, where every action is evaluated in terms of the net amount of pleasure or pain it produces. We can imagine a sort of formula, where utility = pleasure - pain. The goal of the utilitarian is to maximize utility. It is important to note that the pleasure and pain of every individual are taken into account equally; I cannot place my own utility, or even the utility of my loved ones, above that of a perfect stranger.
For example, say that I’m driving my car when I see a woman in the road. To my horror, I discover that my brakes are not working properly. I have the choice either to swerve into a field of tall grass on my right or to continue on my current path, hitting the woman. If I hit the woman, it will cause her a great deal of pain and may even end her life. If I swerve into the field, I will be able to coast to a stop. My car may sustain a bit of damage, I may be jostled around, and I may be late to my next appointment. But none of those things is enough to justify my not swerving into the field and saving the woman. The net utility if I swerve is much higher than the net utility if I do not. It seems that the right thing to do is to veer right.
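The utilitarian calculus behind this example can be sketched in a few lines of code. The specific utility numbers below are hypothetical, chosen only to illustrate how the pleasure-minus-pain bookkeeping works across everyone affected:

```python
# A minimal sketch of the utilitarian calculus for the braking example.
# All (pleasure, pain) values are hypothetical, for illustration only.

def net_utility(outcomes):
    """Sum pleasure minus pain over every affected individual."""
    return sum(pleasure - pain for pleasure, pain in outcomes)

# Option 1: continue straight and hit the woman.
hit_woman = [
    (0, 100),  # the woman: severe injury, possibly death
    (5, 0),    # me: arrive on time, car undamaged
]

# Option 2: swerve into the field of tall grass.
swerve = [
    (0, 0),    # the woman: unharmed
    (0, 10),   # me: jostled, minor car damage, late to my appointment
]

print(net_utility(hit_woman))  # -95
print(net_utility(swerve))     # -10: swerving maximizes utility
```

Note that my own (pleasure, pain) pair carries exactly the same weight as the woman's, reflecting the requirement that every individual counts equally.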
Although it gets the right answer in cases like these, there are several objections to utilitarianism and other consequentialist theories, which brings me to the comic. Utilitarianism demands that we take the utility of every single person into account. The problem is that there is no restriction on how far into the future we must look when making ethical decisions. Suppose that I swerve my car to the right in the example above, but, unbeknownst to me, a child was lying in the tall grass, and I hit and kill her. As it turns out, had this child lived, she would have become a Nobel Prize-winning scientist for discovering an economical alternative to fossil fuels. She would have had two children, who would have gone on to cure cancer and solve world hunger. Her grandchildren and great-grandchildren would have been similarly gifted, and so on for generations. If you like, add to the example that the woman I saved by swerving to the right turned out to be a mass murderer. Now, by utilitarian standards, it would seem that my action was massively wrong. By swerving to the right to save the woman, I caused immense amounts of pain and prevented untold amounts of pleasure. But we do not want to say that my action was wrong; after all, I did what I thought was best, given the information I had at the time. This is called the Distant Future Problem.
For another example, consider Henry III, who had seven children in the 11th century A.D. It is not unreasonable to think that some of his descendants are still around today. We cannot possibly say for sure who falls into his bloodline: Shakespeare, Hitler, J.K. Rowling, myself… who knows, really? It seems utilitarians must be willing to say that Henry III ought to have considered the net utility produced by all those people and all their offspring, on into the distant future. This seems absurd, since we would not want to call Henry III’s actions wrong simply by virtue of what his great-great-great-great-grandchildren did.
For more on the distant future problem,