In my view, Utilitarianism is the greatest tragedy to befall mankind in a great many centuries.
Utilitarianism is a bludgeon by which the conniving and unholy are allowed to make hyper-rationalized decisions that disregard people’s autonomy to reach some consequence that has only been guessed at. Utilitarianism is a tool used by people who are purely material in their interests and are indeed full of pride in their decisions. How prideful does one need to be to believe they know what result will cause maximum good, and also to believe they know how to reach this end without error? Nobody can know the consequence of their actions, and furthermore, nobody can know for sure what will do the maximum good for the maximum number of people. While Kant was broadly correct in saying humankind can use its intellect and reason to solve the problem of what is a categorical imperative, humans are absolutely not able to do the complicated ethical arithmetic that Utilitarianism requires, nor is it, in my view, ethical to attempt.
Allow me to clarify in more amiable terms. Utilitarianism presents itself as simple “common sense”: surely any ethical theory should minimize suffering and maximize good in the world. There is an issue with this, however. Not only does it require a definition of both suffering and goodness, which on its own might be soluble, but it also requires omniscience. Omniscience is not achievable, nor should it be sought. In order to maximize the “goodness efficiency” of our actions, we have to be sure that our actions result in goodness. The worst abusers of Utilitarianism are perhaps found in the world of neocolonial foreign policy, where innumerable drone strikes are carried out to eliminate certain enemy figures. The problem, as has emerged as this drone warfare has become more ubiquitous, is that you tend to end up killing a very large proportion of civilians. This has not deterred these powers from executing these strikes. Why not? Because, don’t you know, it’s for the greater good.
This contemporary Utilitarianism is clearly extremely abusable. Even if one country truly is acting in the “greater good”, one can see how easy it would be for some power to lie to its people, or, less insidiously, to simply live in a constant state of ignorance by believing its actions to always be ultimately more good than evil. It is all arbitrary, of course: you never know what would have happened had you not carried out these drone strikes (or whatever other utilitarian-justified action), and in the process you have killed potentially thousands of innocents in order to execute dozens of enemies. Where is the anchor? What is the ratio at which this becomes inappropriate? Nobody knows. Nobody can ever know; it is not within our capabilities as human beings. We cannot ever perform perfectly accurate “ethical arithmetic”, nor can we ever build machines to evaluate it, because with an ethic like Utilitarianism, at least as it is presently practiced, there are no clear or real markers of what is good or bad other than the ham-fisted metric of human life. Some utilitarian theories are all about risk assessment. These are plainly ill-formed: while you might be saving the most lives overall (or whatever metric you prefer), you are acting as if you could see into the far future. Risk assessment can only see so far, and if your only metric is the greatest good, then you need to know that stopping some atrocity now won’t simply lead to a much worse atrocity a millennium from now. And if you attach some time limit on when your actions’ consequences should be evaluated, then we are into truly silly and arbitrary nonsense.
Finally, Utilitarianism makes the ridiculous and dangerous claim that something is moral as long as it benefits the majority of the population. This I see as a purely aristocratic claim, and one which, under the right circumstances, could be used to justify institutions such as certain forms of limited slavery. This is already the case to a certain extent when we look at the results of particularly potent and corporatist forms of Capitalism, whereby the ravages of poverty are visited upon a minority of the population. Is it really ethical to let these people suffer? I would argue no, and I believe this is why many nation-states institute systems of welfare to fulfill a moral responsibility to care for these populations. The exact same case exists in our modern world with racial and ethnic minorities. Modern society finds it appalling when governments explicitly favor one racial group over another, as it should, so why does society still buy into the insidiousness of Utilitarianism?
Thus, I say this: Utilitarianism is for the godless (those with truly no faith in any natural or divine command law) who are too invested in their own existence and institutions to accept the far more sensible and less hypocritical ethic of existentialism. To summarize, they are too materialistic to have faith in a religious imperative, too self-absorbed to follow Kant’s categorical imperatives, and too cowardly to accept Existentialism. It is truly the morality of the delusional and confused. This is an extreme position, I understand, but it is one I will not take a step back from unless proven otherwise in the general case.
Something I have considered often, given its relevance to contemporary technology and culture, is the now famous trolley problem. It seems to be a problem that is perhaps only relevant in a society dominated by utilitarian moral institutions. The question can be briefly restated as this: if a trolley is coming along a track and there are, say, five people tied to this track, but you can divert the trolley to another track with one person tied to it – do you divert the trolley? The utilitarian solution to this problem is simply to pull the lever and save the greater number of lives (and even with this example, I would like to note that this is a gross oversimplification of the sort of, in my opinion, degenerate ethical arithmetic so ubiquitous within Utilitarianism).
I propose a novel solution to this problem. While there are certainly Kantian arguments to be made for not pulling the lever, and certainly some religious arguments for both options, I will attempt to syncretize the two to create a cohesive central theory of ethical responsibility that will be the core of the ethical system defined within this essay. The solution is this: there are situations in which somebody is understood by all parties to be more destined for a certain fate than others. It is why we are implicitly more shocked by civilian deaths than by the deaths of combatants. These combatants are still humans, are they not? And we know they are basically the same as civilians, so why are their deaths seemingly less appalling? I call this dynamic “acceptio fati”, Latin for “acceptance of fate”. I’d like to distinguish this from the Christian (and even pre-Christian) concept of predestination, which ascribes some sort of vague destiny that we follow, or at least holds that our fates are God’s will or accepted by God. In this case I am describing a different sort of fate or predestination that may or may not be associated with divinity but can be plainly understood by the human mind. Soldiers sign up for the military with some understanding that they might be hurt during a conflict. Or, even if they were conscripted, while we can debate the ethics of the conscription itself, there is still a greater expectation of harm coming to the conscripts, held both by the general public and by the conscripts themselves. In the case of the trolley problem, those who are on the track the trolley is already travelling on have a greater expectation of their own fate than the person on the other track. A religious person might even argue that it is clearly God’s will that these people die rather than the other person.
I don’t think this is a particularly compelling argument on its own, but perhaps it illuminates the relationship between acceptio fati and religious perceptions of fate.
I should make it very clear, however, that acceptio fati does not mean one should simply not interfere in the world’s affairs. It applies only to situations in which the circumstances would require some ethical arithmetic or dishonesty. Another nuance of acceptio fati is that it applies only when the imminent fate can be clearly and implicitly seen by all reasonable involved parties. You cannot teach people to have acceptio fati, even if you can indoctrinate an individual to accept some fate. Acceptio fati is always implicitly understood; if it cannot be, then it is not acceptio fati. Furthermore, acceptio fati does not apply to situations of direct malice. For example, living peacefully in a warzone is not acceptio fati for dying as collateral damage in that war. There is no malice that can be accepted by anybody, not even by the perpetrator.
To help illustrate all of this, let us examine the motivations and possible perspectives of every person standing witness to the trolley problem playing out.
Firstly, the primary victims, or the victims who shall remain the victims if there is no interference. These are the more numerous people tied to the track that the trolley is already travelling along before any interference could take place. They are likely aware of their grim situation; they see the trolley coming, and they are feeling one of two things: either they are mentally preparing for their doom, or they are looking for something, anything, to save their lives. Let us say they see the other person on the track and understand that in order for their lives to be saved, the other person would have to die. At this junction, they may simply accept their death. However, they may will it to be so that the trolley controller switches the trolley’s track and ends the life of the secondary victim to save the life of this primary victim. Make no mistake: no matter the primary victim’s motivations, he is willing a murder.
Secondly, the secondary victim is the person on the track which the trolley can be switched to but is not presently on. They make one of three decisions: they understand the complicated circumstances and await their fate, they desperately hope the lever is not touched, or they hope to sacrifice themselves to save the lives of the primary victims. The third is the one situation in which it would be absolutely appropriate to pull the lever. This sacrifice is no different from a wartime sacrifice, albeit carried out by proxy.
Third, the controller is the person who has access to the trolley’s controls. They are the person upon whom the primary moral imperative rests. They are unlikely to have any spiteful thoughts of self-preservation on the matter, so in most cases they are essentially innocent. While generally (keeping in mind the exception discussed in the last paragraph) the correct choice is to not pull the lever and allow acceptio fati to run its course, given that this is not an obvious ethical quandary, their good intention may absolve them – though pulling the lever against the will of the secondary victim is still a strange and cruel sort of murder, and I would expect them to feel a more severe sort of guilt than if they had not pulled the lever. Thus, acceptio fati is naturally evident.
Fourth, the witness is any person who views the situation in its totality, at least as it is visible to a mortal, rather than as it is visible to an omniscient being like God. A large group of witnesses will not be happy with either solution. I believe you will find that the general public, perhaps having been indoctrinated with Utilitarianism or perhaps just too attached to the simplicity of arithmetic, will be more outraged by the controller not pulling the lever. However, I believe, more pressingly, that the general public will find itself increasingly spiritually sick viewing the metaphorical pulling of the lever – especially in repetition. There is something we all intrinsically understand to be dirty about human involvement in predestination or fate (in reality what is dirty is interference in acceptio fati), whether Grace and predestination are real things or not.
Fifth, God will know the intentions and desires of all involved. If we accept that such a God exists – or even if we don’t – then it is indeed interesting to explore the perspective of God or some other omniscient entity. Primarily, a God interested in humankind’s use of free will and knowledge of Good and Evil will be concerned with the victims. It takes very little to drive a person into spite out of self-preservation. Indeed, this is one unfortunately true axiom of Aquinas’ theory of natural law, whereby most people will sometimes favor self-preservation above all other priorities. If our non-consequentialist Christian God sees something truly repulsive in a material trolley problem, it will be in the wills of the victims. However, I would not discount the weight of pulling the lever. I am no trained Christian theologian, but as I have pointed out, there is a great pride in believing oneself to know what is of the most good, and in using this knowledge to interfere in what is imminent or fated. These scenarios may seem unrealistic, but there will indeed come a day when technology like self-driving cars is commonplace and we will hear in the news of real-life situations where computers had to make these trolley decisions. Simply put, I predict we will come to find that refusing to pull the lever will generate more immediate societal outrage, while reading about the lever being pulled, again and again, will slowly wear on people’s souls. It does not feel good to know that you could be summarily executed by a car at any moment, simply to save the lives of people who may be behaving riskily in the streets, or indeed of a passenger in another self-driving car. It is my fear that both our obsession with Utilitarianism and the current moment of hyper-rationalism will ultimately result in companies being forced to choose to be the lever-switchers; this will be a small but deeply dark feature of our technological advancement.
More important than any of this, however, is a concept that is almost universal in religion but “irrational”. This is the idea that the value of human life is infinite. I believe this to be a sort of natural law; it is something that we all understand implicitly in some respects, but it is not remotely the case in a purely rational conceptual space. If you will, this is the inherent understanding that humans have of their own “soul”. It is something we believe we all have, either metaphysically or metaphorically, and it is something that we generally believe to be distinctly detached from our materialistic understanding of self. While the scope of this essay is not quite broad enough to describe and define the idea of the human soul in depth and across cultures, I hope you will accept this assertion until I write such a comprehensive definition. The soul’s value is infinite exactly because it is distinctly the antithesis of our material selves. Without material, there is no such thing as discrete or calculable value. Thus, because human life has this sort of infinite value, you cannot weigh, compare, or mathematize it. This also means that no matter a human’s economic or intellectual value, they are equal in value to even the most “worthless” human (no human is worthless, but I mean to say worthless in a materialist’s terms).
Humans are not tomatoes in a marketplace. Humans are not simply wealth, growth, capital, nor labor. Humans are divine, for they are their souls. Human lives are never to be exchanged, not even for the so-called “greater good”. Utilitarianism rejects all religious and moral imperatives which might suggest these things, on the basis of pure rational scientism. Reason can never explain all.