I made a post ages ago about why Utilitarianism is dumb, but looking back at it I think I can do a little better. In that post I made a common argument against Utilitarianism: that it leads to many counter-intuitive outcomes. But now that I have a bit more experience with Utilitarians, it is clear to me that this does not actually bother most of them.
In case you’re not in the know, Utilitarianism is an ethical system which, simply put, demands that the follower make decisions based entirely on which option will produce the most utility (overall happiness or satisfaction). Bentham, in formulating utilitarianism, treats utility as near-synonymous with pleasure or happiness. Most Effective Altruists are Utilitarians. Most Utilitarians today are secular or practice some vague religiosity (ex: Deism), and I think their reasoning for being utilitarian is basically “because I like it”. If you ask an Atheist why they act morally, they’ll usually say “most people don’t need a god to tell them to act morally, they’re just decent heckin’ human beans”, and I guess that’s what goes through an Effective Altruist’s mind. They just find being a decent human bean “cool” or “aesthetic” on some strange deep level. A lot of them, though, insist that other people be Utilitarians as well, so they have to feign the belief that this is actually a morality or ethical system, that acting in concert with the utilitarian principle is “right” in an objective way. If they were to admit that it is a sort of instinctual or aesthetic mental convulsion, it would invalidate the entire game.
When a Utilitarian argues against the other big moral systems (mainly deontology and virtue ethics), they are usually arguing that those systems rely on complicated arbitrations or rules that can easily be interpreted in many different ways, with no clear way to compare those interpretations or judge how true they are. Utilitarianism is simple, relies on only one rule, and its premise makes sense. We have all seemingly experienced being happier at one time than another, and feeling more pain at one time than another. We have all also experienced longer-lasting pains and shorter-lasting pains, and the same for pleasures. So, surely pain and pleasure can be imagined as a quantity that we can figure out through the process of what Bentham describes as “felicific” or “hedonic” calculus.
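To make the target of the critique concrete, here is a minimal sketch of what such a calculation is supposed to look like, in the spirit of Bentham’s dimensions (intensity, duration, certainty, extent). Every number in it is invented for illustration; Bentham gives no units and no measurement procedure, which is exactly where the trouble starts.

```python
# A toy felicific calculus in the spirit of Bentham's dimensions.
# All inputs below are invented; nothing tells us how to measure them.

def hedonic_score(intensity, duration, certainty):
    """Pleasure (+) or pain (-) of a single experience."""
    return intensity * duration * certainty

def total_utility(experiences):
    """Sum over everyone affected (Bentham's 'extent')."""
    return sum(hedonic_score(*e) for e in experiences)

# Two hypothetical actions, each a list of (intensity, duration, certainty) tuples.
action_a = [(+3, 2.0, 0.9), (-1, 0.5, 1.0)]   # mild pleasure, small certain cost
action_b = [(+8, 0.5, 0.4), (-4, 3.0, 0.6)]   # big gamble, real downside

print(round(total_utility(action_a), 2))  # 4.9
print(round(total_utility(action_b), 2))  # -5.6
```

The arithmetic is trivial; the entire weight of the theory rests on those input numbers.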
The big issue with this is that “degree of pleasure” or “degree of pain” cannot be measured. So, any utilitarian calculation is essentially winging it. Most Utilitarians accept this as a flaw and simply say “it’s still the best of the worst”, and maybe even try to use proxies. But this assumes that happiness actually is a quantifiable thing, just a quantity that we don’t have the instruments to measure. That is asserted without any evidence. Yes, we say that we “feel happy” or we “feel sad” or we “believe something”, but in a non-idealistic, non-dualistic universe this is meaningless and poorly defined. Subjective experience, and by extension the set of emotions lumped into the category of “folk psychology”, is by no means required to have an exact physical referent or measurable element.
In fact, there is some reason to think that happiness either cannot be reliably known, or cannot correspond to a measurable neurological phenomenon, if you accept that all phenomenal information underdetermines noumenal information. Simply put, the data we get from observations of an event cannot fully determine the true nature of the event. I have an article tangentially related to this topic which you might consider reading:
The World as Probability
And you know what? While I’m shilling, why not a Boob Break for old time’s sake!
Shameless self-promotions aside, if we are unable to determine the nature of a mental event (i.e. to know happiness/sadness based on feeling), then certain intuition-based assumptions which utilitarianism relies on (ex: happiness having “intensity” or “magnitude”) are unfounded. The proposition is odd in itself: it would suggest that sadness could actually feel happy, that there is a happiness which exists separate from the sensation of happiness. If, on the other hand, we are able to determine the nature of a mental event, then it cannot correspond to a measurable or an observable, because it isn’t underdetermined. The only way out is to adopt mind-body dualism or idealism, and in both of those circumstances the case for the other ethical systems grows immensely. There is ultimately nothing which makes the sense of happiness more “real” or “grounded” or objective than our sense of beauty, or justice, or excellence, so the deontologist could always say he too is a utilitarian insofar as he is trying to “maximize justice and minimize injustice”, and the Aristotelian could say he is trying to “maximize virtue and minimize decadence”. Even if we were to isolate some pattern of neurons representative of the sensation of happiness in humans, there’s no reason to believe we could find an analog to it in all sentient life. The same goes for imperfect correlates of happiness, like the often misunderstood “pleasure hormones” dopamine and serotonin. This is the case for other animals, but it is especially the case for something like an artificial intelligence.
The Buddhists, and perhaps a large share of Hindus, have a belief system which at first glance resembles Utilitarianism, in which the goal is to minimize suffering. However, the Buddhist principle of suffering-avoidance is not consequentialist, or at least not entirely so. To cause another being to suffer, or to act without compassion, reaffirms a clouding sort of self-belief that Buddhists want to avoid. To cause oneself to suffer is simply unskillful and irrational. The only permanent and meaningful end to suffering is through ascension, and for this reason the most compassionate thing you can do is guide all sentient life toward ascension.
Unless a Utilitarian also believes in reincarnation, the Buddhist conception of happiness is highly problematic, because it implies that you can’t finish life with “net happiness”. As life goes on you only experience varying degrees of temporary relief from dukkha, the constant state of unrest and discomfort brought about by desire and contingency. So, if it were not for the principle of reincarnation, and if one believed in the sole principle of minimizing the amount of suffering in life, the correct choice of action would be to commit suicide, or, in the Utilitarian case, to bring about the extinction of all sentient life. There are two alternative outlooks. Case A is that happiness can potentially exceed suffering, and Case B is that happiness necessarily exceeds suffering, i.e. that even states of extreme pain are still states of net happiness. While Case A is possible, it requires a leap of faith, as there isn’t really any way (at least not that I can think of) to gain any knowledge of where the boundary lies between net pleasure and net pain over any given amount of time. Because of this, it would be very difficult to decide whether a certain life-creating or life-preserving practice is actually ethical, for example the raising of livestock for eventual consumption, or the abortion of children with disabilities or who would be raised in unpleasant households. It all boils back down to the issue of quantifying happiness in a non-Idealistic reality. Case B is, in many ways, much cleaner. It solves the issue of negative utilitarianism’s death drive, because even a very painful life is preferable to death. But Case B would also license choices which completely go against the beliefs that characterize the Utilitarians of today, so much so that I hesitate to call it a form of utilitarianism at all.
Some modern Utilitarians seek to get around the issues and implications of hedonic calculus entirely by replacing “happiness” or “pleasure” with “preference”. An action is good when it satisfies the most people’s preferences, and bad when it frustrates the most people’s preferences. Surely, though, if this is going to count as utilitarian, some weight must still be placed on the intensity of a preference. But because desire is a folk-psychological construct, this has the same pitfalls as happiness Utilitarianism.
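To see why, here is a minimal sketch of preference aggregation with intensity weights. The ballots and numbers are hypothetical, but they show that the moment intensity matters, we are back to inventing the same unmeasurable quantities.

```python
# Toy preference utilitarianism: each person assigns an "intensity"
# to each option (positive = wants it, negative = opposes it).
# The intensities are hypothetical, and that is the whole problem.
from collections import defaultdict

ballots = [
    {"build_park": +2, "build_parking_lot": -1},
    {"build_park": -5, "build_parking_lot": +1},
    {"build_park": +1, "build_parking_lot": +1},
]

totals = defaultdict(float)
for ballot in ballots:
    for option, intensity in ballot.items():
        totals[option] += intensity

print(dict(totals))  # {'build_park': -2.0, 'build_parking_lot': 1.0}
```

A bare headcount prefers the park two to one, but a single intense objector flips the result; everything hinges on intensity values that nobody can actually measure.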
Utilitarianism can still be fairly complete in a very limited capacity, where you use it to judge two more or less identical events which differ in only one element of the felicific basis. For example, running over five people with a train versus running over one person with a train, or shooting someone based on getting heads on a coinflip versus shooting someone based on rolling a 1 on a D20 (see the sketch after this paragraph). Because of this, it is not necessarily a bad system for a closed environment, but it isn’t an ethical system. It is better suited to a social contract, and one for a small, specialized circle in particular. But what about general ethical questions? Is the joy derived from people seeing Jeremy Bentham’s mummified head at University College London greater than the joy that would be derived from the sexual pleasure produced by allowing perverted students to simulate oral sex with his remains? I don’t know; it’s hard to say, because there are many small children who are possibly scared by seeing an old bald guy’s rotten head with glass eyes stuffed in. Overall, Utilitarianism is a half-measure. It is a false equivalence between “realness” and “baseness”, one which is illegitimate in the major ontologies. It’s almost the exact opposite of the Platonic notion, where the “reals” are most recognized by intelligent life in the form of beauty and order and whatnot, and nearly invisible to dull creatures who are quite familiar with happiness or desire. “Rationalist” Atheist Techbros like it not only because they don’t want to come off as bad people, but also because they do not want to surrender to Eliminative Materialism. They’re still in the “love is just a bunch of chemicals” phase of physicalism.
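Here is the kind of “closed” comparison I mean, sketched out: the outcomes are identical and the probabilities are fixed exactly by the setup (a fair coin, a fair D20), so no hedonic measurement is required, only a ratio.

```python
# The one place the arithmetic is unambiguous: the same harm,
# with probabilities fixed exactly by the setup.
harm_of_shooting = -1.0  # arbitrary unit; only the comparison matters

expected_harm_coinflip = (1 / 2) * harm_of_shooting    # -0.50
expected_harm_d20 = (1 / 20) * harm_of_shooting        # -0.05

print(expected_harm_coinflip < expected_harm_d20)  # True: the coinflip is worse
```

The moment the two options differ in anything other than that one fixed element, the inputs stop being given by the setup and the calculation is back to winging it.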
It’s still pretty fun to think about some of the odd hypotheticals of Utilitarianism, even if I think it is dumb. If some sort of “hell” exists where beings suffer eternally, or if a “heaven” exists where beings experience eternal pleasure, would utilitarianism be useless? After all, infinity minus a very large number is still infinity. There would only really have to be one being in hell, or heaven, or there would have to be one immortal being whose pattern of suffering and pleasure ever-so-slightly erred a bit to one side. This is essentially the issue with Utilitarianism in Buddhism, suffering in this world is necessarily infinite so trying to end it through ordinary means is impossible. Not a problem, of course, if you’re limiting your utilitarian outlook to a certain group in certain circumstances. Hey, maybe you could even put differing degrees of emphasis on groups that you are closer to. This is starting to sound… Heckin’ bigoted! Instead let’s just donate more money to create 1 morbillion Africans. We will even launder money if we have to. This is, uhh… Le rationalist. Elite Human Capital is when you launder money so that you can marry a woman who looks like the dobby and donate 6 gorillion water collection buckets to Bantus.