@ClayShentrup
Created April 16, 2025 19:08

what is ethics?

what is ethics? remarkably few people seem to fully understand the subject, including most moral philosophers. thus, as you might imagine, this piece is going to be written at a fairly deep level, suitable for those with some background in ethics or utilitarian thought. that said, we’ll generally aim to use terms and concepts that any smart and attentive individual could understand. the goal is to illuminate these ideas as clearly as possible without diluting their intellectual depth.

when we see a bird tending to its egg at great personal cost, or stags engaging in ritualized dominance contests rather than fighting to the death, or humans instinctively rushing to help someone having a medical emergency, we are witnessing biological phenomena - ones we can understand in the same way we understand why moths are drawn to light or why we crave sugar.

to see how fundamentally subjective ethics is, let's start with a classic ethical thought experiment known as the trolley problem. the basic scenario involves a runaway trolley that is about to kill five people tied to the track. you have the option to pull a lever and divert the trolley onto a side track, where it will kill only one person. essentially, the dilemma boils down to a simpler question: should you sacrifice one person to save multiple lives - even just two? most people recoil at the idea of actively causing someone's death, even if it would save more lives. they might be even more horrified at the suggestion of systematic organ harvesting - killing one healthy person to save two dying patients.

yet consider this thought experiment: imagine three people stranded on a remote lunar station, completely cut off from outside help. they have already eaten their meals for the day when they discover that two of the meals were contaminated with a slow-acting toxin that will cause kidney failure. the station's medical robot can perform a kidney transplant to save the poisoned individuals, but only by sacrificing the third person. before anyone knows who ate the contaminated meals, the group has an opportunity to voluntarily consent to a rule requiring the unpoisoned individual to give up their life to save the other two. this is obviously the rational choice, as it cuts each person's odds of dying from 2/3 to 1/3.
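to make the arithmetic concrete, here's a minimal sketch (python, purely illustrative - the scenario and numbers come straight from the thought experiment above):

```python
# each of the three crew members is equally likely to occupy any role.
from fractions import Fraction

crew = 3
poisoned = 2

# without the pact: the two poisoned crew members die.
p_die_no_pact = Fraction(poisoned, crew)   # 2/3

# with the pact: exactly one person (the unpoisoned donor) dies, and
# before the toxin is traced, each member is equally likely to be them.
p_die_with_pact = Fraction(1, crew)        # 1/3

print(p_die_no_pact, p_die_with_pact)      # 2/3 1/3
```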

this illustrates how rational self-interest, informed by the preference sovereignty principle - the idea that each person's own informed preferences are the final authority on what's good for them - leads to equitable decision-making under uncertainty. when individuals account for the possibility that they could be in any position, they naturally favor policies that maximize overall well-being.

as the economist john harsanyi pointed out, an ideal ethical framework becomes indistinguishable from pure selfishness when viewed from behind a veil of ignorance about one's identity. when we don't know which person we'll be, self-interest leads us to choose systems that maximize expected utility across all participants. this reasoning would also mean signing up for an organ harvesting policy, where a single healthy individual could be sacrificed to save multiple lives. while this might feel counterintuitive, it aligns with maximizing expected utility under a veil of ignorance.
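harsanyi's point can be stated almost mechanically: score each policy by the expected utility of someone with an equal chance of being anyone affected. a hedged sketch (the function name and payoff numbers are my own illustration, not harsanyi's):

```python
def veil_score(outcomes):
    """expected utility when you're equally likely to be each person."""
    return sum(outcomes) / len(outcomes)

# three people: one healthy donor, two patients who die without organs.
# utility 1.0 = survive, 0.0 = die.
no_harvest = [1.0, 0.0, 0.0]   # donor lives; both patients die
harvest    = [0.0, 1.0, 1.0]   # donor is sacrificed; patients live

print(veil_score(no_harvest))  # 0.33...
print(veil_score(harvest))     # 0.66... -> chosen behind the veil
```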

but in real life, we often find ourselves past the veil of ignorance. consider our lunar base example - someone who discovers they're not one of the people facing organ failure might suddenly choose to renege on the organ harvesting pact. we see this same dynamic play out in policy debates about wealth redistribution. many policies that would be rational to support from behind the veil of ignorance (not knowing which family you'll be born into) become personally disadvantageous once you discover you're wealthy. this tension between what we'd choose ex ante versus ex post helps explain why actual policies often diverge from what purely rational actors would choose under uncertainty about their identity.

to understand why ethics works this way, we need to understand where it came from. four billion years ago, molecular replicators emerged with two crucial properties: they could make copies of themselves, and those copies could contain mutations. everything else - from the beaks of finches to our deepest moral intuitions - flows from this fundamental reality.

when we talk about evolution, people often think about it in terms of organisms or species. you'll hear things like "wolves developed pack behavior to help the species survive." but this is completely backwards. evolution doesn't work at the level of species, or even primarily at the level of individual organisms. as biologist richard dawkins brilliantly articulated in the selfish gene, genes are the fundamental unit of selection. they exist in proportion to their ability to get themselves copied.

and when we say genes "try" to get themselves copied or act "selfishly," we're using a helpful metaphor, but we shouldn't take it literally. genes aren't conscious agents making decisions. those that happen to have properties that lead to more copies of themselves become more prevalent in the population - it's pure mathematics and chemistry, not conscious intent. but the end result is that genes behave as if they were trying to maximize copies of themselves.
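a toy simulation makes the "as if" point vivid. nothing below has intent or foresight - variants that happen to copy faster simply come to dominate (illustrative python, with made-up copy rates):

```python
import random

population = ["slow"] * 50 + ["fast"] * 50
copy_rate = {"slow": 1.0, "fast": 1.2}   # relative copying success

for generation in range(20):
    # each replicator leaves copies in proportion to its copy rate;
    # the population is capped at 100 (a finite resource pool).
    weights = [copy_rate[r] for r in population]
    population = random.choices(population, weights=weights, k=100)

# "fast" typically ends up above 90% of the population.
print(population.count("fast") / len(population))
```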

having explored the evolutionary origins of apparent altruism, we can now turn to its implications for ethical reasoning. understanding how genes shape behavior lays the groundwork for a more precise understanding of ethics as fundamentally subjective and rooted in rational self-interest.

this brings us to two fundamental types of apparent altruism in nature. the first is kin altruism. when we see a bird tending to its egg, we're witnessing genes protecting probable copies of themselves: the gene isn't being altruistic, it's aiding what are statistically likely to be copies of itself.

the second type is reciprocal altruism, which is fundamentally an instance of the iterated prisoner's dilemma. in our ancestral environment, we lived in small groups where we repeatedly interacted with the same people. in that context, helping others often meant helping yourself, because they would likely help you in return. our brains evolved to facilitate this cooperation through emotions like gratitude, guilt, and moral outrage.
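the iterated prisoner's dilemma is easy to make concrete. in a one-shot game, defection dominates; once the game repeats, a conditional cooperator like tit-for-tat does better. a self-contained sketch using the standard payoff matrix (my strategy implementations, the usual 3/5/1/0 payoffs):

```python
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(partner_history):      # cooperate first, then mirror
    return partner_history[-1] if partner_history else "C"

def always_defect(partner_history):
    return "D"

def play(a, b, rounds=100):
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        move_a, move_b = a(hist_b), b(hist_a)
        pa, pb = PAYOFF[(move_a, move_b)]
        score_a, score_b = score_a + pa, score_b + pb
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))      # (300, 300): cooperation pays
print(play(always_defect, always_defect))  # (100, 100): mutual defection
print(play(tit_for_tat, always_defect))    # (99, 104): cheating barely pays
```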

this framework of genetic self-interest elegantly resolves traditional problems with utilitarianism. maximizing average utility suggests we should eliminate anyone less happy than average, since doing so raises the mean. maximizing total utility leads, via parfit's "mere addition paradox", to the "repugnant conclusion": we should fill the universe with as many people as possible, even if their lives are barely worth living. but the selfish-gene framework avoids these paradoxes entirely. we're simply rational to be as selfish as possible - it's just that this selfishness often manifests as apparent altruism toward our family and those we're likely to repeatedly encounter.
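a few made-up numbers show both failure modes at once:

```python
def average(pop): return sum(pop) / len(pop)
def total(pop):   return sum(pop)

world = [9, 7, 5, 3]   # utilities of four people (illustrative)

# average utilitarianism: removing everyone below the mean "improves" things.
culled = [u for u in world if u >= average(world)]
print(average(world), average(culled))   # 6.0 -> 8.0

# total utilitarianism: adding barely-happy people "improves" things too,
# even as the average life gets much worse.
crowded = world + [1] * 100
print(total(world), total(crowded))      # 24 -> 124
print(average(crowded))                  # ~1.19
```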

consider this: even on an island inhabited entirely by psychopaths—individuals who are completely selfish and lack any capacity for empathy—you would still find laws against theft and murder, a court system, and even a system of taxation. why? because these things benefit the psychopaths themselves. a law against theft reduces the risk of their property being stolen. a court system ensures that disputes don’t escalate into costly, endless vendettas. a redistributive tax system that funds basic infrastructure, law enforcement, and national defense protects them from external threats and ensures that society doesn’t collapse into chaos, which would be bad for everyone, including the most ruthless.

psychopath island demonstrates that even in a society of purely self-interested individuals, cooperation and shared rules naturally arise. this is because what affects others ultimately affects us. we don't support laws and social systems out of altruism, but because they maximize our own well-being when we account for the fact that we live among other people who can also harm or benefit us.

this same logic explains why even rationally selfish individuals might support systems like taxation and wealth redistribution. while a wealthy individual may oppose a social safety net if they are certain they'll never need it, such certainty is rare. and even if one's position in society is known, policies that reduce inequality and instability ultimately protect everyone from costly conflicts and societal collapse. redistributive systems are simply a more efficient way of avoiding outcomes like a violent, desperate uprising - avoiding bloodshed and preserving stability benefits everyone, selfish or not. it's the same dynamic that makes it rational for peasants to storm the castle and redistribute the monarchy's wealth. none of this requires true altruism - just rational self-interest playing out under different levels of uncertainty about our position.
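the insurance logic is easy to quantify. with diminishing marginal utility (log utility is the classic stand-in), someone who doesn't yet know whether they'll be rich or poor prefers redistribution even though it lowers the winner's payoff. the incomes below are invented for illustration:

```python
import math

def expected_log_utility(incomes):
    # equal chance of ending up with each income
    return sum(math.log(x) for x in incomes) / len(incomes)

laissez_faire = [100.0, 4.0]   # 50/50 chance of being rich or poor
redistributed = [70.0, 34.0]   # same total wealth, partly transferred

print(expected_log_utility(laissez_faire))   # ~3.00
print(expected_log_utility(redistributed))   # ~3.89 -> preferred ex ante
```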

our evolutionary adaptations for reciprocal altruism face a challenge: we now live in a completely different environment - one with millions of people, most of whom we'll never meet again. yet we still feel compelled to help strangers, donate to charity, and act "morally" even when there's no clear benefit to ourselves. in a fascinating conversation between biologist richard dawkins and ethicist peter singer, dawkins made a brilliant observation about this phenomenon. he noted that people still copulate while using prophylactics, which from a purely genetic perspective is a complete waste of time and energy. similarly, we have what he called a "lust to be nice" - an irrational drive to help others even when there's no possibility of reciprocation.

singer and dawkins note how this extends even to our treatment of animals. just as we developed general-purpose circuits for empathy and cooperation that now misfire with strangers, we've developed moral intuitions that can extend beyond species boundaries. this isn't because there's some cosmic moral truth about animal rights; it's because our evolved capacity for moral concern can be triggered by any being capable of suffering. the fact that many people care deeply about animal welfare while readily accepting human suffering in distant countries further demonstrates how our moral intuitions are shaped by evolutionary history rather than logical consistency.

this is exactly how cuckoo birds make their living. they exploit genes in other bird species that essentially say "if there's an egg in your nest, sit on it." these genes were selected for because they usually helped copies of themselves within those eggs. but cuckoos exploit this open-ended rule, laying their eggs in other birds' nests. the host birds then waste their resources caring for eggs that don't even contain copies of their genes.

we should expect such maladaptive behaviors to gradually be weeded out of the gene pool through natural selection. time spent helping strangers who can never reciprocate is time we're not spending making more children or accumulating resources to ensure our existing children's survival. it's analogous to how we might expect evolution to eventually select against people wasting time on non-reproductive sex - though of course, such evolutionary changes take many generations to occur.

now that we understand ethics as a product of genetic selection manifesting as subjective preferences, we can more clearly understand how to meaningfully debate ethical issues. this is where the distinction between intrinsic and instrumental ethics becomes crucial. intrinsic ethics are our basic preferences - what we ultimately want or value. these are purely subjective. instrumental ethics, on the other hand, concern instrumental preferences - the intermediate goals we need to accomplish in order to satisfy our intrinsic preferences - and these can be evaluated objectively, even though there's often uncertainty about outcomes.

this framework helps us understand and evaluate different ethical systems. take rule ethics and virtue ethics - these aren't fundamental measures of what's ethical; they're heuristic tools that approximate ethically optimal outcomes to varying degrees. virtues are effectively just rules themselves, and both can be evaluated by how efficiently they deliver utility. just as we saw with the lunar base example earlier, a rule or virtue is "ethical" only to the extent that informed people would choose it for themselves. our tendency to rely on such heuristics makes evolutionary sense - every second spent calculating the optimal decision is a second you could be eaten by a predator. our brains evolved to balance decision quality against computational cost.

this brings us to consequentialism, which gets closer to the truth by focusing on outcomes. of course people care about consequences - it doesn't matter if you were killed intentionally or accidentally, you're still dead. the only extent to which intent matters is practical: someone who intentionally kills might be more likely to be a repeat offender, and thus the correctional system should treat them accordingly. but even this requires nuance. a 90-year-old myopic, demented driver who accidentally plows into a school crosswalk may be a bigger threat than a violent 20-year-old with a penchant for fist fights that have never resulted in more than bruises or bloody noses.

the key is that we want to evaluate any policy based on its total utility effects. consequences are fundamentally what matters. it's not about "moralizing" accidents versus intentional acts - it's about understanding their utility implications. we can treat a malevolent "evil" person the same way we'd treat a defective household robot that goes on a killing spree. conscious intent carries no intrinsic moral weight; it matters only as evidence about likely future outcomes.

this also helps us understand the is-ought problem identified by david hume - the idea that you can't derive statements about what ought to be from statements about what is. but this entire problem dissolves once we realize there is no such thing as "ought" or "should" in any objective sense. when someone says "you shouldn't kill people," they're not making some profound metaphysical claim about the universe. they're simply expressing a preference - "i prefer a world where people don't kill each other."

now you may have noticed that i've entirely sidestepped a realm of ethical thought in my biological excursion: the notion of a deity and the resultant theistic ethics. some argue that ethics must come from god. but this leads to an insurmountable dilemma, familiar since plato's euthyphro: either ethical principles are inherent and god is merely their enforcer/messenger (making god a pawn to these principles), or god arbitrarily decided on them. in either case - whether ethics stems from arbitrary properties of nature or arbitrarily from god as an intermediary - this has nothing to do with what we actually mean by "good" or "bad."

consider this: even if the universe or god declared that killing babies was good, this wouldn't change the fact that most humans are hardwired to be extremely averse to infanticide. we would still shudder at the thought, regardless of such pronouncements. the entire notion of objective ethics, whether derived from physics or deity, is incongruous with everything about the human conception of ethics.

this brings us to emotivism, a philosophical approach that gets tantalizingly close to the truth. in a fascinating debate about objective morality between author and public intellectual sam harris and emotivist philosopher alex o'connor, o'connor argues that when we make ethical statements, we're essentially just expressing emotions at each other. when we say "murder is wrong," we're expressing something akin to saying "boo murder" - a command or expression of distaste rather than a truth claim.

emotivism's recognition that ethical claims express preferences or emotions is a step in the right direction. however, it stops short of providing actionable guidance for real-world decision-making. the preference sovereignty principle builds on it by formalizing how preferences, when informed and rational, can lead to a coherent ethical framework that respects individual sovereignty while addressing collective outcomes.

harris, in contrast, struggles because he's not distinguishing between intrinsic and instrumental ethics. he correctly points out that there are objective facts about how to achieve certain states of consciousness or well-being, but he misses that the underlying preferences for those states are purely subjective.

this framework helps us understand moral disagreement in a new light. when people disagree about ethics, they're often not really disagreeing about moral facts - they're either disagreeing about empirical facts about what policies will lead to better outcomes, or they're expressing genuinely different preferences shaped by their genes and experiences.

in the end, ethics isn't about discovering eternal truths or divine commands. it's a biological phenomenon that emerges from genetic selection and manifests as subjective preferences. once we see this clearly, the traditional philosophical debates dissolve into straightforward questions of genetic self-interest operating behind a partial veil of ignorance. it's not that these questions aren't complex - they often are - but we're finally asking them in the right way.
