Elif Özdemir


Comments

I am not entirely certain, but we might also consider the following hypothesis: this qualitative shift in experience morally corresponds to the tradability status of the experience itself. At lower levels of intensity, the exchange value remains high; however, as we ascend the levels, this value drops logarithmically (an approach that would also align closely with my model). This means that averting even a minute amount of a higher-level pain requires sacrificing an exponentially larger quantity of a lower-level one. But at a certain critical threshold (the point of systemic collapse where the subject entirely loses its rational agency), this tradability factor effectively hits zero. In such a model, while Level 3 and Level 4 pain might possess vastly different coefficients due to the hidden tradability factor, they remain theoretically comparable; once we reach Level 5, however, we encounter a state of incomparability.
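A toy numerical sketch of this hypothesis (every number is illustrative, not empirical; the base of the exponential and the collapse level are my own stand-ins): the quantity of level-(k−1) pain one must accept to avert one unit of level-k pain grows exponentially with k, and becomes undefined once the collapse threshold is crossed.

```python
import math

COLLAPSE_LEVEL = 5  # hypothetical threshold of systemic collapse

def exchange_rate(level, base=10.0):
    """Units of level-(k-1) pain one would accept to avert one unit
    of level-k pain. Purely illustrative: exponential growth below
    the collapse threshold, untradable (infinite) at or above it."""
    if level >= COLLAPSE_LEVEL:
        return math.inf  # tradability effectively hits zero: no finite trade
    return base ** level

# Below the threshold, trades are costly but finite and comparable:
print(exchange_rate(3))  # 1000.0
print(exchange_rate(4))  # 10000.0
# At Level 5, comparability itself breaks down:
print(exchange_rate(5))  # inf
```

The design choice here is that incomparability is not modeled as "a very large coefficient" but as a distinct, non-numeric regime, which is exactly the difference the hypothesis is trying to capture.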

I think you meant 44 ºC instead of 43 ºC. Level 2 starts at 44 ºC.

Yes, thank you!

People would not distinguish between 45 ºC for 3 min 0.1 s (maximum pain level of 3), and 45 ºC for 2 min 59.9 s (maximum pain level of 2).

Actually, this comes down to how we define the boundaries. If the difference is truly imperceptible, it remains within the same region where numerical comparisons are perfectly valid. However, my claim is based on a discontinuous graph. Just as water undergoes a qualitative jump at 100 °C and turns into steam, I believe consciousness undergoes a phase transition at specific thresholds, which creates a qualitative leap in the nature of pain.

Thank you, those were thought-provoking questions. They really helped me dive deeper into this topic, as I’ve been wanting to do. Here is my elaboration on this:

Do you think pain A having a higher intensity than B implies that averting an infinitesimal duration of pain A with an infinitesimal probability is better than averting an astronomically long time of pain B with certainty?

I believe this thought experiment fails to test our core question which is "Should moral prioritization be determined by the aggregation of suffering (intensity × duration/number of individuals), or does the intensity of pain possess an inherent lexical priority that outweighs duration or population size regardless of their scale?”. 

The reason it fails is its reliance on the false assumption that intensity and duration are independent variables. In biological systems, these two variables are deeply coupled. A stimulus that causes moderate discomfort over 3 seconds can escalate into profound agony if sustained for a minute. Conversely, an 'extreme' stimulus experienced for a mere 0.0001 seconds may fail to trigger an affective state at all.

To truly test the lexical priority of intensity, we must use a thought experiment that isolates the intensity from the 'compounding effect' of time. To do this I suggest a reincarnation model: Let’s say you must choose between an infinite number of lifetimes where a speck of dust irritates your eye for 1 minute, versus a single 1 minute lifetime of extreme agony. Which one would you prefer? The mathematical disutility of the former is technically infinite. Yet, I would choose that option without a second thought and the calculation would carry no weight.

The claim that mathematical dominance constitutes an inherent "better" or "worse" is actually an example of begging the question. By assuming that a larger sum of utility automatically translates into a moral obligation, one presupposes the very value judgment they claim to derive. Numerical superiority counts as a moral victory only within the framework of total utilitarianism; if one operates within a different ethical framework, these mathematical results may carry no moral weight at all. To bridge the gap between the "is" of mathematical aggregation and the "ought" of moral action, a separate, qualitative value judgment is required.

This distinction is clearly illustrated by the analogy of heat capacity versus temperature. A massive iceberg possesses a far greater total heat capacity than a single cup of boiling water due to the sheer volume of its molecules. However, it does not follow that the iceberg is "superior" unless one has already decided that total heat capacity (the aggregate sum) is the metric that matters. If your moral compass is calibrated to temperature (intensity), the iceberg’s mass is irrelevant; no amount of ice can ever surpass the boiling water's temperature.
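To put illustrative numbers on the analogy (the masses are arbitrary stand-ins; the specific heats are standard approximate values): comparing the heat released when each body cools by just 1 K shows the aggregate measure favoring the iceberg by many orders of magnitude, while the intensity measure (temperature) still favors the cup.

```python
c_ice, c_water = 2100.0, 4186.0   # specific heats, J/(kg*K), approximate
m_iceberg, m_cup = 1.0e9, 0.25    # kg; iceberg mass chosen arbitrarily

# Aggregate measure: heat released by cooling each body by 1 K.
q_iceberg = m_iceberg * c_ice * 1.0      # 2.1e12 J
q_cup = m_cup * c_water * 1.0            # ~1046.5 J

# Intensity measure: temperature.
t_iceberg, t_cup = 0.0, 100.0            # deg C

assert q_iceberg > q_cup                 # aggregate favors the iceberg...
assert t_cup > t_iceberg                 # ...but intensity favors the cup
print(q_iceberg / q_cup)                 # ~2.0e9: aggregate dominance
```

Which quantity "wins" depends entirely on which metric was chosen beforehand, which is the point of the analogy.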

But how can I justify that "temperature" is what matters rather than "heat capacity"? I believe that to establish a grounded ethical code, we must align our moral framework with the actual preferences of the subjects in whom valence is embodied. I also believe that if we do so, we will find that our moral imperative is not the maximization of a mathematical aggregate, but the maximization of what truly carries weight for the experiencing subjects themselves.

For the subject, while low-level experiences are inherently tradable by their very nature (functioning as manageable costs for a rational agent), high-intensity states aren't just 'more' of the same unit of pain; they represent a different kind of reality that demands absolute priority over any trade (more on this later). This shift marks a morally significant qualitative change that aggregation inherently ignores.

I always felt that the true strength of utilitarianism lies in its commitment to grounding moral value in the most objective reality of the universe: the experienced valence of sentient beings. However, by prioritizing the "mathematical aggregate" over the nature of the experience, I believe that total utilitarianism drifts into a state of theoretical hallucination, detached from the very subjects it claims to represent.

If the ith pain intensity starts at T_(i - 1), and ends at T_i, what would change so much from temperature T_i_before = T_i - 0.001 ºC to T_i_after = T_i + 0.001 ºC that makes you prioritise averting pain at T_i_after infinitely more than pain at T_i_before? 

Consider the engineering analogues: a bridge rated for exactly 10,000 tons does not merely become "slightly more strained" when the 10,001st ton is added; it undergoes a catastrophic structural collapse. Similarly, an electrical circuit rated for 15 amps will heat up intensely at 14.99 amps, but at 15.01 amps the fuse blows, cutting the circuit before the wiring melts.

Biological systems operate through a similar 'all-or-nothing' dynamic. Take protein denaturation as an example: a protein can withstand thermal stress and keep its function until it hits a specific temperature. However, an increase as small as 0.001 °C past this tipping point causes its internal bonds to snap simultaneously. The result is a sudden, irreversible loss of activity, much like an egg white irreversibly turning solid past a certain point. Likewise, in toxicology, a body can manage a certain amount of poison, but once a specific dose is reached, the body's natural defenses are suddenly overwhelmed. At this stage, the system is no longer just struggling harder; it is undergoing a collapse, which is why preventing that final tiny increase is more critical than any increase before it.

Coming back to your question: on a physiological level, we can ground the radical shift between T_i_before and T_i_after at the 44 °C threshold (commonly cited as the point at which the activation-energy barrier for TRPV1 receptors is crossed). At T_i_before, the thermal energy is high but remains below the critical barrier; the nociceptors are under severe stress, yet the ion channels stay closed.

At T_i_after, however, the system crosses the precise energetic tipping point where these receptors snap open simultaneously. This causes a sudden, massive influx of ions, and the signal sent to the brain changes completely. It stops being a manageable warning that says, 'It is getting very hot,' and becomes a definitive alarm screaming, 'Tissue is being destroyed.'

So, the microscopic gap between these two temperatures represents the exact moment heat alarm receptors are triggered, causing the system to send not just a signal of a small increase in warmth, but an emergency alarm of actual tissue damage.

From an evolutionary perspective, it would be counter-intuitive for the nervous system to function as a linear thermometer. Instead, it is optimized to prioritize a hierarchy of survival-critical thresholds. Consequently, it is much more biologically grounded to suggest that conscious experience operates through discrete qualitative phases, where each phase represents a distinct functional mode with corresponding affective experiences hardwired to trigger specific behavioral responses.

Consider someone holding their hand in hot water for 1 min. If you think there are only 5 pain intensities, what would be the range of temperature for each pain intensity?

By modelling pain as a hierarchical control system designed by evolution to force a specific behavioral response (specifically to signal how much agency, mental energy, and priority the brain must allocate to a threat), I would propose a taxonomy along these lines:

Level 1: Ignorable (40°C – 43°C)

  • Biological Response: Low-level nociception filtered by the thalamus; does not interrupt primary behavioral goals or internal maintenance.
  • Affective Experience: The mind acknowledges it but grants no priority; attention might remain elsewhere.
  • Tradability Status: Easily traded for even the smallest reward (e.g., "I'll keep my hand here just to see what happens").

Level 2: Manageable (44°C – 47°C)

  • Biological Response: The brain uses adrenaline to handle the heat, managing the situation while calculating the cost.
  • Affective Experience: Annoying and sharp. It takes effort to keep the hand submerged, but it does not hijack your thoughts. The "Self" remains an active agent.
  • Tradability Status: If you expect a reward, you would be willing to endure this.

Level 3: Dominant (47°C – 50°C)

  • Biological Response: Inflammatory response begins. Forced reprioritization occurs as the organism is signaled to stop to prevent permanent tissue loss.
  • Affective Experience: Significant distress. You are likely sweating. Your brain is screaming that this is a "crisis" and the world is split between the task and the pain.
  • Tradability Status: A heavy burden. You would keep your hand in only if the incentive is exceptionally valuable or the objective is critical for survival.

Level 4: Invasive (50°C – 53°C)

  • Biological Response: This temperature causes second-degree burns quickly. Total limbic system hijack occurs, and the body will likely trigger an involuntary withdrawal reflex.
  • Affective Experience: Extreme desperation. Future and past vanish; there is only the agonizing "Now." The "Self" is a prisoner fighting the pain.
  • Tradability Status: Agency is barely holding on; rational calculations become nearly impossible due to the systemic emergency.

Level 5: Annihilating (>53°C)

  • Biological Response: Hardware failure. Cognitive fragmentation occurs as biological limits are exceeded (e.g., rapid protein denaturation).
  • Affective Experience: The subject is erased. No "decision" can be made because the "Self" that makes decisions has been replaced by raw, overwhelming signals.
  • Tradability Status: As the intensity has achieved absolute priority, any other consideration is impossible.
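The taxonomy above can be summarized as a small lookup table. The ranges and labels are taken directly from the five levels listed; the classification function is my own illustrative sketch, and I treat each range as half-open so Level 1 runs up to the 44 °C threshold (per the correction earlier in the thread) and the shared boundaries are unambiguous.

```python
# Temperature ranges (deg C) and labels from the five-level taxonomy above.
LEVELS = [
    (40, 44, 1, "Ignorable"),
    (44, 47, 2, "Manageable"),
    (47, 50, 3, "Dominant"),
    (50, 53, 4, "Invasive"),
    (53, float("inf"), 5, "Annihilating"),
]

def classify(temp_c):
    """Map a water temperature to a pain level, treating each range as
    half-open [low, high) so adjacent levels never overlap."""
    for low, high, level, label in LEVELS:
        if low <= temp_c < high:
            return level, label
    return 0, "Below nociceptive range"

print(classify(45))    # (2, 'Manageable')
print(classify(53.5))  # (5, 'Annihilating')
```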

I believe the most morally significant element of this taxonomy lies in the 'Tradability Status' assigned to each phase, as it tracks the gradual dissolution of an agent's capacity to engage in value exchange. At lower levels, pain is merely a "cost" that a rational subject might willingly pay to secure a greater "utility." However, as we move through the phases, we witness a profound shift: pain ceases to be a negotiable variable and instead becomes a priority that overrides all other possible values.

Do you believe empirical studies of people's preferences would find a few temperatures (4 if you believe in 5 pain intensities) with this property, where people would prefer averting any time in pain at temperature T_i_after over an arbitrarily long time in pain at temperature T_i_before?

As I mentioned, while individuals demonstrate a willingness to trade intensity for duration at lower levels of pain (for instance, accepting 43 °C water for a longer duration over 44 °C for a shorter one), this linear trade-off curve fractures sharply as it approaches the threshold of systemic breakdown. As intensity increases, our willingness to trade it for duration diminishes.

However, I want to take a step further than suggesting a logarithmic preference curve. I contend that if we strip away all confounding factors, intensity possesses a lexical superiority over duration in our preferences. Here’s what I mean:

Let’s analyze why someone might prioritize ending a long-duration mild pain (like mild chronic pain) by undergoing a short-duration intense pain (like a painful invasive surgery). I claim that this choice is not driven by the total number of seconds endured. Instead, the persistent nature of chronic pain degrades the system's integrity in a way that fundamentally alters the intensity of the total experience.

Consider these factors:

1-) As I stated earlier, duration is not just a multiplier; it is a catalyst for a qualitative shift. (In the context of my Five-Phase Model, after a certain threshold a prolonged Level 3 experience does not simply "add up" to a large Level 3 sum; it undergoes a functional priority shift, effectively transforming into, say, a Level 4 state.)

2-) There is a profound opportunity cost of utility. Persistence of pain acts as a "utility block," preventing the subject from accessing or enjoying positive states throughout the entire duration. (Furthermore, the realization that this obstacle is enduring creates a secondary layer of profound psychological disutility, distinct from the sensory pain itself.)

Therefore, I believe that the "value" we assign to duration is actually a value judgment about the quality of the state the subject is forced to inhabit, rather than a preference for a smaller numerical sum.

This perspective suggests that in a perfectly isolated system (where the "self" is reset and no systemic degradation carries over) the accumulation of duration might never alter the preference for intensity. If we strip away the biological "wear and tear" and the psychological erosion that typically accompanies time, we are left with a pure hierarchy of states. To test this claim, consider the reincarnation thought experiment I provided at the beginning of this comment.

I agree pain intensities cannot be arbitrarily close. However, consider N = 100. Would you prefer averting, for example, i) 0.1 s of pain of intensity level 100 with probability 10^-100 over ii) 10^100 years of pain of intensity level 99 with probability 1? The expected pain of ii) is 3.12*10^208 (= 10^100*99*1/(0.1/60^2/24/365.25*100*10^-100)) times that of i).

Regarding this question: If we assume a lexical priority between these categories, we should prefer to avert option (i) regardless of the numerical magnitude of duration or probability. 

The first reason this conclusion might feel counterintuitive is that we tend to overlook the stipulation that duration and intensity are independent variables here, even though, as I emphasized in my previous comment, they are deeply coupled in biological reality.

The second potential reason is that we cannot imagine 100 distinct categories of pain as our cognitive hardware is not built for that level of resolution. Actually, the logic remains identical to our example using levels 4 and 5. 

(My choice of a 5-category model was not arbitrary. I believe the number of distinct 'negative states of being' an organism can experience is likely capped within this range and is unlikely to exceed 6-8. Beyond a certain point, adding more 'precision' to pain provides no survival advantage. So, this limited number of categorical urgency states is more than sufficient for effective decision-making.)

However, if a biological system truly possessed that many categories, this wouldn't conflict with our intuitions, as the transition from 99 to 100 would be perceived as a stark, undeniable shift in experience (which ties back to my definition of a phase transition, where each jump represents a fundamental shift in the organism's functional priorities.) 

Actually, my argument doesn't hinge on N being theoretically finite.

My central claim is this: Level 5 pain is not simply 'Level 4 pain + 1 unit.' They represent fundamentally different categories of experience. 

The issue with this example is that it still treats N and N−1 as points on the same scalar continuum, which fails to capture the categorical distinction at the core of my proposition.

In my argument, if Level 5 pain is an 'apple,' Level 4 pain is an 'orange.' And when we are dealing with apples and oranges, numerical superiority alone is insufficient for a value comparison. If we say that apples have lexical priority over oranges, the initial coefficients (duration and probability) become irrelevant. 

While mathematical models can theoretically increase N to infinity, the subjective reality of suffering is biologically capped, because the nervous system has a physiological ceiling.

Sensory receptors and neural circuits have finite firing rates. Once the system reaches "saturation," the neural "bandwidth" is fully occupied. At this point, any further increase in the external stimulus intensity fails to translate into a higher degree of perceived pain because the biological hardware simply cannot transmit signals any faster or more intensely.

Furthermore, the body possesses intrinsic protective mechanisms that act as a natural circuit breaker for the conscious mind. When pain reaches a critical threshold that threatens systemic homeostasis, it often triggers a "shutdown" mechanism, such as fainting, metabolic shock, or a dissociative state. These responses serve as a natural limit to subjective experience.
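These two ceilings can be sketched as a single response function. The shape and the thresholds below are invented purely for illustration: perceived intensity tracks the stimulus, saturates at the firing-rate ceiling, and drops to zero once the protective shutdown threshold is crossed.

```python
SATURATION = 5.0   # hypothetical ceiling of neural signalling
SHUTDOWN = 8.0     # hypothetical stimulus level triggering fainting/shock

def perceived_intensity(stimulus):
    """Illustrative mapping from stimulus magnitude to experienced
    intensity: linear, then saturated, then cut off by protective
    shutdown (fainting, metabolic shock, dissociation)."""
    if stimulus >= SHUTDOWN:
        return 0.0                    # circuit breaker: experience terminates
    return min(stimulus, SATURATION)  # firing rates cap perceived intensity

assert perceived_intensity(3.0) == 3.0   # below ceiling: tracks stimulus
assert perceived_intensity(6.0) == 5.0   # saturated: no further increase
assert perceived_intensity(9.0) == 0.0   # shutdown: subjective limit
```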


 

Thank you for the thought experiment! Here’s my current thinking about it:

The premise of "99.99999999% of X" assumes that pain exists on a perfectly smooth, linear scale that can be infinitely divided. However, from a functional perspective, it would be evolutionarily absurd for every infinitesimal change in stimulus to have a unique affective counterpart. If the brain had to run a different neurological "program" for 90 °C versus 90.00001 °C, for example, the computational overhead would be catastrophic.

Instead, it feels more neurobiologically grounded to model pain as operating through discrete phase transitions. This perspective would highlight the fact that the nervous system cares about categorical urgency rather than focusing on the infinite precision of a scalar value.

I think a five-phase discrete model like this could be used (where each jump represents a fundamental shift in the organism's functional priorities):

  1. Background Noise: A state similar to the "Solid" phase of homeostasis; data is processed but remains subconscious.
  2. Informative Discomfort: A low-energy awareness that suggests minor behavioral adjustments.
  3. Behavioral Re-prioritization: A phase where the signal energy demands the abandonment of non-essential tasks.
  4. Urgent System Override: A high-intensity state that prioritizes immediate survival over higher reasoning.
  5. Systemic Collapse (Agony): A critical point similar to a "Plasma" state; a catastrophic failure where the system breaks down, resulting in cognitive fragmentation.

In this model, the ethics of phase transitions are governed by lexical priority, meaning that even an infinite amount of Level 2 discomfort can never add up to a Level 5 phase. However, lexical priority only prevents us from trading a lower level for a higher one; it doesn't forbid arithmetic within the same phase. Therefore, when sensations occupy the same level, quantitative calculations become perfectly permissible.
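One way to make "lexical priority between phases, arithmetic within a phase" precise is to represent each aversive episode as a (level, intensity × duration) pair and compare episodes lexicographically, so any amount at a higher level dominates any amount at a lower one. A minimal sketch (the representation is my own, not a standard formalism):

```python
def disutility_key(level, intensity, duration_s):
    """Represent an episode so that tuple comparison is lexical in
    level first, and only then aggregative (intensity * duration)."""
    return (level, intensity * duration_s)

# An astronomically long Level 2 episode...
long_mild = disutility_key(level=2, intensity=2, duration_s=10**100)
# ...never outranks even a brief Level 5 episode:
brief_agony = disutility_key(level=5, intensity=5, duration_s=0.1)
assert brief_agony > long_mild

# Within the same level, ordinary arithmetic applies:
assert disutility_key(3, 3, 60) > disutility_key(3, 3, 30)
```

Python's built-in tuple ordering happens to be exactly lexicographic, which is why the comparison operator alone encodes the whole priority structure.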

Coming to your question, I believe that 99.99999999% of X and X are functionally identical, as the infinitesimal difference between them becomes irrelevant to the organism's affective state. Let’s say X refers to the highest phase of Systemic Collapse. Because the system has already crossed the final 'boiling point' into the catastrophe phase, it is already operating in a state of maximum functional disruption, making X and 99.99999999% of X biologically indistinguishable. Since these pains occupy the same biological orbital, we can apply a quantitative comparison.

To find the total disutility (U), let’s use the formula U = I × T, where I is the intensity and T is the duration. In my model, the highest level of systemic collapse is Level 5, so let’s assign X a value of 5; 99.99999999% of X then becomes 4.9999999995.

Option A (X for 1 second):
U_A = 5 × 1 s = 5 units

Option B (99.99999999% of X for 10^100 years):
U_B = 4.9999999995 × (3.16 × 10^107 s) ≈ 1.58 × 10^108 units

Even without the probability factor, the math is undeniable: U_B ≫ U_A. Therefore, averting B is the rational conclusion. By preventing 10^100 years of systemic collapse, we eliminate an extreme amount of suffering, rather than agonizing over an intensity difference that remains below the threshold of neural resolution.
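The arithmetic above can be checked directly (using the 99.99999999% figure from the statement of the problem; the exact percentage barely matters, since both intensities round to ~5):

```python
SECONDS_PER_YEAR = 365.25 * 24 * 3600       # ~3.156e7 s

# Option A: intensity 5 for 1 second.
u_a = 5 * 1.0                               # 5 units

# Option B: 99.99999999% of intensity 5 for 10^100 years.
duration_b = 1e100 * SECONDS_PER_YEAR       # ~3.16e107 seconds
u_b = (0.9999999999 * 5) * duration_b       # ~1.58e108 units

assert u_b / u_a > 1e107                    # U_B overwhelmingly dominates
print(f"{u_b:.2e}")                         # ~1.58e+108
```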

But let’s consider these 2 pains:

  • A. Duration of 1 s, level 5 intensity, and probability of 10^-100.
  • B. Duration of 10^100 years, level 4 intensity, and probability of 1.

As long as we treat this as a purely abstract thought experiment where the variables are independent (viewing 10^100 years not as a single continuous experience, but as a series of completely independent events of Level 4 pain), increasing the duration becomes equivalent to increasing the population size. In this framework, my answer would be similar to the 'trillion dust specks' problem: I would choose to avert Option A, as I believe that even an infinitesimal chance of a staggering amount of Level 5 pain outweighs the certainty of a small amount of Level 4 pain, due to lexical priority.

However, I want to note that in a biological context, duration and intensity are inextricably linked. Pain has a cumulative nature. As duration extends, the collapse of cognitive and emotional resilience removes the brain's natural filters and causes long-term stimuli to evolve neurologically into a higher-intensity experience (neurons become increasingly reactive, or neuroplasticity lowers the pain threshold until even minor stimuli produce intense suffering, etc.).

If my conclusion feels counterintuitive, the discrepancy between our biological intuitions and abstract logical conclusions is likely the reason. In the real world, the variables in the ‘Total Pain = Intensity × Time’ equation are interdependent (and we tend to imagine these scenarios by projecting our lived biological experiences onto them); in this thought experiment, however, they are treated as independent factors.

Good questions, thank you!

I think most people would consider all life on Earth dying painlessly a catastrophe, even though no one would experience anything in the process.


I believe the reason people find the idea of a painless extinction so tragic is that they fundamentally confuse non-existence with a vacuum. This is a massive Category Error.

​In physics, a vacuum is still a "something", it has a metric, a coordinate system, and energy fields. It is a physical state. But non-existence isn't a "state" you fall into; it is the total deletion of all states. We fail to grasp this because we can't imagine a "total shutdown" without projecting a background stage (like darkness or silence) to hold it. 

​And that is the reason why people usually argue, "But think of all the music, the sunsets, and the joy we’d lose!" which is Circular Reasoning. We only value those things because we’re already here and wired to "thirst" for them. If the world ends painlessly, that thirst vanishes along with the water. No one is left behind to feel "deprived." You can't have a "loss" without a "loser" to experience it. 

​The trick our mind plays on us is the Phantom Observer effect. When you imagine the world ending, you’re secretly picturing yourself standing in the void, looking at a blank space and feeling sad. But in a total extinction, the observer is deleted too. You’re not "missing" the party; the party, the guests, and the very concept of "missing out" are all wiped from the map. 

​​TL;DR: People fear "nothingness" because they perceive it as a cold, hollow state. But once you grasp that nothingness isn’t a state of lack, but rather the "loss" losing its host, the tragedy evaporates. It is not a "loss of value"; it is the deletion of the very coordinate system where value exists. 


 If not, would you want the painless death of people who have a probability of experiencing more than 1 min of excruciating pain in their real future higher than 1 in 1 trillion, 10^-12, to eliminate the risk of them experiencing excruciating pain?

So, regarding this question: while I realize this is a total non-starter in any public discourse and feels counter-intuitive at first glance, my answer would be yes. I believe it only sounds radical because we are biologically programmed to protect the 'coordinate system' at all costs. However, once you see that non-existence isn't 'losing the water' but 'deleting the thirst,' you weigh the reality of suffering against the 'neutrality' of non-existence, and the conclusion becomes unavoidable. It is hard to internalize this reasoning because our minds are designed to value things within the system, but once you step outside that 'coordinate system,' the conclusion follows on its own.

Even if there were a 'Super-Observer' in the universe who experienced the sum of every independent event, an infinite sum of mild annoyances might still fail to add up to a single instance of torture.

 

The denier of replacement must think that there’s a pain at some amount of intensity so that any number of pains at lower intensity is less bad than that single pain at the higher level of intensity.

 

In fact, such a claim is highly plausible. Sometimes, even if you have a trillion small things, their addition is not enough to create a higher level of intensity. We see this phenomenon everywhere in nature. In physics, for example, you can gather a trillion low-frequency radio waves, but they will never have the power to displace an electron like a single gamma ray can. In thermodynamics, a trillion raindrops at 20°C will never "add up" to the scorching heat of a single 10,000°C plasma bolt. We might similarly suggest that a trillion small bad feelings can never equal the horror of one true moment of agony. Simply increasing the quantity of something does not necessarily change its fundamental quality.
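The radio-wave/gamma-ray case can be made quantitative with the photon energy relation E = h·f: in the photoelectric effect, electron ejection is decided photon by photon against the metal's work function, so a trillion sub-threshold photons can carry more total energy than the threshold requires and still eject nothing. The numbers below use standard constants and a work function of ~4.5 eV, roughly typical for common metals.

```python
H = 6.626e-34              # Planck constant, J*s
EV = 1.602e-19             # joules per electron-volt
WORK_FUNCTION = 4.5 * EV   # ~typical metal work function, J

e_radio = H * 1e8          # 100 MHz radio photon: ~6.6e-26 J
e_gamma = H * 1e20         # gamma-ray photon:     ~6.6e-14 J

assert e_radio < WORK_FUNCTION   # each radio photon is below threshold
assert e_gamma > WORK_FUNCTION   # a single gamma photon exceeds it

# The aggregate energy of a trillion radio photons exceeds the threshold,
# yet no electron is ejected, because the comparison happens per photon:
assert 1e12 * e_radio > WORK_FUNCTION
```

This is the structural point of the analogy: quantity can dominate on the aggregate measure while remaining irrelevant on the per-event measure that actually governs the outcome.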

In my opinion, the core flaw of the "Replacement Argument" lies right there: in its assumption that suffering is a perfectly linear and infinitely additive variable. Under this purely quantitative view, if we let ϵ represent an infinitesimal unit of discomfort and A the disutility of a singular state of profound agony, the theory dictates that an infinite accumulation of these trivial annoyances must eventually outweigh that agony, expressed mathematically as: there exists some finite n for which n × ϵ > A.

However, this continuous model might be fundamentally misrepresenting the physiological realities of sentience. Our brains are not simple 19th-century sliders; they do not process information on a linear scale. Instead, they are hyperoptimized data processing machines designed by evolution to sort signals into tiered categories of "minor significance" versus "catastrophic priority." 

It is well grounded in neurobiology to view the difference between a trillion small discomforts and a single moment of true agony as a massive "state transition" or a "quantum leap" in importance. Mechanistically speaking, a dust speck triggers low-threshold Aβ fibers that signal the thalamus. As the brain’s gatekeeper, the thalamus identifies these as low-priority "background noise" and filters most of them out. The signals that do survive are processed as minor sensory inputs that lack the biological weight required to engage the brain's survival systems. Torture, conversely, triggers a completely different set of high-threshold nociceptors (Aδ and C fibers). This recruitment ignites the "agony circuits" (the Anterior Cingulate Cortex and the Insular Cortex), triggering a systemic breakdown of the psychological and physiological self.

This is not merely "intense touch"; it is a fundamentally different state of being. Firing a dust signal a trillion times is never equivalent to firing an agony signal once; we cannot stack low intensity inputs to force a high intensity neurological state. Because evolution has built a sharp "cliff" between these levels of importance, we can never simply add up low priority signals to create a high priority emergency. 

Ultimately, the idea that agony possesses a unique intensity that no amount of lower-level pain can ever reach might not only be plausible but analytically necessary if we adopt the view that 'suffering' is not a uniform currency, but a series of discrete state transitions. And as I explained, this model would be far more congruent with evolutionary biology as our neural architecture is hardwired for survival-critical prioritization, rather than the mere arithmetic summation of inputs.

I think by decoupling moral philosophy from the actual mechanics of the nervous system, we risk creating a "theoretically consistent" but biologically impossible ethics. Think of it like this: I can create a fictional physics where gravity works in reverse. My math for calculating orbital mechanics in that universe will be perfectly "internally consistent," but I’ll still never launch a rocket in THIS one. 

Ethics should be treated like a branch of physics (specifically, the physics of affective experience), not just a branch of math. In other words, our "moral arithmetic" must be built on the actual hardware of the brain, not on abstract lines that stretch to infinity, and we should treat affective neuroscience as our "Law Book" in the process.

 

Additional Thought:

While scope neglect is real, I think it is not the reason why we reject the utilitarian calculus. We reject it because we recognize qualitative lexicality. On an experiential level, we know that certain states are not merely quantitative intensifications of the same feeling but belong to an entirely different ontological order.

Aggregative utilitarians talk about 'Total Badness' as if there’s a giant, cosmic Excel sheet in the sky:) But one might simply reject these frameworks in favor of a person-affecting view, which I find far more intuitive.

Suffering is subject-dependent; it exists only within a conscious vessel. A trillion dust specks in a trillion different eyes are a trillion isolated events. They never 'meet' to form a collective mountain of pain.

1-) In Case A (Torture), one consciousness experiences 100% of the agony. 

2-) In Case B (Dust Specks), no single consciousness experiences more than a 0.000001% discomfort.

If no single observer in the universe experiences a 'catastrophe,' can we truly say a catastrophe has occurred? In my opinion, by aggregating across separate minds, we create a 'phantom suffering' that no one actually feels. There is no 'Super-Observer' in the universe who feels the sum of those trillion specks:)

Additional Thought: 

We can also apply John Rawls’s 'Veil of Ignorance' to test whether a trillion dust specks are truly worse than a single case of torture. Imagine you are behind a curtain, about to be born into the world, but you have no idea which 'conscious vessel' you will inhabit. You are given two choices:

  • World A: One person is subjected to 100% catastrophic torture while the other trillion minus one people live happily.
  • World B: A trillion people each experience a 0.000001% dust speck in their eye.

If the 'Total Badness' of a trillion specks were truly greater than torture, a rational person behind the Veil would have to choose World A to avoid the 'larger' catastrophe. I don't know about you but I would never take that gamble.
