
Introduction

At the GCP Workshop last weekend, we discussed what’s known as the “Transmitter Room Problem,” a thought experiment developed by Scanlon:

“Suppose that Jones has suffered an accident in the transmitter room of a television station. Electrical equipment has fallen on his arm, and we cannot rescue him without turning off the transmitter for fifteen minutes. A World Cup match is in progress, watched by many people, and it will not be over for an hour. Jones’s injury will not get any worse if we wait, but his hand has been mashed and he is receiving extremely painful electrical shocks.”[1]

To make the argument stronger, we extend the example to a hypothetical “Galactic Cup” in which the number of potential viewers is arbitrarily large. Moreover, we assume that Jones suffers as much pain as a human can survive, so as to consider the most extreme version of the example.

If this individual is spared, the broadcast must be stopped, causing countless viewers across the galaxy to miss the event and feel upset. The question is whether, at some unimaginably large scale, the aggregated minor distress of countless viewers could outweigh one person’s extreme, concentrated suffering. 
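To make the aggregation at stake explicit, here is a minimal formalisation under a naive additive model (a sketch in our own notation, not Scanlon’s): write S for Jones’s suffering, A for one viewer’s annoyance (both positive), and X for the number of viewers.

```latex
% Naive additive aggregation (toy model, our notation):
% S = Jones's suffering, A = one viewer's annoyance, X = number of viewers.
\[
  A \cdot X \;>\; S \quad \Longleftrightarrow \quad X \;>\; \frac{S}{A}
\]
```

Under simple summation, any finite amount of suffering is therefore outweighed once the audience exceeds S/A viewers; the dispute is over whether this kind of summation is the right way to compare the two outcomes.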

This dilemma is deeply counterintuitive. Below are two approaches we’ve seen proposed elsewhere, and a third that came up during our discussion and that we’d like thoughts on.

 

1. Biting the Bullet


In an episode (at 01:47:17) of the 80,000 Hours Podcast, Robert Wiblin discusses this problem and argues that allowing the individual’s suffering might be permissible when weighed against the aggregate discomfort of an immense number of viewers. He suggests that the counterintuitive nature of this conclusion arises from our difficulty in intuitively grasping large numbers. Wiblin further points out that we already accept analogous harms in real-world scenarios, such as deaths during stadium construction or environmental costs from travel to large-scale events. On this view, the aggregate utility of the broadcast outweighs the extreme disutility experienced by the individual.

 

2. Infinite Disutility


Another perspective posits that the extreme suffering of a single person can generate infinite disutility.[2] For instance, the pain of a single individual experiencing every painful nerve firing at once can be modeled as negative infinity in utility terms. Under this framework, no finite aggregation of the mild discomfort of viewers could counterbalance the individual’s suffering. While this approach sidesteps the problem of large numbers, it introduces a new challenge: it implies that two individuals undergoing such extreme suffering are no worse off than one, as both scenarios involve the same negative infinity in utility. It might also be prudent to reserve infinitely negative utility for the true worst-case scenarios.

A toy model of what a utility function could look like. As suffering approaches 200 (arbitrarily chosen), utility approaches negative infinity.
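For concreteness, here is a minimal Python sketch of such a function (our own construction; the threshold of 200 and the functional form −s/(200 − s) are arbitrary illustrations, not taken from the figure):

```python
def toy_utility(suffering: float, threshold: float = 200.0) -> float:
    """Toy utility of an individual as a function of their suffering.

    Returns 0 for no suffering and diverges to negative infinity as
    suffering approaches the threshold (200, arbitrarily chosen),
    mirroring the figure above.
    """
    if not 0 <= suffering < threshold:
        raise ValueError("suffering must lie in [0, threshold)")
    # -s / (threshold - s): roughly linear for small s,
    # vertical asymptote at s = threshold.
    return -suffering / (threshold - suffering)


# A viewer's mild annoyance vs. suffering close to the survivable maximum.
print(toy_utility(1.0))    # ≈ -0.005
print(toy_utility(199.9))  # ≈ -1999.0
```

Any function with a vertical asymptote at the survivable maximum would do; the point is only that utility falls off much faster than linearly as suffering becomes extreme.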

 

3. The Light Cone Solution


This approach begins by assuming that the observable universe (“our light cone”) is finite with certainty.[3]  Even the Galactic Cup cannot reach an infinite audience due to constraints such as the finite lifespan of the universe and the expansion of space, which limits the number of sentient beings within our causal reach. Given these boundaries, the number of potential viewers is finite, albeit astronomically large. 

By assigning a sufficiently large (but now finite) negative utility to the individual’s extreme suffering, this perspective ensures that it outweighs the aggregate discomfort of any audience whose size remains within physical limits. At the same time, because that negative utility is finite, multiple individuals in extreme suffering are still worse off than one, so this approach avoids the implication that plagued the infinite-disutility view.
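As a minimal numerical sketch of this move (our own toy numbers; N_MAX, the per-viewer annoyance, and Jones’s assigned disutility are illustrative assumptions, not physical estimates):

```python
# Toy version of the Light Cone Solution (illustrative numbers only).
N_MAX = 10**90          # assumed upper bound on viewers within our light cone
VIEWER_ANNOYANCE = -1   # disutility of one viewer missing the match (kept as an int to avoid float rounding)

# Choose a finite disutility for Jones that dominates even the worst case,
# i.e. the largest physically possible audience.
JONES_DISUTILITY = VIEWER_ANNOYANCE * N_MAX - 1


def should_save_jones(num_viewers: int) -> bool:
    """Save Jones iff his disutility is worse than the aggregate annoyance."""
    return JONES_DISUTILITY < VIEWER_ANNOYANCE * num_viewers


print(should_save_jones(N_MAX))                 # True: even the maximal audience loses
print(2 * JONES_DISUTILITY < JONES_DISUTILITY)  # True: two Joneses are worse than one
```

Because Jones’s disutility is finite, the second check goes through: two people in extreme suffering are strictly worse than one, which is exactly what the infinite-disutility approach could not say.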

The finiteness of the observable universe only resolves this specific version of the transmitter room problem, however: one can imagine a variation in which the broadcast doesn’t only affect people watching at the given moment. For the Light Cone Solution to hold, one must then give some reason why time is also finite, or why sentient beings can’t reach infinite numbers given infinite time. This makes relevant the question of whether heat death is real, along with other hypotheses for the universe becoming uninhabitable.

To sum up, assuming that the total number of sentient beings will be finite, both in time and space, and assuming that utility is sufficiently concave in suffering might lead to interesting conclusions relevant for EA. Regarding near-termist EA, the insight that the weighting of the intensity of suffering matters for cost-effectiveness analysis has probably been discussed elsewhere in detail. Regarding longtermism, the conclusion seems to be that one ought to prioritize the existence and value of the long-term future over avoiding suffering of small or medium intensity, but it is at least possible that there is a moral imperative to focus on avoiding suffering of large intensity today rather than on making that future happen. Relatedly, this proposed way out of the transmitter room problem made me find s-risks more relevant relative to x-risks.

 

Open questions

We’d be interested to hear if this solution to the transmitter room problem has been discussed elsewhere, and would be thankful for reading tips. Furthermore, we’re curious to hear others’ thoughts on these approaches or alternative solutions to the Transmitter Room Problem. Are there other perspectives we’ve overlooked? How should we weigh extreme suffering against dispersed mild discomfort at astronomical scales?

Acknowledgements

Thanks to Jian Xin Lim for loads of great comments and insight.  

  1. ^

      Scanlon, T.M. (1998). What We Owe to Each Other. Belknap Press. 

  2. ^

     To fix ideas, suppose we evaluate how bad a possible world is by a (possibly weighted) sum of utilities. Utilities then measure how much the suffering of an individual matters for overall welfare. I assume one could weaken a number of these assumptions and the argument might still work, but this would go beyond the scope of this post.

  3. ^
Comments (6)



I accept the bullet biting response. I think someone who doesn't should say the utility of the observers may outweigh Jones' utility but that you should save Jones for some deontic reason (which is what Scanlon says), or maybe that many small bits of utility spread across people don't sum in a straightforward way, and so can't add up to outweigh Jones' suffering (I think this is incorrect, but that something like it is probably what's actually driving the intuition). I think the infinite disutility response is wrong, but that someone who accepts it should probably adopt some view in infinite ethics according to which two people suffering infinite disutility is worse than one--adopting some such view may be needed to avoid other problems anyway.

The solution you propose is interesting, but I don't think I find it plausible:

1. If Jones' disutility is finite, presumably there is some sufficiently large number of spectators, X, such that their aggregate utility would outweigh his disutility. Why think that, in fact, the physically possible number of observers is lower than X?

2. Suppose Jones isn't suffering the worst torment possible, but merely "extremely painful" shocks, as in Scanlon's example. So the number of observers needed to outweigh his suffering is not X, but the lower number Y. I suppose the intuitive answer is still that you should save him. But why think the physically possible number of observers is below Y?

3. Even if, in fact, the physically possible number of observers is lower than X, presumably the fundamental moral rules should work across possible worlds. And anyway, that seems to be baked into the thought experiment, as there is in fact no Galactic Cup. But even if the physically possible number of observers is in fact lower than X, it could be higher than X in another possible world.

4. Even if the possible number of observers is in fact finite, presumably there are possible worlds with an infinite number of possible observers (the laws of physics are very different, or time is infinite into the future, or there are disembodied ghosts watching, etc.). If we think the solution should work across possible worlds, the fact that there can only be a finite number of observers in our world is then irrelevant.

5. You assume our lightcone is finite "with certainty." I assume this is because of the expected utility concern if there is some chance that it turns out not to be finite. But I think you shouldn't have epistemic certainty that there can only be a finite number of observers. 

6. The solution seems to get the intuitive answer for a counterintuitive reason. People find letting Jones get shocked in the transmitter case counterintuitive because they think there is something off about weighing one really bad harm against all these really small benefits, not because of anything having to do with whether there can only be a finite number of observers, and especially not because of anything that could depend on the specific number of possible observers. Once we grant that the reason for the intuition is off, I'm not sure why we should trust the intuition itself.

*I think your answer to 1-3 may be that there is no set-in-stone number of observers needed to outweigh Jones' suffering: we just pick some arbitrarily large amount and assign it to Jones, such that it's higher than the total utility possessed by however many observers there might happen to be. I am a realist about utility in such a way that we can't do this. But anyway, here is a potential argument against this:

Forget about what number we arbitrarily assign to represent Jones' suffering. Two people each suffering very slightly less than Jones is worse than Jones' suffering. Four people each suffering very slightly less than them is worse than their suffering. Etc. If we keep going, we will reach some number of people undergoing some trivial amount of suffering which, intuitively, can be outweighed by enough people watching the Galactic Cup--call that number of observers Z. The suffering of those trivially suffering people is worse than the suffering of Jones, by transitivity. So the enjoyment of Z observers outweighs the suffering of Jones, by transitivity. And there is no reason to think the actual number of possible observers is smaller than Z.

Thanks for such an in depth reply! I have two takes on your points but before that I want to give the disclaimer that I'm a mathematician, not a philosopher by training. 

First, we're not saying that the lightcone solution implies we should always save Jones. Indeed, there could still be a large enough number of viewers. What we are saying is this: previously, you could say that for any suffering S Jones is experiencing, there is some number of viewers X whose mild annoyance A would in aggregate be greater than S. What's new here is the upper bound on X, so A*X > S could still be true (and we let Jones suffer), but it can't necessarily be made true for any S by picking a sufficiently large X.

As to your point about there being different numbers of viewers X in different worlds, yep, I buy that! I even think it's morally intuitive that if more suffering A*X is caused by saving Jones, then we have less reason to do so. This for me isn't a case of moral rules not holding across worlds: the situations are different, but we're still making the same comparison (A*X vs S). I'll caveat this by saying that I've never thought too hard about moral consistency across worlds.

Mogensen and Wiblin discuss this problem in this podcast episode, fwiw. That's all I know, sorry.

Btw, if you really endorse your solution (and ignore potential aliens colonizing our corner of the universe someday, maybe), I think you should find GCP's take (and the take of most people on this Forum) on the value of reducing x-risks deeply problematic. Do you agree, or do you believe that the future of our light cone with humanity around doing things will not contain any suffering (or anything worse than the suffering of one Jones in the “Transmitter Room Problem”)? You got me curious.

I'm not sure I follow. Are you saying that accepting that there is a finite amount of potential suffering in our future would imply x-risk reduction being problematic? 

Sorry, that wasn't super clear. I'm saying that if you believe that there is more total suffering in a human-controlled future than in a future not controlled by humans, X-risk reduction would be problematic from the point of view you defend in your post.

So if you endorse this point of view, you should either believe x-risk reduction is bad or that there isn't more total suffering in a human-controlled future. Believing either of those would be unusual (although this doesn't mean you're wrong) which is why I was curious.

Executive summary: The Light Cone Solution proposes a resolution to the Transmitter Room Problem by asserting that the universe's finiteness imposes limits on the aggregation of mild discomfort, ensuring that extreme suffering of an individual should take priority over collective but minor distress. 

Key points:

  1. Biting the Bullet Approach: Some argue that extreme suffering can be outweighed by the aggregate discomfort of a vast number of individuals, though this is counterintuitive and relies on our difficulty grasping large numbers.
  2. Infinite Disutility Approach: This perspective suggests that extreme suffering has infinite negative utility, making it impossible to be counterbalanced by any finite aggregation of minor discomfort, though it raises issues like treating one and two extreme cases as equally bad.
  3. The Light Cone Solution: This approach assumes a finite observable universe, ensuring that even an arbitrarily large audience remains finite, meaning extreme suffering can still be assigned sufficiently negative finite utility to outweigh dispersed mild discomfort.
  4. Implications for Effective Altruism: The solution suggests that prioritizing the avoidance of extreme suffering may be a more immediate moral imperative compared to ensuring a valuable long-term future.
  5. Unresolved Questions: The argument depends on the assumption that sentient beings remain finite in number over time, raising questions about the universe’s future habitability and whether suffering should take priority over future-oriented existential risks.
  6. Call for Further Discussion: The post invites input on whether this resolution has been discussed elsewhere and welcomes alternative perspectives on balancing extreme suffering against dispersed minor discomfort.

 

 

This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.
