Thanks for commenting!
I find this argument unconvincing. The vast majority of 'simulations' humans run are very unlike our actual history. The modal simulated entity to date is probably an NPC from World of Warcraft, a zergling from Starcraft or similar. This makes it incredibly speculative to imagine what our supposed simulators might be like, what resources they might have available and what their motivations might be.
Agreed, but could you explain why that would be an objection to Brian's argument?
Also, the vast majority of 'simulations' focus on 'exciting' moments - pitched Team Fortress battles, epic RPG narratives, or at least active interaction with the simulators. If you and your workmates are just tapping away at your keyboards in your office doing theoretical existential risk research, the probability that someone like us has spent their precious resources to (re)create you seems radically lower than if you're (say) fighting a pitched battle.
I do not know, because I agree with your first paragraph that it is quite hard to predict future simulated entities based on past history.
Thanks for sharing!
At this moment, the problem of shrimp production is greater in scale (i.e., the number of individuals affected) than the problem of insect farming, fish captures, or the farming of any vertebrate for human consumption. Thus, while the case for shrimp sentience is weaker than that for vertebrates and other decapods, the expected value of helping shrimp and prawns might be higher than the expected value of helping other animals.
I Fermi estimated, using Rethink's median welfare ranges, and holding the ratio between welfare per unit time and that range constant across farmed animals of different species (and defined based on data from the WFP for broilers in conventional scenarios), that the annual badness of the lives of all farmed shrimp and prawns is 7.48 times the annual goodness of all human lives. In comparison, I concluded the lives of all farmed fish and chickens are 1.52 and 1.74 times as bad as human lives are good. Of course, all these numbers have huge uncertainty!
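In case it is useful, here is a minimal sketch of how such an estimate can be structured, namely multiplying the animal-years lived per calendar year by the welfare per animal-year for each species, and then taking the ratio. All numbers in the code are hypothetical placeholders, not the actual figures from Rethink's welfare ranges or the WFP data mentioned above.

```python
# Minimal sketch of the structure of such a Fermi estimate.
# All numbers are hypothetical placeholders, NOT the actual welfare ranges
# or Welfare Footprint Project data referred to above.

def annual_welfare(animal_years_per_year, welfare_per_animal_year):
    """Total welfare accrued per calendar year by a population."""
    return animal_years_per_year * welfare_per_animal_year

# Hypothetical placeholder inputs.
shrimp_badness = annual_welfare(
    animal_years_per_year=2e11,    # farmed-shrimp-years lived per year (placeholder)
    welfare_per_animal_year=-0.01, # negative welfare per shrimp-year (placeholder)
)
human_goodness = annual_welfare(
    animal_years_per_year=8e9,     # human-years lived per year (placeholder)
    welfare_per_animal_year=0.3,   # positive welfare per human-year (placeholder)
)

# Ratio analogous to the 7.48 figure above (placeholder inputs give a different value).
print(abs(shrimp_badness) / human_goodness)
```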
Thanks for sharing, Lizka! It is nice to have a better sense of what the EA Forum is doing behind the scenes.
Hi Christian,
You say:
A key insight for funders who value cost-effectiveness is that the negative effects of large-scale nuclear wars are disproportionately worse than the negative effects of more limited nuclear exchanges. In other words, nuclear wars are not created equal and the costs of nuclear war increase super-linearly with the size of nuclear war.
Are you thinking about the size of a nuclear war in terms of the number of offensive nuclear detonations? If these are proportional to the soot injected into the stratosphere, it looks like the climatic effects would increase roughly linearly with the size of the nuclear war, according to Fig. 5b of Xia 2022.
To be precise, from the data in Table 1, the linear regression with null intercept of the number of people without food in year 2 on the soot injected into the stratosphere has a coefficient of determination (R^2) of 96.8 %. So I wonder whether this is compatible with your claim about superlinearity.
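For reference, below is a minimal sketch of the check I have in mind, a least-squares regression through the origin plus a coefficient of determination. The arrays are placeholder values, not the actual data from Table 1 of Xia 2022, and note that R^2 conventions differ somewhat for zero-intercept fits.

```python
# Minimal sketch of a zero-intercept linear fit of people without food in
# year 2 against stratospheric soot, plus R^2.
# The arrays below are placeholders, NOT the actual data from Table 1 of Xia 2022.
import numpy as np

soot_tg = np.array([5.0, 16.0, 27.0, 37.0, 47.0, 150.0])   # placeholder soot injections (Tg)
people_no_food = np.array([0.3, 0.9, 1.5, 2.0, 2.6, 5.0])  # placeholder, billions of people

# Least-squares slope for a regression through the origin: y ≈ b * x.
slope = np.sum(soot_tg * people_no_food) / np.sum(soot_tg**2)
predicted = slope * soot_tg

# Coefficient of determination relative to the mean (one common convention).
ss_res = np.sum((people_no_food - predicted) ** 2)
ss_tot = np.sum((people_no_food - np.mean(people_no_food)) ** 2)
r_squared = 1 - ss_res / ss_tot
print(f"slope = {slope:.3f}, R^2 = {r_squared:.3f}")
```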
PS: you might want to reply to this comment.
Thanks for updating such a valuable resource! Reading the guide in early 2019 was my introduction to EA, and it made me change my career and life plans a lot!
Agreed, but if the lifespan of the only world is much shorter due to the risk of simulation shut-down, the loss of value due to extinction is smaller. In any case, this argument should be weighed together with many others. I personally still direct 100 % of my donations to the Long-Term Future Fund, which is essentially funding AI safety work. Thanks for your work in this space!