Meta: I count 25 question marks in this "quick" poll, and many of the questions appear to be seriously confused. A proper response here would take many hours.
Take your scenario number 5, for instance. Is there any serious literature examining this? Are there any reasons why anyone would assign that scenario >epsilon probability? Do any decisions hinge on this?
>The mean estimate was [that bees suffer] around 15% as intensely as people.
To clarify: does this mean that, when comparing the suffering of one human with the suffering of a beehive, the estimate implies that some people feel the beehive is the worse moral problem? That strongly contradicts my moral intuitions.
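For concreteness, here is the back-of-envelope calculation I have in mind; the hive size of roughly 20,000 bees and the simple additive welfare calculus are my own assumptions, not figures from the poll:

$$0.15 \times 20{,}000 = 3{,}000 \gg 1$$

Under those assumptions, a suffering hive comes out thousands of times worse than one suffering human, which is what I find counterintuitive.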
This seems to be of questionable effectiveness. Brief answers/challenges:
Evaluations are a key input to ineffective governance. The safety frameworks presented by the frontier labs are "safety-washing", more appropriately considered roadmaps towards an unsurvivable future.
Disagreements on AI capabilities underpin performative disagreements on AI Risk. As far as I know, no substantial such disagreement has been published recently - I'd like sources for your claim, please.
We don't need more situational awareness of what current frontier models can and cannot do in order to respond appropriately. No decision-relevant conclusions can be drawn from evaluations in the style of Cybench and Re-Bench.
I'm also practicing how to give good presentations and introductions to AI Safety. You can see my YouTube channel here:
You might also be interested in one of my older presentations, number 293, which is closer to what you are working on.
Feel free to book a half-hour chat about this topic with me on this link:
>PauseAI suffers from the same shortcomings most lobbying outfits do...
I'm confused about this section: Yes, this kind of lobbying is hard, and the impact of a marginal dollar is very unclear. The acc-side also has far more resources (probably; we should be wary of this becoming a Bravery Debate).
This doesn't feel like a criticism of PauseAI. Limited tractability is easily outweighed by a very high potential impact.
=Confusion in "What mildest scenario do you consider doom?"=
My probability distribution looks like what you call the MIRI Torch and what I call the MIRI Logo: scenarios 3 to 9 aren't well described in the literature because they are not stable equilibria. In the real world, once you are powerless, worthless, and an obstacle to those in power, you just end up dead.
=Confusion in "Minimum P(doom) that is unacceptable to develop AGI?"=
For non-extreme values, the concrete estimate and most of the considerations you mention are irrelevant. The question is morally isomorphic to "What percentage of the world's population am I willing to kill in expectation?". Answers such as "10^6 humans" and "10^9 humans" are both monstrous, even though your poll would rate them very differently.
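To make the isomorphism concrete (taking the world population to be roughly $8 \times 10^9$, my rounding):

$$\text{expected deaths} = P(\text{doom}) \times 8 \times 10^{9}$$

so a threshold of $P(\text{doom}) \approx 0.0125\%$ already corresponds to about $10^6$ expected deaths, and $\approx 12.5\%$ to about $10^9$.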
These possible answers don't become moral even if you think that it's really positive that humans don't have to work any longer. You aren't allowed to do something worse than the Holocaust in expectation, even if you really really like space travel or immortality, or ending factory farming, or whatever. You aren't allowed to unilaterally decide to roll the dice on omnicide even if you personally believe that global warming is an existential risk, or that it would be good to fill the universe with machines of your creation.