Thanks for the comment, Jeff!
How extraordinary does the evidence need to be?
I have not thought about this in any significant detail, but it is a good question! I think David Thorstad's series Exaggerating the risks has some relevant context.
You can easily get many orders of magnitude changes in probabilities given some evidence. For example, as of 1900 on priors the probability that >1B people would experience powered flight in the year 2000 would have been extremely low, but someone paying attention to technological developments would have been right to give it a higher probability.
Is that a fair comparison? I think the analogous comparison would involve replacing terrorist attack deaths per year by the number of different people travelling by plane per year. So we would have to assume, in the last 51.5 years, only:
Then we would ask about the probability that every single human (or close) would travel by plane next year, which would a priori be astronomically unlikely given the above.
I've written something up on why I think this is likely: Out-of-distribution Bioattacks. Short version: I expect a technological change which expands which actors would try to cause harm.
I am glad you did. It is a useful complement/follow-up to my post. I qualitatively agree with the points you make, although it is still unclear to me how much higher the risk will become.
(Thanks for sharing a draft with me in advance so I could post a full response at the same time instead of leaving "I disagree, and will say why soon!" comments while I waited for information hazard review!)
You are welcome, and thanks for letting me know about that too!
Thanks for helping organise the donation events, Lizka!
In agreement with my comment last year, I made 97 % of this year's donations a few months ago to the Long-Term Future Fund (LTFF). However, I am now significantly less confident about existential risk mitigation being the best way to improve the world:
I said 97 % above rather than 100 % because I have just made a small donation to the EA Forum Donation Fund[1], distributing my votes fairly similarly across the LTFF, Animal Welfare Fund, and Rethink Priorities. The LTFF may still be my top option, so I could have put all my votes on it (related dialogue). On the other hand:
Side note: no donation icon showed up after my donation. I am not sure whether one is supposed to appear. Update: you have to DM @EA Forum Team.
That makes a lot of sense, Keller! In addition, donation splitting seems to make the most sense within cause areas, but diminishing returns here can be mitigated by donating to funds (e.g. Animal Welfare Fund) instead of particular charities (e.g. The Humane League).
Nice post, Jack!
Relatedly, Benjamin Todd did an analysis which supports the importance of cause prioritisation. It concludes that differences in cost-effectiveness within causes are smaller than previously thought:
Overall, my guess is that, in an at least somewhat data-rich area, using data to identify the best interventions can perhaps boost your impact in the area by 3–10 times compared to picking randomly, depending on the quality of your data.
This is still a big boost, and hugely underappreciated by the world at large. However, it’s far less than I’ve heard some people in the effective altruism community claim.
One limitation of Ben's analysis is that it only looked into nearterm human-focussed interventions, as there were no good data in other areas.
Thanks for the post, Jack!
In Uncertainty over time and Bayesian updating, David Rhys Bernard estimates how quickly uncertainty about the impact of an intervention grows as the time horizon of the prediction increases. He shows that a Bayesian should put decreasing weight on longer-term estimates. Importantly, he uses data from various development economics randomised controlled trials, and it is unclear to me how much the conclusions generalise to other interventions.
For me the following is the most questionable assumption:
Constant variance prior: We assume that the variance of the prior was the same for each time horizon whereas the variance of the signal increases with time horizon for simplicity.
[...]
If the variance of the prior grows at the same speed as the variance of the signal then the expected value of the posterior will not change with time horizon.
I think the rate of increase of the variance of the prior is a crucial consideration. Intuitively, I would say the variance of the prior grows at the same speed as the variance of the signal, in which case the signal would not be discounted.
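To make this concrete, here is a standard normal-normal update in my own notation (not from David's post). With a normal prior $N(\mu_0, \sigma_0^2)$ and a normal signal $x$ with variance $\sigma_s^2$, the posterior mean is a precision-weighted average of the two:

$$\mathbb{E}[\text{posterior}] = \frac{\sigma_s^2\,\mu_0 + \sigma_0^2\,x}{\sigma_0^2 + \sigma_s^2}.$$

If $\sigma_0^2$ and $\sigma_s^2$ are multiplied by the same factor as the time horizon grows, the weights on $\mu_0$ and $x$ do not change, so the signal would not be discounted more heavily at longer horizons.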
Thanks for elaborating further!
I just don't think it makes any sense to have an aggregated total measure of "welfare". We can describe what is the distribution of welfare across the sentient beings of the universe, but to simply bunch it all up has essentially no meaning.
I find it hard to understand this. I think 10 billion happy people is better than no people. I guess you disagree with this?
It's moral because the terrorist is infringing the wishes of those people right now, and violating their self-determination. If the people decided to infect themselves, then it would be ok.
I think respecting people's preferences is a great heuristic for doing good. However, I still endorse hedonic utilitarianism rather than preference utilitarianism, because it is possible for someone to have preferences which are not ideal for achieving their own goals. (As an aside, Peter Singer used to be a preference utilitarian, but is now a hedonistic utilitarian.)
Sure, the suffering is an additional evil, but the killing is an evil unto itself.
No killing is necessary given an ASI. The preferences of humans could be modified such that everyone is happy with an ASI taking over the universe. In addition, even if you think killing without suffering is bad in itself (and note an ASI may even make the killing pleasant for humans), do you think that badness would outweigh an arbitrarily large happiness?
Rocks aren't sentient, they don't count.
I think rocks are sentient in the sense that they have a non-null expected welfare range, but it does not matter because I have no idea how to make them happier.
What if you can instantly vaporize everyone with a thermonuclear bomb, as they are all concentrated within the radius of the fireball? Death would then be instantaneous. Would that make it acceptable? Very much doubt it.
No, it would not be acceptable. I am strongly against negative utilitarianism. Vaporising all beings without any suffering would prevent all future suffering, but it would also prevent all future happiness. I think the expected value of the future is positive, so I would rather not vaporise all beings.
Thanks for the clarifications!
Do you see ways for this sort of change to be decision relevant?
Never mind. I think the model as it is makes sense because it is more general. One can always specify a smaller probability of the intervention having no effect, and then account for other factors in the distribution of the positive effect.
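For illustration, here is a minimal sketch of what I have in mind, with made-up numbers (the probability of no effect and the lognormal parameters are purely hypothetical, not taken from the model under discussion):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical inputs, purely for illustration.
p_no_effect = 0.3            # probability the intervention has no effect at all
mu, sigma = 1.0, 0.5         # lognormal parameters for the positive effect

# Mixture model: zero effect with probability p_no_effect, lognormal effect otherwise.
n = 100_000
has_effect = rng.random(n) > p_no_effect
effect = np.where(has_effect, rng.lognormal(mu, sigma, n), 0.0)

print(f"Expected effect: {effect.mean():.2f}")
```

Other factors (e.g. partial effectiveness) can then be folded into the distribution of the positive effect rather than into the probability of no effect.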
However, there are costs to greater configurability, and we opted for less configurability here. Though I could see a reasonable person having gone the other way.
Right. If it is not super easy to add, then I guess it is not worth it.
Thanks for clarifying, Michael! Your approach makes sense to me. On the other hand, the value of EA Funds' Global Health and Development Fund in its current form remains unclear to me.
Thanks for writing this, and mentioning my related post, Jeff!
I think this depends on how fast safety measures like the ones you mentioned are adopted, and how the offence-defence balance evolves with technological progress. It would be great if Open Phil released the results of their efforts to quantify biorisk, one of whose aims was: