Axby

25 karma · Joined Jan 2022

Posts (3)

1 · Axby's Shortform · Axby · 1y ago · 1m read

Comments (3)

Axby · 12d · 10

Thanks for your comment, which helps me to zoom in on claims 4 and 5 in my own thinking. 

I was thinking of another point about the fallibility of intelligence: whether intelligence really allows an AGI to fully shape the future to its will. I had Laplace's Demon in mind, which asks: if there were a demon that knew the position and momentum of every atom in the universe, would it be able to predict (and hence shape) the future? I think it is not clear that it would. In fact, Heisenberg's uncertainty principle suggests that it would not, at least at the quantum level, since position and momentum cannot both be known to arbitrary precision. Similarly, it is not clear that an AGI would be able to fully shape the future even if it had complete knowledge of everything.

Happy to comment on your post before/when you publish it!

Axby · 12d · 10

Thanks for the comment! I wonder if you or @Derek Shiller know of any research on the number or proportion of extinctions caused by humans? I'm thinking it would be a useful number to use as a prior!
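(For concreteness, here is a minimal sketch of how such a figure could be turned into a prior; the counts below are placeholders I invented, not real data.)

```python
# Minimal sketch: turn "share of recorded extinctions attributable to humans"
# into a base-rate prior. The counts are PLACEHOLDERS, not real data.

human_caused = 500     # hypothetical count of extinctions attributed to humans
total_recorded = 900   # hypothetical count of all recorded extinctions

# Add-one (Laplace) smoothing keeps the prior away from exactly 0 or 1.
prior_human_caused = (human_caused + 1) / (total_recorded + 2)
print(f"Prior that a given extinction is human-caused: {prior_human_caused:.2f}")
```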

Axby · 1y · 30

I'm currently trying to develop an estimate of the effectiveness of pursuing a career in mitigating Global Catastrophic Biological Risks (GCBRs). As part of the EA Global in-depth reading group, I read "Existential Risk and Cost-Effective Biosecurity" by Piers Millett and Andrew Snyder-Beattie (2017). The authors' estimates of the probability of GCBRs over the next century seemed very low (from 1.6 x 10^-6 to 0.02, depending on which of the three methodologies used by the authors gives a better estimate)[1].

I could not find any other sources that try to rigorously estimate the risk of GCBRs. Would forum users be able to point me to some, please?[2] Thanks all in advance!

[1] Even with the highest estimate of 0.02 GCB events per century, longtermist assumptions (of 10^16 potential lives lost, as indicated by the authors) are needed for GCBR mitigation to be more cost-effective than GiveWell's top charities (taken as $4,500 per life saved). I would prefer not to make longtermist assumptions, in line with Neel Nanda's (2021) call to "Simplify EA Pitches to 'Holy Shit, X-Risk'".
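
To make the comparison concrete, here is a rough sketch of the arithmetic. The "risk fully eliminated by the intervention" framing and the ~8 billion current-population figure are my own simplifying assumptions, not from the paper.

```python
# Rough sketch of the cost-effectiveness comparison in footnote [1].
# Simplifying assumption (mine): the intervention fully eliminates the risk.

p_gcbr = 0.02                  # highest estimate: GCB events per century
givewell_cost_per_life = 4500  # $/life saved, GiveWell top charities

scenarios = [
    ("longtermist (10^16 potential lives)", 1e16),
    ("current population only (~8e9 lives)", 8e9),  # my assumption
]

for label, lives_at_stake in scenarios:
    expected_lives_saved = p_gcbr * lives_at_stake
    # Spending up to this budget still matches GiveWell's $4,500/life bar:
    break_even_budget = expected_lives_saved * givewell_cost_per_life
    print(f"{label}: break-even budget ~ ${break_even_budget:.1e}")
```

Under these assumptions, the budget that matches GiveWell's bar is roughly $9 x 10^17 on longtermist numbers versus roughly $7 x 10^11 on the current-population view, which is why the longtermist assumption does the heavy lifting.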

[2] With this input, I hope to write a more extensive post clarifying the priority areas in mitigating GCBRs (possibly pointing to risks from emerging biotech that enables bioterrorism as a priority area, in contrast with a focus on state-led biowarfare). If we are focusing on emerging rather than existing biotech, estimates of whether these technologies would even materialise would be an important consideration as well.