Luke Dawes


Comments

(See here for a draft I whipped up for this, and feel free to comment!) Hayden Wilkinson’s “In defence of fanaticism” argues that, in decision theory, you should always prefer a lower-probability chance of a sufficiently higher-value reward over a higher-probability chance of a lower-value one, or else face serious problems. I think accepting his argument introduces new problems that aren’t described in the paper:

  1. It is implied that each round of Dyson’s Wager (e.g. for each person in the population presented with the wager) has no subsequent effect on the probability distribution for future rounds, which is unrealistic. I illustrate this with a “small worlds” example (see the sketch after this list).
  2. Fanaticism is only considered under positive theories of value and therefore ignores the offsetting principle, which assumes both the existence of and an exchange rate (or commensurability) between independent goods and bads. I’d like to address this in a future draft with multiple reframings of Dyson’s Wager under minimalist theories of value.
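Here is a minimal sketch of the expected-value comparison behind Dyson’s Wager, and of the independence worry above. All of the numbers (`p`, `astronomical`, `N`, `p_demo`) are my own illustrative choices, not Wilkinson’s, and the code is a toy model rather than anything from the paper:

```python
import random

# A toy rendering of Dyson's Wager. All numbers are hypothetical.
p = 1e-10            # probability the speculative option pays off
astronomical = 1e20  # value if it does
ev_safe = 1.0        # expected value of the certain option (save one life)
ev_fanatical = p * astronomical  # expected value of the long shot

# Fanaticism says to take the long shot whenever its expected value is
# higher, however small p is.
print(ev_fanatical > ev_safe)  # True here: 1e10 > 1

# The "small worlds" worry, schematically: offering the wager to N people
# under independence is a different gamble from one shared draw that
# settles every round at once, even though the per-round odds match.
N, p_demo = 1_000, 0.01  # larger demo probability so outcomes are visible
independent_total = sum(
    astronomical for _ in range(N) if random.random() < p_demo
)
shared_draw_total = astronomical * N if random.random() < p_demo else 0.0
print(independent_total, shared_draw_total)
```

The two gambles at the end have the same expected value but very different distributions, which is roughly the structure that a round-by-round framing with unchanging probabilities glosses over.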

(See here for a draft I whipped up for this, and feel free to comment!) An Earth-originating artificial superintelligence (ASI) may reason that the galaxy is busy in expectation, and that it could therefore eventually encounter an alien-originating ASI. ASIs from different homeworlds may find it valuable on first contact to verify whether they can each reliably enter into and uphold agreements, by presenting credible evidence of their own pro-social behaviour with other intelligences. If at least one of these ASIs has never met another, the only such agreement it could plausibly have entered into is with its progenitor species – maybe that's us.

I'll post my ideas as replies to this, so they can be voted on separately.

This would be great to read; I walked away from at least one application process because I couldn't produce a decent WFM. I hope you write it!

Thanks for correcting me! I've reviewed my notes and added a few points to make sure I don't repeat the mistake.

Update: I just finished this book. It was as advertised: a concise, technical and sometimes challenging work of moral philosophy, at the edge of my non-specialist understanding, but I really appreciated it. A couple of really important takeaways for me:

  1. The robustness of minimalist axiologies to various instantiations of the Repugnant Conclusion, especially under (non-sharp) lexicality. 
  2. A willingness to "bite the bullet" in certain cases, in particular the Archimedean minimalist 'Reverse Repugnant Conclusion' (i.e. it's better to add many bad lives in order to slightly reduce the unbearable suffering of enough other bad lives) and the axiological 'perfection' of an empty world (matched only by one in which all lives are completely untroubled).
  3. Relatedly, a willingness to "spit the bullet back out" where negative utilitarianism/minimalist views have been maligned, misrepresented or generally underdone, including by high-profile folks within EA who I don't think have publicly changed their positions.

Thank you for writing this, Teo, and well done again! I hope to write a longer-form summary of the ideas, both for myself and others, as I think there's a great deal of value here. 

I'm really excited to read this, Teo, congratulations on publishing it. 

Have just signed up, and looking forward to it! Thanks for organising. I hadn't come across the Foresight Institute before, even though I'd heard of the concept of existential hope, so I'll take a look at some of those resources, too. 

You're welcome, thanks for taking the time to read it! 

Hi, MvK, good choice. I'm already preparing an application! Thanks. 
