Leave me anonymous feedback: https://www.admonymous.co/tetraspace
Also, lending is something of a commitment mechanism: if someone gets or buys a book, they have it forever, which can easily mean it takes forever to read; but if they borrow it, there's time pressure to give it back, which means they either read it soon or lose it.
For fiction, AI Impacts has an incomplete list here, sorted by what kind of failure mode each story is about and by how useful AI Impacts thinks it is for thinking about the alignment problem.
As of this comment: 40%, 38%, 37%, 5%. I haven't taken into account time passing since the button appeared.
With 395 total codebearer-days, a launch has occurred once. This means that, with 200 codebearers this year, the Laplace prior for any launch happening is 40% ($1 - (1 - \frac{1}{395})^{200} \approx 40\%$). The number of participants is about in between 2019 (125 codebearers) and 2020 (270 codebearers), so doing an average like this is probably fine.
I think there's a 5% chance that there's a launch but no MAD, because Peter Wildeford has publicly committed to MAD, he himself says 5%, and he knows himself best.
I think the EA Forum is a little bit, but not vastly, more likely to initiate a launch, because the EA Forum hasn't done Petrov Day before and qualitatively people seem to be having a bit more fun and irreverence over here, so, of the 5% no-MAD probability, I'm giving 3% to the EA Forum staying up and 2% to LessWrong staying up.
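As a sanity check that these numbers hang together, here's a minimal sketch in Python. It assumes (my reading, not stated above) that the 38% and 37% are P(LessWrong goes down) and P(EA Forum goes down) respectively, and that "no MAD" means the launching site stays up:

```python
# Sanity check of the probability breakdown above.
p_launch = 1 - (1 - 1/395) ** 200        # ≈ 0.40: chance of any launch
p_no_mad = 0.05                          # launch happens but the target doesn't retaliate
p_both_down = p_launch - p_no_mad        # ≈ 0.35: launch followed by MAD
p_only_lw_down = 0.03                    # EA Forum launches and stays up
p_only_ea_down = 0.02                    # LessWrong launches and stays up

p_lw_down = p_both_down + p_only_lw_down   # ≈ 0.38
p_ea_down = p_both_down + p_only_ea_down   # ≈ 0.37

print(round(p_launch, 2), round(p_lw_down, 2), round(p_ea_down, 2), p_no_mad)
# -> 0.4 0.38 0.37 0.05
```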
I looked up the financials of GiveDirectly (a charity that does direct cash transfers) to check how easily it could be scaled up to megaproject size, and it turns out that in 2020 it made $211 million in cash transfers, so it is definitely capable of handling that amount! This breaks down as $64m in cash transfers to recipients in Sub-Saharan Africa (their GiveWell-recommended program) and $146m in cash transfers to recipients in the US.
Another principle, conservation of total expected credit:
Say a donor lottery has: you, who donate a fraction $x$ of the total, with an impact (as judged by you, if you win) of $I_{you}$; the other participants, who collectively donate a fraction $y$ of the total, with an average impact (as judged by you, if they win) of $I_{others}$; and the benefactor, who donates a fraction $z$ of the total, with an average impact (if they win) of $I_{ben}$. Then total expected credit assigned by you should be $x I_{you} + y I_{others} + z I_{ben}$ (followed by A, B and C), and total credit assigned by you should be $I_{you}$ if you win, $I_{others}$ if they win, and $I_{ben}$ otherwise (violated by C).
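To make the principle concrete, here's a minimal sketch in Python with made-up numbers. The proportional-to-donation credit rule below is just an example I picked for illustration, not necessarily any of A, B, or C:

```python
# Conservation of total expected credit, illustrated with made-up numbers.
# A credit rule maps an outcome (which party wins) to the credit each party gets.

fractions = {"you": 0.1, "others": 0.6, "benefactor": 0.3}   # shares of the pot
impacts   = {"you": 10.0, "others": 6.0, "benefactor": 8.0}  # impact if that party wins

def proportional_credit(winner):
    """Example rule: the winner's impact is credited in proportion to donations."""
    return {party: fractions[party] * impacts[winner] for party in fractions}

# Per-outcome conservation: total credit for an outcome equals that outcome's impact.
for winner in fractions:
    total = sum(proportional_credit(winner).values())
    assert abs(total - impacts[winner]) < 1e-9

# Conservation in expectation: expected total credit equals expected impact,
# since each party wins with probability equal to its fraction of the pot.
expected_impact = sum(fractions[w] * impacts[w] for w in fractions)
expected_credit = sum(
    fractions[w] * sum(proportional_credit(w).values()) for w in fractions
)
assert abs(expected_credit - expected_impact) < 1e-9
print(expected_impact)  # 7.0
```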
I've been thinking of how to assign credit for a donor lottery.
Some ways that seem compelling:
Some principles about assigning credit:
Some actual uses of assigning credit and what they might say:
What were your impressions of the amount of non-Open Philanthropy funding allocated across each longtermist cause area?
I also completed Software Foundations Volume 1 last year, and have been kind of meaning to do the rest of the volumes, but other things keep coming up. I'm working full-time, so keeping a reasonable pace might be beyond my time/energy constraints, but would you be interested in some kind of accountability buddy / sharing notes / etc. arrangement?
Nowadays I would not be so quick to say that existential risk probability is mostly sitting on "never" 😔. This does open up an additional way to make a clock: literally just tick down to the median (which would be somewhere in the acute risk period).
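As a toy version of that clock, here's a minimal sketch in Python; the median date is a placeholder I made up, not an actual forecast:

```python
# Toy doomsday-style clock: count down to a (placeholder) median date for
# existential catastrophe. The date below is a made-up example, not a forecast.
from datetime import datetime, timezone

MEDIAN_DATE = datetime(2070, 1, 1, tzinfo=timezone.utc)  # placeholder median

def time_remaining(now=None):
    """Return the time left until the median date as a timedelta."""
    now = now or datetime.now(timezone.utc)
    return MEDIAN_DATE - now

if __name__ == "__main__":
    remaining = time_remaining()
    print(f"{remaining.days} days until the (placeholder) median")
```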