For fiction, AI Impacts has an incomplete list here, sorted by which failure modes the stories are about and how useful AI Impacts thinks they are for thinking about the alignment problem.
As of this comment: 40%, 38%, 37%, 5%. I haven't taken into account time passing since the button appeared.
With 395 total codebearer-days, a launch has occurred once. This means that, with 200 codebearers this year, the Laplace prior for any launch happening is 40% (1 − (1 − 1/396)^200). The number of participants is about halfway between 2019 (125 codebearers) and 2020 (270 codebearers), so averaging like this is probably fine.
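A quick sanity check of the arithmetic above (one launch in ~395 codebearer-days, so a per-codebearer launch chance of roughly 1/396, and 200 codebearers treated as independent):

```python
# Laplace-style estimate: one launch observed in ~395 codebearer-days,
# so take the per-codebearer chance of launching as roughly 1/396.
p_per_codebearer = 1 / 396

# With 200 independent codebearers, the chance that at least one launches:
p_any_launch = 1 - (1 - p_per_codebearer) ** 200

print(f"{p_any_launch:.0%}")  # roughly 40%
```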
I think there's a 5% chance that there's a launch but no MAD, because Peter Wildeford has publicly committed to MAD, says 5%, and he knows himself best.
I think the EA Forum is a little bit, but not vastly, more likely to initiate a launch: the EA Forum hasn't done Petrov Day before, and qualitatively people seem to be having a bit more fun and irreverence over here. So I'm giving 3% of the no-MAD probability to the EA Forum staying up and 2% to LessWrong staying up.
I looked up GiveDirectly's financials (a charity that does direct cash transfers) to check how easily it could be scaled up to megaproject size, and it turns out that in 2020 it made $211 million in cash transfers, so it is definitely capable of handling that amount! This breaks down into about $64m in cash transfers to recipients in Sub-Saharan Africa (their GiveWell-recommended program) and $146m in cash transfers to recipients in the US.
Another principle, conservation of total expected credit:
Say a donor lottery has: you, who donate a fraction p of the total, with an impact (judged by you) of X if you win; the other participants, who collectively donate a fraction q of the total, with an average impact (as judged by you) of Y if one of them wins; and the benefactor, who donates the remaining fraction 1 − p − q, with an impact of 0 if they win. Then the total expected credit assigned by you should be pX + qY (followed by A, B and C), and the total credit assigned by you after the fact should be X if you win, Y if another participant wins, and 0 otherwise (violated by C).
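A minimal sketch of the conservation check, with illustrative numbers (p, q, X, Y are made up here, not from any actual lottery):

```python
# Illustrative shares: you put in p of the pot, the other participants
# put in q, and the benefactor tops up the remaining 1 - p - q.
p, q = 0.3, 0.5
X, Y = 10.0, 4.0  # impact (as judged by you) if you win / if they win

# Total expected credit assigned by you: pX + qY.
expected_credit = p * X + q * Y
print(expected_credit)  # 0.3*10 + 0.5*4 = 5.0

# Ex post, total credit should be X, Y, or 0 depending on who wins,
# so taking its expectation over the win probabilities recovers pX + qY.
ex_post = {"you win": X, "they win": Y, "benefactor wins": 0.0}
check = p * ex_post["you win"] + q * ex_post["they win"]
assert check == expected_credit
```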
I've been thinking of how to assign credit for a donor lottery.
Some ways that seem compelling:
Some principles about assigning credit:
Some actual uses of assigning credit and what they might say:
What were your impressions for the amount of non-Open Philanthropy funding allocated across each longtermist cause area?
I also completed Software Foundations Volume 1 last year, and have been kind of meaning to do the rest of the volumes but other things keep coming up. I'm working full-time so it might be beyond my time/energy constraints to keep a reasonable pace, but would you be interested in any kind of accountability buddy / sharing notes / etc. kind of thing?
Simple linear models, including improper ones(!!). In Chapter 21 of Thinking, Fast and Slow, Kahneman writes about Meehl's book Clinical vs. Statistical Prediction: A Theoretical Analysis and a Review, which finds that simple algorithms, built by picking some factors related to the final judgement and weighting them, give surprisingly good results.
The number of studies reporting comparisons of clinical and statistical predictions has increased to roughly two hundred, but the score in the contest between humans and algorithms has not changed. About 60% of the studies have shown significantly better accuracy for the algorithms. The other comparisons scored a draw in accuracy [...]
If they are weighted optimally to predict the training set, they're called proper linear models, and otherwise they're called improper linear models. Kahneman says about Dawes' The Robust Beauty of Improper Linear Models in Decision Making that
A formula that combines these predictors with equal weights is likely to be just as accurate in predicting new cases as the multiple-regression formula that was optimal in the original sample. More recent research went further: formulas that assign equal weights to all the predictors are often superior, because they are not affected by accidents of sampling.
That is to say: to evaluate something, you can get very far just by coming up with a set of criteria that positively correlate with the overall result and with each other and then literally just adding them together.
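A toy illustration of the idea on synthetic data (the setup and numbers are mine, not from Dawes' paper): each criterion is a noisy signal of the underlying quality, and the "model" is literally just their unweighted sum.

```python
import random

random.seed(0)

# Synthetic "cases": three criteria that each correlate with true quality.
def make_case():
    quality = random.gauss(0, 1)
    # Each criterion is the true quality plus independent noise.
    criteria = [quality + random.gauss(0, 1) for _ in range(3)]
    return criteria, quality

cases = [make_case() for _ in range(1000)]

# Improper linear model: equal weights, i.e. just add the criteria up.
def score(criteria):
    return sum(criteria)

def corr(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

scores = [score(c) for c, _ in cases]
qualities = [q for _, q in cases]
print(corr(scores, qualities))  # substantially positive
```

With this noise level the equal-weight sum tracks the true quality quite closely, which is the Dawes point: you don't need the regression-optimal weights to get most of the predictive value.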
How has the landscape of malaria prevention changed since you started? Especially since AMF alone has bought on the order of 100 million nets, which seems not insignificant compared to the total scale of the entire problem.
In the list at the top, Sam Hilton's grant summary is "Writing EA-themed fiction that addresses X-risk topics", rather than being about the APPG for Future Generations.
Miranda Dixon-Luinenburg's grant is listed as being $23,000, when lower down it's listed as $20,000 (the former is the amount consistent with the total being $471k).