Nowadays I would not be so quick to say that existential risk probability is mostly sitting on "never" 😔. This does open up an additional way to make a clock: literally just tick down to the median (which would be somewhere in the acute risk period).
Also, lending is somewhat of a commitment mechanism: if someone gets or buys a book, they have it forever, which can easily mean it takes forever; but if they borrow it, there's time pressure to give it back, which means they either read it soon or lose it.
For fiction, AI Impacts has an incomplete list here sorted by what kind of failure modes they're about and how useful AI Impacts thinks they are for thinking about the alignment problem.
As of this comment: 40%, 38%, 37%, 5%. I haven't taken into account time passing since the button appeared.
With 395 total codebearer-days, a launch has occurred once. This means that, with 200 codebearers this year, the Laplace prior for any launch happening is 40% (1 - (1 - 1/395)^200 ≈ 0.40). The number of participants is about in between 2019 (125 codebearers) and 2020 (270 codebearers), so doing an average like this is probably fine.
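A minimal sketch of that arithmetic, treating each codebearer-day as an independent trial at the 1-in-395 historical rate (the rate that reproduces the 40% figure):

```python
# Petrov Day launch probability: treat each codebearer-day as an
# independent trial at the historical rate of 1 launch per 395
# codebearer-days, then ask for P(at least one launch) among the
# 200 codebearers this year.
launches = 1
codebearer_days = 395
codebearers_this_year = 200

p_per_day = launches / codebearer_days
p_any_launch = 1 - (1 - p_per_day) ** codebearers_this_year
print(f"{p_any_launch:.0%}")  # 40%
```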
I think there's a 5% chance that there's a launch but no MAD, because Peter Wildeford has publicly committed to MAD, says 5%, an...
I looked up GiveDirectly's financials (a charity that does direct cash transfers) to check how easily it could be scaled up to megaproject size, and it turns out that in 2020 it made $211 million in cash transfers, and hence is definitely capable of handling that amount! This splits into $64m in cash transfers to recipients in Sub-Saharan Africa (their GiveWell-recommended program) and $146m in cash transfers to recipients in the US.
Another principle, conservation of total expected credit:
Say a donor lottery has you, who donate a fraction a of the total with an impact judged by you if you win of A; the other participants, who collectively donate a fraction b of the total with an average impact as judged by you if they win of B; and the benefactor, who donates a fraction c of the total with an average impact if they win of C. Then total expected credit assigned by you should be aA + bB + cC, and total credit...
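A toy numerical check of the conservation principle; all shares and impact numbers below are made up for illustration:

```python
# Donor lottery credit: a, b, c are donation shares (and hence win
# probabilities, summing to 1); A, B, C are the impacts you expect
# if the corresponding party wins. All values are illustrative.
a, b, c = 0.10, 0.60, 0.30   # you, other participants, benefactor
A, B, C = 100.0, 80.0, 50.0  # impact by your judgement if each wins

# Total expected credit that any credit-assignment scheme should
# distribute across the parties:
total_expected_credit = a * A + b * B + c * C
print(total_expected_credit)  # 86.0
```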
I've been thinking of how to assign credit for a donor lottery.
Some ways that seem compelling:
Some principles about assigning credit:
What were your impressions of the amount of non-Open Philanthropy funding allocated across each longtermist cause area?
I also completed Software Foundations Volume 1 last year, and have been kind of meaning to do the rest of the volumes but other things keep coming up. I'm working full-time so it might be beyond my time/energy constraints to keep a reasonable pace, but would you be interested in any kind of accountability buddy / sharing notes / etc. kind of thing?
Simple linear models, including improper ones(!!). In Chapter 21 of Thinking Fast and Slow, Kahneman writes about Meehl's book Clinical vs. Statistical Prediction: A Theoretical Analysis and a Review, which finds that simple algorithms, made by taking some factors related to the final judgement and weighting them, give surprisingly good results.
...The number of studies reporting comparisons of clinical and statistical predictions has increased to roughly two hundred, but the score in the contest between humans and algorithms has not changed. About 60% of
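For concreteness, a minimal sketch (with made-up data) of an improper linear model: standardise each factor and give them all equal weight instead of fitting regression coefficients:

```python
# Improper linear model: unit weights on standardised predictors.
# The data here is synthetic, purely to illustrate the recipe.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))  # four judgement-relevant factors
y = X @ np.array([0.9, 0.6, 0.3, 0.1]) + rng.normal(size=200)

z = (X - X.mean(axis=0)) / X.std(axis=0)  # standardise each factor
score = z.sum(axis=1)                     # equal (unit) weights

# Correlation with the outcome is often close to that of a
# properly fitted linear model:
print(np.corrcoef(score, y)[0, 1])
```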
How has the landscape of malaria prevention changed since you started? Especially since AMF alone has bought on the order of 100 million nets, which seems not insignificant compared to the total scale of the entire problem.
There is more malaria prevention happening now. When AMF started in 2004/05, 5 million LLINs were distributed globally by all contributors. It is now around 200 million nets per year.
There is a greater focus on data, I am pleased to say, with funders ever more focused on ensuring nationwide campaigns are well targeted and not wasteful.
More money has come into malaria prevention through a combination of greater awareness of the disease, its impact and what can be done about it, as well as, in our experience, donors having greater confidence that funds being g...
In the list at the top, Sam Hilton's grant summary is "Writing EA-themed fiction that addresses X-risk topics", rather than being about the APPG for Future Generations.
Miranda Dixon-Luinenburg's grant is listed as being $23,000, when lower down it's listed as $20,000 (the former is the amount consistent with the total being $471k).
Christiano operationalises a slow takeoff as
There will be a complete 4 year interval in which world output doubles, before the first 1 year interval in which world output doubles.
in Takeoff speeds, and a fast takeoff as one where there isn't a complete 4 year interval before the first 1 year interval.
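A sketch of how one might check this against an annual world-output series; the "complete-before-complete" ordering of intervals below is my reading of the criterion, not Christiano's exact wording:

```python
# Classify takeoff speed from annual world output gwp[t].
def first_doubling(gwp, years):
    """Earliest start t of an interval [t, t+years] over which output doubles."""
    for t in range(len(gwp) - years):
        if gwp[t + years] >= 2 * gwp[t]:
            return t
    return None

def slow_takeoff(gwp):
    four = first_doubling(gwp, 4)
    one = first_doubling(gwp, 1)
    if four is None:
        return False          # output never doubles in 4 years
    if one is None:
        return True           # 4-year doublings, never a 1-year one
    return four + 4 <= one + 1  # the 4-year interval completes first

# 30%/year growth doubles output in ~3 years but never in 1 year:
print(slow_takeoff([1.3 ** t for t in range(30)]))  # True
```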
The Double Up Drive, an EA donation matching campaign (highly recommended), has, in one group of charities that it's matching donations to:
StrongMinds is quite prominent in EA as the mental health charity; most recently, Founders Pledge recommends it in their report on mental health.
The International Refugee Assistance Project (IRAP) works on immigration reform, and is a recipient of grants from Open Philanthropy as well as being recommended for individual donors by an OpenPhil member of sta...
The sum of the grants made by the Long Term Future fund in August 2019 is $415,697. Listed below these grants is the "total distributed" figure $439,197, and listed above these grants is the "payout amount" figure $445,697. Huh?
Two people mentioned the CEA not being very effective as an unpopular opinion they hold; has any good recent criticism of the CEA been published?
You mention the Jhanas and metta meditation as both being immensely pleasurable experiences. Since these come from meditation, they seem like they might be possible for people to do "at home" at very little risk (save for the opportunity costs from the time investment). Do you have any thoughts on encouraging meditation aimed towards achieving these highly pleasurable states specifically as a cause area and/or something we should be doing personally?
In a building somewhere, tucked away in a forgotten corner, there are four clocks. Each is marked with a symbol: the first with a paperclip, the second with a double helix, the third with a trefoil, and the fourth with a stormcloud.
As you might expect from genre convention, these are not ordinary clocks. In fact, they started ticking when the first human was born, and when they strike midnight, a catastrophe occurs. The type depends on the clock, but what is always true is that the disaster kills at least one person in ten.
The times currently remaining on the clocks a...
The division-by-zero type error is that EV(preventing holocaust|universe is infinite) would be calculated as ∞-∞, which in the extended reals is undefined rather than zero. If it were zero, then you could prove 0 = ∞-∞ = (∞+1)-∞ = (∞-∞)+1 = 0+1 = 1.
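The same type error shows up concretely in IEEE 754 floating point, where infinity minus infinity is NaN rather than zero:

```python
# inf - inf is "not a number", not 0.
inf = float("inf")
print(inf - inf)       # nan
print(inf - inf == 0)  # False
```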
This reminds me of the most important AMA question of all:
MacAskill, would you rather fight 1 horse-sized chicken, or 100 chicken-sized horses?
One way that x-risk outreach is done outside of EA is by evoking the image of some sort of countdown to doom. There are 12 years until climate catastrophe. There are two minutes on the Doomsday clock, etc.
However, in reality, instead of doomsday being some fixed point in time on the horizon that we know about, all the best-calibrated experts have is a probability distribution smeared over a wide range of times, with most of the mass sitting on "never", which makes simply taking the median time not work.
And yet! The doomsday clock, so evocative! And I would l...
In 2017, 80k estimated that $10M of extra funding could solve 1% of AI xrisk (todo: see if I can find a better stock estimate for the back of my envelope than this). Taking these numbers literally, this means that anyone who wants to buy AI offsets should, today, pay $1G*(their share of the responsibility).
There are 20,000 AI researchers in the world, so if they're taken as being solely responsible for the totality of AI xrisk, the appropriate Pigouvian AI offset tax is $50,000 per researcher hired per year. This is large, but not overwhelmingly so.
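Spelling out the arithmetic (the even split across researchers is the simplifying assumption above):

```python
# Back-of-the-envelope AI offset price per researcher.
cost_per_percent = 10e6                      # $10M buys 1% of AI xrisk reduction
total_offset_cost = 100 * cost_per_percent   # $1G for the full 100%
researchers = 20_000

print(f"${total_offset_cost / researchers:,.0f}")  # $50,000
```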
Addi...
"How targeted should donation recommendations be" (sorta)
I've noticed that GiveWell targets specific programs (e.g. their recommendation), ACE targets whole organisations, and among far-future charities you just kinda get promising-sounding cause areas.
I'm interested in what kind of differences between cause areas lead to this, and also whether anything can be done to make more fine-grained evaluations more desirable in practice.
The total number of cows probably stays about the same, because if they had space to raise more cows they would have just done that - I don't think that availability of semen is the main limiting factor. So the amount of suffering averted by this intervention can be found by comparing the suffering per cow per year in either case.
Model a cow as having two kinds of experiences: normal farm life, where it experiences some amount of suffering x in a year, and slaughter, where it experiences some amount of suffering y all at once.
In equilibrium, the population o
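One way to cash out the comparison this sets up, with placeholder numbers: amortise the one-off slaughter suffering over a cow's lifespan, so the quantity to compare across scenarios is suffering per cow-year:

```python
# Suffering per cow-year: yearly farm-life suffering x plus the
# one-off slaughter suffering y spread over a lifespan of L years.
# x, y and the lifespans are placeholders, not estimates.
def suffering_per_cow_year(x, y, lifespan_years):
    return x + y / lifespan_years

# An intervention that lengthens lifespans spreads the same
# slaughter suffering over more cow-years:
print(suffering_per_cow_year(x=1.0, y=10.0, lifespan_years=2))  # 6.0
print(suffering_per_cow_year(x=1.0, y=10.0, lifespan_years=3))  # ~4.3
```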
...If you want to make a decision, you will probably agree with me that it's more likely that you'll end up making that decision, or at least that it's possible to alter the likelihood that you'll make a certain decision by thinking (otherwise your question would be better stated as "if physics is deterministic, does ethics matter"). And, under many worlds, if something is more likely to happen, then there will be more worlds where that happens, and more observers that see it happen (I think this is usually how it's posed, anyway). So while there'll always be some worlds where you're not altruistic, no matter what you do, you can change how many worlds are like that.
When I have a question about the future, I like to ask it on Metaculus. Do you have any operationalisations of synthetic biology milestones that would be useful to ask there?
Agmatine is an amino acid you can buy over the counter at supplement stores and online. It is used as a workout supplement, to make weed feel stronger, and as a hangover prevention remedy. Agmatine has a high affinity for a number of receptor sites, and it is currently debated whether it satisfies the criteria for being called a neurotransmitter.
Of particular note is agmatine's high affinity for the imidazoline receptor, which, according to Thomas Ray (who analyzed the receptor affinity of 30+ psychedelics), might be one of the keys to the "ma...
This 2019 article has some costs listed:
GiveWell did an intervention report on maternal mortality 10 years ago, and at the time concluded that the evidence was less compelling than for their top charities (though they note that the report is now probably out of date).
The amount of carbon that they say could be captured by restoring these trees is 205 GtC, which for $300bn to restore comes to ~~70¢/ton of CO2~~ ~40¢/ton of CO2. Founders Pledge estimates that, on the margin, Coalition for Rainforest Nations averts a ton of CO2e for 12¢ (range: factor of 6) and the Clean Air Task Force averts a ton of CO2e for 100¢ (range: order of magnitude). So those numbers do check out.
I did not look at the details, but it appears that neither of these estimates takes into account opportunity costs. Typical farming profit is around $200 per hectare per year, so if you instead sequester 5 tCO2e per hectare per year, that would cost ~$40 per tCO2e, ~2 orders of magnitude more expensive. By the way, I believe $300 billion divided by 205 billion tons of carbon (= 750 billion tons of CO2) comes to $0.40 per ton of CO2.
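Spelling out that arithmetic (the 44/12 factor converts tons of carbon to tons of CO2):

```python
# $/tCO2 for the restoration estimate, plus the opportunity cost.
gt_carbon = 205
gt_co2 = gt_carbon * 44 / 12   # ≈ 752 Gt CO2
restoration_cost = 300e9       # $300bn

print(restoration_cost / (gt_co2 * 1e9))  # ≈ $0.40 per ton CO2
print(200 / 5)                            # $40/tCO2e forgone farming profit
```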
You can't just ask the AI to "be good", because the whole problem is getting the AI to do what you mean instead of what you ask. But what if you asked the AI to "make itself smart"? On the one hand, instrumental convergence implies that the AI should make itself smart. On the other hand, the AI will misunderstand what you mean, hence not making itself smart. Can you point the way out of this seeming contradiction?
(Under the background assumptions already being made in the scenario where you can "ask things" to "the A...
The Terra Ignota series takes place in a world where global poverty has been solved by flying cars, so this is definitely well-supported by fictional evidence (from which we should generalise).
In MIRI's fundraiser they released their 2019 budget estimate, which allocates about half to research personnel. I'm not sure how this compares to similar organizations.
The cost per researcher is typically larger than what they get paid, since it also includes overhead (administration costs, office space, etc).
One can convert the utility-per-researcher into utility-per-dollar by dividing everything by a cost per researcher. So if before you would have 1e-6 x-risk reduction per researcher, and you also decide to value researchers at $1M/researcher, then your evaluation in terms of cost is 1e-12 x-risk per dollar.
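As a one-line sanity check of that conversion:

```python
# x-risk reduction per researcher, divided by dollars per
# researcher, gives x-risk reduction per dollar.
print(1e-6 / 1e6)  # 1e-12
```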
For some values (i.e. fake numbers, but still acceptable for comparing orders of magnitude across cause areas) that I've seen used: The Oxford Prioritisation Project uses $1.8 million (lognormal distribution between $1M and $3M) for a MIRI researcher over t...
I love that “one person out of extreme poverty per second” statistic! It’s much easier to picture in my head than a group of 1,000 million people, since a second is something I’m familiar with seeing every day.
Are there any organisations you investigated and found promising, but concluded that they didn't have much room for extra funding?
One issue that comes up with multi-winner approval voting is: suppose there are 15 longtermists and 10 global poverty people. All the longtermists approve the LTFF, MIRI, and Redwood; all the global poverty people approve the Against Malaria Foundation, GiveWell, and LEEP.
The top three vote winners are picked: they're the LTFF, with 15 votes, MIRI, with 15 votes, and Redwood, with 15 votes.
It is maybe undesirable that 40% of the people in this toy example think those charities are useless, yet 0% of money is going to charities that aren't those. (Or ...
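For concreteness, a sketch of the tally in this toy example:

```python
# Bloc approval voting: a 60% majority fills every seat.
from collections import Counter

ballots = (
    [["LTFF", "MIRI", "Redwood"]] * 15    # longtermists
    + [["AMF", "GiveWell", "LEEP"]] * 10  # global poverty voters
)

tally = Counter(c for ballot in ballots for c in ballot)
print([c for c, _ in tally.most_common(3)])  # ['LTFF', 'MIRI', 'Redwood']
```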