In a building somewhere, tucked away in a forgotten corner, there are four clocks. Each is marked with a symbol: the first with a paperclip, the second with a double helix, the third with a trefoil, and the fourth with a stormcloud.
As you might expect from genre convention, these are not ordinary clocks. In fact, they started ticking when the first human was born, and when they strike midnight, a catastrophe occurs. The type of catastrophe depends on the clock, but what is always true is that the disaster kills at least one person in ten.
The times currently remaining on the clocks are:
Since there are many clocks, each ticking somewhat randomly, they can be combined to estimate how long until at least one strikes midnight: 40 seconds of humanity.
These numbers were calculated using the Metaculus community median predictions of the probability of 10% of people dying from each of the causes from the Ragnarök question series.
I took those values as a constant probability of extinction over a period of 81 years (sort of like what I brought up in my previous shortform post), and calculated the mean time until catastrophe given this.
I mapped 350,000 years (the duration for which anatomically modern humans have existed according to Wikipedia) to 24 hours.
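The calculation above can be sketched in a few lines. The probabilities below are placeholders for illustration, not the actual Metaculus medians; the constant-hazard assumption and the 350,000-year-to-24-hour mapping are from the post:

```python
import math

SPAN_YEARS = 350_000        # anatomically modern humans, per the post
DAY_SECONDS = 24 * 60 * 60  # the full clock face
HORIZON_YEARS = 81          # period over which the probability applies

def clock_seconds(p_catastrophe):
    """Seconds left on a clock, treating a probability of catastrophe
    over the horizon as a constant hazard rate."""
    rate = -math.log(1 - p_catastrophe) / HORIZON_YEARS  # per year
    mean_years = 1 / rate    # mean time to catastrophe for this hazard
    return mean_years * DAY_SECONDS / SPAN_YEARS

# Illustrative probabilities only -- NOT the community predictions.
probs = {"paperclip": 0.05, "helix": 0.02, "trefoil": 0.01, "stormcloud": 0.02}
for name, p in probs.items():
    print(f"{name}: {clock_seconds(p):.0f} s to midnight")

# Combined clock: constant hazard rates add, so the mean time until the
# first clock strikes midnight is the reciprocal of the summed rates.
total_rate = sum(-math.log(1 - p) / HORIZON_YEARS for p in probs.values())
print(f"combined: {(1 / total_rate) * DAY_SECONDS / SPAN_YEARS:.0f} s")
```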
It is of course possible for human activity to push on the hands of these clocks, just as the clocks can influence humanity. An additional person working full time on those activities that would wind back the clocks could expect to delay them by this amount:
And these were calculated even more tenuously, by taking 80,000 hours' order-of-magnitude guesses at how much of the problem an additional full-time worker would solve completely literally and then finding the difference in the Doomsday clock time for that.
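That marginal calculation can be sketched as follows, with made-up numbers for both the baseline probability and the fraction of the problem one extra worker solves (the real inputs would be the Metaculus medians and the 80,000 Hours guesses):

```python
import math

SPAN_YEARS, DAY_SECONDS, HORIZON = 350_000, 86_400, 81

def clock_seconds(p):
    """Seconds to midnight for catastrophe probability p over the horizon,
    under a constant hazard rate."""
    rate = -math.log(1 - p) / HORIZON
    return (1 / rate) * DAY_SECONDS / SPAN_YEARS

# Hypothetical numbers: a 2% catastrophe probability, and a guess that one
# extra full-time worker solves one part in 100,000 of the problem.
p, fraction_solved = 0.02, 1e-5

# Delay = clock time with the slightly reduced probability, minus baseline.
delay = clock_seconds(p * (1 - fraction_solved)) - clock_seconds(p)
print(f"expected delay: {delay:.4f} s")
```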
I really like seeing problems presented like this. It makes them easier to understand.
The sum of the grants made by the Long Term Future fund in August 2019 is $415,697. Listed below these grants is the "total distributed" figure $439,197, and listed above these grants is the "payout amount" figure $445,697. Huh?
Hi, I saw this and asked on our Slack about it. These were leftover figures from when the post was in draft and the grants weren't finalized; someone's now fixed it. If you see anything else wrong, feel free to reach out to email@example.com.
I think it would be good to have a single source of truth in situations like this, ideally in the form of a spreadsheet of all grants, as suggested here.
In 2017, 80k estimated that $10M of extra funding could solve 1% of AI xrisk (todo: see if I can find a better stock estimate for the back of my envelope than this). Taking these numbers literally, this means that anyone who wants to buy AI offsets should, today, pay $1G*(their share of the responsibility).
There are 20,000 AI researchers in the world, so if they're taken as being solely responsible for the totality of AI xrisk, the appropriate Pigouvian AI offset tax is $50,000 per researcher hired per year. This is large, but not overwhelmingly so.
Additional funding towards AI safety will probably go to hiring safety researchers for $100,000 per year each, so continuing to take these cost effectiveness estimates literally, to zeroth order another way of offsetting is to hire one safety researcher for every two capabilities researchers.
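Taking those numbers at face value, the back-of-envelope arithmetic is:

```python
# All figures are the post's literal estimates, not my own.
total_cost = 10_000_000 / 0.01        # $10M solves 1% -> $1G to solve it all
researchers = 20_000                  # AI researchers in the world
tax_per_researcher = total_cost / researchers

safety_salary = 100_000               # cost of one safety researcher per year
capabilities_per_safety = safety_salary / tax_per_researcher

print(tax_per_researcher, capabilities_per_safety)  # 50000.0 2.0
```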
I've been thinking of how to assign credit for a donor lottery.
Some ways that seem compelling:
Some principles about assigning credit:
Some actual uses of assigning credit and what they might say:
Another principle, conservation of total expected credit:
Say a donor lottery has: you, who donate a fraction p of the total, with an impact (as judged by you) of X if you win; the other participants, who collectively donate a fraction q of the total, with an average impact (as judged by you) of Y if they win; and the benefactor, who donates the remaining fraction 1−p−q, with an impact of 0 if they win. Then the total expected credit assigned by you should be pX + qY (followed by A, B and C), and the total realized credit assigned by you should be X if you win, Y if they win, and 0 otherwise (violated by C).
The Double Up Drive, an EA donation matching campaign (highly recommended) has, in one group of charities that it's matching donations to:
StrongMinds is quite prominent in EA as the mental health charity; most recently, Founders Pledge recommends it in their report on mental health.
The International Refugee Assistance Project (IRAP) works on immigration reform, and is a recipient of grants from Open Philanthropy as well as being recommended to individual donors by an Open Phil staff member.
The Massachusetts Bail Fund, on the other hand, seems less centrally EA-recommended. It is working in the area of criminal justice reform, and posting bail is an effective-seeming intervention that I do like, but I haven't seen any analysis of its effectiveness or strong hints of non-public trust placed in it by informed donors (e.g. it has not received any OpenPhil grants; though note that it is listed in the Double Up Drive and the 2017 REG Matching Challenge).
I'd like to know more about the latter two from an EA perspective because they're both working on fairly shiny and high-status issues, which means that it would be quite easy for me to get my college's SU to make a large grant to them from the charity fund.
Is there any other EA-aligned information on this charity (and also on IRAP and StrongMinds, since the more the merrier)?
Open Phil has made multiple grants to the Brooklyn Community Bail Fund, which seems to do similar work to the MA Bail Fund (and was included in Dan Smith's 2017 match). I don't know why MA is still here and Brooklyn isn't, but it may have something to do with room for more funding or a switch in one of the orgs' priorities.
You've probably seen this, but Michael Plant included StrongMinds in his mental health writeup on the Forum.
One way that x-risk outreach is done outside of EA is by evoking the image of some sort of countdown to doom: there are 12 years until climate catastrophe, there are two minutes on the Doomsday Clock, and so on.

However, in reality, instead of doomsday being some fixed point in time on the horizon that we know about, all the best-calibrated experts have is a probability distribution smeared over a wide range of times, with most of the mass sitting on "never", which stops the simple trick of just taking the median time from working.

And yet! The doomsday clock, so evocative! I would like to make a bot that counts down on Twitter; I would like to post vivid headlines to really get the blood flowing. (The Twitter bot question is in fact what prompted me to start thinking about this.)

Some thoughts on ways to do this in an almost-honest way:
Nowadays I would not be so quick to say that existential risk probability is mostly sitting on "never" 😔. This does open up an additional way to make a clock: literally just tick down to the median (which would fall somewhere in the acute risk period).