Tetraspace Grouping's Shortform


In a building somewhere, tucked away in a forgotten corner, there are four clocks. Each is marked with a symbol: the first with a paperclip, the second with a double helix, the third with a trefoil, and the fourth with a stormcloud.

As you might expect from genre convention, these are not ordinary clocks. In fact, they started ticking when the first human was born, and when they strike midnight, a catastrophe occurs. The type depends on the clock, but what is always true is that the disaster kills at least one person in ten.

The times currently remaining on the clocks are:

  • AI Clock: 3:00 to midnight
  • Biotech Clock: 3:50 to midnight
  • Nuclear Clock: 4:30 to midnight
  • Climate Clock: 3:10 to midnight

Since there are many clocks, each ticking somewhat randomly, they can be combined to estimate how long until at least one strikes midnight: 40 seconds of humanity.

----

These numbers were calculated using the Metaculus community median predictions, from the Ragnarök question series, of the probability of 10% of people dying from each of these causes.

I took those values as implying a constant hazard of catastrophe over a period of 81 years (sort of like what I brought up in my previous shortform post), and calculated the mean time until catastrophe given this.

I mapped 350,000 years (the duration for which anatomically modern humans have existed, according to Wikipedia) to 24 hours.
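
For concreteness, here's a minimal sketch of that conversion. The probabilities below are placeholders standing in for the Metaculus community medians (which aren't reproduced here), so the printed times won't exactly match the clock faces above:

```python
import math

HORIZON_YEARS = 81               # the Ragnarök questions ask about catastrophe by 2100
HUMAN_HISTORY_YEARS = 350_000    # anatomically modern humans, per Wikipedia
DAY_SECONDS = 24 * 60 * 60       # the clock face: 350,000 years mapped onto 24 hours

def clock_seconds(p_catastrophe):
    """Seconds to midnight implied by a probability of catastrophe over the horizon."""
    hazard = -math.log(1 - p_catastrophe) / HORIZON_YEARS   # constant yearly hazard rate
    mean_years = 1 / hazard                                 # mean time until catastrophe
    return mean_years / HUMAN_HISTORY_YEARS * DAY_SECONDS

# Placeholder probabilities, NOT the actual Metaculus medians:
clocks = {"AI": 0.10, "Biotech": 0.08, "Nuclear": 0.07, "Climate": 0.10}

for name, p in clocks.items():
    print(f"{name} clock: {clock_seconds(p):.0f} seconds to midnight")

# For independent constant-hazard risks, the hazards add, giving the combined clock:
total_hazard = sum(-math.log(1 - p) / HORIZON_YEARS for p in clocks.values())
print(f"Combined: {(1 / total_hazard) / HUMAN_HISTORY_YEARS * DAY_SECONDS:.0f} seconds")
```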

----

It is of course possible for human activity to push on the hands of these clocks, just as the clocks can influence humanity. An additional person working full-time on those activities that would wind back the clocks could expect to delay them by these amounts:

  • AI Clock: 20,000 microseconds
  • Biotech Clock: 200 microseconds
  • Nuclear Clock: 30 microseconds
  • Climate Clock: 20 microseconds

----

And these were calculated even more tenuously: I took 80,000 Hours' order-of-magnitude guesses at how much of the problem an additional full-time worker would solve completely literally, and then found the difference in clock time that this implies.
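
Roughly, that calculation looks like the following sketch. Both inputs are placeholders rather than the actual Metaculus median or 80,000 Hours guess; the fraction is chosen so the output lands near the AI figure above:

```python
import math

HORIZON_YEARS = 81
HUMAN_HISTORY_YEARS = 350_000
DAY_SECONDS = 24 * 60 * 60

def clock_seconds(p_catastrophe):
    hazard = -math.log(1 - p_catastrophe) / HORIZON_YEARS
    return (1 / hazard) / HUMAN_HISTORY_YEARS * DAY_SECONDS

# Placeholder inputs:
p_ai = 0.10              # chance of an AI catastrophe by 2100
fraction_solved = 1e-4   # share of the problem one extra full-time worker solves

# "Solving x of the problem" is read literally as scaling the probability by (1 - x)
delay = clock_seconds(p_ai * (1 - fraction_solved)) - clock_seconds(p_ai)
print(f"Clock wound back by {delay * 1e6:,.0f} microseconds")   # ~20,000 with these inputs
```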

I really like seeing problems presented like this. It makes them easier to understand.

The sum of the grants made by the Long Term Future Fund in August 2019 is $415,697. Listed below these grants is the "total distributed" figure of $439,197, and listed above these grants is the "payout amount" figure of $445,697. Huh?

Hi, I saw this and asked on our Slack about it. These were leftover figures from when the post was in draft and the grants weren't finalized; someone's now fixed it. If you see anything else wrong, feel free to reach out to funds@effectivealtruism.org.

I think it would be good to have a single source of truth in situations like this, ideally in the form of a spreadsheet of all grants, as suggested here.

In 2017, 80k estimated that $10M of extra funding could solve 1% of AI xrisk (todo: see if I can find a better stock estimate for the back of my envelope than this). Taking these numbers literally, solving all of it would cost around $10M/1% = $1G, so anyone who wants to buy AI offsets should, today, pay $1G*(their share of the responsibility).

There are 20,000 AI researchers in the world, so if they're taken as being solely responsible for the totality of AI xrisk, the appropriate Pigouvian AI offset tax is $45,000 per researcher hired per year. This is large, but not overwhelmingly so.

Additional funding towards AI safety will probably go to hiring safety researchers for $100,000 per year each, so, continuing to take these cost-effectiveness estimates literally, to zeroth order another way of offsetting is to hire one safety researcher for every two capabilities researchers.
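
Taking those same round numbers at face value, the whole chain of arithmetic is short; this is a sketch of the back-of-envelope above, not a serious model:

```python
# Back-of-envelope offsets, taking the 2017 80k estimate literally
cost_per_percent = 10_000_000                  # $10M buys 1% of the AI xrisk problem
total_offset_cost = 100 * cost_per_percent     # ~$1G to "buy" all of it

n_ai_researchers = 20_000
offset_per_researcher = total_offset_cost / n_ai_researchers
print(f"Offset per capabilities researcher: ${offset_per_researcher:,.0f}")
# ~$50,000 with these rounded inputs, the same ballpark as the $45,000 quoted above

safety_researcher_salary = 100_000
ratio = safety_researcher_salary / offset_per_researcher
print(f"One safety researcher offsets about {ratio:.0f} capabilities researchers")
```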

The Double Up Drive, an EA donation-matching campaign (highly recommended), has, in one of the groups of charities that it's matching donations to:

  • StrongMinds
  • International Refugee Assistance Project
  • Massachusetts Bail Fund

StrongMinds is quite prominent in EA as the mental health charity; most recently, Founders Pledge recommended it in their report on mental health.

The International Refugee Assistance Project (IRAP) works in immigration reform, and is a recipient of grants from Open Philanthropy as well as being recommended for individual donors by an Open Phil member of staff.

The Massachusetts Bail Fund, on the other hand, seems less centrally EA-recommended. It is working in the area of criminal justice reform, and posting bail is an effective-seeming intervention that I do like, but I haven't seen any analysis of its effectiveness or strong hints of non-public trust placed in it by informed donors (e.g. it has not received any OpenPhil grants; though note that it is listed in the Double Up Drive and the 2017 REG Matching Challenge).

I'd like to know more about the latter two from an EA perspective because they're both working on fairly shiny and high-status issues, which means that it would be quite easy for me to get my college's SU to make a large grant to them from the charity fund.

Is there any other EA-aligned information on this charity (and also on IRAP and StrongMinds, since the more the merrier)?

Open Phil has made multiple grants to the Brooklyn Community Bail Fund, which seems to do similar work to the MA Bail Fund (and was included in Dan Smith's 2017 match). I don't know why MA is still here and Brooklyn isn't, but it may have something to do with room for more funding or a switch in one of the orgs' priorities.

You've probably seen this, but Michael Plant included StrongMinds in his mental health writeup on the Forum.

One way that x-risk outreach is done outside of EA is by evoking the image of some sort of countdown to doom. There are 12 years until climate catastrophe. There are two minutes on the Doomsday clock, etc.

However, in reality, instead of doomsday being some fixed point in time on the horizon that we know about, all the best-calibrated experts have is a probability distribution smeared over a wide range of times, with most of the mass sitting on “never”, which means that the simple approach of just taking the median time doesn't work.

And yet! The doomsday clock, so evocative! And I would like to make a bot that counts down on Twitter; I would like to post vivid headlines to really get the blood flowing. (The Twitter bot question is in fact what prompted me to start thinking about this.)

Some thoughts on ways to do this in an almost-honest way:

  • Find the instantaneous probability, today. Convert this to a timescale until disaster. If there is a 0.1% chance of a nuclear war this year, then this is sort of like there being 1,000 years until doom. Adjust the clock as the probability changes each year (see the sketch after this list). Drawback is that this both understates and overstates the urgency: there’s a good chance disaster will never happen once the acute period is over, but if it does happen it will be much sooner than 1,000 years. This is what the Doomsday clock seems to want to do, though I think it's just a political signalling tool for the most part.
  • Make a conditional clock. If an AI catastrophe happens in the next century (11% chance), it will on average happen in 2056 (50% CI: 2040 - 2069), so have the clock tick down until that date. Display both the probability and the timer prominently, of course, so as not to mislead. Drawback is that this is far too complicated and real clocks don’t only exist with 1/10 probability. This is what I would do if I were in charge of the Bulletin of the Atomic Scientists.
  • Make a countdown instead to the predicted date of an evocative milestone strongly associated with acute risk, like the attainment of human level AI or the first time a superbug is engineered in a biotech lab. Drawback is that this will be interpreted as a countdown until doomsday approximately two reblogs in (one if I'm careless in phrasing), and everyone will laugh at me when the date passes and the end of the world has not yet happened. This is the thing everyone is ascribing to AOC on Twitter.
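
The first option, at least, is trivial to prototype. A sketch using the placeholder 0.1%-per-year probability from the nuclear example above:

```python
from datetime import datetime, timezone

# First scheme: convert this year's probability into a nominal "years until doom"
p_this_year = 0.001                      # placeholder: 0.1% chance of doom this year
years_until_doom = 1 / p_this_year       # "sort of like" 1,000 years on the clock

nominal_doom_year = datetime.now(timezone.utc).year + years_until_doom
print(f"At current risk levels, doom around the year {nominal_doom_year:.0f} "
      f"({p_this_year:.1%} chance per year)")
```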