GMM

412 karma

Comments (25)

lasting catastrophe?

perma-cataclysm?

hypercatastrophe?

That makes sense, thanks for the explanation! I'm still a bit confused about why they chose different numbers of years for the scientist and the PhD student, how those particular numbers arise, and why they're so different (I'm assuming it's 1 year of scientist funding versus 5 years of PhD funding).

Pretty ambitious, thanks for attempting to quantify this!

Having only quickly skimmed this and not looked into your code (so this could be my fault), I find myself a bit confused about the baselines: funding a single research scientist (I'm assuming this means at a lab?) or PhD student for even 5 years doesn't seem clearly equivalent to 87 or 8 adjusted counterfactual years of research; I'd imagine it's much less than that. Could you provide some intuition for how the baseline figures are calculated (maybe you are assuming second-order effects, like funded individuals getting interested in safety and doing more of it, or mentoring others under them)?

around the start of this year, the SERI SRF (not MATS) leadership was thinking seriously about launching a MATS-styled program for strategy/governance

I'm on the SERI (not MATS) organizing team. One person from SERI (henceforth meaning not MATS, as the two have rather split) was thinking about this in collaboration with some of the MATS leadership. The idea is currently not alive, but afaict it didn't strongly die (i.e. I don't think people decided not to do it and cancelled things; rather, it failed to happen due to other priorities).

I think something like this is good to make happen though, and if others want to help make it happen, let me know and I'll loop you in with the people who were discussing it.

Interesting results!

Does "Time 1" in the graphs mean "Time 1: Why Uncontrollable AI Looks More Likely Than Ever | Time" and "Time 2" mean "Time 2: The Only Way to Deal With the Threat From AI? Shut It Down | Time"? I was a bit confused.

Excited for this!

Nit: your logo seems to show the shrimp a bit curled up, which iirc is a sign that it's dead rather than a happy, freely living shrimp (though it's good that it's blue and not red).

Some discussion of this consideration in this thread: https://forum.effectivealtruism.org/posts/bBoKBFnBsPvoiHuaT/announcing-the-ea-merch-store?commentId=jaqayJuBonJ5K7rjp

Agree that they shouldn't be ignored. By "you shouldn't defer to them," I just meant that it's useful to also form one's own inside-view models alongside prediction markets (and perhaps compare them afterwards).
