kokotajlod

Most of my stuff (even the stuff of interest to EAs) can be found on LessWrong: https://www.lesswrong.com/users/daniel-kokotajlo

Sequences

Tiny Probabilities of Vast Utilities: A Problem for Longtermism?
What to do about short timelines?

Comments

Is effective altruism growing? An update on the stock of funding vs. people

I hadn't even taken future donors into account; if you do, then yeah, we should be doing even more now. Huh. Maybe it should be more like 20%. Then there's also the discount rate to think about: various risks of our money being confiscated, or controlled by unaligned people, or having most of its impact wiped out by some random catastrophe, and so on. (Historically, foundations seem to pretty typically diverge from the original vision/mission laid out by their founders.)

I've read the hinge of history argument before, and was thoroughly unconvinced (for reasons other people explained in the comments).

One quick thing: I think high interest rates are overall an argument for giving later rather than sooner!

Hmmm, toy model time: suppose our overall impact is log(spending in 2021) + log(spending in 2022) + log(spending in 2023) + ..., continuing up until some year when existential safety is reached or the x-risk point of no return is passed.
Is it still the case, then, that going from e.g. a 10% interest rate to a 20% interest rate means we should spend less in 2021? Idk, but I'll go find out! (I take this toy model to be reasonably representative of our situation.)
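Here's a minimal numerical sketch of that toy model (my own construction, not anything from the post): maximize the sum of log(spending) over T years, subject to the spending stream having present value W at interest rate r. The names W, r, and T are placeholders I chose.

```python
import numpy as np
from scipy.optimize import minimize

def optimal_spending(W=100.0, r=0.10, T=10):
    """Maximize sum(log(c_t)) subject to the present value of spending
    equaling the starting pot W, with unspent money compounding at rate r."""
    def neg_utility(c):
        return -np.sum(np.log(c))
    # Budget constraint: sum_t c_t / (1+r)^t = W
    discount = (1.0 + r) ** -np.arange(T)
    cons = {"type": "eq", "fun": lambda c: np.dot(c, discount) - W}
    x0 = np.full(T, W / T)  # start from an even split
    res = minimize(neg_utility, x0, constraints=cons,
                   bounds=[(1e-6, None)] * T)
    return res.x

for r in (0.10, 0.20):
    c = optimal_spending(r=r)
    print(f"r={r:.0%}: first-year spend = {c[0]:.2f}")
```

If I've set this up right, the first-year spend comes out to W/T regardless of r (the analytic solution for log utility is c_t = (W/T)(1+r)^t), so in this toy model a higher interest rate raises later spending but doesn't argue for spending less in 2021.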

Is effective altruism growing? An update on the stock of funding vs. people

Thanks, this data is really helpful -- and it's also reassuring to know that people in the EA community are on top of this stuff. I would be disappointed if no one was.

I'm curious as to how the 3% per year number could be justified (via models, rather than by aggregating survey answers). It seems to me that it should be substantially higher.

Suppose you have my timelines (median 2030). Then, intuitively, I feel like we should be spending something like 10% per year. If you have 2055 as your median, then maybe 3% per year makes sense...

EXCEPT that this doesn't take into account interest rates! Even if we spent 10% per year, we should still expect our total pot of money to grow, leaving us with an embarrassingly large amount of money going to waste at the end. (Sure, sure, it wouldn't literally go to waste--we'd probably blow it all on last-ditch megaprojects to try to turn things around--but these would probably be significantly less effective per dollar than a world in which we had spread out our spending, taking more opportunities on the margin over many years.) And if we spent 3%...
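To put rough numbers on that compounding (purely illustrative figures I made up, not estimates from the post): assume the pot grows 15%/year from investment returns plus incoming donations, and we run a 3% or 10% spend rate until a median-2030 timeline.

```python
# Toy compounding check: does the pot still grow despite spending?
growth = 0.15                      # assumed annual growth (returns + new donations)
for spend_rate in (0.03, 0.10):
    pot = 1.0                      # normalized starting pot
    for year in range(2021, 2031): # run until a median-2030 timeline
        pot *= 1 + growth          # the pot compounds...
        pot -= spend_rate * pot    # ...then we spend a fixed fraction of it
    print(f"spend {spend_rate:.0%}/yr -> pot is {pot:.2f}x its 2021 size by 2030")
```

Under these made-up parameters the pot still ends up roughly 3x bigger by 2030 at a 3% spend rate, and about 1.4x bigger even at 10% -- the 'embarrassingly large pot at the end' worry in miniature.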

Idk. I'm new to this whole question. I'd love for people to explain more about how to think about this.

Digital People Would Be An Even Bigger Deal

Digital people that become less economically, militarily, and politically powerful--e.g. as a result of reward hacking making them less interested in the 'rat race'--will be outcompeted by those that don't become less powerful, unless there are mechanisms in place to prevent this, e.g. all power centralized in one authority that decides not to let it happen, or strong, effective regulations that are universally enforced.

DeepMind: Generally capable agents emerge from open-ended play

My take is that we do now have AGI -- but it's really shitty AGI, not even close to human-level. (GPT-3 was another example of this: pretty general, but not human-level.) It seems that we now have the know-how to train a system combining all the abilities and knowledge of GPT-3 with those of these game-playing agents. Such a system would qualify as AGI, but not human-level AGI. The question is how long it'll take, and how much money (to make it bigger and train it for longer), to get to human-level, or at least to something dangerously powerful.

DeepMind: Generally capable agents emerge from open-ended play

I would love to know! If anyone finds out how many PF-days (petaflop/s-days) or operations or whatever were used to train this stuff, I'd love to hear it. (Alternatively: how much money was spent on the compute, or the hardware.)
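In case it's useful, here's the generic back-of-envelope conversion I'd reach for (my sketch; the hardware numbers in the example are hypothetical placeholders, not DeepMind's actual setup, which as far as I know wasn't published):

```python
SECONDS_PER_DAY = 86_400
PFLOPS_DAY = 1e15 * SECONDS_PER_DAY   # one petaflop/s-day, in total FLOPs

def pf_days(n_chips, peak_flops_per_chip, utilization, days):
    """Training compute in petaflop/s-days, from a hardware setup."""
    total_flops = (n_chips * peak_flops_per_chip * utilization
                   * days * SECONDS_PER_DAY)
    return total_flops / PFLOPS_DAY

# Hypothetical example: 1,000 accelerators at 100 TFLOP/s peak,
# 30% utilization, training for 30 days.
print(f"{pf_days(1_000, 100e12, 0.30, 30):,.0f} PF-days")  # -> 900 PF-days
```

So if anyone learns the chip count, chip type, and training duration, the PF-days figure (and, given a cloud price per chip-hour, a rough dollar cost) falls out directly.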

DeepMind: Generally capable agents emerge from open-ended play

I did say it was a hot take. :D If I think of more sophisticated things to say I'll say them. 
