Matt makes lots of money on his independent Substack now, so that feels less urgent, but funding other things like Future Perfect in other news sources, as the Rockefeller Foundation does now, seems great.
Don't know if this fits the bill, but this channel discusses AI papers and is really fun and useful, especially for examples of improvements in the field and of gaming specifications: https://www.youtube.com/user/keeroyz
There's a new channel that could be added (it doesn't have much yet, but seems promising):
Thanks for writing this up! It's great to formalize intuitions, and this had a bunch of links I'm interested in following up on.

One simplifying assumption was that both interventions cash out in constant amounts of utility for the duration of their relevance. You spoke at the end about the ways in which conclusions would change by changing assumptions; this seems like an important one! If utility increases over time, you have additional juice in that part of the race. Is this basically addressed by your not assuming the bigness of the universe (since if there are more people, presumably some good intervention will have more impact), plus leaving aside attractor states? I think not quite: attractor states mostly seem good by limiting unpredictability rather than by increasing impact, and I can imagine ways besides there being more people that a good intervention could increase its utility generation over time (snowball effects, enabling other good things to be built on top, etc.).

But maybe this just doesn't add much to the central idea? The question is simply one of comparing integrals; we can construct more complicated integrands, model a bunch of different possibilities, and hopefully test them empirically, which should tell us a lot about how to proceed.

Thanks for this, and I'd love your thoughts!
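To make the integral comparison concrete, here is a minimal sketch. The symbols $u_1$, $u_2$, $g$, $T_1$, and $T_2$ are my own notation (not from the post): a constant-utility intervention versus one whose utility grows over time, say exponentially.

```latex
% Constant-utility intervention, active over [0, T_1]:
U_1 = \int_0^{T_1} u_1 \, dt = u_1 T_1

% Growing-utility intervention (e.g. snowball effects modeled as
% exponential growth at rate g > 0), active over [0, T_2]:
U_2 = \int_0^{T_2} u_2 \, e^{g t} \, dt = \frac{u_2}{g}\left(e^{g T_2} - 1\right)

% The comparison U_1 \gtrless U_2 now depends on the growth rate g as
% well as the base rates and durations; even a small g can dominate
% when T_2 is long.
```

This is just one possible integrand; the same comparison works for any assumed utility trajectory, which is what makes the empirical question (what shape does $u(t)$ actually take?) so important.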