
Ryan Greenblatt

Member of Technical Staff @ Redwood Research
70 karma · Joined Sep 2022

Bio

This other Ryan Greenblatt is my old account[1]. Here is my LW account.

  1. ^

    Account lost to the mists of time and expired university email addresses.

Comments (11)

I believe prior work showed large effects from short periods of supplementation. (Edit: note that this work seems to debunk that prior work, but it should explain the study design.)

From a "current generations" perspective, reducing GCRs is probably not more cost-effective than directly improving the welfare of people / animals alive today

I think reducing GCRs seems pretty likely to wildly outcompete other traditional approaches[1] if we use a slightly broad notion of current generation (e.g. currently existing people), due to the potential for a techno-utopian world that makes the lives of currently existing people >1,000x better (which heavily depends on diminishing returns and other considerations). E.g., immortality, making them wildly smarter, able to run many copies in parallel, experience insanely good experiences, etc. I don't think BOTECs will be a crux for this unless we start discounting things rather sharply.
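
As a toy illustration of why sharp discounting is the crux here, a minimal BOTEC might look like the sketch below. All numbers (the probability shift, the >1,000x multiplier, the discount) are made-up placeholders for the structure of the comparison, not estimates from this comment.

```python
# Minimal BOTEC sketch. All inputs are hypothetical placeholders chosen only to
# illustrate the structure of the comparison, not actual estimates.

def expected_gain(prob_shift, utopia_multiplier, discount=1.0):
    """Expected welfare gain (in units of 'one current life improved') from
    shifting the probability of a techno-utopian outcome by prob_shift, where
    that outcome makes currently existing people's lives utopia_multiplier
    times better, optionally discounted for skepticism about extreme outcomes."""
    return prob_shift * utopia_multiplier * discount

direct_welfare_gain = 1.0  # baseline: one unit of direct welfare improvement

# Hypothetical: GCR work shifts the chance of a techno-utopian outcome by 1%,
# and that outcome is >1,000x better for currently existing people.
gcr_no_discount = expected_gain(0.01, 1000)                    # = 10.0
gcr_sharp_discount = expected_gain(0.01, 1000, discount=0.01)  # = 0.1

print(f"direct welfare:           {direct_welfare_gain}")
print(f"GCR, no discount:         {gcr_no_discount}")      # outcompetes by ~10x
print(f"GCR, 100x sharp discount: {gcr_sharp_discount}")   # now loses to direct welfare
```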

  • If GCRs actually are more cost-effective under a "current generations" worldview, then I question why EAs would donate to global health / animal charities (since this is no longer a question of "worldview diversification", just raw cost-effectiveness)

IMO, the main axis of variation for EA-related cause prio is "how far down the crazy train do we go", not "person-affecting (current generations) vs otherwise" (though views like person-affecting ethics might be downstream of crazy train stops).

Mildly against the Longtermism --> GCR shift

Idk what I think about Longtermism --> GCR, but I do think that we shouldn't lose "the future might be totally insane" and "this might be the most important century in some longer view". And I could imagine a focus on GCRs killing a broader view of history.

  1. ^

    That said, if we literally just care about experiences which are somewhat continuous with current experiences, it's plausible that speeding up AI outcompetes reducing GCRs/AI risk. And it's plausible that there are more crazy-sounding interventions which look even better (e.g. extremely low cost cryonics). Minimally, the overall situation gets dominated by "have people survive until techno utopia and ensure that techno utopia happens". And the relative tradeoffs between having people survive until techno utopia and ensuring that techno utopia happens seem unclear and will depend on some more complicated moral view. Minimally, animal suffering looks relatively worse to focus on.

This seems like it might be a good price discrimination strategy, though I'm not sure if that's the intent.

Thanks for the report.

If I were to add one thing to this report, it would probably be a comparison of increasing the likelihood of space settlement vs increasing the likelihood of extremely resilient and self-sustaining disaster shelters (e.g. shelters that could be self-sustaining for decades or possibly centuries). You note the similarities in "Design of disaster shelters", but don't compare these as possible interventions (as far as I can tell).

My naive (mostly uninformed) guess would have been that very good disaster shelters are wildly cheaper and easier (prior to radical technology change like superhuman AI or nanotech) while offering most of the same benefits.

(I put a low probability on commercially viable and self-sustaining space colonies prior to some other radical change in the technical landscape, but perhaps I'm missing some story for economic viability. I think the probability of these sorts of space colonies in the next 60 years is low without some other radical technical advancement like AI or nanotech happening first, in which case the value add is more complex.)

One problem I have with these discussions, including past discussions about why national EA orgs should have a fundraising platform, is the reductionist and zero-sum thinking given in response.

Wait, but it might actually have opportunity cost? Like those people could be doing something other than trying to get more medium-sized donors? There is a cost to trying to push on this versus something else. (If you want to push on it, then great, this doesn't impose any cost on others, but that seems different from a claim that this is among the most promising things to be working on at the margin.)

I identified above how an argument stating that fewer donors results in more efficiency would never be made in the for-profit world. Similarly, a lot of the things we care about (talent, networks, entrepreneurship) become stronger the more small/medium donors we have. For the same reason that eating 3 meals a day makes it easier to be productive - despite it taking more time compared to not eating at all - having more of a giving small-donor ecosystem will make it easier to achieve other things we need.

Your argument here is "getting more donors has benefits beyond just the money" (I think). But we can also go for those benefits directly without necessarily getting more donors. Maybe trying to recruit more medium-sized donors is the best way to community build, but that's a fairly specific claim which seems a priori unlikely (it could be true ofc), unless having more small donors is itself a substantial fraction of the value and that's why it's better than other options.

So, recruiting donors is perhaps subsidized by causing the other effects you noted, but if it's subsidized by some huge factor (e.g. more like 10x than 1.5x), then directly pursuing those effects seems like probably a better strategy.
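
As a rough sketch of this point (the donation amount and subsidy factors below are made-up placeholders, not estimates): at a large subsidy factor almost all of the value comes from the indirect effects, so the relevant comparison is against the best direct way of producing those effects.

```python
# Toy sketch with hypothetical numbers: how much of the value of donor
# recruitment comes from the indirect (community-building) effects at
# different subsidy factors?

def value_breakdown(money_raised, subsidy_factor):
    """Return (total value, share of value from indirect effects), where
    indirect effects are modeled as subsidy_factor times the money raised."""
    indirect = money_raised * subsidy_factor
    total = money_raised + indirect
    return total, indirect / total

money_raised = 100.0  # arbitrary units of donations per unit of effort

for subsidy_factor in (1.5, 10.0):
    total, indirect_share = value_breakdown(money_raised, subsidy_factor)
    print(f"subsidy {subsidy_factor:>4}x: total value {total:6.1f}, "
          f"{indirect_share:.0%} of it from indirect effects")

# At 1.5x, ~60% of the value is indirect; at 10x, ~91% is. In the 10x case the
# activity is mostly a community-building intervention, so it should be compared
# against directly pursuing those effects rather than judged as fundraising.
```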

The underlying claim is that many people with technical expertise should do part-time grant making?

This seems possible to me, but a bit unlikely.

OP doesn't have the capacity to evaluate everything, so there are things they don't fund that are still quite good.

Also, OP seems to prefer to evaluate things that have a track record, so taking bets on people so they can build more of a track record and then apply to OP would be pretty helpful.

IMO, these both seem like reasons for more people to work at OP on technical grant making, more than reasons for Neel to work part-time on grant making with his money.

I'm in a relatively similar position to Neel. I think technical AI safety grant makers typically know way more than I do about what is promising to fund. There is a bunch of non-technical info which is very informative for knowing whether a grant is good (what do current marginal grants look like, what are the downside risks, is there private info on the situation which makes things seem sketchier, etc.), and grant makers are generally in a better position than I am to evaluate this stuff.

The limiting factor [in technical AI safety funding] is having enough technical grant makers, not having enough organizational diversity among grant makers (at least at current margins).

If OpenPhil felt more saturated on technical AI grant makers, then I would feel like starting new orgs pursuing different funding strategies for technical AI safety could look considerably better than just having more people work on grant making at OpenPhil.

That said, note that I tend to agree to a reasonable extent with the technical takes at OpenPhil on AI safety. If I heavily disagreed, I might think starting new orgs looks pretty good.

First, the recent surveys of the general public's attitudes towards AI risk suggest that a strongly enforced global pause would actually get quite a bit of support. It's not outside the public's Overton Window. It might be considered an 'extreme solution' by AI industry insiders and e/acc cultists. But the public seems to understand that it's just fundamentally dangerous to invent Artificial General Intelligence that's as smart as smart humans (and much, much faster), or to invent Artificial Superintelligence. AI experts might patronize the public by claiming they're just reacting to sensationalized Hollywood depictions of AI risk. But I don't care. If the public understands the potential risks, through whatever media they've been exposed to, and if it leads them to support a pause, we might as well capitalize on public sentiment.

I think the public might support a pause on scaling, but I'm much more skeptical about the sort of hardware-inclusive pause that Holden discusses here:

global regulation-backed pause on all investment in and work on (a) general enhancement of AI capabilities beyond the current state of the art, including by scaling up large language models; (b) building more of the hardware (or parts of the pipeline most useful for more hardware) most useful for large-scale training runs (e.g., H100's); (c) algorithmic innovations that could significantly contribute to (a)

A hardware-inclusive pause sufficient for pausing for >10 years would probably effectively dismantle companies like Nvidia and would put at least a serious dent in TSMC. This would involve huge job losses and a large hit to the stock market. I expect people would not support such a pause, which effectively requires dismantling a powerful industry.

It's possible I'm overestimating the extent to which hardware needs to be stopped for such a ban to be robust and an improvement on the status quo.

EAs are especially rational people and not eating animals is obviously the more rational choice for 90%+ of people reading this

I'm about 99% bivalve vegan (occasionally I eat fish for cognitive reasons). However, I think it doesn't make sense for strongly longtermist individuals in terms of the direct, straightforward benefits of veganism. The direct animal suffering is negligible relative to the future. I'm strongly longtermist, but I stay vegan for a combination of less direct reasons like signaling to myself and generally being cooperative (for reasons like acausal decision theory and directly being cooperative with current people).
