Chi

685 · Joined Aug 2017

Bio

I work for the Center on Long-Term Risk on s-risk reduction projects. (Currently: hiring, community building, and grantmaking.)

Feel free to dm me with thoughts and questions! Especially as they relate to s-risk/AI risk.

Context for some other things you might want to ask me about: Previously, I was a guest manager at the EA Infrastructure Fund (2021), did some research for 1 Day Sooner on Human Challenge Trials for Covid vaccines (2020), did the summer research fellowship at FHI writing about IDA (2019), worked a few hours a week for CEA on local groups mentoring for a few months (2018), and helped a little bit with organizing EA Oxford (2018/19). I studied PPE at Oxford (2018-2021) and psychology in Freiburg (2015-2018).

Comments

Center on Long-Term Risk (my employer) focuses on reducing s-risk (risks of astronomical suffering).

(And AFAIK coined the term (long before my time, though))

Many large donors (and donation advisors) do not take general applications. This includes Open Philanthropy (“In general, we expect to identify most giving opportunities via proactive searching and networking”), Longview, REG, CERR, CLR, and the new Longtermism Fund.

Grant manager at CLR here - we take general applications to the CLR Fund and would love to get more of them. Note that our grantmaking is specifically s-risk focused.*

Copy-pasting another comment of mine from another post over here:

If you or someone you know are seeking funding to reduce s-risk, please send me a message. If it's for a smaller amount, you can also apply directly to CLR Fund. This is true even if you want funding for a very different type of project than what we've funded in the past.

I work for CLR on s-risk community building and on our CLR Fund, which mostly does small-scale grantmaking, but I might also be able to make large-scale funding for s-risk projects ~in the tens of $ millions (per project) happen. And if you have something more ambitious than that, I'm also always keen to hear it :)

*We also fund things that aren't specifically targeted towards s-risk reduction but still seem beneficial to s-risk reduction. Some of our grants this year that we haven't published yet are such grants. That said, we are often not in the best position to evaluate applications that aren't focused on s-risk even if they would have some s-risk-reducing side effects, especially when these side effects are not clearly spelled out in the application.

Automatically create a bibliography with all the links in a post.

Not OP, but I'm guessing it's at least unclear for the non-safety positions at OpenAI that are listed, though it depends a lot on what a person would do in those positions. (I think they are not necessarily good "by default", so people working in these positions would have to be more careful/more proactive to make them positive. I still think they could be great.) The same goes for many similar positions on the sheet, but I'm pointing out OpenAI since a lot of roles there are listed. For some of the roles, I don't know enough about the org to judge.

Haha, no, it took me quite a bit longer to phrase what I wrote, but I didn't have dedicated non-writing thinking time. E.g. the claim about the expected ratio of future assets seems like something I could sanity-check and get a better number for with pen and paper and a few minutes, but I was too lazy to do that :)

(And I can't let false praise of me stand)

edit to also comment on the substantial part of your comment: Yes, that takeaway seems good to me!

edit edit: Although I'd caveat that s-risk is less mature than general longtermism (more "pre-paradigmatic" for people who like that word), so there might be less (obvious) work to do for founders/leaders right now, and that can be very frustrating. We still always want to hear about such people.

last edit?: And as in general longtermism, if somebody is interested in s-risk and has really high earning-to-give potential, I might sometimes prefer that, especially given what I said above about founder/leader type people. Something within an order of magnitude or two of FTX F for s-risk reduction would obviously be a huge win for the space, and I don't think it's crazy to think that people could achieve that.

I didn't run this by anyone else in the s-risk funding space, so please don't hold others to these numbers/opinions.

Tl;dr: I think this is probably right in direction, but with lots of caveats. In particular, it's still the case that s-risk has a lot of money (~low hundreds of $m) compared to ideas/opportunities, at least right now, and possibly more so than general longtermism. I think this might change soon, since I expect s-risk money to grow less than general longtermist money.

edit: I think s-risk is ideas constrained when it comes to small grants and funding (and ideas) constrained for large grants/investments.

I'd estimate s-risk to have something in the low hundreds $m in expected value (not time-discounted) of current assets specifically dedicated to it. Your question is slightly hard to answer since I'm guessing OpenPhil and FTXF would fund at least some s-risk projects if there were more proposals/more demand for money in s-risk. Also, a lot of funded people and projects who don't work directly on s-risk still care about s-risk. Maybe that should be counted somehow. Naively not counting these people and OpenPhil/FTXF money at all and comparing current total assets in general longtermism vs. s-risk:

In absolute terms: Yup, general longtermism definitely has much more money (~two orders of magnitude). My guess is that this ratio will grow bigger over time, and that it will in expectation grow bigger over time. (~70% credence for each of the claims? Again, I'm confused about how to count OpenPhil and FTX F money and how they'll decide to spend money in the future. If I stick to not counting them as s-risk money at all, then >70% credence.)

Per person working on s-risk/general longtermism: Would still say yes although I don't have a good way to count s-risk people and general longtermist people. Could be closer to even and probably not (much) more than an order of magnitude difference. Again, quick and wild guess is that the difference will in expectation grow larger over time, but less confident in this than my guess about how the ratio of absolute money will develop. (55%?)

Per quality-adjusted idea/opportunity to spend money: Unsure. I'd (much) rather have more money-eating ideas/opportunities to reduce s-risk than more money to reduce s-risk but I'm not sure if this is more or less the case compared to general longtermism (s-risk has both fewer ideas/opportunities and less money). Also don't know how this will develop. Arguably, the ratio between money and idea/opportunity also isn't a great metric because you might care more about absolutes here. I think some people might argue that s-risk is less funding constrained compared to ideas-constrained than general longtermism. This isn't exactly what you've asked for but still seems relevant. OTOH, having less absolute money does mean that the s-risk space might struggle to fund even one really expensive project.

edit: I do think if we had significantly more money right now, we would be spending more money now-ish.

Per "how much people in the EA community care about this issue": Who knows :) I'm obviously both biased and in a position that selects for my opinion.

Funding infrastructure: Funding in s-risk is even more centralized than in general longtermism, so if you think diversification is good, more s-risk funders are good :) There are also fewer structured opportunities for funding in s-risk and I think the s-risk funding sources are generally harder to find. Although again, I assume one could easily apply with an s-risk motivated proposal to general longtermist places, so it's kind of weird to compare the s-risk funding infrastructure to the general longtermist funding infrastructure.


I wrote this off the cuff and in particular, might substantially revise my predictions with 15 minutes of thought.

If you or someone you know are seeking funding to reduce s-risk, please send me a message. If it's for a smaller amount, you can also apply directly to CLR Fund. This is true even if you want funding for a very different type of project than what we've funded in the past.

I work for CLR on s-risk community building and on our CLR Fund, which mostly does small-scale grantmaking, but I might also be able to make large-scale funding for s-risk projects ~in the tens of $ millions (per project) happen. And if you have something more ambitious than that, I'm also always keen to hear it :)

Thanks for asking! We would definitely consider later starts if people aren't available earlier, and I would be surprised if we rejected a strong candidate just on the basis that they are only available a month later. There's some chance we would shorten the default fellowship length for them (not necessarily by the same number of weeks that they would start later), but we would discuss this with them first. If they would only accept the fellowship if it starts later and is the original 9 weeks long, this would raise the threshold for accepting them somewhat, but again, I would be surprised if we rejected a very strong candidate on that basis alone. (I think it would only matter for edge cases.) It also depends a bit on what other applications we get: e.g. if we get many strong applications from Germans who can only start later, we would probably be much happier to accommodate all of them.

Thanks for the question! It's unclear whether we'll run an S-Risk Intro Fellowship in this precise format again. We are fairly likely to run intro events with similar content in the future though. I think this will most likely happen on an annual or semi-annual basis.

Some data that I didn't formally write up and put in the post (mostly for time reasons) on how past fellows evaluated the fellowship:


2021

10 out of 14 fellows filled in the fellowship feedback survey:

  • 10 of 10 respondents answered "Are you glad that you participated in the fellowship" with 5/5 ("hell yeah")
  • 9 of 10 respondents answered "If the same program happened next year, would you recommend a friend (with similar background to you before the fellowship) to apply?" with 10/10 ("strongly yes").
    • 1 of 10 respondents answered this question with 9/10.
  • 4 of 10 respondents answered "To what extent was the fellowship a good use of your time compared to what you would otherwise have been doing" with "notably more valuable (3-10x the counterfactual)"
    • 3 of 10 respondents answered "To what extent was the fellowship a good use of your time compared to what you would otherwise have been doing" with "Much more valuable (10-30x the counterfactual)"
    • 1 of 10 respondents answered "To what extent was the fellowship a good use of your time compared to what you would otherwise have been doing" with "Far more valuable (>30x the counterfactual)"
    • 1 of 10 respondents answered "To what extent was the fellowship a good use of your time compared to what you would otherwise have been doing" with "Somewhat more valuable (1-3x the counterfactual)"
    • 1 of 10 respondents did not answer the question

It's possible that the respondents were anchored by the possible options for the last question: there was one option "about as valuable" and 4 options each in the directions more and less valuable. The lowest option respondents could choose was "not at all valuable (<10% of counterfactual)".

The survey was not anonymous (although the name field was optional and one respondent chose not to enter their name) and several of the respondents were either in employment, on a grant, or on a trial with us at the time of responding.


2020

7 out of 9 fellows filled in the fellowship feedback survey:

  • 6 of 7 respondents answered "Are you glad that you participated in the fellowship" with 5/5 ("hell yeah")
    • 1 of 7 respondents did not answer this question
  • 3 of 7 respondents answered "To what extent was the fellowship a good use of your time compared to what you would otherwise have been doing" with "Much more valuable (10-30x the counterfactual)"
    • 3 of 7 respondents answered "To what extent was the fellowship a good use of your time compared to what you would otherwise have been doing" with "notably more valuable (3-10x the counterfactual)"
    • 1 of 7 respondents answered "To what extent was the fellowship a good use of your time compared to what you would otherwise have been doing" with "Far more valuable (>30x the counterfactual)"
  • In 2020, we did not ask whether they would recommend the program to someone similar to them.

The survey was anonymous in 2020. Several of the respondents were either in employment, on a grant, or on a trial with us at the time of responding.
