All of satvikberi's Comments + Replies

These are both good points worth addressing! My understanding on (2) is that any proposed method of slowing down AGI research would likely antagonize the majority of AI researchers while producing relatively little actual slowdown. It seems more valuable to build alliances with current AI researchers and get them to care about safety, in order to increase the share of research that is safety-concerned rather than safety-agnostic.

3 · MichaelDickens · 8y
Exactly. If someone were trying to slow down AI research, they definitely wouldn't want to make it publicly known that they were doing so, and they wouldn't write articles on a public forum about how they believe we should try to slow down AI research.

GiveWell already hires and trains a number of people with 0 experience (perhaps most of their hires).

Oh, cool! I definitely didn't realize this.

I get the (purely anecdotal) impression that recruiting is sensitive to salaries, in the sense that some people who would be good fits for EA charities automatically rule them out because the salaries are low enough to force undesirable time/money tradeoffs. However, it's a bit of a tricky problem, because most nonprofits want to pay everyone roughly the same amount, so hiring one marginal person at, say, 20% more really means increasing all salaries by that much (a rough illustration of the cost follows below).

Another relevant factor is how much of a salary cut you're looking at when mo...
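A rough arithmetic sketch of the pay-parity point above (all numbers are hypothetical, just to make the tradeoff concrete):

```python
# Hypothetical org: 10 existing staff at $50k, considering one new hire at 20% more.
n_staff, salary, premium = 10, 50_000, 0.20

new_hire_alone = salary * (1 + premium)
# Under rough pay parity, everyone's salary rises to match the new hire's.
with_parity = new_hire_alone + n_staff * salary * premium

print(f"New hire alone:  ${new_hire_alone:,.0f}")  # $60,000
print(f"With pay parity: ${with_parity:,.0f}")     # $160,000 marginal cost
```

Under pay parity, the marginal cost of one better-paid hire is dominated by the raises for existing staff, which is why the problem is trickier than it first looks.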

From talking to Matt Wage a few times, I got the impression that he spends the equivalent of a few full-time work weeks per year figuring out where to donate. Requiring potential donors to spend that much time seems like a flaw in the system, and EA Ventures seems to be addressing it.

It's hard to say for sure without knowing the fraction of solicited EA startups that get funding, but GiveWell has made some angel-esque investments in the past (e.g. New Incentives), and I think some large individual donors have as well.

I get the impression that these are going mostly to programs that already have a lot of evidence and aren't really exploring the space of possible interventions. I tend to believe that the effectiveness of projects probably follows a power law, and that therefore the most effective interventions are probably ones people...
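To make the power-law intuition concrete, here's a minimal simulation (the distribution and its parameters are illustrative assumptions, not anything from the thread): if project effectiveness is Pareto-distributed, a tiny fraction of projects accounts for a large share of total impact.

```python
import random

random.seed(0)

# Illustrative assumption: effectiveness ~ Pareto with shape alpha = 1.5.
alpha, n = 1.5, 10_000
effectiveness = sorted((random.paretovariate(alpha) for _ in range(n)), reverse=True)

top_share = sum(effectiveness[: n // 100]) / sum(effectiveness)
print(f"Top 1% of projects: {top_share:.0%} of total effectiveness")
# With these parameters, the top 1% typically captures on the order of 20-30%.
```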

2 · Ben Kuhn · 9y
I wouldn't say that New Incentives has "a lot of evidence and aren't really exploring the space of possible interventions." But again, this is just dueling anecdata for now.

GiveWell already hires and trains a number of people with 0 experience (perhaps most of their hires).

Ah, good point. This seems like a pretty plausible mechanism.

To play devil's advocate (these don't actually represent my beliefs):

I can’t remember any EA orgs failing to reach a fundraising target.

This doesn't necessarily mean much, because fundraising targets have a lot to do with how much money EA orgs believe they can raise.

Open Phil has recently posted about an org they wish existed but doesn’t and funder-initiated startups.

It's pretty hard to get funding for a new organization, e.g. Spencer and I put a lot of effort into it without much success. The general problem I see is a lack of "angel investi...

3 · Ben Kuhn · 9y
I agree that this could confound the result, but it's still some evidence!

It's hard to say for sure without knowing the fraction of solicited EA startups that get funding, but GiveWell has made some angel-esque investments in the past (e.g. New Incentives), and I think some large individual donors have as well.

This is pretty plausible for AI risk, but not so obvious for generic organization-starting, IMO. Are there specific skills you can think of that might be a factor here?
5 · redmoonsoaring · 9y
I agree with this. Moreover, I think there's a serious lack of funding in the 'fringe' areas of EA, like biosecurity, systemic change in global poverty, rationality training, animal rights, or personal development. These areas arguably have the greatest impact, but they find it difficult to attract the major funders. For example, I think the Swiss EA groups are quite funding-constrained, but they aren't well-known to the major funders, and movement-building lacks robust evidence.