GiveWell already hires and trains a number of people with zero experience (perhaps most of their hires).
Oh, cool! I definitely didn't realize this.
I get the (purely anecdotal) impression that recruiting is salary-sensitive: some people who would be good fits for EA charities rule them out automatically because the salaries are low enough that they would have to make undesirable time/money tradeoffs. It's a tricky problem, though, because most nonprofits want to pay everyone roughly the same amount, so hiring one marginal person at, say, 20% more really means raising all salaries by that much.
Another relevant factor is how much of a salary cut you're looking at when mo...
From talking to Matt Wage a few times, I got the impression that he spends the equivalent of a few full-time work weeks per year figuring out where to donate. Requiring potential donors to spend that much time seems like a flaw in the system, and EA Ventures seems to be addressing it.
It's hard to say for sure without knowing the fraction of solicited EA startups that get funding, but GiveWell has made some angel-esque investments in the past (e.g. New Incentives), and I think some large individual donors have as well.
I get the impression that these are going mostly to programs that already have a lot of evidence and aren't really exploring the space of possible interventions. I tend to believe that the effectiveness of projects probably follows a power law, and that therefore the most effective interventions are probably ones people...
To play devil's advocate (these don't actually represent my beliefs):
I can’t remember any EA orgs failing to reach a fundraising target.
This doesn't necessarily mean much, because fundraising targets are themselves set partly based on how much money EA orgs believe they can raise.
Open Phil has recently posted about an org they wish existed but doesn’t and funder-initiated startups.
It's pretty hard to get funding for a new organization, e.g. Spencer and I put a lot of effort into it without much success. The general problem I see is a lack of "angel investi...
These are both good points worth addressing! My understanding on (2) is that any proposed method of slowing down AGI research would likely antagonize the majority of AI researchers while producing relatively little actual slowdown. It seems more valuable to build alliances with current AI researchers and get them to care about safety, in order to shift the balance of research done from safety-agnostic toward safety-concerned.