My boring answer would be to see the details on our website. In terms of submission style, we say:
- We recommend that applicants take about 1–2 hours to write their applications. This does not include the time spent developing the plan and strategy for the project – we recommend thinking about those carefully prior to applying.
- Please keep your answers brief and ensure the total length of your responses does not exceed 10,000 characters. We recommend a total length of 2,000–5,000 characters.
- We recommend focusing on the substantive arguments in favour of your project rather than polishing your submission.
- We recommend honestly communicating the strengths and weaknesses of your project rather than trying to “sell” your proposal.
You can find details on the scope of grants that EAIF will consider funding here (although this is subject to change – details here).
For non-obvious mistakes, some examples that come to mind are:
Currently we don't have a process for retroactively evaluating EAIF grants. However, there are a couple of informal channels which can help to improve decision-making:
I think the lack of a proper M&E (monitoring and evaluation) function is a problem, and one that I would be keen to address over the longer term.
Hey - I think it's important to clarify that EAIF is optimising for something fairly different from GiveWell (although we share the same broad aim):
As such, a direct or like-for-like comparison is challenging, as our "bar for funding" is quite different from GiveWell's. The other caveat is that we don't have a systematic process for retroactively classifying grants as "wins" or "losses" – our current M&E process is much fuzzier.
Given this, any answer about the cost-effectiveness of GiveWell vs EAIF will be pretty subjective and prone to error.
Nonetheless, my personal opinion is that the mean EAIF grant is likely more impactful than the typical GiveWell grant. Very briefly, this is because:
But this is just my personal view, contingent on a very large number of assumptions, which people very reasonably disagree on.
I think the premise of your question is roughly correct: I do think it's pretty hard to "help EA notice what it is important to work on", for a bunch of reasons:
Given those challenges, it's not surprising to me if we struggle to find many projects in this area. To overcome that, I think we would need to take a more active approach (e.g. RFPs – requests for proposals). But we are still in the early days of thinking about these kinds of questions.
Good question! We have discussed running RFPs to more directly support projects we'd like to see. First, though, I think we want to do some more strategic thinking about the direction we want EAIF to go in, so at this stage we are fairly unsure which project types we'd like to see more of.
Caveats aside, I personally[1] would be pretty interested in:
[1] Not speaking for EAIF / EA Funds / EV.
Hey, good question!
Here's a crude rationale:
Of course, there are a bunch of important considerations and nuances that have been ignored in this hypothetical – indeed, I think it's pretty important to be cautious and suspicious about calculations like the above, so we should often discount the "multiplier" factor significantly. Nonetheless, I think (some version of) the above argument goes through for a number of projects EAIF supports.
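To make the shape of that multiplier argument concrete, here's a minimal sketch with invented numbers (purely illustrative – not drawn from any actual grant): suppose a $10,000 grant to a community-building project counterfactually causes its members to donate $100,000 to highly effective charities over their careers. Then:

$$
\text{multiplier} = \frac{\text{counterfactual donations generated}}{\text{grant cost}} = \frac{\$100{,}000}{\$10{,}000} = 10\times
$$

Even after heavily discounting that 10x figure for optimism, attribution error, and donations that would have happened anyway, the grant can still compare favourably with donating the $10,000 directly.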
I agree there's no single unified resource. Having said that, I found Richard Ngo's "five alignment clusters" pretty helpful for bucketing different groups & arguments together. Reposting below:
- MIRI cluster. Think that P(doom) is very high, based on intuitions about instrumental convergence, deceptive alignment, etc. Does work that's very different from mainstream ML. Central members: Eliezer Yudkowsky, Nate Soares.
- Structural risk cluster. Think that doom is more likely than not, but not for the same reasons as the MIRI cluster. Instead, this cluster focuses on systemic risks, multi-agent alignment, selective forces outside gradient descent, etc. Often work that's fairly continuous with mainstream ML, but willing to be unusually speculative by the standards of the field. Central members: Dan Hendrycks, David Krueger, Andrew Critch.
- Constellation cluster. More optimistic than either of the previous two clusters. Focuses more on risk from power-seeking AI than the structural risk cluster, but does work that is more speculative or conceptually-oriented than mainstream ML. Central members: Paul Christiano, Buck Shlegeris, Holden Karnofsky. (Named after Constellation coworking space.)
- Prosaic cluster. Focuses on empirical ML work and the scaling hypothesis, is typically skeptical of theoretical or conceptual arguments. Short timelines in general. Central members: Dario Amodei, Jan Leike, Ilya Sutskever.
- Mainstream cluster. Alignment researchers who are closest to mainstream ML. Focuses much less on backchaining from specific threat models and more on promoting robustly valuable research. Typically more concerned about misuse than misalignment, although worried about both. Central members: Scott Aaronson, David Bau.
To return to the question "what is the current best single article (or set of articles) that provide a well-reasoned and comprehensive case for believing that there is a substantial (>10%) probability of an AI catastrophe this century?", my guess is that these different groups would respond as follows:[1]
[1] But I could easily be misrepresenting these different groups' "core" arguments, and I haven't read all of these, so I could be misunderstanding them.
A couple of weeks ago I blocked all mentions of "Effective Altruism", "AI Safety", "OpenAI", etc. from my Twitter feed. Since then I've noticed it has become much less of a time sink and much better for my mental health. Would strongly recommend!
I wrote the following on a draft of this post. For context, I currently do (very) part-time work at EAIF.
Overall, I'm pretty excited to see EAIF orient to a principles-first EA. Despite recent challenges, I continue to believe that the EA community is doing something special and important, and is fundamentally worth fighting for. With this reorientation of EAIF, I hope we can get the EA community back to a strong position. I share many of the uncertainties listed – about whether this is a viable project, how EAIF will practically evaluate grants under this worldview, or whether it's even philosophically coherent. Nonetheless, I'm excited to see what can be done.
I thought it might be helpful to add my own thoughts, as a fund manager at EAIF. (Note: I'm speaking in a personal capacity, not on behalf of EA Funds or EV.)
I'm happy to go into detail about the changes we proposed and why, although I don't think they are especially relevant to this situation.