I think some forms of AI-assisted governance have great potential.
However, several of these ideas seem possible (in theory) in some form today - yet in practice they don't get adopted. E.g.:
Enhancing epistemics and decision-making processes at the top levels of organizations, leading to more informed and rational strategies.
I think it's very hard to get even the most basic forms of good epistemic practices (e.g. putting probabilities on helpful, easy-to-forecast statements) embedded at the top levels of organizations (for standard moral maze-type reasons).
As such, I think the role of AI here is pretty limited - the main bottleneck to adoption is political/bureaucratic, rather than technological.
I'd guess the way to make progress here is in aligning [implementation of AI-assisted governance] with [incentives of influential people in the organization] - i.e. you first have to get the organization to actually care about good governance (perhaps by joining it, or by using external levers).
[Of course, if we go through crazy explosive AI-driven growth then maybe the existing model of large organizations being slow will no longer be true - and hence there would be more scope for AI-assisted governance]
Hi Arden, thanks for the comment
I think this was something that got lost-in-translation during the grant writeup process. In the grant evaluation doc this was written as:
I think [Richard's research] clearly fits into the kind of project that we want the EA community to be - [that output] feels pretty closely aligned to our “principles-first EA” vision
This is a fairly fuzzy view, but my impression is that Richard's outputs will align with the takes in this post both by "fighting for EA to thrive long term" (increasing the quality of discussion around EA in the public domain), and by increasing the number of "thoughtful, sincere, selfless" individuals in the community (via his Substack, which has a decently sized readership), who may become more deeply involved in EA as a result.
--
On the broader question about "principles first" vs "cause specific" EA work:
My boring answer would be to see details on our website. In terms of submission style, we say:
- We recommend that applicants take about 1–2 hours to write their applications. This does not include the time spent developing the plan and strategy for the project – we recommend thinking about those carefully prior to applying.
- Please keep your answers brief and ensure the total length of your responses does not exceed 10,000 characters. We recommend a total length of 2,000–5,000 characters.
- We recommend focusing on the substantive arguments in favour of your project rather than polishing your submission.
- We recommend honestly communicating the strengths and weaknesses of your project rather than trying to “sell” your proposal.
You can find details on the scope of grants that EAIF will consider funding here (although this is subject to change - details here).
For non-obvious mistakes, some examples that come to mind are:
Currently we don't have a process for retroactively evaluating EAIF grants. However, there are a couple of informal channels which can help to improve decision-making:
I think the lack of a proper M&E (monitoring and evaluation) function is a problem, and one that I would be keen to address over the longer term.
Hey - I think it's important to clarify that EAIF is optimising for something fairly different from GiveWell (although we share the same broad aim):
As such, a direct, like-for-like comparison is challenging, as our "bar for funding" differs quite a bit from GiveWell's. The other caveat is that we don't have a systematic process for retroactively classifying grants as "wins" or "losses" - our current M&E process is much fuzzier.
Given this, any answer about the cost-effectiveness of GiveWell vs EAIF will be pretty subjective and prone to error.
Nonetheless, my personal opinion is that the mean EAIF grant is likely more impactful than the typical GiveWell grant. Very briefly, this is because:
But this is just my personal view, contingent on a very large number of assumptions that people can very reasonably disagree on.
I think the premise of your question is roughly correct: I do think it's pretty hard to "help EA notice what it is important to work on", for a bunch of reasons:
Given those challenges, it's not surprising to me if we struggle to find many projects in this area. To overcome that, I think we would need to take a more active approach (e.g. RFPs). But we are still in the early days of thinking about these kinds of questions.
Good question! We have discussed running RFP(s) to more directly support projects we'd like to see. First, though, we want to do some more strategic thinking about the direction we want EAIF to go in - at this stage we are fairly unsure which project types we'd like to see more of.
Caveats aside, I personally[1] would be pretty interested in:
Not speaking for EAIF / EA Funds / EV
I'm pretty confident (~80-90%?) this is true, for reasons well summarized here.
I'm interested in thoughts on the order-of-magnitude (OOM) difference between animal welfare vs GHD (i.e. would $100m to animal welfare be 2x better than $100m to GHD, or 2000x?).