(Posting in a personal capacity unless stated otherwise.) I help allocate Open Phil's resources to improve the governance of AI with a focus on avoiding catastrophic outcomes. Formerly co-founder of the Cambridge Boston Alignment Initiative, which supports AI alignment/safety research and outreach programs at Harvard, MIT, and beyond, co-president of Harvard EA, Director of Governance Programs at the Harvard AI Safety Team and MIT AI Alignment, and occasional AI governance researcher. I'm also a proud GWWC pledger and vegan.
Are you a US resident who spends a lot of money on rideshares + food delivery/pickup? If so, consider the following:
I think the opposite might be true: when you apply it to broad areas, you're likely to mistake low neglectedness for a signal of low tractability, and you should instead just ask "are there good opportunities at current margins?" When you start looking at individual solutions, it becomes quite relevant whether they have already been tried. (This point was already made here.)
- Would it be good to solve problem P?
- Can I solve P?
- How many resources are already going toward solving P?

What is gained by adding the third question? If the answer to #2 is "yes," then why does it matter if the answer to #3 is "a lot," and likewise in the opposite case, where the answers are "no" and "very few"?
Edit: actually yeah the "will someone else" point seems quite relevant.
Fair enough on the "scientific research is super broad" point, but I think this also applies to other fields that I hear described as "not neglected" including US politics.
Not talking about AI safety polling, agree that was highly neglected. My understanding, reinforced by some people who have looked into the actually-practiced political strategies of modern campaigns, is that it's just a stunningly under-optimized field with a lot of low-hanging fruit, possibly because it's hard to decouple political strategy from other political beliefs (and selection effects where especially soldier-mindset people go into politics).
I sometimes say, in a provocative/hyperbolic sense, that the concept of "neglectedness" has been a disaster for EA. I do think the concept is significantly over-used (ironically, it's not neglected!), and people should just look directly at the importance and tractability of a cause at current margins.
Maybe neglectedness is useful as a heuristic for scanning thousands of potential cause areas. But ultimately, it's just a heuristic for tractability: how many resources are going towards something is evidence about whether additional resources are likely to be impactful at the margin, because more resources mean it's more likely that the most cost-effective solutions have already been tried or implemented. But these resources are often deployed ineffectively, such that it's often easier to just directly assess the impact of resources at the margin than to do what the formal ITN framework suggests, which is to break this hard question into two hard ones: you have to assess something like the abstract overall solvability of a cause (namely, "percent of the problem solved for each percent increase in resources," as if this is likely to be a constant!) and the neglectedness of the cause.
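To make the decomposition concrete, here's a minimal sketch (the function name, units, and numbers are my own illustrative assumptions, following the standard ITN framing) of how the formal framework multiplies the pieces together, and of the role neglectedness plays as a stand-in for tractability at the margin:

```python
def marginal_value_itn(importance, tractability, current_resources):
    """Good done per extra dollar under the formal ITN decomposition.

    importance:        good done (arbitrary units) if the whole problem
                       were solved
    tractability:      percent of the problem solved per 1% increase in
                       resources (treated as a constant here, which, as
                       noted above, it almost certainly isn't)
    current_resources: dollars already going toward the cause
    """
    # Neglectedness enters as 1 / current_resources: one extra dollar is
    # a bigger percentage increase when fewer resources are deployed.
    percent_increase_per_dollar = 100.0 / current_resources
    percent_solved_per_dollar = tractability * percent_increase_per_dollar
    return importance * percent_solved_per_dollar / 100.0

# Doubling existing resources halves the estimate, holding the other two
# factors fixed -- that halving is exactly the work "neglectedness" does
# as a proxy for tractability at the margin.
```

The catch, per the paragraph above, is that the middle term is rarely a constant, so directly estimating "good done per extra dollar at the current margin" can be easier and more accurate than estimating the two factors separately.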
That brings me to another problem: assessing neglectedness might sound easier than abstract tractability, but how do you weigh up the resources in question, especially if many of them are going to inefficient solutions? I think EAs have indeed found lots of surprisingly neglected (and important, and tractable) sub-areas within extremely crowded overall fields when they've gone looking. Open Phil has an entire program area for scientific research, on which the world spends >$2 trillion, and that program has supported Nobel Prize-winning work on computational design of proteins. US politics is a frequently cited example of a non-neglected cause area, and yet EAs have been able to start or fund work in polling and message-testing that has outcompeted incumbent orgs by looking for the highest-value work that wasn't already being done within that cause. And so on.
What I mean by "disaster for EA" (despite the wins/exceptions in the previous paragraph) is that I often encounter "but that's not neglected" as a reason not to do something, whether at a personal or organizational or movement-strategy level, and it seems again like a decent initial heuristic but easily overridden by taking a closer look. Sure, maybe other people are doing that thing, and fewer or zero people are doing your alternative. But can't you just look at the existing projects and ask whether you might be able to improve on their work, or whether there still seems to be low-hanging fruit that they're not taking, or whether you could be a force multiplier rather than just an input with diminishing returns? (Plus, the fact that a bunch of other people/orgs/etc are working on that thing is also some evidence, albeit noisy evidence, that the thing is tractable/important.) It seems like the neglectedness heuristic often leads to more confusion than clarity on decisions like these, and people should basically just use importance * tractability (call it "the IT framework") instead.
The biggest disagreement between the average worldview of people I met with at EAG and my own is something like "cluster thinking vs sequence thinking," where people at EAG are like "but even if we get this specific policy/technical win, doesn't it not matter unless you also have this other, harder thing?" and I'm more like, "Well, very possibly we won't get that other, harder thing, but it still seems really useful to get that specific policy/technical win; here's a story where we totally fail to get the harder thing and the specific win turns out to matter a ton!"
Thanks, glad to hear it's helpful!
I hope to eventually/maybe soon write a longer post about this, but I feel pretty strongly that people underrate specialization at the personal level, even as there are lots of benefits to pluralization at the movement level and large-funder level. There are just really high returns to being at the frontier of a field. You can be epistemically modest about what cause or particular opportunity is the best, not burn bridges, etc, while still "making your bet" and specializing; in the limit, it seems really unlikely that e.g. having two 20 hr/wk jobs in different causes is a better path to impact than a single 40 hr/wk job.
I think this applies to individual donations as well; if you work in a field, you are a much better judge of giving opportunities in that field than if you don't, and you're more likely to come across such opportunities in the first place. I think this is a chronically underrated argument when it comes to allocating personal donations.
Giving now vs giving later, in practice, is a thorny tradeoff. I think these add up to roughly equal considerations, so my currently preferred policy is to split my donations 50-50, i.e. give 5% of my income away this year and save/invest 5% for a bigger donation later. (None of this is financial/tax advice! Please do your own thinking too.)
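As a concrete illustration of the 50-50 policy (all numbers here are hypothetical assumptions for illustration, not advice; the income, 5% real return, and 10-year horizon are made up):

```python
income = 100_000                  # hypothetical annual income
give_now = 0.05 * income          # 5% of income donated this year
invested = 0.05 * income          # 5% saved/invested for later

annual_return = 0.05              # assumed real annual return
years = 10                        # assumed investment horizon

# Simple compounding: the invested half grows into a larger donation later.
future_donation = invested * (1 + annual_return) ** years

print(give_now)                   # 5000.0 donated now
print(round(future_donation, 2))  # ~8144.47 available to donate in year 10
```

Under these made-up assumptions, the deferred half roughly 1.6x's in a decade, which is the basic pull of "giving later"; the considerations in the lists below are what push back the other way.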
In favor of giving now (including giving a constant share of your income every year/quarter/etc, or giving a bunch of your savings away soon):
In favor of giving later: