One of the reasons highly useful projects don't get discovered quickly is that they lie in underexplored spaces. Certain areas are systematically underexplored because of biases in people's search heuristics. Several examples of such biases:
1) Schlep blindness: a term coined by Paul Graham; difficult, tedious projects are underexplored.
2) Low-status blindness: projects that aren't expected to bring the project lead prestige are underexplored.
3) High-variance blindness: projects that are unlikely to succeed but have a positive expected value anyway are underexplored.
4) Already-invented blindness: projects in areas that others have already explored are assumed to have been explored competently.
5) Not-obviously-scalable blindness: projects without an obvious route to scaling are underexplored.
Are there other biases in what EAs pay attention to?
I believe this list is useful because a project checking a lot of these boxes is *some* evidence that it is worth pursuing: few others will be interested in it, which gives you a comparative advantage.
Is there a term for the first one? I generally refer to it as the concentration of benefits and harms problem.
WRT the second: that reminds me that improving the metrics we use (such as QALYs) could be very high impact, but I think Stanford METRICS is already working on this?
Interesting, I hadn't heard that Stanford METRICS was working on this; is there a link you can provide?
Not sure about a term for the first one; it would be nice to come up with one :-) I think your term might work, but something that signals a systemic intervention could be better, maybe something about meta-interventions? It might also relate to these LW posts, which I'm sure you're well familiar with: http://lesswrong.com/lw/kn/torture_vs_dust_specks/ and http://lesswrong.com/lw/n3/circular_altruism/