One of the reasons highly useful projects don't get discovered quickly is that they sit in underexplored spaces. Certain areas are systematically underexplored because of biases in people's search heuristics. Several examples of such biases are:
1) Schlep blindness: named by Paul Graham, this is the tendency for tedious, difficult projects to be underexplored.
2) Low-status blindness: projects that aren't expected to bring the project lead prestige are underexplored.
3) High-variance blindness: projects that are unlikely to succeed but still have positive expected value are underexplored (e.g., a 10% chance at a very large payoff can be worth more in expectation than a sure but modest win).
4) Already-invented blindness: projects in areas that others have already explored are assumed to have been explored competently, and so are passed over.
5) Not-obviously-scalable blindness: projects without an obvious route to scaling are underexplored.
Are there other biases in what EAs pay attention to?
I believe this is useful because a project checking a lot of these boxes is *some* evidence that it is worth pursuing: few others will be interested in it, giving you a comparative advantage.
I'm not sure I have as good a handle on the broader EA ecosystem as others, so consider my thoughts provisional, but I'd suggest adding:
A special subset of low-status blindness: there's a bias toward more conventional projects that are easy to understand, since it's easier to get affirmation from others if they understand what you're working on. (Lifted from Jaan Tallinn's Singularity Summit 2011 talk.)
I suspect EAs may prefer going down the nonprofit route, which seems very noble, but starting a for-profit business may often produce more overall long-term utility. E.g., Elon Musk is one of the most effective EAs on the planet precisely because he decided to go the capitalist route.
I'm not sure whether to add basic research to the list or not; the QALY is a pretty creaky foundation, but I grant there's a lot of uncertainty about how to improve it.