Do you have examples of systemic problems in the EA project that could be solved by targeted coordination mechanisms?
I'll give some answers as examples. I'd like to see answers even if you aren't sure if they are actually problems or not, or if they are partially solved, or even if you think that there might be a better solution - just mention it in the text.
I'm asking mostly because I'm curious about the extent to which we could use more coordination mechanisms, but I might also try to tackle something here - especially if it's related to local groups or to prioritization research (which I plan on learning more about).
That is indeed a problem; I have also seen signs of this several times. Thank you for that comment. At least for initial funding, lotteries might be a good idea, as they would allow much quicker grant applications and would remove bias. I recently asked a question about this here: https://forum.effectivealtruism.org/posts/XtxnLERQfampY7dhh/lotteries-for-everything
While I do think that lotteries have some flaws, they still seem pretty good to me when it comes to initial funding.
An aspect of the funding problem is that money allocation is bad everywhere. (On a larger scale, the market mostly works, but if you get into the details of being a human wanting to trade your time for money, most things around job applications and grant applications are more or less terrible.) If we design a system that doesn't suck, over time EA will attract people who are here for the money, not for the mission.
A solution should have these features:
1) It doesn't suck if you are EA-aligned.
2) If you are not EA-aligned, it should not be easier to get money from us than from other places. (It is possible to get non-EA-aligned people to do EA-aligned actions, but that requires a very different level of oversight.)
I think a grant lottery, where the barrier to entry is having done some significant amount of EA volunteer work, EA donation, or similar, would be an awesome experiment.
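To make the idea concrete, here is a minimal sketch of what such an eligibility-gated lottery could look like. All names, thresholds, and the applicant data are hypothetical placeholders, not a proposal for actual criteria:

```python
import random

# Hypothetical applicant records: (name, hours of EA volunteer work, dollars donated).
applicants = [
    ("alice", 120, 0),
    ("bob", 0, 50),
    ("carol", 5, 0),
]

# Assumed eligibility bar (illustrative numbers only): some significant
# amount of prior volunteer work or donations, as described above.
MIN_VOLUNTEER_HOURS = 40
MIN_DONATION_DOLLARS = 500

def eligible(name, hours, donated):
    """Barrier to entry: prior EA engagement, per the idea above."""
    return hours >= MIN_VOLUNTEER_HOURS or donated >= MIN_DONATION_DOLLARS

# Everyone who clears the bar enters the pool; the grant itself is a
# uniform random draw, which is what removes reviewer bias and keeps
# the application step quick.
pool = [a for a in applicants if eligible(*a)]
winner = random.choice(pool)
```

The design choice worth noting: the effort goes into the one-time eligibility check rather than into evaluating each proposal, which is what makes applications fast while still filtering for people with demonstrated EA engagement.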