Low neglectedness can be outweighed by high importance or tractability. The hard part is being confident about tractability and room for more funding. I think one can still make space for importance-focused efforts despite this uncertainty, especially given that rival actors are incentivized to increase it.
EA insights could be a valuable complement to existing ecosystems. Precisely because large political organizations have established roles to maintain, they may face operational or epistemic limitations. The analogy to large health charities that have drawn EA criticism for marginal impact is easy to see.
Given an aligned AGI, what is your point estimate for the TOTAL (across all human history) cost in USD of having aligned it?
To hopefully spare you a bit of googling without unduly anchoring your thinking: Wikipedia says the Manhattan Project cost $21–23 billion in 2018 USD, with only about 3.7% ($786 million) of that going to research and development.
Q1: How closely does MIRI currently coordinate with the Long-Term Future Fund (LTFF)?
Q2: How effective do you currently consider [donations to] the LTFF relative to [donations to] MIRI? A decimal ratio is preferred if you feel comfortable guessing one.
Q3: Do you expect the LTFF to become more or less effective relative to MIRI as AI capability/safety progresses?
I'm sorry this was downvoted so much. If I had to guess, it was because the post challenges EA doctrine (donating to nonprofits), touches on a sensitive political issue (homelessness), and reads as a subjective essay. If you want more karma on this forum, or more success at changing minds, try writing more formally and making claims that are easier to verify.
"No moral problems in life are shaped like Trolley Problems, and a lot are shaped like Giving Money to Homeless People."
That's a bold claim that you have not argued for. In sensitive political issues, it is unusual to make someone better off without somehow making someone else worse off; that's typically why those issues are sensitive rather than uncontroversial. A possible downside of large-scale nonprofit donations that I find greatly underdiscussed in EA is the potential for donors to impose negative externalities on third parties. Unilateral donation lacks the sensitivity that public policy requires. So, can't we use nonprofits for uncontroversial issues and public policy for sensitive ones? That's an aspiration, and it entails making (meta-)political judgements; it will have failures in practice.