
ExempliGratia

89 karma · Joined

Comments (8)

I'm sorry this was downvoted so much. If I had to guess, I would say it was because this post challenges EA doctrine (donating to nonprofits), touches on a sensitive political issue (homelessness), and does so in the form of a subjective essay. If you want more karma on this forum, or more success in changing minds, try writing in a more formal, verifiable style.

"No moral problems in life are shaped like Trolley Problems, and a lot are shaped like Giving Money to Homeless People."

That's a bold claim, and you haven't made an argument for it. In sensitive political issues, it is unusual to make someone better off without somehow making someone else worse off; that is typically why those issues are sensitive rather than uncontroversial. One possible downside of large-scale nonprofit donations that I find greatly underdiscussed in EA is the potential for donors to impose negative externalities on third parties: unilateral donation lacks the sensitivity that public policy requires. So can't we use nonprofits for uncontroversial issues and public policy for sensitive ones? Perhaps, but that aspiration itself entails making (meta-)political judgements, and it will have failures in practice.

Low neglectedness can be outweighed by high importance or tractability. The hard part is being confident about tractability and room for more funding. I think one can make space for importance-focused efforts despite this uncertainty, especially considering that rival actors are incentivized to increase that uncertainty.
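For context on that first sentence, the standard multiplicative decomposition (the importance/tractability/neglectedness framework, roughly as 80,000 Hours presents it; the notation below is mine) makes the trade-off explicit:

$$
\underbrace{\frac{\text{good done}}{\text{extra dollar}}}_{\text{marginal value}}
\;\approx\;
\underbrace{\frac{\text{good done}}{\text{\% of problem solved}}}_{\text{importance}}
\times
\underbrace{\frac{\text{\% of problem solved}}{\text{\% increase in resources}}}_{\text{tractability}}
\times
\underbrace{\frac{\text{\% increase in resources}}{\text{extra dollar}}}_{\text{neglectedness}}
$$

Since the factors multiply, a small neglectedness term can be offset by large enough importance and tractability terms; the catch is that tractability is exactly the term we can be least confident about.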

EA insights could be a valuable complement to existing ecosystems. Precisely because large political organizations have established roles to maintain, they may have operational or epistemic limitations. It's easy to draw an analogy with the large health charities that have drawn EA criticism for their low marginal impact.

Given an aligned AGI, what is your point estimate for the TOTAL (across all human history) cost in USD of having aligned it?

To hopefully spare you a bit of googling without unduly anchoring your thinking: Wikipedia says the Manhattan Project cost $21–23 billion in 2018 USD, of which only about 3.7%, or $786M, was research and development.
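A quick arithmetic check that the two quoted figures are mutually consistent:

$$
\frac{\$786\text{M}}{\$21\text{B}} \approx 3.7\%,
\qquad
\frac{\$786\text{M}}{\$23\text{B}} \approx 3.4\%
$$

so the "about 3.7%" figure corresponds to the lower end of the cost range.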

How efficiently could MIRI "burn through" its savings if it considered AGI sufficiently likely to be imminent? In other words, if MIRI decided to spend all its savings in a year, how many normal-spending-years' worth of progress on AI safety do you think it would achieve?

Q1: Has MIRI noticed a significant change in funding following the change in disclosure policy?

Q2: If yes to Q1, what was the direction of the change?

Q3: If yes to Q1, were you surprised by the degree of the change?

ETA:

Q4: If yes to Q3, in which direction were you surprised?


Given a "bad" AGI outcome, how likely do you think a long-term worse-than-death fate for at least some people would be, relative to extinction?

Q1: How closely does MIRI currently coordinate with the Long-Term Future Fund (LTFF)?

Q2: How effective do you currently consider [donations to] the LTFF relative to [donations to] MIRI? A decimal coefficient (defined below) is preferred if you feel comfortable guessing one.

Q3: Do you expect the LTFF to become more or less effective relative to MIRI as AI capability/safety progresses?
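For Q2, to be concrete about the coefficient I have in mind (my own framing, not an established metric), I mean a single number $c$ with

$$
c \;=\; \frac{\text{expected long-term impact per marginal dollar donated to the LTFF}}{\text{expected long-term impact per marginal dollar donated to MIRI}}
$$

so $c > 1$ would mean you consider a marginal LTFF donation more effective than a marginal MIRI donation, and $c < 1$ the reverse.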