Just flagging that space doesn't solve anything - it just pushes back resource constraints a bit. Given speed-of-light constraints, we can only increase resources via space travel polynomially with time (roughly cubically at best, since the reachable volume scales as t³), which won't keep up with either exponential or hyperbolic growth.
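To make the point concrete, here's a minimal sketch (with purely illustrative numbers, not a model of any real economy) showing that demand growing at a steady 2% per year must eventually outstrip any polynomially growing supply, even with an enormous head start:

```python
# Supply reachable by light-speed expansion grows at most polynomially
# (~t^3, the volume of a sphere of radius c*t at uniform density),
# while steady percentage growth in demand is exponential.

def reachable_resources(t, k=1.0):
    """Resources within radius c*t, assuming uniform density: ~ k * t**3.
    k is an arbitrary scale factor (a 'head start' for supply)."""
    return k * t ** 3

def demand(t, growth_rate=0.02):
    """Demand growing exponentially at `growth_rate` per year."""
    return (1 + growth_rate) ** t

# Find the first year at which exponential demand exceeds cubic supply,
# even giving supply a million-fold head start (k = 1e6).
t = 1
while demand(t) <= reachable_resources(t, k=1e6):
    t += 1
print(f"2% demand growth overtakes cubic supply by year {t}")
```

The crossover happens within a couple of millennia here; making the head start larger only delays it logarithmically, since the exponential always wins in the end.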
"Research or writing assistance for researchers (especially senior ones) at orgs like FHI, Forethought, MIRI, CHAI"
As a senior research scholar at FHI, I would find this valuable if the assistant was competent and the arrangement was low cost to me (in terms of time, effort, and money). I haven't tried to set up anything like this since I expect finding someone competent, working out the details, and managing them would not be low cost, but I could imagine that if someone else (such as BERI) took care of details, it very well may be low cost. I support efforts to try to set something like this up, and I'd like to throw my hat into the ring of "researchers who would plausibly be interested in assistants" if anyone does set this up.
I'm honestly not certain - I don't believe we'll solve any of these problems by a degrowth approach, so the only way to get a real solution is via innovation and/or adoption of solutions. More people would help with that, but also would contribute more to the problem in the meantime. I think whether the sign was positive or negative might depend on the specifics (eg, I think if environmentalists have fewer kids because of a fear of overpopulation, that will generally be bad for the environment).
Another point - more humans means more people to find solutions. So we have more people polluting the planet, but also more people working on clean energy solutions that will get us off fossil fuels.
Worth noting that if some political choices have very large negative outcomes, then choosing political paths that avoid those outcomes would have very positive counterfactual impact, even if no one sees it.
I agree with the general point that:
E[~optimal future] - E[good future] >> E[good future] - E[meh/no future]
It's not totally clear to me how much can be done to optimize chances of ~optimal future (as opposed to, there's probably a lot more that can be done to decrease X-risk), but I do have an intuition that probably some good work on the issue can be done. This does seem like an under-explored area, and I would personally like to see more research in it.
I'd also like to signal-boost this relevant paper by Bostrom and Shulman: https://nickbostrom.com/papers/digital-minds.pdf which proposes that a ~optimal compromise (along multiple axes) between human interests and totalist moral stances could be achieved by, for instance, filling 99.99% of the universe with hedonium and leaving the rest to (post-)humans.
I think it's really bad if people feel like they can't push back against claims they don't agree with (especially regarding cause/intervention prioritization), and I don't think the author of a post saying (effectively) "please don't push back against this claim if you disagree with it" should be able to insulate claims from scrutiny. Note that the author didn't say "if we think claim X is true, what should we do, but please let's stay focused and not argue about claim X here" but instead "I think claim X is true - given that, what should we do?"
"the root cause of most of the ills of society is inequality, primarily economic inequality - income inequality"
While I think income inequality (or, perhaps even more so, consumption inequality) is a large problem, I don't think it's the root cause of most of the ills of society. I'd imagine that tribalism, selfishness, mental-health problems, and so on are larger causes. In the US, for instance, my sense is that racism is a root of more problems than is income inequality.
More specifically answering the question you asked, I'd imagine political solutions would be the most effective here, as the government plays such a large role in influencing the economic distribution, and the amount of money in politics is incredibly small compared to the effect of political outcomes. I could imagine effective organizations in this area could include think tanks searching for political solutions, firms lobbying for implementing these solutions, or organizations that work to elect politicians/parties that are more likely to appropriately address these concerns.
[I'd also note that, from a global perspective, inequality between countries may typically be larger than inequality within countries, so it would perhaps be better to focus on health and development charities such as AMF, though one could make an argument that (for instance) social problems in the US spill over into problems for the rest of the world, so focusing on inequality in the US may be more important than a naive calculation would indicate.]
FWIW, here's a Vox article arguing that gridlock from presidential systems isn't just bad in terms of "normal" policy outcomes, but can also lead to crises of legitimacy if polarization is too high (in which case the executive and legislative branches may both claim to speak for the people while disagreeing, and democratic principles won't necessarily say how to resolve the disagreement), which runs the risk of collapsing the entire political system:
Thanks, I think this is interesting, and these sorts of considerations may become increasingly important as EA grows. One other strategy that I think is worth pursuing is preventative measures. IMHO, ideally EA would be the kind of community that selectively repels people likely to be malicious (eg I think it's good if we repel people who are generally fueled by anger, people who are particularly loud and annoying, people who are racist, etc). I think we already do a pretty good job of "smacking down" people who are very brash or insulting to other members, and I think the epistemic norms in the community probably also select somewhat against people who are particularly angry or who have a tendency to engage in ad hominem. It might also be worth considering what other traits we want to select for/against, and what sort of norms we could adopt towards those ends.