Student at Caltech. I help run Caltech EA.
Sometimes I catch myself using jargon even when I know it's a bad communication strategy, because I just like feeling clever, or signaling that I'm an insider, or obscuring my ideas so people can't challenge them. OP calls these "naughty reasons to use jargon" (slide 9), but I think that in some cases they fulfill a real social need, and if these motivations are still there, we need better ways to satisfy them.
I'm sure there are more and better ideas in this direction.
So far, I’ve produced one of what I hope will be several sections of the Handbook. The topic is “Motivation”: What are the major ideas and principles of effective altruism, and how do they inspire people to take action? (You could also think of this as a general introduction to EA.)
If this material is received well enough, I’ll keep releasing additional material on a variety of topics, following a similar format. If people aren’t satisfied with the content, style, or format, I may switch things up in the future.
Can someone create an “introduction to EA” sequence? I would love to do it, but I think that this should be done by an actual mod or someone from an official EA institution.
The EA handbook is being turned into a sequence.
I may write up an answer because the question is interesting, but I think the premise of this question (that we have a meaningful choice between planets and habitats) is unlikely to hold.
1. Assuming space colonization and terraforming arrive before AI or other transformative technologies like whole brain emulation, it seems very unlikely that a terraformed planet would be "unmanaged wilderness". First, the Earth is already over 35% of the land area of the inner planets, so it's not as if there will be a large amount of free space. Second, without the benefit of natural water and nutrient sources, not to mention hundreds of thousands of years of evolution to reach a stable equilibrium, wilderness will necessarily have to be managed to maintain ecosystem balances.
2. In the long run, planets are extremely inefficient as space colonies. Disassembling Mercury into solar panels and habitats could take just a few years and create thousands of times as much economic value as anything that could exist on the planet. Asteroids don't even need to be lifted out of a gravity well to be turned into habitats. So economic incentives will be strongly against planets, making the question moot. (Unless we turn them into planet-sized computers or something, which would again be out of scope of this question.)
An idea very similar to this was mentioned on the EA forum in 2015.
A non-exhaustive subset of the individuals I believe are admired includes: E. Yudkowsky, P. Christiano, S. Alexander, N. Bostrom, W. MacAskill, Ben Todd, H. Karnofsky, N. Beckstead, R. Hanson, O. Cotton-Barratt, E. Drexler, A. Critch, … As far as I can tell, all of the revered individuals are male.
Although various metrics do show that the EA community has room to grow in diversity, I don't think the fandom culture has nearly that much gender imbalance. Some EA women who consistently produce very high-quality content include Arden Koehler, Anna Salamon, Kelsey Piper, and Elizabeth Van Nostrand. I have also heard others revere Julia Wise, Michelle Hutchinson, and Julia Galef, whose writing I don't follow. I think that among EAs, my tendency to revere men over women is only slightly below the median, and these women EA thinkers feel about as "intimidating" or "important" to me as the men on your list.
Hmm, that's what I suspected. Maybe it's possible to estimate anyway, though. A quick-and-dirty method would be to identify a large charity's most effective interventions, assume the rest follow a power law, take the average, and add error bars upward to account for the possibility that we underestimated an intervention's effectiveness?
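As a minimal sketch of that quick-and-dirty method: all the numbers below (50 interventions, a top intervention worth 100 impact units per dollar, a power-law exponent of 1.5, a 2x underestimation factor) are hypothetical placeholders, not estimates from any real charity.

```python
import numpy as np

# Hypothetical inputs: a large charity runs 50 interventions, and we've
# estimated its single best intervention at 100 impact units per dollar.
n_interventions = 50
best = 100.0
alpha = 1.5  # assumed power-law exponent; this is pure guesswork

# Model the k-th most effective intervention as best * k^(-alpha),
# i.e. a Zipf-like ranking that decays as a power law.
ranks = np.arange(1, n_interventions + 1)
effectiveness = best * ranks ** (-alpha)

# Average effectiveness across the charity's whole portfolio.
mean_est = effectiveness.mean()

# "Error bars upwards": redo the estimate assuming we undershot the
# top intervention's effectiveness by a factor of 2.
upper_est = (2 * effectiveness).mean()

print(f"mean estimate: {mean_est:.1f}, upper bound: {upper_est:.1f}")
```

The point of the sketch is that under a power law, the portfolio average sits far below the headline intervention, which is why a whole-charity estimate can look much worse than its best program.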
Are there GiveWell-style estimates of the cost-effectiveness of the world's most popular charities (say, UNICEF), preferably by independent sources and/or based on past results? I want to be able to talk to quantitatively-minded people and have more data than the bare claim that some interventions are 1000x more effective than others.
First off, welcome to the EA community! If you haven't already, you might want to read the Introduction to Effective Altruism. I don't have time to write up a full answer, so here are a few of my thoughts.
Usually in the effective altruism community, we are cause-neutral; that is, we try to address whichever charitable cause area maximizes impact. While it's intuitively compelling that the most cost-effective effort is to eliminate the root cause of a problem, this could be a suboptimal choice for a few reasons.
I haven't looked in depth at the arguments for systemic change being cost-effective, partly because global health isn't my specialty. If you have a strong argument for it that isn't already addressed in a literature review, I encourage posting it here as an article or shortform post.
In the interest of being helpful and welcoming to this new user, could any downvoters give feedback or explain their votes?
Edit: Someone is trying to join, or at least interface with, the EA community by asking a question that we can answer. The question is well-formed, represents an hour or more of thought, and addresses a popular idea among the altruistically-minded. The only concrete thing I don't like about this post is that the OP is slightly rude in saying "Please, if you disagree with me, carry your precious opinion elsewhere."
I think that people are downvoting this because the OP is not impartial and has a preferred way to improve the world. I think that automatically downvoting posts by such people is generally wrong: if we have good epistemic hygiene, the benefits of engaging with the question (being more welcoming and intellectually diverse, and helping future people understand EA by addressing popular misconceptions and mistakes) will far outweigh the risk of dilution. Dilution only becomes a big problem when people start to misunderstand or misappropriate EA ideas, and we address such misunderstandings precisely through high-fidelity communication. Engaging here is one of the highest-fidelity forms of text-based communication possible.