Thomas Kwa

Student at Caltech. I help run Caltech EA.

Posts

Comments

When you shouldn't use EA jargon and how to avoid it

Sometimes I catch myself using jargon even when I know it's a bad communication strategy, because I like feeling clever, or signaling that I'm an insider, or obscuring my ideas so people can't challenge them. OP calls these "naughty reasons to use jargon" (slide 9), but I think in some cases they fulfill a real social need, and as long as those motivations persist, we need better ways to satisfy them.

Some ideas:

  • Instead of associating jargon with cleverness, mentally reframe things: someone who uses jargon isn't necessarily clever, especially if they're misusing it. Feynman reportedly said, "If you can't explain something in simple terms, you don't understand it," so pat yourself on the back for translating an idea into straightforward language when appropriate.
  • Instead of using jargon to feel connected to the in-group, build a group identity that doesn't rely on jargon. I'm not really sure how to do this.
  • Instead of using jargon to keep people from understanding your ideas well enough to challenge them, keep your identity small so you don't feel personally attacked when you are challenged. When you have low confidence in a belief, qualify it with an "I think" or "I have a lot of confusing intuitions here, but..."
    • Perhaps also do exposure therapy: practice losing debates without feeling like you've been slapped down.
    • This is actually one of the reasons I like the "epistemic status" header; it helps me qualify my statements much more efficiently. From now on I'll be dropping the "epistemic status" terminology but keeping the header.

I'm sure there are more and better ideas in this direction.

Making More Sequences

Can someone create an “introduction to EA” sequence? I would love to do it, but I think that this should be done by an actual mod or someone from an official EA institution.

The EA handbook is being turned into a sequence.

Which is better for animal welfare, terraforming planets or space habitats? And by how much?

I may write up an answer because the question is interesting, but I think the premise of this question-- that we have a meaningful choice between terraforming planets and building space habitats-- is unlikely to hold.

1. Assuming space colonization and terraforming arrive before AI or other transformative technologies like whole brain emulation, it seems very unlikely that a terraformed planet would be "unmanaged wilderness". First, Earth's land is already over 35% of the solid surface of the inner planets (counting Mercury, Mars, and Earth's land; Venus is arguably not terraformable-- see the back-of-envelope numbers after point 2), so there won't be a large amount of free space. Second, without natural water and nutrient cycles, not to mention hundreds of thousands of years of evolution toward a stable equilibrium, wilderness will necessarily be managed to maintain ecosystem balances.

2. In the long run, planets are extremely inefficient as space colonies. On optimistic assumptions about self-replicating industry, Mercury could be disassembled into solar panels and habitats within years to decades (toy numbers below), creating thousands of times as much economic value as anything that could exist on the planet's surface. Asteroids don't even need to be lifted out of a gravity well to be turned into habitats. So economic incentives will be strongly against keeping planets intact, making the question moot. (Unless we turn them into planet-sized computers or something, which would again be out of scope of this question.)
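As a sanity check on both of those claims, here is a toy back-of-envelope sketch. The surface areas are standard planetary figures; the seed-factory mass and industrial doubling time are hypothetical parameters chosen purely for illustration, so the second number is a toy estimate, not a sourced one.

```python
import math

# 1. Earth's land as a share of the solid surface of the inner planets.
#    "Inner planets" here counts Mercury, Mars, and Earth's land, excluding
#    Venus as arguably non-terraformable; including Venus's ~4.6e8 km^2
#    surface drops the share to roughly 18%.
earth_land = 1.49e8  # km^2
mercury = 7.5e7      # km^2, total surface area
mars = 1.44e8        # km^2, total surface area
share = earth_land / (earth_land + mercury + mars)
print(f"Earth's land as share of inner-planet surface: {share:.0%}")  # ~40%

# 2. Time for exponentially self-replicating industry to process Mercury.
#    Doublings needed = log2(planet mass / seed mass), so the timescale is
#    logarithmic in the mass ratio and linear in the doubling time.
mercury_mass = 3.3e23  # kg
seed_mass = 1e5        # kg -- hypothetical 100-tonne seed factory
doubling_days = 30     # hypothetical doubling time; the answer scales with it
doublings = math.log2(mercury_mass / seed_mass)
years = doublings * doubling_days / 365
print(f"~{doublings:.0f} doublings, ~{years:.0f} years")  # ~62 doublings, ~5 years
```

The takeaway from the second calculation is that the timescale is dominated by the assumed doubling time rather than the planet's size: the mass ratio only enters logarithmically, so even wildly different seed masses shift the answer by a few doublings.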

Objections to Value-Alignment between Effective Altruists
A non-exhaustive subset of admired individuals I believe includes: E. Yudkowsky, P. Christiano, S. Alexander, N. Bostrom, W. MacAskill, Ben Todd, H. Karnofsky, N. Beckstead, R. Hanson, O. Cotton-Barratt, E. Drexler, A. Critch, … As far as I perceive it, all revered individuals are male.

Although various metrics do show that the EA community has room to grow in diversity, I don't think the fandom culture has nearly that much gender imbalance. Some EA women who consistently produce very high-quality content include Arden Koehler, Anna Salamon, Kelsey Piper, and Elizabeth Van Nostrand. I have also heard others revere Julia Wise, Michelle Hutchinson, and Julia Galef, though I don't follow their writing. I think that among EAs I have only a slightly below-median tendency to revere men over women, and the women EA thinkers above feel about as "intimidating" or "important" to me as the men on your list.

Thomas Kwa's Shortform

Hmm, that's what I suspected. Maybe it's possible to estimate anyway, though. A quick-and-dirty method: identify the most effective interventions a large charity runs, assume the rest of its portfolio follows a power law below them, take the average, and add error bars upwards for the possibility that we underestimated some intervention's effectiveness. Something like the sketch below:
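Here's a minimal sketch of that method; every number in it is a hypothetical placeholder, not any real charity's data.

```python
import numpy as np

# Hypothetical cost-effectiveness of a large charity's three best-measured
# interventions, in DALYs averted per $1000 (placeholder numbers).
top = np.array([50.0, 30.0, 22.0])
ranks = np.arange(1, len(top) + 1)

# Fit a rank-order power law, effectiveness(k) ~ c * k^(-beta):
# a straight line in log-log space.
slope, log_c = np.polyfit(np.log(ranks), np.log(top), 1)
beta = -slope

# Extrapolate down the rank order to the unmeasured rest of the portfolio,
# then average over all interventions.
n_total = 30  # assumed total number of interventions the charity runs
all_ranks = np.arange(1, n_total + 1)
est = np.exp(log_c) * all_ranks ** (-beta)
print(f"beta ~ {beta:.2f}, portfolio mean ~ {est.mean():.1f} DALYs per $1000")

# Crude upward error bar: suppose one of the unmeasured interventions is
# actually as good as the best measured one.
upper = (est.sum() - est[-1] + top[0]) / n_total
print(f"upper bound ~ {upper:.1f} DALYs per $1000")
```

The estimate is very sensitive to the fitted exponent and to n_total, which is rather the point: without GiveWell-style data on the tail of the portfolio, the error bars should be wide.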

Thomas Kwa's Shortform

Are there GiveWell-style estimates of the cost-effectiveness of the world's most popular charities (say UNICEF), preferably by independent sources and/or based on past results? I want to be able to talk to quantitatively-minded people and have more data than just saying some interventions are 1000x more effective.

What types of charity will be the most effective for creating a more equal society?

First off, welcome to the EA community! If you haven't already, you might want to read the Introduction to Effective Altruism. I don't have time to write up a full answer, so here are a few of my thoughts.

Usually in the effective altruism community, we are cause-neutral; that is, we try to address whichever charitable cause area maximizes impact. While it's intuitively compelling that the most cost-effective effort is to eliminate the root cause of a problem, this could be a suboptimal choice for a few reasons.

  • Most things have multiple causes, and it's not obvious which one deserves the most resources without in-depth analysis; one could just as easily say that the root cause of poverty-related problems is a lack of caring about the poor, or an inability to coordinate to fix large problems, or the high cost of basic necessities like medicine and clean water.
  • Even if systemic change would fix wealth inequality, actually finding and implementing such change could be difficult or expensive enough that it's more impactful to address the needs of the extreme poor first.
  • It could be tractable to research, say, government structures that incentivize redistribution of wealth if you have a political science PhD, but there might be no good way for the average person to spend money on the cause area.

I haven't looked in depth at the arguments for systemic change being cost-effective, partly because global health isn't my specialty. If you have a strong argument for it that isn't already addressed in a literature review, I encourage you to post it here as an article or shortform post.

What types of charity will be the most effective for creating a more equal society?

In the interest of being helpful and welcoming to this new user, could any downvoters give feedback or explain their votes?

Edit: Someone is trying to join, or at least interface with, the EA community by asking a question that we can answer. The question is well-formed, represents an hour or more of thought, and addresses a popular idea among the altruistically-minded. The only concrete thing I don't like about this post is that the OP is slightly rude in saying "Please, if you disagree with me, carry your precious opinion elsewhere."

I think people are downvoting this because the OP is not impartial and has a preferred way to improve the world. Automatically downvoting posts by such people is, in general, wrong: if we have good epistemic hygiene, the benefits of engaging with the question (being more welcoming and intellectually diverse, and helping future people understand EA by addressing popular misconceptions and mistakes) will far outweigh the risks of dilution. Dilution only becomes a big problem when people start to misunderstand or misappropriate EA ideas, and we address such misunderstandings precisely through high-fidelity communication. Engaging here is one of the highest-fidelity forms of text-based communication possible.
