Thomas Kwa

Student at Caltech. I help run Caltech EA.

Comments

Thomas Kwa's Shortform

I want to skill up in pandas/numpy/data science over the next few months. Where can I find a data science project that is relevant to EA? Some rough requirements:

  • Takes between 1 and 3 months of full-time work
  • Helps me (a pretty strong CS undergrad) become fluent in pandas quickly, and maybe use some machine learning techniques I've studied in class
  • About as open-ended as a research internship
  • Feels meaningful
    • Should be important enough that I enjoy doing it, but it's okay if it has e.g. 5% as much direct benefit as the highest-impact thing I could be doing
    • I'm interested in AI safety and other long-term cause areas
  • Bonus: working with time-series data, because I'm particularly confused about how it works (see the sketch below).

I've already looked at the top datasets on Kaggle and other places, and I don't feel inclined to work on them because they don't seem relevant and have probably been analyzed to death. Also, I've only taken a few ML classes and no data science classes, so I might not be asking the right questions.
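To make the time-series bullet concrete, here's a minimal sketch of the kind of pandas operations I have in mind (the series and all its numbers are invented for illustration):

```python
import numpy as np
import pandas as pd

# Hypothetical daily series, e.g. a charity's daily donation totals
idx = pd.date_range("2020-01-01", periods=90, freq="D")
donations = pd.Series(np.random.default_rng(0).poisson(100, size=90), index=idx)

# The kinds of operations I'd want to get fluent with:
weekly = donations.resample("W").sum()           # aggregate daily data to weekly totals
smoothed = donations.rolling(window=7).mean()    # 7-day moving average
growth = donations.pct_change(periods=7)         # week-over-week fractional change
```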

Spirituality & Science Policy and Infrastructure

I downvoted this because it contains large claims which are vague and probably false, and also because I don't see any relevance to the EA movement. To single one out, "The skeptical movement seems to be involved to some extent with regards to its branding and possibly research interference" sounds like the way pseudoscientists claim that controlled experiments interfere with their supernatural powers. I'll reverse this vote if there's evidence I'm wrong.

There are efforts to promote geographic diversity in EA, as well as to translate EA ideas, integrate them into other cultures, and do cross-cultural moral research. Furthering any one of these would reduce the effect of any Eurocentric bias the EA community has inherited, and I think they're all better places to look than alternative medicine.

[Expired] 20,000 Free $50 Charity Gift Cards

Some EA-aligned charities listed (use the search function in the bottom right corner):

  • Center for Long-Term Risk (listed as Effective Altruism Foundation)
  • Founders Pledge
  • 80,000 Hours
  • Center for Effective Altruism
  • Future of Humanity Institute
  • Machine Intelligence Research Institute
  • AMF
  • GiveDirectly
  • Animal Ethics

I'm probably missing a ton of global health and animal charities, because I don't know them.

andrewleeke's Shortform

You might find it helpful to look at this ethnography of an EA group. Also relevant is this analysis of the Big Five personality traits of respondents to the Rethink Charity community survey. It has statistical flaws, but one takeaway is that most EAs are high in openness. Finally, there's this Global Optimum Podcast episode on the personality of EAs.

Justification and signaling explanations don't seem especially compelling to me because, in some sense, everything is justification and signaling. Also, I'm not sure if you're hinting at this, but it's unlikely that you'd be diagnosed with a mental illness just for being drawn to or believing in EA, unless it significantly impedes your everyday functioning. Since I'm not a therapist, I don't think I can comment further on what a therapist would say.

Correlations Between Cause Prioritization and the Big Five Personality Traits

The link to the survey data (https://github.com/rethinkpriorities/ea-data/tree/master/data) is now broken.

Please Take the 2020 EA Survey

To add to that: if there are concerns about survey data being de-anonymized, there are statistical techniques to mitigate that risk.
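For example (my illustration, not anything the survey team has said they use), differential privacy adds calibrated noise to published aggregates. A counting query has sensitivity 1, so Laplace noise with scale 1/ε gives ε-differential privacy:

```python
import numpy as np

def dp_count(true_count: int, epsilon: float) -> float:
    """Release a count via the Laplace mechanism.

    A counting query has sensitivity 1, so Laplace noise with
    scale 1/epsilon yields epsilon-differential privacy.
    """
    return true_count + np.random.laplace(loc=0.0, scale=1.0 / epsilon)

# e.g., publish how many respondents selected some sensitive answer
print(dp_count(true_count=42, epsilon=0.5))
```

Smaller ε means more noise and stronger privacy; the released counts remain useful for aggregates over many respondents.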

What are some quick, easy, repeatable ways to do good?

This is a bit of a frame challenge, but I think it's OK to feed stray cats. Most people are built to empathize with those around them, not with the total sum of global utility, so it's hard to beat the emotional high of a simple random act of kindness. (Conversely, for most people, the vast majority of good they can do comes from their career choice, and it's hard to approach that with small-scale actions.) So my advice is to pick someone close to you, do something nice for them, and not worry about the magnitude of the altruistic payoff. You could also reflect on the positive long-term impact of some action (mentally follow the chain all the way from "finish project" -> "gain career capital" -> "get hired by <EA org>" -> "be able to work on <cause area>" -> reduce suffering) and use that to motivate yourself, but that only works for some people.

This is a classic idea in EA circles going back to 2009, and it absolutely still applies.

Desperation Hamster Wheels

As an empirical matter, one's naive/early/quick analyses of how good (or cost-effective, or whatever) something is seem to often be overly optimistic.

One possible reason is completely rational: if we're estimating the expected value of an intervention that has a 1% chance of being highly valuable, then 99% of the time we'll learn that the moonshot won't work and revise the expected value downward.
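A toy example with made-up numbers: say the moonshot pays off 1000 units with probability 1% and nothing otherwise.

```python
p = 0.01                # chance the moonshot works (hypothetical)
payoff = 1000           # value if it works (hypothetical units)

ev_before = p * payoff  # 10.0: the naive estimate is unbiased in expectation

# After investigating, 99% of the time we learn it fails and revise to 0;
# 1% of the time we revise up to 1000. The updates average out to zero,
# yet almost every individual update is a downward revision.
print(ev_before)  # 10.0
```

So frequent downward revisions aren't by themselves evidence that the original estimates were biased.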

When you shouldn't use EA jargon and how to avoid it

Sometimes I catch myself using jargon even knowing it's a bad communication strategy, because I just like feeling clever, or signaling that I'm an insider, or obscuring my ideas so people can't challenge them. OP says these are "naughty reasons to use jargon" (slide 9), but I think that in some cases they fulfill a real social need, and if these motivations are still there, we need better ways to satisfy them.

Some ideas:

  • Instead of associating jargon with cleverness, mentally reframe things. Someone who uses jargon isn't necessarily clever, especially if they're misusing it. Feynman is often quoted as saying "If you can’t explain something in simple terms, you don’t understand it", so pat yourself on the back for translating something into straightforward language when appropriate.
  • Instead of using jargon to feel connected to the in-group, build a group identity that doesn't rely on jargon. I'm not really sure how to do this.
  • Instead of using jargon to prevent people from understanding your ideas well enough to challenge them, keep your identity small so you don't feel personally attacked when challenged. When you have low confidence in a belief, qualify it with an "I think" or "I have a lot of confusing intuitions here, but..."
    • Perhaps also do some exposure therapy: practice losing debates without feeling like you've been slapped down.
    • This is actually one of the reasons I like the "epistemic status" header; it helps me qualify my statements much more efficiently. From now on I'll drop the "epistemic status" terminology but keep the header.

I'm sure there are more and better ideas in this direction.
