
Aman Patel

168 karma · Joined Aug 2020 · Los Angeles, CA 90007, USA

Comments (23)

The hygiene hypothesis (especially the autoimmune disease variant; there's a brief two-paragraph summary here if you Ctrl+F "Before we go") could be another example.

On a somewhat related note, Section V of this SlateStarCodex post goes through some similar examples where humans' departures from long-lived traditions have negative effects that don't become visible for a long time.

I'm curious about the original source of the funding you're giving out here. According to this, Nonlinear received $250k from the Future Fund and $600k from the Survival and Flourishing Fund. Is the funding being distributed here coming solely from the SFF grant? Does Nonlinear have other funding sources besides the Future Fund and SFF?

(I didn't do any deeper dive than looking at Nonlinear's website, where I couldn't find anything about funding sources.)

Thanks for writing this--even though I've been familiar with AI x-risk for a while, it didn't really hit me on an emotional level that dying from misaligned AI would happen to me too, and not just to "humanity" in the abstract. This post changed that.

It might eventually be useful to have one of these that accounts for biorisk too, although biorisk "timelines" aren't as straightforward to estimate as the date that humanity builds the first AGI.

Thanks, great points (and counterpoints)!

If you are a community builder (especially one with a lot of social status), be loudly transparent with what you are building your corner of the movement into and what tradeoffs you are/aren’t willing to make.

I like this suggestion--what do you imagine this transparency looks like? Do you think, e.g., EA groups should have pages outlining their community-building philosophies on their websites? Should university groups write public Forum posts about their plans and reasoning before every semester/quarter or academic year? Would you advocate for more community-building roundtables at EAGs? (These are just a few possible modalities of transparency that came to mind; I'm very interested in hearing more.)

Hey Jordan! Great to see another USC person here. The best writing advice I've gotten (that I have yet to implement) is to identify a theory of change for each potential piece--something to keep in mind!

6 sounds interesting, if you can make a strong case for it. Aligning humans isn't an easy task (as most parents, employers, governments, and activists know very well), so I'm curious to hear if you have tractable proposals.

7 sounds important given that a decent number of EAs are vegan, and I'm quite surprised I haven't heard of this before. 15 IQ points is a whole standard deviation, so I'd love to see the evidence for that.

8 might be interesting. I suspect most people are already aware of groupthink, but it could be good to be aware of other relevant phenomena that might not be as widely known (if there are any).

From what I can tell, 11 proposes a somewhat major reconsideration of how we should approach improving the long-term future. If you have a good argument, I'm always in favor of more people challenging the EA community's current approach. I'm interested in 21 for the same reason.

(In my experience, the answer to 19 is no, probably because there isn't a clear, easy-to-calculate metric to use for longtermist projects in the way that GiveWell uses cost-effectiveness estimates.)

Out of all of these, I think you could whip up a draft post for 7 pretty quickly, and I'd be interested to read it!

Thanks Linch! This list is really helpful. One clarifying question on this point: 

Relatedly, what does the learning/exploration value of this project look like?

  1. To the researcher/entrepreneur?
  2. To the institution? (if they're working in an EA-institutional context)
  3. To the EA or longtermist ecosystem as a whole?

For 1) and 2), I assume you're referring to the skills gained by the person/institution completing the project, which they could then apply to future projects. 

For 3), are you referring to the possibility of "ruling out intervention X as a feasible way to tackle x-risks"? That's what I'm assuming, but I'm just asking to make sure I understand properly.

Thanks again!
