All of Aman Patel's Comments + Replies

The hygiene hypothesis (especially the autoimmune disease variant; there's a brief 2-paragraph summary here if you Ctrl+F "Before we go") could be another example. 

On a somewhat related note, Section V of this SlateStarCodex post goes through some similar examples where humans departing from long-lived tradition has negative effects that don't become visible for a long time. 

I'm curious about the original source of the funding you're giving out here. According to this, Nonlinear received $250k from the Future Fund and $600k from the Survival and Flourishing Fund. Is the funding being distributed here coming solely from the SFF grant? Does Nonlinear have other funding sources besides the Future Fund and SFF? 

(I didn't do any deeper dive than looking at Nonlinear's website, where I couldn't find anything about funding sources.)

Hi Aman, 

Appreciate the question. We’ve received funding from a few different sources, including the Survival and Flourishing Fund, the Future Fund, and private donors, with Emerson Spartz donating six figures annually.

This project would not fall under the scope of what the Future Fund granted us, so we will not be using their funding for this. 

This is coming directly out of our operating budget, so we're aiming to make payouts that have a higher counterfactual likelihood of impact.

Thanks for writing this--even though I've been familiar with AI x-risk for a while, it didn't really hit me on an emotional level that dying from misaligned AI would happen to me too, and not just "humanity" in the abstract. This post changed that. 

Might eventually be useful to have one of these that accounts for biorisk too, although biorisk "timelines" aren't as straightforward to estimate as the date humanity builds the first AGI.

Thanks for posting your attempt! Yeah, it does seem like you ran into some of those issues in your attempt, and it's useful information to know that this task is very hard. I guess one lesson here is that we probably won't be able to build perfect institutions on the first try, even in safety-critical cases like AGI governance.

Just stumbled upon this post--I like the general vein in which you're thinking. Not sure if you're aware of it already, but this post by Paul Christiano addresses the "inevitable dangerous technology" argument as it relates to AI alignment. 

 - "First-principles design is intractable and misses important situation-specific details" - This could easily be true, I don't have a strong opinion on it, just intutions.

I think this objection is pretty compelling. The specific tools that an institution can use to ensure that a technology is deployed safely...

Thanks, great points (and counterpoints)!

If you are a community builder (especially one with a lot of social status), be loudly transparent with what you are building your corner of the movement into and what tradeoffs you are/aren’t willing to make.

I like this suggestion--what do you imagine this transparency looks like? Do you think, e.g., EA groups should have pages outlining their community-building philosophies on their websites? Should university groups write public Forum posts about their plans and reasoning before every semester/quarter or a...

+1 to transparency!

I would love to see more community builders share their theories of change, even if they are just half-page Google docs with a few bullets and links to other articles (and where their opinions differ), and periodically update them (say, every 6 months or so) with major changes and examples of where they were wrong (this is by far the most important to me).

Yeah, I've had several (non-exchange) students ask me what altruism means--my go-to answer is "selflessly helping others," which I hope makes it clear that it describes a practice rather than a dogma. 

Thanks for the comment! I agree with your points--there are definitely elements of EA, whether they're core to EA or just cultural norms within the community, that bear stronger resemblances to cult characteristics. 

My main point in this post was to explore why someone who hasn't interacted with EA before (and might not be aware of most of the things you mentioned) might still get a cult impression. I didn't mean to claim that the Google search results for "altruism" are the most common reason why people come away with a cult impression. Rather, I thi...

Hey Jordan! Great to see another USC person here. The best writing advice I've gotten (that I have yet to implement) is to identify a theory of change for each potential piece--something to keep in mind!

6 sounds interesting, if you can make a strong case for it. Aligning humans isn't an easy task (as most parents, employers, governments, and activists know very well), so I'm curious to hear if you have tractable proposals.

7 sounds important given that a decent number of EAs are vegan, and I'm quite surprised I haven't heard of this before. 15 IQ points is ...

Jordan Arel, 2y:
Dang yeah I did a quick search on creatine and the IQ number right before writing this post, but now it’s looking like that source was not credible. I’d have to research more to see if I can find an accurate, reliable measure of creatine’s cognitive benefits; it seems it at least has a significant impact on memory. Anecdotally, I noticed quite a difference when I took a number of supplements while vegan, and I know there’s some research on various nutrients that vegans lack which relate to cognitive function. Will do a short post on it sometime!

I think human alignment is incredibly difficult, but too important to ignore. I have thought about it a very long time, so I do have some very ambitious ideas that could feasibly start small and scale up.

Yes! I have been very surprised since joining how narrowly longtermism is focused. I think if the community is right about AGI arriving within a few decades with a fast takeoff, then broad longtermism may be less appealing, but if there is any doubt about this, then we are massively underinvested in broad longtermism and putting all our eggs in one basket, so to speak. Will definitely write more about this!

Right, it definitely wouldn’t be exactly analogous to GiveWell, but I think it is nonetheless important to have SOME way of comparing all the longtermist projects to know what a good investment looks like.

Thanks again for all the feedback Aman! Really appreciate it (and everything else you do for the USC group!!) and really excited to write more on some of these topics :)

Thanks Linch! This list is really helpful. One clarifying question on this point: 

Relatedly, what does the learning/exploration value of this project look like?

  1. To the researcher/entrepreneur?
  2. To the institution? (if they're working in an EA-institutional context)
  3. To the EA or longtermist ecosystem as a whole?

For 1) and 2), I assume you're referring to the skills gained by the person/institution completing the project, which they could then apply to future projects. 

For 3), are you referring to the possibility of "ruling out intervention X as a feas...

Linch, 2y:
Thanks for the question! Hmm, I don't think there's a hard cutoff of person/institution vs. ecosystem. For 3), skills learned from completing a project (or trying to complete a project) might also be generalizable elsewhere (so there's value other than ruling out specific interventions).  For example, learning how to do a biosecurity ballot initiative in California can be useful for doing future biosecurity ballot initiatives in California, or AI safety ones. Some of the skills and knowledge acquired here can be passed on to other individuals or orgs.

This thinking has come up in a few separate intro fellowship cohorts I’ve facilitated. Usually, somebody tries to flesh it out by asking whether it’s “more effective” to save one doctor (who could then be expected to save five more lives) or two mechanics (who wouldn’t save any other lives) in trolley-problem scenarios. This discussion often gets muddled, and many people have the impression that “EAs” would think it’s better to save the doctor, even though I doubt that’s a consensus opinion among EAs. I’ve found this to be a surprisingly large snag point t...

Changed "guilt" to "responsibility," but I'm not sure if that's much better.

Thanks for the feedback! I think this is probably a failure of the story more than a failure of your understanding--after all, a story that's hard to understand isn't fulfilling its purpose very well. Jackson Wagner's comment below is a good summary of the main points I was intending to get across.

Next time I write, I'll try to be more clear about the points I'm trying to convey. 

"As tagged, this story strikes me as a fable intended to explain one of the mechanisms behind so-called "S-risks", hellish scenarios that might be a fate worse than the "death" represented by X-risks."

That's what I was going for, although I'm aware that I didn't make this as clear as I should have.

"Of course it's a little confusing to have the twist with the sentient birds -- I think rather than a literal "farmed animal welfare" thing, this is intended to showcase a situation where two different civilizations have very different values."

Same thing here. Th...

Thanks! I'm glad you enjoyed it. The main reason I wrote this was to practice creative writing--and the Forum contest seemed to be a good place to do that. This is the first time I tried writing short stories--the only other creative writing piece I've published anywhere is this one, which I also wrote for the Forum contest: https://forum.effectivealtruism.org/posts/sGTHctACf73gunnk7/creative-writing-contest-the-legend-of-the-goldseeker

I hope that helps!

I recently learned about Training for Good, a Charity Entrepreneurship-incubated project, which seems to address some of these problems. They might be worth checking out.

I think this is a great exercise to think about, especially in light of somewhat-recent discussion on how competitive jobs at EA orgs are. There seems to be plenty of room for more people working on EA projects, and I agree that it’s probably good to fill that opportunity. Some loose thoughts:

There seem to be two basic ways of getting skilled people working on EA cause areas:
1.  Selec...

Thanks for this post! Reading through these lessons has been really informative. I have a few more questions that I'd love to hear your thinking on:

1) Why did you choose to run the fellowship as a part-time rather than full-time program?

2) Are there any particular reasons why fellowship participants tended to pursue non-venture projects?

3) Throughout your efforts, were you optimizing for project success or project volume, or were you instead focused on gathering data on the incubator space?

4) Do you consider the longtermist incubation space to be distinct from the x-risk reduction incubation space?

5) Was there a reason you didn't have a public online presence, or was it just not a priority?

Clifford, 3y:
Thanks, great questions! In response:

1) Why did you choose to run the fellowship as a part-time rather than full-time program?

We wanted to test some version of this quickly. Part-time meant:
* It was easier to get a cohort of people to commit at short notice, as they could participate alongside other commitments
* We could deliver a reasonable-quality, stripped-back programme in a short space of time and had more capacity to test other ideas at the same time

With that said, if we were to run it again, we almost certainly would have explored running a full-time program for the next iteration.

2) Are there any particular reasons why fellowship participants tended to pursue non-venture projects?

Do you mean non-profits rather than for-profits? If so, I think this is because nonprofits present the most obvious neglected opportunities for doing good. Participants did consider some for-profit ideas.

3) Throughout your efforts, were you optimizing for project success or project volume, or were you instead focused on gathering data on the incubator space?

The latter - we were trying to learn rather than optimise for early success.

4) Do you consider the longtermist incubation space to be distinct from the x-risk reduction incubation space?

Yes, mostly insofar as the longtermist space is broader than the x-risk space - there are ideas that might help the long-term future or reduce s-risk without reducing x-risk.

5) Was there a reason you didn't have a public online presence?

I think having an online presence that is careful about how this work is described (e.g. not overhyping entrepreneurship or encouraging any particular version of it) is important, and therefore quite a bit of work. We felt we could be productive without one for the time we were working on the project, so we decided to deprioritise it. If we had continued to work on the project, we would have spent time on this.

Thanks for the post, this is an important and under-researched topic. 

Examples include some well-known conditions (chronic migraine, fibromyalgia, non-specific low-back pain), as well as many lesser-known ones (trigeminal neuralgia, cluster headache, complex regional pain syndrome)

Some of these well-known chronic pain conditions can be hard to diagnose, too. Chronic pain conditions like fibromyalgia, ME/CFS, rheumatoid arthritis, and irritable bowel syndrome are frequently comorbid with each other, and may also be related to depression and mental hea...

This is an interesting idea. I'm trying to think of it in terms of analogues: you could feasibly replace "digital minds" with "animals" and achieve a somewhat similar conclusion. It doesn't seem that hard to create vast amounts of animal suffering (the animal agriculture industry has this figured out quite well), so some agent could feasibly threaten all vegans with large-scale animal suffering. And as you say, occasionally following through might help make that threat more credible. 

Perhaps the reason we don't see this happening is that nobody really...

saulius, 3y:
I think it is useful to think about something like this happening in the current world, as you did here, because we have better intuitions about the current world. Someone could say that they will torture animals unless vegans give them money, I guess. I think this doesn't happen for multiple reasons. One of them is that it would be irrational for vegans to agree to give money, because then other people would continue exploiting them with this simple trick.

I think the same applies to far-future scenarios. If an agent allows itself to be manipulated this easily, it won't become powerful. It's more rational to just make it publicly known that you refuse to engage with such threats. This is one of the reasons why most Western countries have a publicly declared policy of not negotiating with terrorists. So yeah, thinking about it this way, I am no longer concerned about this threats thing.

EdoArad, 3y:
Interesting! Other analogies might be human rights and carbon emissions, as used in politics. Say Party A cares about reducing emissions; then the opposing Party B has an incentive to appear as though they don't care about it at all, and even to propose actions that would increase emissions, so that they can trade "not doing that" for some concession from Party A. I'm sure we could find lots of real-world examples of that.

Similarly, some (totalitarian?) regimes might have an incentive to frame major parts of the population as politically unworthy and let them live in very poor conditions, so that other countries who care about that population would be open to a trade in which helping those people would be considered a benefit for those other countries.

Thanks for the tip! I'll try contacting him through the website you linked--it would be great to hear more from people who have attempted this sort of project before.

How do you think the EA community can improve its interactions and cooperation with the broader global community, especially those who might not be completely comfortable with the underlying philosophy? Do you think it's more of a priority to spread those underlying arguments, or to simply grow the network of people sympathetic to EA causes, even if they disagree with the principles of EA?

Owen Cotton-Barratt, 4y:
Good question. Of the two options I'd be tempted to say it's more of a priority to spread the underlying arguments, but actually I think something more nuanced: it's a priority to keep engaging with people about the underlying arguments, finding where there seems to be the greatest discomfort, and turning a critical eye on the arguments there, looking to see if we can develop stronger versions of them. I think that talking about the tentative conclusions along with this is important, both for growing the network of people sympathetic to those conclusions, and for providing a concrete instantiation of what is meant by the underlying philosophy (there's too much risk of talking past each other or getting lost in abstraction-land without this).

Hi everyone! I'm Aman, an undergrad at USC currently majoring in computational neuroscience (though that might change). I'm very new to EA, so I haven't yet had the chance to be involved with any EA groups, but I would love to start participating more with the community. I found EA after spending a few months digging into artificial general intelligence, and it's been great to read everyone's thoughts about how to turn vague moral intuitions into concrete action plans.

I have a soft spot for the standard big-picture philosophy/phys...