I'm curious what the original source of the funding you're giving out here is. According to this, Nonlinear received $250k from the Future Fund and $600k from the Survival and Flourishing Fund. Is the funding being distributed here coming solely from the SFF grant? Does Nonlinear have other funding sources besides the Future Fund and SFF?
(I didn't do any deeper dive than looking at Nonlinear's website, where I couldn't find anything about funding sources.)
Hi Aman,
Appreciate the question. We’ve received funding from several sources, including the Survival and Flourishing Fund, the Future Fund, and other private donors, with Emerson Spartz donating six figures annually.
This project would not fall under the scope of what the Future Fund granted us, so we will not be using their funding for this.
This is coming directly out of our operating budget, so we're aiming to make payouts that have a higher counterfactual likelihood of impact.
Thanks for writing this--even though I've been familiar with AI x-risk for a while, it didn't really hit me on an emotional level that dying from misaligned AI would happen to me too, and not just "humanity" in the abstract. This post changed that.
Might eventually be useful to have one of these that accounts for biorisk too, although biorisk "timelines" aren't as straightforward as trying to estimate the date that humanity builds the first AGI.
Thanks for posting your attempt! Yeah, it does seem like you ran into some of those issues in your attempt, and it's useful information to know that this task is very hard. I guess one lesson here is that we probably won't be able to build perfect institutions on the first try, even in safety-critical cases like AGI governance.
Just stumbled upon this post--I like the general vein in which you're thinking. Not sure if you're aware of it already, but this post by Paul Christiano addresses the "inevitable dangerous technology" argument as it relates to AI alignment.
- "First-principles design is intractable and misses important situation-specific details" - This could easily be true; I don't have a strong opinion on it, just intuitions.
I think this objection is pretty compelling. The specific tools that an institution can use to ensure that a technology is deployed safely...
Thanks, great points (and counterpoints)!
If you are a community builder (especially one with a lot of social status), be loudly transparent with what you are building your corner of the movement into and what tradeoffs you are/aren’t willing to make.
I like this suggestion--what do you imagine this transparency looks like? Do you think, e.g., EA groups should have pages outlining their community-building philosophies on their websites? Should university groups write public Forum posts about their plans and reasoning before every semester/quarter or a...
+1 to transparency!
I would love to see more community builders share their theories of change, even if they are just half-page Google docs with a few bullets and links to other articles (noting where their opinions differ), periodically updated (say, every 6 months or so) with major changes and examples of where they were wrong (this is by far the most important to me).
Yeah, I've had several (non-exchange) students ask me what altruism means--my go-to answer is "selflessly helping others," which I hope makes it clear that it describes a practice rather than a dogma.
Thanks for the comment! I agree with your points--there are definitely elements of EA, whether they're core to EA or just cultural norms within the community, that bear stronger resemblances to cult characteristics.
My main point in this post was to explore why someone who hasn't interacted with EA before (and might not be aware of most of the things you mentioned) might still get a cult impression. I didn't mean to claim that the Google search results for "altruism" are the most common reason why people come away with a cult impression. Rather, I thi...
Hey Jordan! Great to see another USC person here. The best writing advice I've gotten (that I have yet to implement) is to identify a theory of change for each potential piece--something to keep in mind!
6 sounds interesting, if you can make a strong case for it. Aligning humans isn't an easy task (as most parents, employers, governments, and activists know very well), so I'm curious to hear if you have tractable proposals.
7 sounds important, given that a decent number of EAs are vegan, and I'm quite surprised I haven't heard of this before. 15 IQ points is ...
Thanks Linch! This list is really helpful. One clarifying question on this point:
Relatedly, what does the learning/exploration value of this project look like?
- To the researcher/entrepreneur?
- To the institution? (if they're working in an EA-institutional context)
- To the EA or longtermist ecosystem as a whole?
For 1) and 2), I assume you're referring to the skills gained by the person/institution completing the project, which they could then apply to future projects.
For 3), are you referring to the possibility of "ruling out intervention X as a feas...
This thinking has come up in a few separate intro fellowship cohorts I’ve facilitated. Usually, somebody tries to flesh it out by asking whether it’s “more effective” to save one doctor (who could then be expected to save five more lives) or two mechanics (who wouldn’t save any other lives) in trolley-problem scenarios. This discussion often gets muddled, and many people have the impression that “EAs” would think it’s better to save the doctor, even though I doubt that’s a consensus opinion among EAs. I’ve found this to be a surprisingly large snag point t...
Thanks for the feedback! I think this is probably a failure of the story more than a failure of your understanding--after all, a story that's hard to understand isn't fulfilling its purpose very well. Jackson Wagner's comment below is a good summary of the main points I was intending to get across.
Next time I write, I'll try to be more clear about the points I'm trying to convey.
"As tagged, this story strikes me as a fable intended to explain one of the mechanisms behind so-called "S-risks", hellish scenarios that might be a fate worse than the "death" represented by X-risks."
That's what I was going for, although I'm aware that I didn't make this as clear as I should have.
"Of course it's a little confusing to have the twist with the sentient birds -- I think rather than a literal "farmed animal welfare" thing, this is intended to showcase a situation where two different civilizations have very different values."
Same thing here. Th...
Thanks! I'm glad you enjoyed it. The main reason I wrote this was to practice creative writing--and the Forum contest seemed like a good place to do that. This is the first time I've tried writing short stories--the only other creative writing piece I've published anywhere is this one, which I also wrote for the Forum contest: https://forum.effectivealtruism.org/posts/sGTHctACf73gunnk7/creative-writing-contest-the-legend-of-the-goldseeker
I hope that helps!
I recently learned about Training for Good, a Charity Entrepreneurship-incubated project, which seems to address some of these problems. They might be worth checking out.
I think this is a great exercise to think about, especially in light of somewhat-recent discussion on how competitive jobs at EA orgs are. There seems to be plenty of room for more people working on EA projects, and I agree that it’s probably good to fill that opportunity. Some loose thoughts:
There seem to be two basic ways of getting skilled people working on EA cause areas:
1. Selec...
Thanks for this post! Reading through these lessons has been really informative. I have a few more questions that I'd love to hear your thinking on:
1) Why did you choose to run the fellowship as a part-time rather than full-time program?
2) Are there any particular reasons why fellowship participants tended to pursue non-venture projects?
3) Throughout your efforts, were you optimizing for project success or project volume, or were you instead focused on gathering data on the incubator space?
4) Do you consider the longtermist incubation space to be distinct from the x-risk reduction incubation space?
5) Was there a reason you didn't have a public online presence, or was it just not a priority?
Thanks for the post, this is an important and under-researched topic.
Examples include some well-known conditions (chronic migraine, fibromyalgia, non-specific low-back pain), as well as many lesser-known ones (trigeminal neuralgia, cluster headache, complex regional pain syndrome)
Some of these well-known chronic pain conditions can be hard to diagnose, too. Chronic pain conditions like fibromyalgia, ME/CFS, rheumatoid arthritis, and irritable bowel syndrome are frequently comorbid with each other, and may also be related to depression and mental hea...
This is an interesting idea. I'm trying to think of it in terms of analogues: you could feasibly replace "digital minds" with "animals" and achieve a somewhat similar conclusion. It doesn't seem that hard to create vast amounts of animal suffering (the animal agriculture industry has this figured out quite well), so some agent could feasibly threaten all vegans with large-scale animal suffering. And as you say, occasionally following through might help make that threat more credible.
Perhaps the reason we don't see this happening is that nobody really...
Thanks for the tip! I'll try contacting him through the website you linked--it would be great to hear more from people who have attempted this sort of project before.
How do you think the EA community can improve its interactions and cooperation with the broader global community, especially with those who might not be completely comfortable with the underlying philosophy? Do you think it's more of a priority to spread those underlying arguments, or simply to grow the network of people sympathetic to EA causes, even if they disagree with the principles of EA?
Hi everyone! I'm Aman, an undergrad at USC currently majoring in computational neuroscience (though that might change). I'm very new to EA, so I haven't yet had the chance to be involved with any EA groups, but I would love to start participating more with the community. I found EA after spending a few months digging into artificial general intelligence, and it's been great to read everyone's thoughts about how to turn vague moral intuitions into concrete action plans.
I have a soft spot for the standard big-picture philosophy/phys...
The hygiene hypothesis (especially the autoimmune disease variant, brief 2-paragraph summary here if you Ctrl+F "Before we go") could be another example.
On a somewhat related note, Section V of this SlateStarCodex post goes through some similar examples where humans' departing from long-lived traditions has negative effects that don't become visible for a long time.