Minor nitpick:
I would've found it more helpful to see Haydn's and Esben's judgments listed separately.
Thanks for writing this up! I was going to apply anyway, but a post like this might have gotten me to apply last year (which I didn't do, but which would've been smart). It also contained some useful information I didn't already know!
I'm not sure what my general take is on this, I think it's quite plausible that keeping it exclusive is net good, maybe more likely good than not. But I want to add one anecdote of my own which pushes the other way.
Over the last two years, while I was a student, I made two career choices in part (though not only) to gain EA credibility:
Both of these were fun, but I think it's unlikely that they were good for my career or impact in ways other th...
This is so useful! I love this kind of post and will buy many things from this one in particular.
Probably a very naive question, but why can't you just take a lot of DHA **and** a lot of EPA to get both supplements' benefits? Especially if your diet means you're likely deficient in both (which is true of veganism? vegetarianism?).
Assuming the Reddit folk wisdom about DHA inducing depression was wrong (which it might not be, I don't want to dismiss it), I don't understand from the rest of what you wrote why this doesn't work? Why is there a trade-off?
This seems really exciting!
I skimmed some sections so might have missed it in case you brought it up, but I think one thing that might be tricky about this project is the optics of where your own funding would be coming from. E.g. it might look bad if most (any?) of your funding was coming from OpenPhil and then Dustin Moskovitz and Cari Tuna were very highly ranked (which they probably should be!). In worlds where this project is successful and gathers some public attention, that kind of thing seems quite likely to come up.
So I think conditional on thinki...
Thanks for pointing this out, wasn't aware of that, sorry for the mistake. I have retracted my comment.
Hey, interesting to hear your reaction, thanks.
I can't respond to all of it now, but do want to point out one thing.
And, of course, if elected he will very visibly owe his win to a single ultra-wealthy individual who is almost guaranteed to have business before the next congress in financial and crypto regulation.
I think this isn't accurate.
Donations from individuals are capped at $5,800, so whatever money Carrick is getting is not one giant gift from Sam Bankman-Fried, but rather many small ones from individual Americans. Some of them may work for org...
SBF's Protect Our Future PAC has put more than $7M towards Flynn's campaign. I think this is what _pk and others are concerned about, not direct donations. And this is what most people concerned with "buying elections" are concerned about. (This is what the Citizens United controversy is about.)
If you're wondering who you might know in Oregon, you can search your Facebook friends by location:
Search for Oregon (or Salem) in the normal FB search bar, then go to People. You can also select to see "Friends of Friends".
I assume that will miss a few, so it's probably worth also actively thinking about your network, but this is probably a good low-effort first start.
Edit: Actually they need to live in district 6. The biggest city in that district is Salem as far as I can tell. Here's a map.
Thanks for writing this!
I believe there's a small typo here:
The expected deaths are N+P_nM in the human-combatant case and P_yM in the autonomous-combatant case, with a difference in fatalities of (P_y−P_n)(M−N). Given how much larger M (~1-7 Bn) is than N (tens of thousands at most), it only takes a small difference (P_y−P_n) for this to be a very poor exchange.
Shouldn't the difference be (P_y−P_n)M−N ?
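For what it's worth, plugging in some hypothetical numbers (entirely my own, just for illustration) confirms this:

```python
# Hypothetical illustrative numbers (my own, not from the post):
# N = combatant deaths, M = population at risk,
# P_n / P_y = escalation probabilities without / with autonomous weapons.
N, M = 5e4, 2e9
P_n, P_y = 0.001, 0.002

human = N + P_n * M        # expected deaths, human combatants
autonomous = P_y * M       # expected deaths, autonomous combatants
diff = autonomous - human

# Matches (P_y - P_n)*M - N ...
assert abs(diff - ((P_y - P_n) * M - N)) < 1e-6
# ... but not the post's (P_y - P_n)*(M - N):
assert abs(diff - (P_y - P_n) * (M - N)) > 1.0
```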
This is *so* cool, thanks! Might be nice to have a feature where people can add a second location. E.g. I used to study in Munich, but spend ~2 months per year in Luxembourg. Many friends stayed much longer in Luxembourg. According to the EA survey, there are Luxembourgish EAs other than me, but I have so far failed to find them --- I'd expect many of them to be in a similar situation.
I recommend you add that in your bio, since the text search will match on both the map location and any text written in your bio. :)
I thought this was a great article raising a bunch of points which I hadn't previously come across, thanks for writing it!
Regarding the risk from non-state actors with extensive resources, one key question is how competent we expect such groups to be. Gwern suggests that terrorists are currently not very effective at killing people or inducing terror --- with similar resources, it should be possible to induce far more damage than they actually do. This has somewhat lowered my concern about bioterrorist attacks, especially when considering that successfull...
Fair, that makes sense! I agree that if it's purely about solving a research problem with long timelines, then linear or decreasing returns seem very reasonable.
I would just note that speed-sensitive considerations, in the broad sense you use the term, will be relevant to many (most?) people's careers, including researchers' to some extent (reputation helps with research: more funding, better opportunities for collaboration, etc.). But I definitely agree there are exceptions, and well-established AI safety researchers with long timelines may be in that class.
I agree that superlinearity is way more pronounced in some cases than in others.
However, I still think there can be some superlinear terms for things that aren't inherently about speed. E.g. climbing seniority levels or getting a good reputation with ever larger groups of people.
I think ASB's recent post about Peak Defense vs Trough Defense in Biosecurity is a great example of how the longtermist framing can end up mattering a great deal in practical terms.
Great post, thanks for writing it! This framing appears a lot in my thinking and it's great to see it written up! I think it's probably healthy to be afraid of missing a big multiplier.
I'd like to slightly push back on this assumption:
If output scales linearly with work hours, then you can hit 60% of your maximum possible impact with 60% of your work hours
First, I agree with other commenters and yourself that it's important not to overwork / look after your own happiness and wellbeing etc.
Having said that, I do think working harder can often have super...
(I accidentally asked multiple versions of this question at once.
This was because I got the following error message when submitting:
"Cannot read properties of undefined (reading 'currentUser')"
So I wrongly assumed the submission didn't work.
@moderators)
Make the best case against: "Some non-trivial fraction of highly talented EAs should be part- or full-time community builders." The argument in favor would be pointing to the multiplier effect. Assume you could attract the equivalent of one person as good as yourself to EA within one year of full-time community building. If this person is young and we assume the length of a career to be 40 years, then you have just invested 1 year and gotten 40 years in return. By the most naive / straightforward estimate then, a chance of about 1/40 of you attracting one ...
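The naive arithmetic in the prompt can be made explicit (numbers taken straight from the prompt):

```python
# Naive expected-value sketch of the multiplier argument above.
career_years = 40      # assumed length of the recruited person's career
years_invested = 1     # one year of full-time community building
p_success = 1 / 40     # break-even chance of attracting one person as good as you

expected_years_gained = p_success * career_years
# At p_success = 1/40, expected years gained equal the year invested,
# so any higher success probability makes the trade look favourable
# (on this naive model, ignoring discounting, counterfactuals, etc.).
```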
EA Hotel / CEEALAR except at EA Hubs
Effective Altruism
CEEALAR is currently located in Blackpool, UK. It would be a lot more attractive if it were in e.g. Oxford, the Bay Area, or London. This would allow guests to network with local EAs (as well as other smart people, of which there are plenty in all of the above cities). Insofar as budget is less of a constraint now, and insofar as EA funders are already financing trips to such cities for select individuals (for conferences and otherwise), it seems an EA Hotel would be justified on the same grounds. (E.g. intercontinental flights can sometimes be more expensive than one month's rent in those cities.)
Studying stimulants' and anti-depressants' long-term effects on productivity and health in healthy people (e.g. Modafinil, Adderall, and Wellbutrin)
Economic Growth, Effective Altruism
Is it beneficial or harmful for long-term productivity to take Modafinil, Adderall, Wellbutrin, or other stimulants on a regular basis as a healthy person (some people speculate that it might make you less productive on days where you're not taking it)? If it's beneficial, what's the effect size? What frequency hits the best trade-off between building up tolerance vs short-ter...
Thanks for this! I think it's good for people to suggest new pitches in general. And this one would certainly allow me to give a much cleaner pitch to non-EA friends than rambling about a handful of premises and what they lead to and why (I should work on my pitching in general!). I think I'll try this.
I think I would personally have found this pitch slightly less convincing than current EA pitches though. But one problem is that I and almost everyone reading this were selected for liking the standard pitch (though to be fair whatever selection mechanism ...
Thanks for the feedback! Yep, it's pretty hard to judge this kind of thing given survivorship bias. I expect this kind of pitch would have worked best on me, though I got into EA long enough ago that I was most grabbed by global health pitches. Which maybe got past my weirdness filter in a way that this one didn't.
I'd love to see what happens if someone tries an intro fellowship based around reading the Most Important Century series!
From the perspective of a grant-maker, thinking about reduction in absolute basis points makes sense of course, but for comparing numbers between people, relative risk reduction might be more useful?
E.g. if one person thinks AI risk is 50% and another thinks it's 10%, it seems to me the most natural way for them to speak about funding opportunities is to say it reduces total AI risk by X% relatively speaking.
Talking about absolute risk reduction compresses these two numbers into one, which is more compact, but makes it harder to see where disagreements com...
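To make the compression concrete (hypothetical numbers of my own choosing):

```python
# Hypothetical example: two people disagree on total AI risk.
risk_a, risk_b = 0.50, 0.10          # their respective P(AI catastrophe)
relative_reduction = 0.10            # both say: "cuts total AI risk by 10%"

abs_a = risk_a * relative_reduction  # 5 percentage points absolute
abs_b = risk_b * relative_reduction  # 1 percentage point absolute
# Quoting only an absolute figure collapses these two inputs into one,
# hiding whether a disagreement is about total risk or about the
# intervention's relative effectiveness.
```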
What about individual Earning To Givers?
Is there some central place where all the people doing Earning To Give are listed, potentially with some minimal info about their potential max grant size and the type of stuff they are happy to fund?
If not, how do ETGers usually find non-standard funding opportunities? Just personal networks?
I assume those estimates are for current margins? So if I were considering earning to give, I should use lower estimates for how much risk reduction my money could buy: EA already has billions waiting to be spent, and with diminishing returns your estimates would look much worse after those billions had been spent?
Great question! Guarding Against Pandemics (GAP) does advocacy for pandemic prevention and, for legal reasons, needs many small donors for some of its work. Here's an excerpt from their post on the EA Forum:
...While GAP’s lobbying work (e.g. talking to members of Congress) is already well-funded by Sam Bankman-Fried and others, another important part of GAP’s work is supporting elected officials from both parties who will advocate for biosecurity and pandemic preparedness. U.S. campaign contribution limits require that this work be supported by many small-to-medi
Hey, thanks for writing this!
Strong +1 for this part:
I had conversations along the lines of “I already did a Bachelor’s in Biology and just started a Master’s in Nanotech, surely it’s too late for me to pivot to AI safety”. To which my response is “You’re 22, if you really want to go into AI safety, you can easily switch”.
I think this pattern is especially suspicious when used to justify some career that's impactful in one worldview over one that's impactful in another.
E.g. I totally empathize with people who aren't into longtermism, but the reasoning ...
Here's a couple that came to mind just now.
How smart do you need to be to contribute meaningfully to AI safety? Near the top of your class in high school? Near the top of your class at an Ivy League school? A potential famous professor at an Ivy League school? A potential Fields Medalist?
Also, how hard should we expect alignment to be? Are we trying to throw resources at a problem we expect to be able to at least partially solve in most worlds (which is e.g. the superficial impression I get from biorisk) or are we attempting a hail mary, because it might just work and it's important enough to be
I agree it's fine if fellowships aren't interesting to already-engaged EAs and I also see why the question is asked --- I don't even have a strong view on whether it's a bad idea to ask it.
I do think though that the fellowship would have been boring to me at times, even if I had known much less about EA. But maybe I'm just not the type of person who likes to learn stuff in groups and I was never part of the target audience.
Thanks for writing this, I think it's great you're thinking about alternatives!
The way I learned about EA was just by spending too much time on the forum and with the 80k podcast.
Then, I once attended one session of a fellowship and was a little underwhelmed. I remember the question "so can anybody name the definition of an existential risk according to Toby Ord" after we had been asked to read about exactly that — this just seemed like a waste of time. But to be fair, I was also much more familiar with EA at that point than an average fellow. It's very possible that other people had a better experience in the same session.
But I definitely agree there's room for experimentation and probably improvement!
Thanks for writing this up, super interesting!
Intuitively I would expect persistence effects to be weaker now than e.g. 300 years ago. This is mostly because today society changes much more rapidly than back then. I would guess that it's more common now to live hundreds of kilometres from where you grew up, that the internet allows people to "choose" their culture more freely (my parents like EA less than I do), that the same goes for bigger cities etc. Generally advice from my parents and grandparents sometimes feels outdated, which makes me less likely t...
Thanks for pointing this out! Hadn't known about this, though it totally makes sense in retrospect that markets would find some way of partially cancelling that inefficiency. I've added an edit to the post.
Thanks for pointing that out! I agree it's notable and have added it to the list. I don't have a strong opinion on how important this is relative to other things on there.
Thanks for your comment! Super interesting to hear all that.
And my pledge is 10%, although I expect more like 50-75% to go to useful world-improving things but don't want to pledge it because then I'm constrained by what other people think is effective.
Amazing! Glory to you :) I've added this to the post.
Can you say more about the 20% per year discount rate for community building?
In particular, is the figure meant to refer to time or money? I.e. does it mean that
(For money a 20% discount rate seems very high to me, barring very short timelin...