I wrote something about campaign contributions in US federal elections earlier this year. I could be wrong, but based on my (non-expert) survey of the campaign finance literature, donating to political campaigns doesn't seem to have a very substantial impact on election outcomes (most of the time). The main takeaway is that spending and success are correlated, but the former doesn't cause the latter: spending is simply a useful heuristic for a campaign's size, traction, and so on.
This is very similar to the comment I was going to make.
I admit that it has crossed my mind that even a moderate EA lifestyle is unusually demanding, especially in the long term, and therefore could make finding a long-term partner more difficult. However, I do resonate with that last bit – encouraging inter-EA dating also seems culty and insular to me, and I’d like to think that most of us could integrate EA (as a project and set of values) into our lives in a way that allows us to have other interests, values, friends, and so on (i.e., our live...
There are two different angles on this question: one is whether the level of response within EA has been appropriate; the other is whether the level of response outside of EA (i.e., by society at large) has been appropriate.
I really don't know about the first one. People outside of EA radically underestimate the scale of ongoing moral catastrophes, but once you take those into account, it's not clear to me how to compare -- as one example -- the suffering produced by factory farming to the suffering produced by a bad response to coronavirus in devel...
Glad the alienation objection is getting some airtime in EA. I wanted to add two very brief notes in defense of consequentialism:
1) The alienation objection seems generalizable beyond consequentialism to any moral theory which (as you put it) inhibits you from participating in a normative ideal. I am not too familiar with other moral traditions, but I can see how following certain deontological or contractualist theories too far could also result in a kind of alienation. (Virtue ethics may be the safest here!)
2) The normative ideals that deal...
This was basically going to be my response -- but to expand on it in a slightly different direction: although maybe we shouldn't be more concerned about biorisk in general, young EAs who are interested in biorisk should update in favor of pursuing a career in (or getting involved with) the field. My two reasons for this are:
1) There will likely be more opportunities in biorisk (in particular around pandemic preparedness) in the near-future.
2) EAs will still be unusually invested in lower-probability, higher-risk problems than non-EAs (like GCBRs).
(1)...
Some low-effort thoughts (I am not an economist so I might be embarrassing myself!):
I dug up a few other places 80,000 Hours mentions law careers, but I couldn't find any article where they discuss US commercial law for earning-to-give. The other mentions I found include:
In their profile on US AI Policy, one of their recommended graduate programs is a "prestigious law JD from Yale or Harvard, or possibly another top 6 law school."
In this article for people with existing experience in a particular field, they write “If you have experience as a lawyer in the U.S. that’s great because it’s among the best w...
TL;DR, I think EAs should probably use the following heuristics if they are interested in some career for which law school is a plausible path:
In general, definitely carefully r
...You mentioned in the answer to another question that you made the transition from being heavily involved with social justice in undergrad to being more involved with EA in law school. This makes me kind of curious -- what's your EA "origin story"? (How did you find out about effective altruism, how did you first become involved, etc.)
My EA origin story is pretty boring! I was a research assistant for a Philosophy professor who included a unit on EA in her Environmental Ethics course. That was my first exposure to the ideas of EA (although obviously I had encountered Peter Singer previously). As a result, I added Doing Good Better to my reading list, and I read it in December 2016 (halfway through my first year of law school). I was pretty immediately convinced of its core ideas.
I then joined the Harvard Law School EA group, which was a really cool group at the time. In fact, it's some
...
I love this post! It’s beautifully written, and one of the best things I’ve read on the forum in a while. So take my subsequent criticism of it with that in mind! I apologize in advance if I’m totally missing the point.
I feel like EAs (and most ambitious people generally) are pretty confused about how to reconcile status/impact with self-worth (I’m including myself in this group). If confronted, many of us would say that status/impact should really be orthogonal to how we feel about ourselves, but we can’t quite bring t...
This is fair. I was trying to salvage his argument without running into the problems mentioned in the above comment, but if he means "aim" objectively, then it's tautologically true that people aim to be morally average, and if he means "aim" subjectively, then it contradicts the claim that most people subjectively aim to be slightly above average (which is what he seems to say in the B+ section).
The options are: (1) his central claim is uninteresting, (2) his central claim is wrong, or (3) I'm misunderstanding his central claim. I would normally feel like I should play it safe and default to (3), but it's probably (2).
This was a good comment and very clarifying. I agree with most of what you say about the evidence – Schwitzgebel seems to be misinterpreting the evidence (and I think I was also initially).
Just to be extra charitable to Schwitzgebel, however, I think we can assume his central claim is basically intelligible (even if it’s not supported by the evidence), and he’s just using some words in an inconsistent way. Some of the confusion in your comment may be caused by this inconsistency.
In most of his piece, by “aiming to be mediocre...
Not to be pedantic, but
This is not to deny that she could, or that she might use this as an excuse to avoid doing what she thinks is necessary and instead do what is convenient. Rather, it is to say that we should have compassion for those who find they agree with EA but cannot immediately make the changes they would like to due to life conditions, and we should not judge them as lesser EAs even if they are less able to contribute to EA's mission than if they were a different person in a different world that doesn't exist.
This is great, and I'd like to ad...
Scott Alexander has a very interesting response to this post on Reddit: see here.