UK Medical Student.
Very badly, probably, but I was assuming that most EAs will be familiar with the term.
Doesn't capture all neartermists, but for me, person-affecting EA
The costs of using a different word seem very low, and the potential benefits of slightly better PR seem high, so I think changes like this are worth considering.
I also think retreats / summits are probably towards the extreme end of “things in EA which seem culty”, so are particularly worth thinking about.
More money in EA just means that it makes sense for us to have a lower bar for cost-effectiveness in our donations and spending.
It doesn’t actually change any moral obligations surrounding donations.
However, the lower cost-effectiveness bar makes it more likely than before that the most cost-effective donations are to incubators like CE and to new orgs like CE charities, since these orgs are now more likely to clear the bar.
The lower cost-effectiveness bar also means that expected-value-maximising, hits-based giving (based more on theory and less on evidence) makes more sense than before, because such giving is now more likely to clear the bar.
I also identify as an EA and disagree to some extent with EA answers on cause prioritisation, but my disagreement is mostly about the extent to which they’re priorities compared to other things, and my disagreement isn’t too strong.
But it seems very unlikely for someone to continue to identify as an EA if they strongly disagree with all of these answers, which is why I think, in practice, these answers are part of the EA identity now (although I think we should try to change this, if possible).
Do you know an individual who identifies as an EA and strongly disagrees with all of these areas being priorities?
But in practice, I don’t think we come up with answers anymore.
Some people came up with a set of answers; enough of us agree with that set, and it has stayed the same for long enough, that the answers are now an important part of EA identities, even if they're less important than the question of how to do the most good.
So I think the relevant empirical claims are baked into identifying as an EA.
This is sort of getting into the thick EA vs thin EA idea that Ben Todd discussed once, but practically I think almost everyone who identifies as an EA mostly agrees with these areas being amongst the top priorities. If you disagreed too strongly, you would probably not feel like part of the EA movement.
I think in practice, EA is now an answer to the question of how to do the most good, and the answer is “randomista development, animal welfare, extreme pandemic mitigation and AI alignment”. This has a bunch of empirical claims baked into it.
It seems pretty easy to optimise for consequentialist impact and still be more virtuous and principled than most people.
Maybe EA can lead to bad moral licensing effects in some people.
It still seems quite effective to me, even if you're only affecting the construction of future buildings.
Cool, good to hear!