I agree with just about everything in this comment :)
(Also re: Shapley values -- I don't actually have strong takes on these and you shouldn't take this as a strong endorsement of them. I haven't engaged with them beyond reading the post I linked. But they're a way to get some handle on cases where many people contribute to an outcome, which addresses one of the points in your post.)
Thanks for writing this! "EA is too focused on individual impact" is a common critique, but most versions of it fall flat for me. This is a very clear, thorough case for it, probably the best version of the argument I've read.
I agree most strongly with the dangers of internalizing the "heavy-tailed impact" perspective in the wrong way, e.g. thinking "the top people have the most impact -> I'm not sure I'm one of the top people -> I won't have any meaningful impact -> I might as well give up." (To be clear, steps 2, 3, and 4 are all errors: if ther...
The FTX collapse took place in November 2022. Among other things, this resulted in a lot of negative media attention on EA.
It's also worth noting that this immediately followed a large (very positive, on the whole) media campaign around Will MacAskill's book What We Owe the Future in summer 2022, which I imagine caused much of the growth earlier that year.
Many of the songs associated with Secular Solstice[1] have strong EA themes, or were explicitly written with EA in mind.
A few of the more directly EA songs that I like:
Setting Beeminder goals for the number of hours worked on different projects has substantially increased my productivity over the past few months.
I'm very deadline-motivated: if a deadline is coming up, I can easily put in 10 hours of work in a day. But without any hard deadlines, it can take active willpower to work for more than 3 or 4 hours. Beeminder gives me deadlines almost every day, so it takes much less willpower now to have productive days.
(I'm working on a blog post about this currently, which I expect to have out in about two weeks. If I rememb...
Interesting post! But I’m not convinced.
I’ll stick to addressing the decision theory section; I haven’t thought as much about the population ethics but probably have broadly similar objections there.
(1) What makes STOCHASTIC better than the strategy “take exactly N tickets and then stop”?
Another podcast episode on a similar topic came out yesterday, from Rabbithole Investigations (hosted by former Current Affairs podcast hosts Pete Davis, Sparky Abraham, and Dan Thorn). They had Joshua Kissel on to talk about the premises of EA and his paper "Effective Altruism and Anti-Capitalism: An Attempt at Reconciliation."
This is the first interview (and second episode) in a new series dedicated to the question "Is EA Right?". The premise of the show is that the hosts are interested laypeople who interview many guests with different perspectives, in...
I read this piece a few months ago and then forgot what it was called (and where it had been posted). Very glad to have found it again after a few previous unsuccessful search attempts.
I think all the time about that weary, determined, unlucky early human trying to survive, and the flickering cities in the background. When I spend too long with tricky philosophy questions, impossibility theorems, and trains to crazytown, it's helpful to have an image like this to come back to. I'm glad that guy made it. Hopefully we will too!
An important principle of EA is that, when you're trying to do good, you should try to maximize how much good you do. So EAs probably won't advise you to base most of your charitable giving on emotional connection (which is unlikely to be highly correlated with cost-effectiveness) -- instead, according to EA, you should base it on some kind of cost-effectiveness calculation.
However, many EAs do give some amount to causes they personally identify with, even if they set aside most of their donations for more cost-effective causes. (People often talk about "warm fuzzies" in this context, i.e. donations that give you a warm fuzzy feeling.) In that sense, some amount of emotion-based giving is completely compatible with EA.
There have been a few posts discussing the value of small donations over the past year, notably:
There's a lot of discussion here (especially if you go through the comments of each piece), and so plenty of room to come to different conclusions.
Here's roughly...
Unless I'm misunderstanding, isn't this "just" an issue of computing Shapley values incorrectly? If kindling is important to the fire, it should be included in the calculation; if your modeling neglects to consider it, then the problem is with the modeling and not with the Shapley algorithm per se.
Of course, I say "just" in quotes because actually computing real Shapley values that take everything into account is completely intractable. (I think this is your main point here, in which case I mostly agree. Shapley values will almost always be pretty made-up ... (read more)
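To make the mis-modeling point concrete, here's a minimal sketch (my own illustration, not from the original post) of an exact Shapley computation on a toy version of the fire example. The `fire` value function and the "match"/"kindling" player names are assumptions for illustration: if the model includes the kindling-bringer, credit splits 50/50; if kindling is treated as free background, the match-bringer gets all the credit. The algorithm itself (averaging marginal contributions over all player orderings) is standard, but brute-forcing orderings is exactly why real-world Shapley values are intractable.

```python
from itertools import permutations

def shapley_values(players, value):
    """Exact Shapley values: average each player's marginal
    contribution over all orderings of the players."""
    totals = {p: 0.0 for p in players}
    orderings = list(permutations(players))
    for order in orderings:
        coalition = frozenset()
        for p in order:
            with_p = coalition | {p}
            totals[p] += value(with_p) - value(coalition)
            coalition = with_p
    return {p: totals[p] / len(orderings) for p in players}

# Hypothetical game: the fire (worth 1) starts only if someone
# brings a match AND someone brings kindling.
def fire(coalition):
    return 1.0 if {"match", "kindling"} <= coalition else 0.0

# Mis-specified model: kindling is treated as free background,
# so the match alone appears to cause the fire.
def fire_no_kindling(coalition):
    return 1.0 if "match" in coalition else 0.0

print(shapley_values(["match", "kindling"], fire))
# -> {'match': 0.5, 'kindling': 0.5}
print(shapley_values(["match"], fire_no_kindling))
# -> {'match': 1.0}
```

The two outputs show the point: the algorithm is the same in both runs, and only the choice of which contributors to include in the model changes who gets the credit.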