All of Alex Semendinger's Comments + Replies

Unless I'm misunderstanding, isn't this "just" an issue of computing Shapley values incorrectly? If kindling is important to the fire, it should be included in the calculation; if your modeling neglects to consider it, then the problem is with the modeling and not with the Shapley algorithm per se.

Of course, I say "just" in quotes because actually computing real Shapley values that take everything into account is completely intractable. (I think this is your main point here, in which case I mostly agree. Shapley values will almost always be pretty made-up ... (read more)

I agree with just about everything in this comment :)

(Also re: Shapley values -- I don't actually have strong takes on these and you shouldn't take this as a strong endorsement of them. I haven't engaged with them beyond reading the post I linked. But they're a way to get some handle on cases where many people contribute to an outcome, which addresses one of the points in your post.)
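For concreteness, here's a toy sketch of what the exact computation involves (my own made-up example, nothing from the linked post). The Shapley formula averages each contributor's marginal contribution over every subset of the other contributors, which is exactly why "real" Shapley values that take everything into account get intractable so quickly:

```python
# A toy sketch (my own, not from the linked post), assuming a made-up
# "campfire" coalition game. The exact Shapley formula averages each
# player's marginal contribution over every subset of the other players,
# so the work grows roughly as 2^(n-1) per player.
from itertools import combinations
from math import factorial

def shapley_values(players, value):
    """Exact Shapley values; `value` maps a frozenset of players to a payoff."""
    n = len(players)
    phi = {}
    for p in players:
        others = [q for q in players if q != p]
        total = 0.0
        for k in range(len(others) + 1):
            for coalition in combinations(others, k):
                s = frozenset(coalition)
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                total += weight * (value(s | {p}) - value(s))
        phi[p] = total
    return phi

# The fire is only worth anything if both the spark and the kindling are there.
fire_value = lambda s: 10.0 if {"spark", "kindling"} <= s else 0.0

result = shapley_values(["spark", "kindling", "bystander"], fire_value)
print({p: round(v, 3) for p, v in result.items()})
# -> {'spark': 5.0, 'kindling': 5.0, 'bystander': 0.0}
```

With n contributors that's on the order of 2^(n-1) coalitions per contributor, so anything beyond a toy model forces you to decide which contributors and interactions to include at all, and that modeling step is where most of the disagreement lives.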

3
Sam_Coggins
1mo
I'm skeptical that Shapley values can practically help us much in addressing the 'conceptual problem' raised by the post. See the critique of estimated Shapley values in another comment on this post. Thanks for the considered and considerate discussion.

Thanks for writing this! "EA is too focused on individual impact" is a common critique, but most versions of it fall flat for me. This is a very clear, thorough case for it, probably the best version of the argument I've read.

I agree most strongly with the dangers of internalizing the "heavy-tailed impact" perspective in the wrong way, e.g. thinking "the top people have the most impact -> I'm not sure I'm one of the top people -> I won't have any meaningful impact -> I might as well give up." (To be clear, steps 2, 3, and 4 are all errors: if ther... (read more)

8
Sarah Weiler
1mo
Thanks for your comment, very happy to hear that my post struck you as clear and thorough (I'm never sure how well I do on clarity in my philosophical writing, since I usually retain a bit of confusion and uncertainty even in my own mind). I agree that many dangers of internalizing the "heavy-tailed impact" perspective in the wrong way are due to misguided inference, not a strictly necessary implication of the perspective itself. Not least thanks to input from several comments below, I am back to reconsidering my stance on the claims made in the essay around empirical reality and around appropriate conceptual frameworks. I have tangentially encountered Shapley values before but not yet really tried to understand the concept, so if you think they could be useful for the contents of this post, I'll try to find the time to read the article you linked; thanks for the input!

I share the wariness you mention re arguments of the form "even if X is true, believing / saying it has bad consequences, so we shouldn't believe / say X." At the same time, I don't think these arguments are always completely groundless (at least the arguments around refraining from saying something; I'm much more inclined to agree that we should never believe something just for the sake of supposedly better consequences from believing it). I also tend to be more sympathetic to these arguments when X is very hard to know ("we don't really have means to tell whether X is true, and since believing in X might well have bad side-effects, we should not claim that X and we should maybe even make an effort to debunk the certainty with which others claim that X"). But yes, I agree that wariness (though maybe not unconditional rejection) around arguments of this form is generally warranted, to avoid misguided dogmatism in the flawed attempt to prevent (supposed) information hazards.

The FTX collapse took place in November 2022. Among other things, this resulted in a lot of negative media attention on EA.

It's also worth noting that this immediately followed a large (very positive, on the whole) media campaign around Will MacAskill's book What We Owe the Future in summer 2022, which I imagine caused much of the growth earlier that year.

Many of the songs associated with Secular Solstice[1] have strong EA themes, or were explicitly written with EA in mind.

A few of the more directly EA songs that I like:

... (read more)

Setting Beeminder goals for the number of hours worked on different projects has substantially increased my productivity over the past few months.

I'm very deadline-motivated: if a deadline is coming up, I can easily put in 10 hours of work in a day. But without any hard deadlines, it can take active willpower to work for more than 3 or 4 hours. Beeminder gives me deadlines almost every day, so it takes much less willpower now to have productive days.

(I'm working on a blog post about this currently, which I expect to have out in about two weeks. If I rememb... (read more)

Interesting post! But I’m not convinced. 

I’ll stick to addressing the decision theory section; I haven’t thought as much about the population ethics but probably have broadly similar objections there.

(1) What makes STOCHASTIC better than the strategy “take exactly N tickets and then stop”?

  • Both avoid near-certain death (good!)
  • Both involve, at some point, turning down what looks like a strictly better option
    • To me, STOCHASTIC seems to do this at the very first round, and all subsequent rounds. (If I played STOCHASTIC and drew a non-black ball first, I th
... (read more)
3
Violet Hour
1y
Interesting comment! But I’m also not convinced. :P … or, more precisely, I’m not convinced by all of your remarks. I actually think you’re right in many places, though I’ll start by focusing on points of disagreement.

(1) On Expected Payoffs.

  • You ask whether I’m saying: “when given a choice, you can just … choose the option with a worse payoff?”
  • I’m saying ‘it’s sometimes better to choose an option with lower expected payoff’. Still, you might ask: “why would I choose an option with lower expected payoff?”
    • First, I think the decision procedure “choose the option with the highest expected payoff” requires external justification. I take it that people appeal to (e.g.) long-run arguments for maximizing expected utility because they acknowledge that the decision procedure “choose the action with highest expected payoff” requires external justification.
    • Arguments for EU maximization are meant to show you how to do better by your own values. If I can come up with an alternative decision procedure which does better by my values, this is an argument for not choosing the action with the highest expected payoff. And I take myself to be appealing to the same standards (doing better by your lights) which are appealed to in defense of EU maximization.
  • I interpret you as also asking a separate question, of the form: “you’re recommending a certain course of action — why (or on what basis) do you recommend that course of action?”
    • Trying to justify my more foundational reasons will probably take us a bit too far afield, but in short: when I decide upon some action, I ask myself the question “do I recommend that all rational agents with my values in this decision context follow the decision procedure I’m using to determine their action?”
    • I think this criterion is independently justified, and indeed more foundational than purported justifications for EU maximization. Obviously, I don't expect you (or anyone else) to be convinced by this sh
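For concreteness, here is a stylized version of the trade-off both comments are circling (my own made-up numbers; the actual gamble in the original post may differ): a strategy can maximize expected payoff at every round and still make ruin nearly certain if followed indefinitely.

```python
# A stylized sketch (made-up numbers, not the exact game from the post):
# each round you may draw a ticket that triples your payoff but kills you
# with probability 1/6. Drawing maximizes expected payoff at every single
# round, yet a policy of always drawing drives survival probability to zero.

def expected_payoff(n_draws: int, start: float = 1.0,
                    mult: float = 3.0, p_death: float = 1 / 6) -> float:
    """Expected payoff after n draws (payoff is 0 if you ever die)."""
    return start * ((1 - p_death) * mult) ** n_draws

def survival_prob(n_draws: int, p_death: float = 1 / 6) -> float:
    """Probability of still being alive after n draws."""
    return (1 - p_death) ** n_draws

for n in (1, 10, 50):
    print(f"{n:>2} draws: expected payoff {expected_payoff(n):10.3g}, "
          f"survival probability {survival_prob(n):.3g}")
# Expected payoff grows without bound while survival probability tends to 0;
# that gap is what "take exactly N tickets" and STOCHASTIC both try to manage
# by accepting a lower expected payoff.
```

The sketch doesn't settle whether that trade is rational; it just makes the "lower expected payoff, but bounded risk of ruin" structure of the disagreement explicit.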

Another podcast episode on a similar topic came out yesterday, from Rabbithole Investigations (hosted by former Current Affairs podcast hosts Pete Davis, Sparky Abraham, and Dan Thorn). They had Joshua Kissel on to talk about the premises of EA and his paper "Effective Altruism and Anti-Capitalism: An Attempt at Reconciliation."

This is the first interview (and second episode) in a new series dedicated to the question "Is EA Right?". The premise of the show is that the hosts are interested laypeople who interview many guests with different perspectives, in... (read more)

I read this piece a few months ago and then forgot what it was called (and where it had been posted). Very glad to have found it again after a few previous unsuccessful search attempts. 

I think all the time about that weary, determined, unlucky early human trying to survive, and the flickering cities in the background. When I spend too long with tricky philosophy questions, impossibility theorems, and trains to crazytown, it's helpful to have an image like this to come back to. I'm glad that guy made it. Hopefully we will too!

An important principle of EA is trying to maximize how much good you do when you're trying to do good. So EAs probably won't advise you to base most of your charitable giving on emotional connection (which is unlikely to be highly correlated with cost-effectiveness) -- instead, according to EA, you should base this on some kind of cost-effectiveness calculation.

However, many EAs do give some amount to causes they personally identify with, even if they set aside most of their donations for more cost-effective causes. (People often talk about "warm fuzzies" in this context, i.e. donations that give you a warm fuzzy feeling.) In that sense, some amount of emotion-based giving is completely compatible with EA.
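To make the "cost-effectiveness calculation" concrete, here's a deliberately simplified sketch with hypothetical charities and made-up numbers (real calculations involve far more uncertainty than this):

```python
# Hypothetical numbers for two made-up charities; the point is only that a
# fixed budget buys more outcomes wherever the cost per outcome is lower.
budget = 1_000  # dollars to donate
cost_per_outcome = {"Charity A": 50, "Charity B": 500}  # dollars per outcome

for name, cost in cost_per_outcome.items():
    print(f"{name}: {budget / cost:.0f} outcomes for ${budget}")
# Charity A delivers 20 outcomes, Charity B delivers 2: a 10x difference,
# which is why EAs lean on comparisons like this rather than emotional salience.
```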

There have been a few posts discussing the value of small donations over the past year, notably:

  1. Benjamin Todd on "Despite billions of extra funding, small donors can still have a significant impact"
  2. a counterpoint, AppliedDivinityStudies on "A Red-Team Against the Impact of Small Donations"
  3. a counter-counterpoint, Michael Townsend on "The value of small donations from a longtermist perspective"

There's a lot of discussion here (especially if you go through the comments of each piece), and so plenty of room to come to different conclusions.

Here's roughly... (read more)