Pro-pluralist, pro-bednet, anti-Bay EA. 🔸 10% Pledger.
I think the suggestion that 'meat-eating problem' is a better framing than 'meat-eater problem' came in my comment and the associated discussion here, but possibly somewhere else.[1]
(I still stand by the comment, and I don't think it's contradictory with my current vote placement on the debate week question)
On the platonic/philosophical side I'm not sure. I think many EAs weren't really bought into it to begin with, and the shift to longtermism was in various ways the effect of deference and/or cohort effects. In my case, I feel that the epistemic/cluelessness challenge to longtermism/far-future effects is pretty dispositive, but I'm just one person.
On the vibes side, I think the evidence is pretty damning:
That's just my personal read on things, though. But yeah, it seems very much like that SBF / community drama / OpenAI board triple whammy from Nov 2022 to Nov 2023 sounded the death knell for longtermism, at least as the public-facing justification of EA.
For the avoidance of doubt, not gaining knowledge from the Carl Shulman episodes is at least as much my fault as it is Rob and Carl's![1] I think that, similar to his appearance on the Dwarkesh Podcast, it was interesting and full of information, but I'm not sure my mind has found a good way to integrate it into my existing perspective yet. It feels unresolved to me, and something I personally want to explore more, so a version of this post written later in time might include those episodes high up. But writing this post from where I am now, I at least wanted to own my perspective/bias leaning against the AI episodes rather than leave it implicit in the episode selection. But yeah, it was very much my list, and it therefore inherits all of my assumptions and flaws.
I do think working in AI/ML means that the relative gain of knowledge may still be lower in this case compared to learning about the abolition of slavery (Brown #145) or the details of fighting malaria (Tibenderana #129), so I think that's a bit more arguable, but probably an unimportant distinction.
(I'm pretty sure I didn't listen to part 2, and can't remember how much of part 1 I listened to versus reading some of the transcript on the 80k website, so these episodes may be a victim of the 'not listened to fully yet' criterion)
I just want to publicly state that the whole 'meat-eater problem' framing makes me incredibly uncomfortable
For clarification, I think Factory Farming is a moral catastrophe and I think ending it should be a leading EA cause. I just think that the latent misanthropy in the meat-eater problem framing/worldview is also morally catastrophic.
In general, reflecting on this framing makes it ever more clear to me that I'm just not a utilitarian or a totalist.
Hey Ben, I'll remove the tweet images since you've deleted them. I'll probably rework the body of the post to reflect that and happy to make any edits/retractions that you think aren't fair.
I apologise if you got unfair pushback as a result of my post, and regardless of your present/future affiliation with EA, I hope you're doing well.
I appreciate the pushback anormative, but I kinda stand by what I said and don't think your criticisms land for me. I fundamentally reject your assessment of what I wrote/believe as 'targeting those who wish to leave', or as saying people 'aren't allowed to criticise us' in any way.
and here - which is how I found out about the original tweets in the first place
Like, Helen Toner might have publicly disassociated/distanced herself from the EA community or from EA, but her actions around the OpenAI board standoff have had massively negative consequences for EA imo
I expect I'll probably agree with a lot of his criticisms, but disagree that they apply to 'the EA Community' as a whole as opposed to specific individuals/worldviews who identify with EA
<edit: Ben deleted the tweets, so it doesn't feel right to keep them up after that. The rest of the text is unchanged for now, but I might edit this later. If you want to read a longer, thoughtful take from Ben about EA post-FTX, then you can find one here>
This makes me feel bad, and I'm going to try and articulate why. (This is mainly about my gut reaction to seeing/reading these tweets, but I'll ping @Benjamin_Todd because I think subtweeting/vagueposting is bad practice and I don't want to be hypocritical.) I look forward to Ben elucidating his thoughts if he does so and will reflect and respond in greater detail then.
Thanks Aaron, I think your responses to me and Jason do clear things up. I still think the framing of it is a bit off though:
Secondary interpretation is: "EA principles imply one should make a quantitative point estimate of the good of all your relevant moral actions, and then act on the leading option in a 'shut-up-and-calculate' way. I now believe many fewer actors in the EA space actually do this than I did last year"
For example, in Ariel's piece, Emily from OpenPhil implies that they have much lower moral weights on animal life than Rethink does, not that they don't endorse doing 'the most good' (I think this is separable from OP's commitment to worldview diversification).
In your original post you talk about explicit reasoning; in your later edit, you switch to implicit reasoning. It feels like this criticism can't be both. I also think the implicit-reasoning critique just collapses into object-level disagreements, and the explicit critique just doesn't have much evidence.
The phenomenon you're looking at, for instance, is:
"I am trying to get at the phenomenon where people implicitly say/reason "yes, EA principles imply that the best thing to do would be to donate to X, but I am going to donate to Y instead."
And I think this might just be an ~empty set, compared to people having different object-level beliefs about what EA principles are or what they imply they should do, and also disagreeing with you on what the best thing to do would be.[1] I really don't think there are many people saying "the best thing to do is donate to X, but I will donate to Y". (References please if so - clarification in footnotes[2]) Even on OpenPhil, I think Dustin just genuinely believes worldview diversification is the best thing, so there's no contradiction there where he implies the best thing would be X but in practice does Y.
I think letting this 'update you downwards' on the genuine interest of others in the movement - as opposed to, say, reading it as them being human and fallible despite trying to do the best they can - feels... well, Jason used 'harsh'; I might use a harsher word to describe this behavior.
For context, I think Aaron thinks that GiveWell deserves ~0 EA funding afaict
I think there might be a difference between the best thing (or the best thing according to simple calculations) and the right thing. I think people think in terms of the latter and not the former, and unless you buy into strong or even naïve consequentialism, we shouldn't always expect the two to go together.
Something which has come up a few times, and recently a lot in the context of Debate Week (and the reaction to Leif's post), is things getting downvoted quickly and being removed from the Front Page, which drastically drops the likelihood of engagement.[1]
So a potential suggestion for the Frontpage might be:
Maybe some code like this already exists, but this thought popped into my head and I thought it was worth sharing on this post.
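For illustration only, here's a minimal sketch of one way this could work: give new posts a short grace period during which early downvotes can't immediately knock them off the Frontpage. This is purely hypothetical and not the Forum's actual ranking code; the mechanism, the thresholds, and names like `frontpageScore` and `GRACE_PERIOD_HOURS` are all my assumptions.

```typescript
// Hypothetical sketch: compute an effective Frontpage score that ignores
// net downvotes during an initial grace period, so a brand-new post isn't
// buried before anyone has had a chance to engage with it.

interface Post {
  postedAt: Date;    // when the post went live
  baseScore: number; // net karma (upvotes minus downvotes)
}

const GRACE_PERIOD_HOURS = 6;    // assumed window; tune as needed
const FRONTPAGE_FLOOR_SCORE = 1; // minimum effective score during the window

function frontpageScore(post: Post, now: Date = new Date()): number {
  const ageHours = (now.getTime() - post.postedAt.getTime()) / (1000 * 60 * 60);
  if (ageHours < GRACE_PERIOD_HOURS) {
    // Within the grace period, never let early downvotes push the post
    // below the floor, so it keeps some Frontpage visibility.
    return Math.max(post.baseScore, FRONTPAGE_FLOOR_SCORE);
  }
  // After the grace period, the ordinary karma score applies as usual.
  return post.baseScore;
}

// Example: a 2-hour-old post sitting at -3 karma would still rank as if it
// had a score of 1, rather than vanishing from the Frontpage immediately.
```

Again, just a sketch of the general idea; the real fix might look quite different depending on how the Frontpage ranking actually works.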
My poor little piece on gradient descent got wiped out by debate week 😭 rip
In a couple of places I've seen people complain about the use of the Community tag to 'hide' particular discussions/topics. Not saying I fully endorse this view.