Bio

Pro-pluralist, pro-bednet, anti-Bay EA. 🔸 10% Pledger.

Sequences

Against the overwhelming importance of AI Safety
EA EDA
Criticism of EA Criticism

Comments

Something which has come up a few times, and recently a lot in the context of Debate Week (and the reaction to Leif's post), is posts getting downvoted quickly and removed from the Front Page, which drastically reduces their chance of further engagement.[1]

So a suggestion for the Frontpage might be:

  • Hide the vote score of all new posts if the absolute score of the post is below some threshold (I'll use 20 as an example)
    • If a post hits -20, it drops off the front page
    • After a post hits 20+, its karma score is permanently revealed
    • The galaxy-brain version is that the Community/Non-Community grouping should only take effect once a post hits these thresholds[2]
  • This will still probably leave us with too many new posts to fit on the front page, so we'd need some rules to sort which posts stay and which get knocked off:
    • Some consideration to total karma should probably count (how much to weight it is debatable)
    • Some consideration to how recent the post is should count too (e.g. I'd rather see a new post that got 20+ karma quickly than one that got 100+ karma over weeks)
    • Some consideration should also go to engagement - some metric based on either number of votes or comment count would probably indicate which posts are generating community discussion, though this could lead to bikeshedding/a Matthew effect if not implemented carefully. I still think it's directionally correct though
    • Of course the user's own personal weighting of topic importance can probably contribute to this as well
  • There will always be trade-offs when designing a ranking over many posts with limited space. But the idea above is that no post should drop off the front page just because a few people quickly down-vote it into negative karma (see the rough sketch below).
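To make the idea concrete, here's a minimal sketch in Python of what this could look like. All the names, thresholds, and weights (REVEAL_THRESHOLD, DROP_THRESHOLD, the recency/engagement weights, the Post fields) are illustrative assumptions of mine, not anything the Forum actually uses:

```python
from dataclasses import dataclass
from datetime import datetime

# Illustrative values only -- my assumptions, not actual Forum parameters.
REVEAL_THRESHOLD = 20    # karma at which a post's score becomes permanently visible
DROP_THRESHOLD = -20     # karma at which a post drops off the front page
KARMA_WEIGHT = 1.0
RECENCY_WEIGHT = 10.0
ENGAGEMENT_WEIGHT = 0.5

@dataclass
class Post:
    karma: int
    posted_at: datetime
    num_votes: int
    num_comments: int
    score_revealed: bool = False

def update_visibility(post: Post) -> bool:
    """Reveal the score once it clears the threshold; return whether the
    post should remain on the front page."""
    if post.karma >= REVEAL_THRESHOLD:
        post.score_revealed = True        # reveal is permanent
    return post.karma > DROP_THRESHOLD    # a handful of early downvotes alone can't remove it

def front_page_rank(post: Post, now: datetime) -> float:
    """Higher is better: combines total karma, how quickly karma arrived,
    and raw engagement (votes + comments)."""
    age_hours = (now - post.posted_at).total_seconds() / 3600
    recency = 1.0 / (1.0 + age_hours)                  # decays smoothly with age
    engagement = post.num_votes + post.num_comments    # crude proxy for discussion
    return (KARMA_WEIGHT * post.karma
            + RECENCY_WEIGHT * recency * max(post.karma, 0)   # karma earned while fresh counts extra
            + ENGAGEMENT_WEIGHT * engagement)
```

The exact numbers obviously don't matter - the point is just that removal from the front page depends on a threshold rather than on raw early karma, so a handful of quick downvotes can't bury a post on their own.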

Maybe some code like this already exists, but this thought popped into my head and I thought it was worth sharing on this post.

  1. ^

    My poor little piece on gradient descent got wiped out by debate week 😭 rip

  2. ^

    In a couple of places I've seen people complain about the use of the Community tag to 'hide' particular discussions/topics. Not saying I fully endorse this view.

I think 'meat-eating problem' > 'meat-eater problem' first came up in my comment and the associated discussion here, but possibly somewhere else.[1]

  1. ^

    (I still stand by the comment, and I don't think it's contradictory with my current vote placement on the debate week question)

On the platonic/philosophical side I'm not sure; I think many EAs weren't really bought into it to begin with, and the shift to longtermism was in various ways the effect of deference and/or cohort effects. In my case I feel that the epistemic/cluelessness challenge to longtermism/far-future effects is pretty dispositive, but I'm just one person.

On the vibes side, I think the evidence is pretty damning:

  • The launch of WWOTF came at almost the worst possible time, and the idea seems indelibly linked with SBF's risky/naïve ethics and immoral actions.
  • Do a Google News or Twitter search for 'longtermism' in its EA context and it's ~broadly to universally negative. The Google trends data also points toward the term fading away.
  • No big EA org or "EA leader", however defined, is going to bat for longtermism in the public sphere any more. The only people talking about it are the critics. When you get that kind of dynamic, it's difficult to see how an idea can survive.
  • Even on the Forum, very little discussion seems to be based on 'longtermism' these days. People either seem to have left the Forum/EA, or longtermist concerns have been subsumed into AI/bio risk. Longtermism just seems superfluous to these discussions.

That's just my personal read on things though. But yeah, it seems very much like the SBF-Community Drama-OpenAI board triple whammy from Nov22-Nov23 sounded the death knell for longtermism, at least as the public-facing justification of EA.

For the avoidance of doubt, not gaining knowledge from the Carl Shulman episodes is at least as much my fault as it is Rob and Carl's![1] I think that, similar to his appearance on the Dwarkesh Podcast, it was interesting and full of information, but I'm not sure my mind has found a good way to integrate it into my existing perspective yet. It feels unresolved to me, and something I personally want to explore more, so a version of this post written later in time might include those episodes high up. But writing the post from where I am now, I at least wanted to own my perspective/bias leaning against the AI episodes rather than leave it implicit in the episode selection. But yeah, it was very much my list, and therefore inherits all of my assumptions and flaws.

I do think working in AI/ML means that the relative gain of knowledge may still be lower in this case compared to learning about the abolition of slavery (Brown #145) or the details of fighting Malaria (Tibenderana #129), so I think that's a bit more arguable, but probably an unimportant distinction.

  1. ^

    (I'm pretty sure I didn't listen to part 2, and can't remember how much of part 1 I listened to versus reading some of the transcript on the 80k website, so these episodes may be a victim of the 'not listened to fully yet' criterion)

I just want to publicly state that the whole 'meat-eater problem' framing makes me incredibly uncomfortable:

  • First: why not call it the 'meat-eating' problem rather than the 'meat-eater' problem? Human beliefs and behaviours are changeable and malleable. There is no guarantee that current moral attitudes will persist into the future - human history itself should be proof enough of that. Seeing other human beings as 'problems to be solved' is inherently dehumanising.
  • Second: the call on whether net human wellbeing is negated by net animal wellbeing is highly dependent on both moral weights and one's overall moral view. It isn't a 'solved' problem in moral philosophy. There's also a lot of empirical uncertainty, as people below have pointed out, re: saving a life != increasing the population, counterfactual wild animal welfare without humans possibly being even more negative, etc.
  • Third - and most importantly - this pattern matches onto very very dangerous beliefs:
    • Rich people in the Western World saying that poor people in Developing countries do not deserve to live/exist? bad bad bad bad bad
    • Belief that humanity, or a significant amount of it, ought not to exist (or the world would be better off were they to stop existing) danger danger
    • Like, already in the thread we've got examples of people considering whether murdering someone who eats meat isn't immoral, whether they ought to Thanos-snap all humans out of existence, and analogising the average unborn child in the developing world to baby Hitler. my alarm bells are ringing
    • The dangers of the above grow exponentially if proponents are incredibly morally certain about their beliefs and unlikely to change regardless of the evidence shown, believe that they may only have one chance to change things, or believe that otherwise unjustifiable actions are justified in their case due to moral urgency.

For clarification, I think Factory Farming is a moral catastrophe and I think ending it should be a leading EA cause. I just think that the latent misanthropy in the meat-eater problem framing/worldview is also morally catastrophic.

In general, reflecting on this framing makes it ever more clear to me that I'm just not a utilitarian or a totalist.

Hey Ben, I'll remove the tweet images since you've deleted them. I'll probably rework the body of the post to reflect that, and I'm happy to make any edits/retractions for anything you think isn't fair.

I apologise if you got unfair pushback as a result of my post, and regardless of your present/future affiliation with EA, I hope you're doing well.

I appreciate the pushback anormative, but I kinda stand by what I said and don't think your criticisms land for me. I fundamentally reject your assessment of what I wrote/believe as 'targeting those who wish to leave', or as saying people 'aren't allowed to criticise us' in any way.

  • Maybe your perception of 'accusation of betrayal' came from the use of 'defect' which was maybe unfortunate on my part. I'm trying to use it in a game theory 'co-operate/defect' framing. See Matthew Reardon from 80k here.[1]
    • I'm not against Ben leaving/disassociating (he can do whatever he wants), but I am upset/concerned that formerly influential people disassociating from EA leaves the rest of the EA community, who are by and large individuals with a lot less power and influence, to become bycatch.[2]
  • I think a load-bearing point for me is Ben's position and history in the EA Community. 
    • If an 'ordinary EA' were to post something similar, I'd feel sad but feel no need to criticise them individually (I might gather arguments that present a broader trend and respond to them, as you suggest).
    • I think there is some common-sense/value-ethics intuition I feel fairly strongly that being a good leader means being a leader when things are tough and not just when times are good. 
    • I think it is fair to characterise Ben as an EA Leader: Ben was a founder of 80,000 Hours, one of the leading sources of Community growth and recruitment. He was likely a part of the shift from the GH&D/E2G version of 80k to the longtermist/x-risk focused version, a move that was followed by the rest of EA. He was probably invited to attend (though I can't confirm if he did or not) the EA Leadership/Meta Co-ordination Forum for multiple years.
      • If the above is true, then Ben had a much more significant role shaping the EA Community than almost all other members of it.
      • To the extent Ben thinks that the Community is bad/harmful/dangerous, the fact that he contributed to it implies some moral responsibility for correcting it. This is what I was trying to get at with the 'Omelas' reference in my original quick take.
  • As for rebuttals, Ben mentions that he has criticisms of the community but hasn't shared them to an extent that they can be rebutted. When he does, I look forward to reading and analysing them.[3] Even in the original tweets Ben himself mentions this "looks a lot like following vibes", and he's right, it does.
  1. ^

    and here - which is how I found out about the original tweets in the first place

  2. ^

    Like Helen Toner might have disassociated/distanced herself from the EA Community or EA publicly, but her actions around the OpenAI board standoff have had massively negative consequences for EA imo

  3. ^

    I expect I'll probably agree with a lot of his criticisms, but disagree that they apply to 'the EA Community' as a whole as opposed to specific individuals/worldviews who identify with EA

<edit: Ben deleted the tweets, so it doesn't feel right to keep them up after that. The rest of the text is unchanged for now, but I might edit this later. If you want to read a longer, thoughtful take from Ben about EA post-FTX, then you can find one here>

This makes me feel bad, and I'm going to try and articulate why. (This is mainly about my gut reaction to seeing/reading these tweets, but I'll ping @Benjamin_Todd because I think subtweeting/vagueposting is bad practice and I don't want to be hypocritical.) I look forward to Ben elucidating his thoughts if he does so and will reflect and respond in greater detail then.

  • At a gut-level, this feels like an influential member of the EA community deciding to 'defect' and leave when the going gets tough. It's like deciding to 'walk away from Omelas' when you had a role in the leadership of the city and benefitted from that position. In contrast, I think the right call is to stay and fight for EA ideas in the 'Third Wave' of EA.
  • Furthermore, if you do think that EA is about ideas, then I don't think disassociating from the name of EA without changing your other actions is going to convince anyone about what you're doing by 'getting distance' from EA. Ben is a GWWC pledger, 80k founder, and is focusing his career on (existential?) threats from advanced AI. To do this and then deny being an EA feels disingenuous for ~most plausible definitions of EA to me.
    • Similar considerations to the above make me very pessimistic that the 'just take the good parts and people from EA, rebrand the name, disavow the old name, continue operating as per usual' strategy will work at all
    • I also think that actions/statements like this make it more likely for the whole package of the EA ideas/community/brand/movement to slip into a negative spiral which ends up wasting its potential, and given my points above such a collapse would also seriously harm any attempt to get a 'totally not EA yeah we're definitely not those guys' movement off the ground.
  • In general it's an easy pattern for EA criticism to be something like "EA ideas good, EA community bad", but really that just feels like a deepity. For me a better criticism would be explicit about focusing on the funding patterns, or focusing on epistemic responses to criticism, because attacking the EA community at large to me means ~attacking every EA thing in total.
    • If you think that all of EA is bad because certain actors have had overwhelmingly negative impact, you could just name and shame those actors and not implicitly attack GWWC meetups and the like. 
    • In the case of these tweets, I think a future post from Ben would ideally benefit from being clear about what 'the EA Community' actually means, and who it covers.

Thanks Aaron, I think your responses to me and Jason do clear things up. I still think the framing of it is a bit off though:

  • I accept that you didn't intend your framing to be insulting to others, but using "updating down" about the "genuine interest" of others came across as hurtful on my first read. As a (relative to EA) high contextualiser, it's the thing that stood out for me, so I'm glad you endorse that the 'genuine interest' part isn't what you're focusing on, and you could probably reframe your critique without it.
  • My current understanding of your position is that it is actually: "I've come to realise over the last year that many people in EA aren't directing their marginal dollars/resources to the efforts that I see as most cost-effective, since I also think those are the efforts that EA principles imply are the most effective."[1] To me, this claim is about the object-level disagreement on what EA principles imply.
  • However, in your response to Jason you say "it's possible I'm mistaken over the degree to which 'direct resources to the place you think needs them most' is a consensus-EA principle", which switches back to people not being EA? Or not endorsing this view? But you've yet to provide any evidence that people aren't doing this, as opposed to just disagreeing about what those places are.[2]
  1. ^

    Secondary interpretation is: "EA principles imply one should make a quantitative point estimate of the good of all your relevant moral actions, and then act on the leading option in a 'shut-up-and-calculate' way. I now believe many fewer actors in the EA space actually do this than I did last year"

  2. ^

    For example, in Ariel's piece, Emily from OpenPhil implies that they have much lower moral weights on animal life than Rethink does, not that they don't endorse doing 'the most good' (I think this is separable from OP's commitment to worldview diversification).

In your original post you talk about explicit reasoning, but in your later edit you switch to implicit reasoning. It feels like this criticism can't be both. I also think the implicit-reasoning critique just collapses into object-level disagreements, and the explicit critique just doesn't have much evidence.

The phenomenon you're looking at, for instance, is:

"I am trying to get at the phenomenon where people implicitly say/reason "yes, EA principles imply that the best thing to do would be to donate to X, but I am going to donate to Y instead."

And I think this might just be an ~empty set, compared to people having different object-level beliefs about what EA principles are or what they imply they should do, and also disagreeing with you on what the best thing to do would be.[1] I really don't think there are many people saying "the best thing to do is donate to X, but I will donate to Y". (References please if so - clarification in footnotes[2]) Even on OpenPhil, I think Dustin just genuinely believes worldview diversification is the best thing, so there's no contradiction where he implies the best thing would be X but in practice does Y.

I think letting this cause you to 'update downwards' on your views of the genuine interest of others in the movement - as opposed to, say, concluding that they're human and fallible despite trying to do the best they can - feels... well, Jason used 'harsh'; I might use a harsher word to describe this behaviour.

  1. ^

    For context, I think Aaron thinks that GiveWell deserves ~0 EA funding afaict

  2. ^

    I think maybe there might be a difference between the best thing (or best thing using simple calculations) and the right thing. I think people think in terms of the latter and not the former, and unless you buy into strong or even naïve consequentialism we shouldn't always expect the two to go together
