JWS

3543 karma

Bio

Kinda pro-pluralist, kinda anti-Bay EA.

I have come here to extend the principle of charity to bad criticisms of EA and kick ass. And I'm all out of charity.

(my opinions are fully my own, and do not represent the views of any close associates or the company I work for)

Posts
7


Sequences
2

EA EDA
Criticism of EA Criticism

Comments
275

JWS

Thanks for sharing, toby. I had just finished listening to the podcast and was about to share it here, but it turns out you beat me to it! I think I'll do a post going into the interview (Zvi-style)[1], bringing up the most interesting points and cruxes and why the ARC Challenge matters. To quickly give my thoughts on some of the things you bring up:

  • The ARC Challenge is the best benchmark out there imo, and it's telling that labs don't release their scores on it. Chollet says in the interview that the labs do test against it, but because they score badly, they don't release the results.
  • On timelines, Chollet says that OpenAI's success led the field to 1) stop sharing frontier research and 2) focus on LLMs alone, thereby setting back timelines to AGI. I'd also suggest that the 'AGI in 2-3 years' claims don't make much sense to me unless you take an LLMs+scaling maximalist perspective.

And to respond to some other comments here:

  • To huw, I think the AI Safety field is mixed. The original perspective was that ASI would be something like an AIXI model, but the success of transformers has changed that. Existing models and their descendants could be economically damaging, but taking away the existential risk undermines the astronomical value of AI Safety from an EA perspective.
  • To OCB, I think we just disagree about how far away LLMs are from this. I think it's less that ARC is 'neat' and more that it shows a critical failure mode in the LLM paradigm. In the interview Chollet argues that the 'scaffolding' is actually the hard part of reasoning, and I agree with him.
  • To Mo, I guess Chollet's perspective would be that you need 'open-endedness' to be able to automate much/most work? A big crux here, I think, is whether 'PASTA' is possible at all, or at least whether it can be used as a way to bootstrap everything else. My perspective is more that science is probably the last thing that could possibly be automated, but that might depend on your definition of science. 
    • I'm quite sceptical of Davidson's work, and probably Karnofsky's, but I'll need to revisit them in detail to treat them fairly. 
    • The Metaculus AGI markets are, to me, crazy low. In both cases the resolution criteria are quite LLM-unfriendly; it seems that people are going more off 'vibes' than reading the fine print. Right now, for instance, any OpenAI model would be easily discovered in a proper imitation game by asking it to do something that violates the terms of service.

I'll go into more depth in my follow-up post, and I'll edit this bit of my comment with a link once I'm done.

  1. ^

    In style only, I make no claims as to quality

JWS

Precommitting to not posting more in this whole thread, but I thought Habryka's thoughts deserved a response.

IMO, it seems like a bad pattern that when someone starts thinking that we are causing harm that the first thing we do is to downvote their comment

I think this is a fair cop.[1] I appreciate the context you've added to your comment and have removed the downvote. Reforming EA is certainly high on my list of things to write about/work on, so I would appreciate your thoughts and takes here, even if I suspect I'll end up disagreeing with the diagnosis/solutions.[2]

My guess is it would be bad for evaporative cooling reasons for people like me to just leave the positions from which they could potentially fix and improve things

I guess that depends on the theory of change for improving things. If it's using your influence and standing to suggest reforms and hold people accountable, sure. If it's asking for the community to "disband and disappear", I don't know. In how many other movements would that be tolerated from someone with significant influence and funding power?[3] If one of the Lightcone Infrastructure team said "I think Lightcone Infrastructure in its entirety should shut down and disband, and return all funds" and then made decisions about funding and work that aligned with that goal and not yours, how long should they expect to remain part of the core team?

Maybe we're implicitly disagreeing about what we mean by the 'EA community' here, and I feel that the 'EA Community' is sometimes used as a bit of a scapegoat, but when I see takes like this I think "Why should GWWC shut down and disband because of the actions of SBF/OpenAI?" GWWC and its members definitely count as part of the EA Community, and your position seems pretty maximal, without much room for exceptions.

(Also I think it's important to note that your own Forum use seems to have contributed to instances of evaporative cooling, so that felt a little off to me.)

I am importantly on the Long Term Future Fund, not the EA Infrastructure Fund

This is true, but the LTFF is part of EA Funds, and to me is clearly EA-run/affiliated/associated. It feels odd that you're a grantmaker deciding where money goes in the community, via one of its most well-known and accessible funds, while also thinking that said community should disperse/disband/not grow/is net-negative for the world. That just seems ripe for weird incentives/decisions unless, again, you're explicitly red-teaming grant proposals and funding decisions. If you're using the position to "run interference" from the inside, to move funding away from the EA community and its causes, that feels a lot more sketchy to me.

  1. ^

Never downvote while upset, I guess.

  2. ^

    I think I've noted before that there's a very large inferential difference between us, as we're two very different people

  3. ^

    Unless it was specifically for red-teaming

JWS

I wish the EA community would disband and disappear and expect it to cause enormous harm in the future

Feels like you should resign from EA Funds grantmaking then

Going to merge replies into this one comment, rather than sending lots and flooding the forum. If I've @'d you specifically and you don't want to respond in the chain, feel free to DM me.

On neglectedness - Yep, fair point that the relevant metric here is neglectedness in the world, not in EA. I think there is still a point to make here, but it was probably the wrong phrasing to use; I should have made it more about 'AI Safety being too large a part of EA' than 'lack of neglectedness in EA implies lower ITN returns overall'.

On selection bias/other takes - These were only ever meant to be my takes and reflections, so I definitely think they're only a very small part of the story. @Stefan_Schubert, I would be interested to hear more about your impression of a 'lack of leadership' and any potential reasons for it.

On the Bay/Insiders - It does seem like the Bay is convinced AI is the only game in town? (Aschenbrenner's recent blog seems to validate this.) @Phib, I would be interested to hear you say more on your last paragraph; I don't think I entirely grok it, but it sounds very interesting.

On the Object Level - I think this one is for an upcoming sequence. Suffice it to say that one can infer from my top-level post that I have very different beliefs on this issue than many 'insider EAs', and I do work on AI/ML for my day job![1] While David sketches out a case for the overall points, I think those points have been highly under-argued and under-scrutinised given their role in shaping the EA movement and its funding. So look out for a more specific sequence on the object level[2] maybe-soon-depending-on-writing-speed.

  1. ^

    Which I have recently left to do some AI research and see if it's the right fit for me.

  2. ^

    Currently tentatively titled "Against the overwhelming importance of AI x-risk reduction"

Ah sorry, it's a bit of a linguistic shortcut; I'll try my best to explain more clearly:

As David says, it's an idea from Chinese history. Rulers used the concept as a way of legitimising their hold on power: Tian (Heaven) would bestow a 'right to rule' on the virtuous ruler. Conversely, rebels and usurpers often used the same concept to justify their actions, claiming that the current rulers had lost Heaven's mandate.

Roughly, I'm using this to analogise to the state of EA, where AI Safety and AI x-risk have become an increasingly large/well-funded/high-status[1] part of the movement, especially (at least apparently) amongst EA leadership and the organisations that control most of the funding/community decisions.

My impression is that there was a genuine shift in consensus and ideology amongst EA leadership (as opposed to an already-held belief where they pulled a bait-and-switch), but that many 'rank-and-file' EAs simply deferred to these people rather than considering the arguments deeply.

I think that many of the scandals/bad outcomes/bad decisions/bad vibes around EA in recent years, and at the moment, can be linked to this turn towards the overwhelming importance of AI Safety. As EffectiveAdvocate says below, I would like that part of EA to reduce its relative influence and power over the rest of the movement, and for rank-and-file EAs to stop deferring on this issue especially, but also in general.

  1. ^

    I don't like this term but again, I think people know what I mean when I say this

JWS

Reflections 🤔 on EA & EAG following EAG London (2024):

  • I really liked the location this year. The venue itself was easy to get to on public transport, seemed sleek and clean, and having lots of natural light on the various floors made for a nice environment. We even got some decent sun (for London) on Saturday and Sunday. Thanks to all the organisers and volunteers involved; I know it's a lot of work setting up an event like this and making it run smoothly.
  • It was good to meet people in person whom I had previously only met or recognised from online interaction. I won't single out individual 1-on-1s I had, but it was great to be able to put faces to names, and hearing people's stories and visions in person was hugely inspiring. I talked to people involved in all sorts of cause areas and projects, and that combination of diversity, compassion, and moral seriousness is one of the best things about EA.
  • Listening to the two speakers from the Hibakusha Project at the closing talk was very moving, and a clear case of how knowing something intellectually is not the same thing as hearing personal (and in-person) testimony. I think it would've been one of my conference highlights in the feedback form if we hadn't already been asked to fill it out a few minutes beforehand!
  • I was going to make a point about a ‘lack of EA leadership’ turning up apart from Zach Robinson, but when I double-checked the event attendee list I think I was just wrong on this. Sure, a couple of big names didn’t turn up, and it may depend on what list of ‘EA leaders’ you’re using as a reference, but I want to admit I was directionally wrong here.
  • I thought Zach gave a good opening speech, but many people noted the apparent dissonance between saying that CEA wants to take a 'principles-first' approach to EA while also saying that they expect AI to be their area of most focus/highest priority and that they don't expect that to change in the near future.
  • Finally, while I'm sure the set of people I spoke to (and those who wanted to speak to me) is strongly affected by selection effects, and my own opinions on this are fairly strong, it did feel that there was consensus on a lack of trust in/deference to/shared beliefs with 'Bay-Area EA':[1]
    • Many people think that working on AI Safety and Governance is important and valuable, but not 'overwhelmingly important' or 'the most important thing humanity has done/will ever do'. This included some fairly well-known names among the attendees, and basically nobody I interacted with (as far as I could tell) held extremely 'doomer' beliefs about AI.
    • There was a lot of discomfort about community-building funding being directed to 'longtermism' and AI Safety in particular. This is definitely a topic I want to investigate more post-EAG, as I'm not sure what the truth of the matter is, but I'd certainly find it problematic if some of the anecdotes I heard were a fair representation of reality.
    • In any case, I think it's clear that AI Safety is no longer 'neglected' within EA, and possibly not outside of it either.[2] (Retracted this as, while it's not untrue, commenters have pointed out that it's not really the relevant metric to be tracking here.) 
    • On a personal level, it felt a bit odd to me that the LessOnline conference was held at exactly the same time as EAG. Feels like it could be a coincidence, but on the other hand this is not a coincidence because nothing is ever a coincidence. It feeds into my impression that the Bay is not very interested in what the rest of EA has to say.
    • One point which I didn't get any clear answers to was 'what are the feedback mechanisms in the community to push back on this', and do such feedback mechanisms even exist?
    • In summary: it feels, from my perspective, like the Bay Area/Exclusively Longtermist/AI Safety Maximalist version of EA has 'lost the mandate of heaven', but nonetheless at the moment controls a lot of the community's money and power. This, again, is a theme I want to explicitly explore in future posts.
  • I am old (over 30yo) and can’t party like the young EAs anymore 😔
  1. ^

    I'm not sure I have a good name for this, or concrete dividing lines. But in discussions people seemed to understand what it was meant to capture.

  2. ^

    To me it does seem like the case for the overwhelming importance of AI has been under-argued for and under-scrutinised.

JWS

I think this is a great initiative, and the new website looks great! I do, however, want to raise something (even if I'm afraid to be seen as 'that guy' on the Forum):

We are evolving into Consultants for Impact because we believe this new brand will better enable us to achieve our mission. Our new name gives us greater brand independence and control and provides a more professional presentation. It also enhances our capacity to accurately reflect the diverse philosophical frameworks (including, but not exclusively, Effective Altruism) that can benefit our work. We are excited about this transition and believe it will enable us to better support and inspire consultants dedicated to making a significant social impact.

Maybe this is me over-reacting, but it seems to imply 'we used to have EA in our name, but now EA is a toxic brand, so we removed it to avoid the negative association'. If instead it's just because the new name is more professional, or simply a better name, then disregard my comment, but that's not what you wrote in the post.

There are still EA fingerprints (in terms of people, associated orgs, values, and even language) all over the website, but almost no mention of EA or the phrase 'Effective Altruism' at all.[1] I also think Effective Altruism does/can/should accommodate a set of 'diverse philosophical frameworks' and can still call itself EA. 

My fear is that people who can still reasonably be thought of as EA[2] will start to disassociate from it, leaving only the most hardcore/weird/core people to hold the brand in an evaporative-cooling dynamic (there was a discussion about this on a now-sadly-deleted post, where someone shared their reasons for leaving EA which seemed to me to fit this dynamic; my response is here). That dynamic damages the movement, the organisation, and its aims, and is mostly unnecessary if we're driven by roughly the same set of moral values and empirical beliefs.

  1. ^

    I wouldn't necessarily call this misleading, but I think the people CFI is going for would probably be smart enough to figure out the connection with some googling

  2. ^

    Very much 'EA-in-ideas' not 'EA-got-funded-by-OpenPhil' or 'EA-went-to-the-right-parties' or 'EA-has-lots-of-Forum-karma'

JWS

Thanks for responding David, and again I think that the survey work you've done is great :) We have many points of agreement:

  • Agreed that you basically note my points in the previous works (both in footnotes and in the main text)
  • Agreed that it's always a hard tradeoff when compressing detailed research findings into digestible summaries of research - I know from professional experience how hard that is!
  • Agreed that there is some structure which your previous factor analysis and general community discussions picked up on, which is worth highlighting and examining

I still think the terminology is somewhat misguided. Perhaps the key part I disagree with is that "referring to these clusters of causes and ideas in terms of 'longtermism' and 'neartermism' is established terminology" - even if it has been established, I want to push back and un-establish it, because I think it's unhelpful and even harmful for community discussion and progress. I'm not sure what terms are better, though some alternatives I've seen have been:[1]

I guess, to state my point as clearly as possible: I don't think the current cluster names "carve nature at its joints", and the potential confusion/ambiguity in their use could lead to inaccurate negative perceptions becoming entrenched.

  1. ^

    Though I don't think any of them are perfect distillations

JWS

First off, thank you for this research and for sharing it with the community. My overall feeling on this work is extremely positive, and the below is one (maybe my only?) critical nitpick, but I think it is important to voice.

Causes classified as longtermist were Biosecurity, Nuclear risk, AI risk, X-risk other and Other longtermist. 

Causes classified as neartermist were Mental health, Global poverty and Neartermist other.

Causes classified as Other were Animal Welfare, Cause Prioritization, EA movement building and Climate change.

I have to object to this. I don't think longtermism is best understood as a cause, or set of causes, but rather as a justification for working on certain causes over others. For example:

  • Working on Nuclear Risk could be seen as neartermist. You can have a person-affecting view of morality and think, given the track record of nuclear near-miss incidents, that it's a high priority for the wellbeing of people alive today
  • We just lived through a global pandemic and there is active concern about H5N1 outbreaks right now, so it doesn't seem obvious to me that many people (EA or not) would count biosecurity in the 'longtermist' bucket
  • Similarly, many working on AI risk have short timelines that have only gotten shorter over the past few years.[1]
  • Climate Change could easily be seen through a 'longtermist' lens, and is often framed in the media as being an x-risk or affecting the lives of future generations
  • Approaching Global Poverty from a 'growth > randomista' perspective could easily be justified from a longtermist lens given the effects of compounding returns to economic growth for future generations
  • EA movement building has often been criticised as focusing on 'longtermist' causes above others, and that does seem to be where the money is focused
  • Those concerned about Animal Welfare also have concerns about how humanity might treat animals in the future, and if we might lock-in our poor moral treatment of other beings

(I'm sure everyone can think of their own counter-examples)

I know the groupings came out of some previous factor analysis you did, and you mention the cause/justification difference in the footnotes, and I know that there are differences in community cause prioritisation, but I fear that leading with this categorisation helps to reify and entrench those divisions instead of actually reflecting an underlying reality of the EA movement. I think it's important enough not to hide the details in footnotes because otherwise people will look at the 'longtermist' and 'neartermist' labels (like here) and make claims/inferences that might not correspond to what the numbers are really saying.

I think part of this is downstream of 'longtermism' being poorly defined/understood (as I said, it is a theory about justifications for causes rather than about specific causes themselves) and of the 'longtermist turn' having some negative effects on the community, so it isn't a result of your survey. But yeah, I think we need to be really careful about labelling and reifying concepts beyond the empirical warrant we have, because that will in turn have causal effects on the community.

  1. ^

In fact, I wonder what the other three 'longtermist' causes would look like if AI were separated out. I think a lot of objections to 'longtermism' are actually objections to prioritising 'AI x-risk' work.

JWS

Hi Remmelt, thanks for your response. I'm currently travelling so have limited bandwidth to write a full response, and I suspect it'd make more sense for us to pick this up in DMs again (or at EAG London if you'll be around?).

Some important points I think I should share my perspective on though:

  1. One can think that both Émile and 'Fuentes' behaved badly. I'm not trying to defend the latter here and they clearly aren't impartial. I'm less interested in defending Fuentes than trying to point out that Émile shouldn't be considered a good-faith critic of EA. I think your concerns about Andreas, for example, apply at least tenfold to Émile.
  2. I don't consider myself an "EA insider", and I don't consider myself to have that much weight in the community. I haven't worked at an EA org, I haven't received any money from OpenPhil, I've never gone to the Co-ordination Forum, etc. Of A-E, I think the only one I'm claiming is D - if Émile is untrustworthy and often flagrantly wrong/biased/inaccurate, then it is a bad sign not to recognise this. The crux, then, is whether Émile really is that wrong/biased/inaccurate, which is a matter on which we clearly disagree.[1] One can definitely support other critiques of EA, and it certainly doesn't mean EA is immune to criticism or that it shouldn't be open to hearing them.

I'll leave it at that for now. Perhaps we can pick this up again in DMs or a Calendly call :) And just want to clarify that I do admire you and your work even if I don't agree with your conclusions. I think you're a much better EA critic (to the extent you identify as one) than Émile is.

  1. ^

    I really don't want to have to be the person to step up and push against them, but it seems like nobody else is willing to do it
