Bio

Pro-pluralist, pro-bednet, anti-Bay EA

Posts
8

Sequences
3

Against the overwhelming importance of AI Safety
EA EDA
Criticism of EA Criticism

Comments
289

Hey Steven! As always I really appreciate your engagement here. I'm going to have to simplify a lot, but I appreciate your links[1] and I'm definitely going to check them out 🙂

> I think François is right, but I do think that work on AI safety is overwhelmingly valuable.
>
> Here’s an allegory:

I think the most relevant disagreement that we have[2] is the beginning of your allegory. To indulge it, I don't think we have knowledge of the intelligent alien species coming to earth, and to the extent we have a conceptual basis for them, we can't see any signs of them in the sky. Pair this with the EA concern for the counterfactual impact of our actions, and the fact that there are opportunities to do good right here and now,[3] and it shouldn't be a primary EA concern.

Now, what would make it a primary concern is if Dr S is right and the aliens are spotted and on their way, but I don't think he's right. And, to stretch the analogy to breaking point, I'd be very upset if, after I turned my telescope to the co-ordinates Dr S mentions and saw meteors instead of spaceships, significant parts of the EA movement still wanted more funding to construct the ultimate anti-alien space laser or do alien-defence research instead of buying bednets.

> (When I say “AGI” I think I’m talking about the same thing that you called digital “beings” in this comment.)

A secondary crux I have is that a 'digital being’ in the sense I describe, and possibly the AGI you think of, will likely exhibit certain autopoietic properties that make it significantly different from either the paperclip maximiser or a 'foom-ing' ASI. This is highly speculative though, based on a lot of philosophical intuitions, and I wouldn’t want to bet humanity’s future on it at all in the case where we did see aliens in the sky.

> To be clear, you can definitely find some people in AI safety saying AGI is likely in <5 years, although Ajeya is not one of those people. This is a more extreme claim, and does seem pretty implausible unless LLMs will scale to AGI.

My take on it, though I admit driven by selection bias on Twitter, is that many people in the Bay-Social-Scene are buying into the <5 year timelines. Aschenbrenner for sure, Kokotajlo as well, and maybe even Amodei[4]? (Edit: Also lots of prominent AI Safety Twitter accounts seem to have bought fully into this worldview, such as the awful 'AI Safety Memes' account.) However, I do agree it’s not all of AI Safety for sure! I just don’t think that, once you take away that urgency and certainty about the problem, it ought to be considered the world's “most pressing problem”, at least without further controversial philosophical assumptions.

  1. ^

    I remember reading and liking your 'LLM plateau-ist' piece.

  2. ^

    I can't speak for all the others you mention, but fwiw I do agree with your frustrations at the AI risk discourse on various sides

  3. ^

    I'd argue through increasing human flourishing and reducing the suffering we inflict on animals, but you could sub in your own cause area here, e.g. 'preventing nuclear war' if you thought that was both likely and an x-risk

  4. ^

    See the transcript with Dwarkesh at 00:24:26 onwards where he says that superhuman/transformative AI capabilities will come within 'a few years' of the interview's date (so within a few years of summer 2023)

Yeah it's true, I was mostly just responding to the empirical question of how to identify/measure that split on the Forum itself.

As to dealing with the split and what it represents, my best guess is that there is a Bay-concentrated/influenced group of users with geographically concentrated views, and that much of the rest of EA disagrees with those views and, to varying extents, finds that group's beliefs/behaviour rude or repugnant or wrong.[1] The longer term question is whether that group and the rest of EA[2] can cohere together under one banner or not.

I don't know the answer there, but I'd very much prefer it to be discussion and mutual understanding rather than acrimony and mutual downvoting. But I admit I have been acrimonious and downvoted others on the Forum myself, so I'm not sure those on the other side from me[3] would think I'm a good choice to start that dialogue.

  1. ^

    Perhaps the feeling is mutual? I don't know; certainly I think many members of this culture (not just in EA/Rationalist circles but beyond in the Bay) find 'normie' culture morally wrong and intolerable

  2. ^

    Big simplification I know

  3. ^

    For the record, as per bio, I am a 'rest of the world/non-Bay' EA

Agreed, and I think @Peter Wildeford has pointed that out in recent threads - it's very unlikely to be a 'conspiracy', and much more likely that opinions and geographical locations are highly correlated. I can remember some recent comments of mine that swung from slightly upvoted to highly downvoted and back to slightly upvoted.

This might be something that the Forum team is better placed to answer, but if anyone can think of a way to try to tease this out using data from the public API, let me know and I can try to investigate it.
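To give a sense of what I have in mind, here's a minimal sketch (untested) against the Forum's public GraphQL endpoint at https://forum.effectivealtruism.org/graphql. The `comments` query shape, the view name "recentComments", and the field names `baseScore`/`voteCount`/`postedAt` are assumptions that would need checking against the live schema, and individual vote timestamps aren't public, so this can only flag comments whose karma-to-vote ratio suggests contested voting rather than track swings over time:

```python
# Rough sketch (untested): query the EA Forum's public GraphQL API for recent
# comments and flag ones whose karma-to-vote ratio suggests contested voting.
# Assumptions: the endpoint URL, the `comments` query shape, the view name
# "recentComments", and the field names below may differ from the live schema.
import requests

ENDPOINT = "https://forum.effectivealtruism.org/graphql"  # assumed endpoint

QUERY = """
{
  comments(input: {terms: {view: "recentComments", limit: 200}}) {
    results {
      _id
      baseScore
      voteCount
      postedAt
    }
  }
}
"""


def fetch_comments():
    """Fetch recent comments; raises if the request or schema assumptions fail."""
    resp = requests.post(ENDPOINT, json={"query": QUERY}, timeout=30)
    resp.raise_for_status()
    return resp.json()["data"]["comments"]["results"]


def contested(comments, min_votes=10):
    """Low |karma| relative to vote count is a rough proxy for split voting."""
    return [
        c
        for c in comments
        if c["voteCount"] >= min_votes
        and abs(c["baseScore"]) / c["voteCount"] < 0.3
    ]


if __name__ == "__main__":
    for c in contested(fetch_comments()):
        print(c["_id"], c["baseScore"], c["voteCount"], c["postedAt"])
```

If the Forum team could share vote-level timing data, a proper look at up/down swings over time would obviously be much more informative than this kind of snapshot proxy.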

I wish Clara had pushed Jason more in this interview on what EA is and, in more specific detail, what his issues with it are. I think he's kind-of attacking an enemy of his own making (linking @jasoncrawford so he can correct me). For example:

  • He presents a potted version of EA history, which Clara pushes back on/corrects, and Jason never acknowledges that. But to me he was using that timeline as part of the case that 'EA was good but has since gone off track' or 'EA has underlying epistemic problems which lead it to go off track'.
  • He seems to connect EA to utilitarianism, but never elaborates on his issue with this. I think he's perhaps upset at naïve utilitarianism, but again, many EAs have written against this. And his framing of his scepticism about what the long-term future holds as a point of separation from EA is false: many EAs, including myself and Clara in the interview, feel this way, and Jason doesn't respond to it at all!
    • One moral point that does come up is the Drowning Child thought experiment. Clara rejects its implications on empirical effectiveness grounds (which is odd, because I'm sure Singer believes effectiveness matters too, and the fact that we have identified charities that can save lives is what makes the analogy hold). I'm much less sure what Jason's disagreement consists in - whether it's from a similar empirical angle or a rejection of moral universalism.
  • A bunch of the funding to get Progress Studies, and in particular Roots of Progress (Jason's org), off the ground seems to have come from EA sources. So this is clearly a case of EA doing the 'fund something and see what happens' approach. I guess I don't have a clear sense of where RoP funding does come from and how it evaluates things, though.
  • In practice, I'm not sure that I'd want to say that Progress Studies is the movement of the people and EA is the movement of elites. I think that they demographically appeal to very similar types of people, so I'm not sure what that point is meant to prove.
  • Even though Jason admits he is oversimplifying, I wish he could have provided more receipts. He often talks about what EAs are like, but I don't know if he has any data apart from vibes and intuition.

My impression is that Jason is rhetorically trying to set EA up as a poor alternative to Progress Studies/the Progress movement/whatever so that he can knock it down. (e.g. see this Twitter thread of his for an example - of note, he uses Helen Toner as an example of an EA driven to a terrible decision by EA ideology, whereas it now seems to be a case of her playing a high-stakes power struggle and losing. I wonder if he has made a correction.) This article is Jason presenting his take on what the differences are, and I don't think it's an unbiased one, or one that's devoid of strategic intent.

tl;dr - I don't really recognise the EA Jason is presenting here that much,[1] and I think he's using it deliberately as a foil to increase the stature of the 'Progress Community'

  1. ^

    Maybe it's a Bay vs UK thing, I don't know

I think I disagree with this perspective because, to me, the doing is the identity in a certain important sense.

Like, I think every GWWC Pledger could reasonably be identified as an EA, even if they don't claim the identity themselves. If MacAskill's or Moskovitz's behaviour changed 0% apart from them no longer self-identifying as an EA, I still think it'd make sense to consider them EAs.

What really annoys me about the 'EA = specific EA Community' framing is takes like this or this - the ideas part of EA is what matters. If CEA and OpenPhil disbanded, I'd still be donating to effective charities because of the ideas involved, and the 'self-identification/specific community lineage' explanation can't really explain this imho.

(p.s. not trying to go in too hard on you, David - I was torn about whether to respond to this thread or @Karthik Tadepalli's above. Perhaps we should meet and have a chat about it sometime, if you think that's productive at all?)

I go on holiday for a few days and like everything community-wise explodes. Current mood.

Edit: I retracted the below because I think it is unkind and wasn't truth-seeking enough. I apologise if I caused too much stress to @Dustin Moskovitz or @Alexander_Berger; even if I have disagreements with GVF/OP about things, I very much appreciate what both of you are doing for the world, let alone 'EA' or its surrounding community.

Wait, what - we're (or GV is) defunding animal stuff to focus more on AI stuff? That seems really bad to me; I feel like the 'PR' damage to EA comes much more from the 'AI eschaton' side than from the 'help the animals' side (and also, interventions on animal welfare are plausibly much more valuable than AI ones)[1]

  1. ^

    e.g. here and here

I think if you subscribe to a Housing-Theory-Of-Everything or a Lars Doucet Georgist Perspective[1] then YIMBY stuff might be seen as an unblocker to good political-economic outcomes in everything else.

  1. ^

    Funny story version here

> Which particular resolution criteria do you think it's unreasonable to believe will be met by 2027/2032 (depending on whether it's the weak AGI question or the strong one)?

Two of the four in particular stand out. First, the Turing Test one, exactly for the reason you mention - asking the model to violate its terms of service is surely an easy way to win. That's the resolution criterion, so unless Metaculus users think that will be solved within 3 years,[1] the estimates should be higher. Second, the SAT-passing criterion requires "having less than ten SAT exams as part of the training data", which is very unlikely to hold for current frontier models, and labs probably aren't keen to share exactly what they have trained on.

> it is just unclear whether people are forecasting on the actual resolution criteria or on their own idea of what "AGI" is.
>
> No reason to assume an individual Metaculus commentator agrees with the Metaculus timeline, so I don't think that's very fair.

I don't know if it is unfair. This is Metaculus! Premier forecasting website! These people should be reading the resolution criteria and judging their predictions according to them. Just going off personal vibes on how much they 'feel the AGI' feels like a sign of epistemic rot to me. I know not every Metaculus user agrees with this, but the headline timeline is shaped by the aggregate - 2027/2032 are very short timelines, and those are the median community predictions. This is my main issue with the Metaculus timelines atm.

> I actually think the two Metaculus questions are just bad questions.

I mean, I do agree with you in the sense that they don't fully match AGI, but that's partly because 'AGI' covers a bunch of different ideas and concepts. It might well be possible for a system to satisfy these conditions but not replace knowledge workers; perhaps a new market focusing on automation and employment might be better, but that also has its issues with operationalisation.
 

  1. ^

    On top of everything else needed to successfully pass the imitation game

The Metaculus timeline is already highly unreasonable given the resolution criteria,[1] and even these people think Aschenbrenner is unmoored from reality.

  1. ^

    Remind me to write this up soon
