JWS

2525 karma · Joined Jan 2023

Bio

Kinda pro-pluralist, kinda anti-Bay EA.

I have come here to extend the principle of charity to bad criticisms of EA and kick ass. And I'm all out of charity.

(my opinions are fully my own, and do not represent the views of any close associates or the company I work for)

Posts
5 · Sorted by New

Sequences
1 · Criticism of EA Criticism

Comments
227

JWS · 1h

I sympathise with your NB at the beginning, but to be honest, in the absence of specific examples or wider data, it's hard for me to ground this criticism or test its validity. Ironically, it's almost as if this post is too fundamental rather than action-guiding for me.

Doesn't mean you're wrong per se, but this post is almost more of a hypothesis than an argument.

JWS · 18d

Actually, reintroducing this distinction is something that MacAskill does in Chapter 1 of What We Owe The Future!

In practice, people mean "eutopia" when they say "utopia", and in a Wittgensteinian sense 'meaning is use', so changing language won't actually result in much.

JWS · 18d

(warning: some parts contain sardonic tone. maybe de-snarkify through ChatGPT if you don't like that)

Ok, I have a lot of issues with this post/whole affair,[1] so after this I'm going to limit how much I respond (though Mikhail, if you want to reach out via private message then please do so, but don't feel obliged to).

  1. This feels like a massive storm in a teacup over what honestly seems like quite a minor issue to me. I think a lot of this community heat and drama could have been avoided with better communication in private from both of you, perhaps with the aid of a trusted third party.
  2. I also get a large gut impression that this falls into the broad category of "Bay Area shenanigans that I don't care about". I encourage everyone too caught up in it to take a breath, count to five, donate to AMF/GiveDirectly/concrete-charity-of-your-choice, and then go about their day.
  3. I don't think you understand how communities function. They don't function by diktat. Do you want CEA to scan every press release from every EA-related org? Do you want to disallow any unilateral action by people in the community? That's nonsense. Social norms are community enforcement mechanisms, and we're arguing about norms here. I think the organisers made a mistake; you think she violated a deontological rule. I think this has already gone too far; you think it needs a stronger response. We argue about norms, persuade each other and/or observers, and then norms and actions change. This is the enforcement mechanism.[2]
  4. In any case, I'm much more interested in norms/social mechanisms for improving community error-correction than I am in avoiding all mistakes from the community (at least, mistakes below a certain bar). And my impression[3] is that the organisers have tried to correct the mistake to the extent that they believe they made one, and anything else is going to be a matter of public debate. Again, this is how communities work.
  5. I also think you're generally overconfident about what the consequences of the protest will be, which deontological norms were broken (and if so, how badly), and how it will affect people's impressions in general. I agree that I wouldn't necessarily have framed the protest the way it was, but I think that's going to end up being a lot less consequential in the grand scheme of things than a) the community drama this has caused and b) any community enforcement mechanisms you actually get set up.
  1. ^

    This doesn't seem to be the first time Holly has clashed with rationalist norms, and when this has happened I tend to find myself siding with her perspective over whichever rationalist she's questioning, fair warning.

  2. ^

    What did you think it looked like? Vibes? Papers? Essays?

  3. ^

    Which could of course be wrong; you're in possession of private messages I don't have.

JWS · 18d

Sorry Mikhail, but this:

I believe there’s a chance that protest organisers understood their phrasing could potentially cause people to have an impression not informed by details but kept the phrasing because they thought it suited their goals better...I think it’s likely enough they were acting deceptively.

is accusing someone in the community of deliberately lying, and you seem to equivocate on that in other comments. Even earlier in this thread you say to Holly that "To be clear, in the post, I’m not implying that you, personally, tried to deceive people." But to me, this is quite obviously you implying exactly that, even with the caveats. To then go back and say this refers to the community as a whole feels really off to me.

I know that I am very much a contextualiser rather than a decoupler,[1] but a term like 'deception' is not something you can neatly carve out from its well-understood social meaning, which refers to a person's character, and then repurpose to talk about a social movement as an agent.

I'd very much suggest you heed Jason's advice earlier in the thread.

  1. ^

    At least in EA space; I think I'm fairly average for the general population, if not more of a decoupler than average.

JWS · 19d

Hey Holly, I hope you're doing ok. I think the Bay Area atmosphere might be particularly unhealthy and tough around this issue atm, and I'm sorry for that. For what it's worth, you've always seemed like someone who has integrity to me.[1]

Maybe it's because I'm not in the thick of Bay Culture or super focused on AI x-risk, but I don't quite see why Mikhail reacted so strongly (especially the language around deception? Or the suggestions to have the EA Community police Pause AI??) to this mistake. I also know you're incredibly committed to Pause AI, so I hope you don't think what I'm going to say is insensitive, but I think even some of your own language here is a bit storm-in-a-teacup?

The mix-up itself was a mistake, sure, but not every mistake is a failure. You clearly went out of your way to make sure that initial incorrect impression was corrected. I don't really see how that could meet a legal slander bar, and I think many people will find OpenAI reneging on a policy in order to work with the Pentagon highly concerning, whether or not it was the charter.

I don't really want to have a discussion about California defamation law. Mainly, I just wanted to reach out and offer some support, and say that from my perspective, it doesn't look as bad as it might feel to you right now.

  1. ^

    Even when I disagree with you!

  2. ^

    If he publishes then I'll read it, but my prior is sceptical, especially given his apparent suggestions. (Turns out Mikhail published as I was writing! I'll give that a read.)

  3. ^

    I know you want to hold yourself to a higher standard, but still.

Answer by JWS · Feb 10, 2024

I think this is a critical question for EA right now. I do want to try to make some distinctions about what the 'EA Community' means though:

  • I think the "EA brand" or "public/elite perception of EA" should be evaluated as separate from the "EA Community", otherwise I think it's too broad. And I think this is doing very badly; in fact, in many areas that used to be friendly-ish to EA (e.g. Silicon Valley and highly-educated-online-Twitter) it seems to be absolutely toxic right now.
    • I also think the public perception of EA is almost unrelated to the truth. People seem to think that ~95%+ of funding goes to bonkers galaxy-brained AI research, when in fact most AI research is mech interp and most EA funding still goes to GH&D.
  • “the group of people who identify as effective altruists” is also quite loose. Like, I'm aware of people who hang around EAs and donate almost exclusively to GiveWell-recommended charities who personally 'wouldn't identify as effective altruists',[1] so I don't know how meaningful that is either.
    • My particular beef here is that I think the 'EA Community' has been used unfairly as a punching bag over the last year. The EAs I interact with are basically all kind, humane, thoughtful, not totalising or involved in any of the shenanigans that I see mostly coming from the Bay Area and Bay Culture rather than EA at all.
  • 'EA Leadership' is also a vague and nebulous term but I also want to highly distinguish that from the broader EA Community. People disagree about who these are, and why they matter, and what they believe, and how much it should matter! I agree that there seems to be a lot of changing of the guard, which is interesting, and a lack of people stepping up to be 'leaders' in the community in the Long 2023.

tl;dr - agree that these are very important questions, I'd like people (including myself) to be more precise about what they mean when they talk about the Community, as I think that would be more likely to lead to productive changes

  1. ^

    whereas to me, if it talks like an EA and if it donates like an EA...

Awesome work Ricardo! And take even more applause for making the code open source!!!

Minor correction: both on the website and in the Grant Database CSV they provide, the numbers I have don't quite match yours. It especially seems to undercount GH&D: they seem to have made two grants in 2023 totalling £3.35m, and 8 in 2022 totalling £11.14m. Maybe they've updated this since you extracted your data, or perhaps there's some difference between the CSV and the endpoint you requested data from? Or maybe you apply a filter in your code that accidentally removes them?

I think one thing EA Funds could do here is say when the grant database is fully up to date with what has historically been granted.
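In case it's useful, here's roughly the kind of cross-check I did, as a minimal sketch. The file name and the column names ('Fund Name', 'Grant Date', 'Amount') are my guesses rather than the actual schema of the EA Funds export, so adjust to whatever the CSV really uses:

```python
# Minimal sketch: tally GH&D grants per year from a local copy of the
# EA Funds grant database CSV, to compare against the totals in the post.
# Column names below are assumptions about the export, not confirmed.
import pandas as pd

grants = pd.read_csv("ea_funds_grants.csv", parse_dates=["Grant Date"])

# Keep only Global Health and Development Fund grants
ghd = grants[grants["Fund Name"].str.contains("Global Health", case=False, na=False)]

# Count and sum grant amounts per calendar year
# (assumes "Amount" is already numeric; strip currency symbols first if not)
by_year = (
    ghd.assign(year=ghd["Grant Date"].dt.year)
       .groupby("year")["Amount"]
       .agg(["count", "sum"])
)
print(by_year)  # compare against the per-year GH&D figures above
```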

I'd be very happy to have some discussion on these topics with you Matthew. For what it's worth, I really have found much of your work insightful, thought-provoking, and valuable. I think I just have some strong, core disagreements on multiple empirical/epistemological/moral levels with your latest series of posts.

That doesn't mean I don't want you to share your views, or that they're not worth discussion, and I apologise if I came off as too hostile. An open invitation to have some kind of deeper discussion stands.[1]

  1. ^

    I'd like to try out the new dialogue feature on the Forum, but that's a weak preference

So I think it's likely you have some very different beliefs from most people/EAs/myself, particularly:

  1. Thinking that humans/humanity is bad, and AI is likely to be better
  2. Thinking that humanity isn't driven by ideational/moral concerns[1]
  3. That AI is very likely to be conscious, moral (as in, making better moral judgements than humans), and that the current/default trend in the industry is very likely to make them conscious moral agents in a way humans aren't

I don't know if the total utilitarian/accelerationist position in the OP is yours or not. I think Daniel is right that most EAs don't have this position. I think maybe Peter Singer gets closest to this in his interview with Tyler, on the 'would you side with the Aliens or not' question, here. But the answer to your descriptive question is simply that most EAs don't have the combination of moral and empirical views about the world needed to make the argument you present valid and sound, so that's why there isn't much talk in EA about naïve accelerationism.

Going off the vibe I get from this view, though, I think it's a good heuristic that if your moral view sounds like a movie villain's monologue it might be worth reflecting on, and a lot of this post reminded me of the Earth-Trisolaris Organisation from Cixin Liu's Three Body Problem. If someone's honest moral view is "Eliminate human tyranny! The world belongs to Trisolaris AIs!" then I don't know what else there is to do except quote Zvi's phrase "please speak directly into this microphone".

Another big issue I have with this post is that some of the counter-arguments just seem a bit like 'nu-uh', see: 

But why would we assume AIs won't be conscious?

Why would humans be more likely to have "interesting" values than AIs?

But it would also be bad if we all died from old age while waiting for AI, and missed out on all the benefits that AI offers to humans, which is a point in favor of acceleration. Why would this heuristic be weaker?

These (and other examples) are considerations for sure, but they need to be argued for. I don't think they can just be stated, followed by "therefore, ACCELERATE!". I agree that AI Safety research needs to be more robust and its philosophical assumptions and views made more explicit, but one could already think of some counters to the questions you raise, and I'm sure you already have them. For example, you might take a view (à la Peter Godfrey-Smith) that a certain biological substrate is necessary for consciousness.

Similarly, on total utilitarianism's emphasis on larger population sizes: agreed, to the extent that a greater population increases total utility, but this is the repugnant conclusion again. There's a stopping point even in that scenario, where an ever-larger population decreases total utility, which is why Parfit's scenario is full of potatoes and muzak rather than humans crammed into battery cages like factory-farmed animals. Empirically, naïve accelerationism may tend toward the latter case in practice, even if there's a theoretical case to be made for it.
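To make that stopping point concrete, here's a toy illustration (my own numbers, purely for the sake of argument, assuming average well-being falls as fixed resources are spread over more people): if average utility is $\bar{u}(N) = 10 - N/10$ for a population of size $N$, then total utility is

$$U(N) = N\,\bar{u}(N) = 10N - \frac{N^2}{10},$$

which peaks at $N = 50$ (where $U = 250$) and falls thereafter, going negative once $N$ passes 100. So even on purely total-utilitarian grounds, a larger population only helps up to a point.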

There's more I could say, but I don't want to make this reply too long, and I think as Nathan said it's a point worth discussing. Nevertheless it seems our different positions on this are built on some wide, fundamental divisions about reality and morality itself, and I'm not sure how those can be bridged, unless I've wildly misunderstood your position.

  1. ^

    this is me-specific

Don't know why this is being disagree-voted. I think point 1 is basically correct - it doesn't take going far from being a "hardcore classic hedonist utilitarian" to not support the case Matthew makes in the OP.
