
I was a judge on the Criticism and Red-teaming Contest, and read 170 entries. It was overall great: hundreds of submissions and dozens of new points. 

Recurring patterns in the critiques

But most people make the same points. Some of them have been made since the beginning, around 2011. You could take that as an indictment of EA's responsiveness to critics, as proof that there's a problem, or merely as proof that critics don't read and that there's a small number of wide basins in criticism space. (We're launching the EA Bug Tracker to try to distinguish these scenarios, and to keep valid criticisms in sight.[1])

Trends in submissions I saw:

(I took out the examples because it felt mean; I can back them up in DMs.)

 

  • Academics are stuck in 2015. It's great that academics are writing full-blown papers about EA, and on average I expect this to help us fight groupthink and to bring new ideas in. But almost all of the papers submitted here are addressing a seriously outdated version of EA, before the longtermist shift, before the shift away from public calculation, before the systemic stuff. 
    Some of them even just criticise Singer 2009 and assume this is equivalent to criticising EA. 
    (I want to single out Sundaram et al as an exception. It is steeped in current EA while maintaining some very different worldviews.)
  • Normalisation. For various reasons, many suggestions would make EA less distinctive - whether through intentional PR skulduggery, retconning a more mainstream cause into the tent, adding epicycles to show that mainstream problem x is really the biggest factor in AI risk, or just what happens when you average intuitions (the mode of a group will reflect current commonsense consensus about causes and interventions, and so won't be very EA). Each of these probably has some merit. But if we implemented all of them, we'd be destroyed. 
  • Schism. People were weirdly enthusiastic about splitting EA into separate neartermist and longtermist movements. (They usually phrase this as a way of letting neartermist work get its due, but I see it as a sure way to doom that work to normalisation instead.)
  • Stop decoupling everything. The opposite mistake is to give up on decoupling, to let the truism that 'all causes are connected' swamp focussed efforts.
  • Names. People devote a huge amount of time to the connotations of different names. But obsessing over this stuff is an established EA foible.
  • Vast amounts of ressentiment. Some critiques are just disagreements about cause prioritisation, phrased hotly, as if the heat gave them more weight.
  • EAs underestimate uncertainty in cause prioritisation. One perennial criticism which has always been true is that most of cause prioritisation, the heart of EA, is incredibly nonobvious and dependent on fiddly philosophical questions.[2] And yet we don't much act as if we know this, aside from a few GPI economist-philosophers. This is probably the fairest criticism I hear from non-EAs.
     

Fundamental criticism takes time

Karnofsky, describing his former view: "Most EA criticism is - and should be - about the community as it exists today, rather than about the “core ideas.” The core ideas are just solid. Do the most good possible - should we really be arguing about that?" He changed his mind!

Really fundamental challenges to your views don't move you at the time you read them. Instead they set dominoes falling; they alter some weights a little, so that the next time the problem comes up in your real life, you notice it and hold it in your attention for a fraction of a second longer. And then, over about 3 years, you become a different person - no trace of the original post remains, and no gratitude accrues.

If the winners of the contest don't strike you as fundamental critiques, this is part of why. (The weakness of the judges is another part, but a smaller part than this, I claim. Just wait!)

My favourite example of this is 80k arguing with some Marxists in 2012. We ended up closer than you'd have believed!

My picks

Top for changing my mind

  • Aesthetics as Epistemic Humility.
    I usually view "EA doesn't have good aesthetics" as an incredibly shallow critique - valuable for people doing outreach but basically not relevant in itself. Why should helping people look good? And have you seen how much most aesthetics cost?

    But this post's conception is not shallow: aesthetics as an incredibly important kind of value to some people - and conceivably a unifying frame for more conventionally morally significant values. I still don't want to allocate much money to this, but I won't call it frivolous again.
     
  • EvN on veganism
    van Nostrand's post is fairly important in itself - she is a talented health researcher, and for your own sake you should heed her. (It will be amazing if she does the blood tests.) But I project much greater importance onto it. Context: I was vegan for 10 years.

    The movement has been overemphasising diet for a long time. This focus on personal consumption is anti-impact in a few ways: the cognitive and health risks we don't warn people about, the smuggled deontology messing up our decisions, the bias towards mere personal action making us satisfice at mere net zero. 

    There is of course a tradeoff: veganism brings a large benefit to animals and a "taking action / sincerity / caring / sacrificing" signal, but we could maintain our veganism while being honest about the costs to some people. (Way more contentious is the idea of meat options at events as welcoming and counter-dogmatic. Did we really lose great EAs because we were hectoring them about meat when it wasn't the main topic? No idea, but unlike others she doesn't harp on about it, just does the science.) As you can see from the email she quotes, this post persuaded me that we got the messaging wrong and plausibly did some harm. (Net harm? Probably not, but c'mon.)
     

Top 5 for improving EA   

  • Bad Omens
    This post was robbed (it came in just under the prize threshold). But all of you have already read it. I beg you to go look again and take it to heart. Cause-agnostic community building alienates the people we need most, in some areas. Community builders should specialise. We probably shouldn't do outreach using untrained people working to a prewritten bottom line. 
  • Are you really in a race? 
    The apparent information cascade in AI risk circles has been bothering me a lot. Then there are the dodgy effects of thoughtless broadcasting, including the "pivotal acts" discourse. This was a nice, subtle intervention to make people think a bit about the most important current question in the world.
  • Obviously Froolow and Hazelfire and Lin
  • Effective altruism in the garden of ends
    Alterman's post is way too long, and his "fractal" idea is incredibly underspecified. Nonetheless he describes how I live, and how I think most of us who aren't saints should live. 
  • Red teaming a model for estimating the value of longtermist interventions
  • The very best criticism wasn't submitted because it would be unseemly for the author to win.
     

Top for prose

Top for rigour   

Top posts I don't quite understand in a way which I suspect means they're fundamental 

Top posts I disagree with

  • Vinding on Ord. Disagree with it directionally but Ord's post is surprisingly weak. Crucial topic too.
  • Zvi. Really impressed with his list of assumptions (only two errors).
  • you can't do longtermism because the complexity class is too hard. Some extremely bad arguments (e.g. Deutsch on AI) take the same form as this post - appeal to a worst-case complexity class, when this often says very little about the practical runtimes of an algorithm. But I am not confident of this.
  • Private submission with a bizarre view of gain of function research.
  • Sundaram et al
     

Process

One minor side-effect of the contest: we accidentally made people frame their mere disagreements or iterative improvements as capital-C Criticisms, more oppositional than they perhaps are. You can do this with anything - the line between critique and next iteration is largely to do with tone, an expectation of being listened to, and whether you're playing to a third-party audience.

 

  1. Here's a teaser I made in an unrelated repo.

  2. AI (i.e. not AI alignment) only rises above this because, at this point, there's no way it's not going to have some major impact, even if that impact isn't existential.

Comments

Update on the nutritional tests: 5 tests have been ordered, at least 3 completed, 2 have results back, and 1 of those speaks to the thesis (the other person wasn't vegan but was very motivated). I won't have real results until people have gone through the full test-supplement-retest cycle, but so far it's 1 of 1 vegans having one of the deficiencies you'd expect. This person had put thought into their diet and supplements, and it seems to have mostly worked: they weren't deficient in anything they were supplementing, but they had missed one nutrient.

 

I have no more budget for covering tests for people, but if anyone would like to pay their own way ($613 for the initial test) and share data, I'm happy to share the testing instructions and the what-I'd-do supplementation doc (not medical advice, purely skilled-amateur-level "things to consider trying").

What's the easiest way to do a nutritional test if I want to do one myself?

Draft instructions here, look for "Testing"

I have a few years of data from when I was vegan; any use?

I probably can't combine it with the trial data since it's not comparable enough, but seems very useful for estimating potential losses from veganism.

I enjoyed this post a lot! 

I'm really curious about your mention of the "schism" pattern, because I both haven't seen it and sort of believe a version of it. What were the schism posts? And why are they bad?

I don't know if what you call "schismatics" want to burn the commons of EA cooperation (which would be bad), or if they just want to stop the tendency in EA (and really, everywhere) of people pushing for everyone to adopt convergent views (the "if you believe X you should also believe Y" framing, which I see and dislike in EA, versus "I don't think X is the most important thing, but if you believe X, here are some ways you can pursue it more effectively", which I would like to see more of).

Though I can see myself changing my mind on this, I currently like the idea of a looser EA community with more moving parts and a larger spectrum of vaguely positive-EV views. I've actually considered writing something about it, inspired by this post by Eric Neyman https://ericneyman.wordpress.com/2021/06/05/social-behavior-curves-equilibria-and-radicalism/ which quantifies, among other things, the intuition that people are more likely to change their mind or behaviour in a significant way if there is a larger spectrum of points of view rather than a more bimodal distribution.

It seems bad in a few ways, including the ones you mentioned. I expect it to make longtermist groupthink worse, if (say) Kirsten stops asking awkward questions under (say) weak AI posts. I expect it to make neartermism more like average NGO work. We need both conceptual bravery and empirical rigour for both near and far work, and schism would hugely sap the pool of complements. And so on.

Yeah the information cascades and naive optimisation are bad. I have a post coming on a solution (or more properly, some vocabulary to understand how people are already solving it).

DMed examples.

I'm the author of a (reasonably highly upvoted) post that called out some problems I see with all of EA's different cause areas being under the single umbrella of effective altruism. I'm guessing this is one of the schism posts being referred to here, so I'd be interested in reading more fleshed out rebuttals. 

The comments section contained some good discussion with a variety of perspectives - some supporting my arguments, some opposing, some mixed - so it seems to have struck a chord with some at least. I do plan to continue making my case for why I think these problems should be taken seriously, though I'm still unsure what the right solution is. 

Good post!

I doubt I have anything original to say. There is already cause-specific non-EA outreach. (Not least a little thing called Lesswrong!) It's great, and there should be more. Xrisk work is at least half altruistic for a lot of people, at least on the conscious level. We have managed the high-pay tension alright so far (not without cost). I don't see an issue with some EA work happening sans the EA name; there are plenty of high-impact roles where it'd be unwise to broadcast any such social movement allegiance. The name is indeed not ideal, but I've never seen a less bad one and the switching costs seem way higher than the mild arrogance and very mild philosophical misconnotations of the current one.

Overall I see schism as solving (at really high expected cost) some social problems we can solve with talking and trade.

This might be the best feedback I've ever gotten on a piece of writing (On the Philosophical Foundations of EA). Thanks for reading so many entries and helping make the contest happen!

Even though you disagreed with my post, I was touched to see that it was one of the "top" posts that you disagreed with :). However, I'm really struggling to see the connection between my argument and Deutsch's views on AI and universal explainers. There's nothing in the piece that you link to about complexity classes or efficiency limits on algorithms. 

You are totally right: Deutsch's argument is about computability, not complexity. Pardon!

Serves me right for trying to recap 1 of 170 posts from memory.

The basic answer is that computational complexity matters less than you think, primarily because the argument makes very strong assumptions, and even one of those assumptions failing weakens its power.

The assumptions are:

  1. Worst case scenarios. In this setting, everything matters, so anything that scales badly will impact the overall problem.

  2. Exactly optimal, deterministic solutions are required.

  3. You have only one shot to solve the problem.

  4. Small advantages do not compound into big advantages.

  5. Linear returns are the best you can do.

This is a conjunctive argument: if one of the premises is wrong, the entire argument gets weaker.

And given the conjunction fallacy, we should be wary of accepting such a story.

Link to more resources here:

https://www.gwern.net/Complexity-vs-AI#complexity-caveats
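
Here's a minimal toy of the worst-case point, using Python's dict (my own illustration, not from the linked resources; exact timings will vary by machine): lookups and insertions are O(n) in the worst case, yet that bound says almost nothing about behaviour on ordinary inputs.

```python
import time

class ConstantHash:
    """Adversarial key: every instance hashes to the same bucket."""
    def __init__(self, x):
        self.x = x

    def __hash__(self):
        return 0                       # force every key to collide

    def __eq__(self, other):
        return self.x == other.x


def build_dict(keys):
    """Time how long it takes to insert all keys into a fresh dict."""
    start = time.perf_counter()
    {k: None for k in keys}
    return time.perf_counter() - start


n = 3000
typical = build_dict(list(range(n)))                           # roughly linear overall
adversarial = build_dict([ConstantHash(i) for i in range(n)])  # roughly quadratic overall
print(f"ordinary int keys:  {typical:.4f}s")
print(f"all-colliding keys: {adversarial:.4f}s")
```

The same gap shows up at the problem level: SAT is NP-complete in the worst case, yet solvers routinely dispatch large practical instances.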

Got opinions on this? (How 80k vets jobs, and their transparency about it.)

It wasn't officially submitted to the contest

Nice work, glad to see it's improving things.

I sympathise with them though - as an outreach org you really don't want to make public judgments like "infiltrate these guys please; they don't do anything good directly!!". And I'm hesitant to screw with the job board too much, cos they're doing something right: the candidates I got through them are a completely different population from Forumites. 

Adding top recommendations is a good compromise.

I guess a "report job " [as dodgy] button would work for your remaining pain point, but this still looks pretty bad to outsiders.

Overall: the previous state strikes me as a sad compromise rather than culpable deception. But you still made them move to a slightly less sad compromise, so hooray.

Ah, and regarding "infiltrate these guys please" - I am not voicing an opinion on whether this makes sense (it might) - but I am saying that if you want person X to infiltrate an org and do something there, at least TELL person X about it.

wdyt?

Yeah maybe they could leave this stuff to their coaching calls

Thanks,

How about the solution/tradeoff of having a link saying "discuss this job here"?

on the 80k site? seems like a moderation headache

I'd run the discussion in the forum by default

ah, cool

So.. would you endorse this? [I'm inviting pushback if you have it]

got none

Wait, it's a small thing, but I think I have a different understanding of decoupling (even though my understanding is ultimately drawn from the Nerst post that's linked to in your definitional link); consequently, I'm not 100% sure what you mean when you say a common critique was 'stop decoupling everything'. 

You define the antonym of decoupling as the truism that 'all causes are connected'. This implies that a common critique was that, too often, EA takes causes that are interconnected, separates them and, as a result, undermines its efforts to make progress. 

I can imagine this would be a common critique. However, my definition of the antonym is quite different.

I would describe the antonym of decoupling to be a lack of separating an idea from its possible implications. 

For example, a low-decoupler is someone who is weirded out by someone who says, 'I don’t think we should kill healthy people and harvest their organs, but it is plausible that a survival lottery, where random people are killed and their organs redistributed, could effectively promote longevity and well-being'. A low-decoupler would be like, 'Whoa mate, I don't care how much you say you don't endorse the implications of your logic, the fact you think this way suggests an unhealthy lack of empathy and I don't think I can trust you'.

Are you saying that lots of critiques came from that angle? Or are you saying that lots of critiques were of the flavour, 'Too often, EA takes causes that are interconnected, separates them and, as a result, undermines its efforts to make progress'? 

Like I said, it's a minor thing, but I just wanted to get it clear in my head :) 

Thanks for the post! 

Your read makes sense! I meant the lumping together of causes, but there was also a good amount of related criticism about EA being too weird and not reading the room.

Thanks for the clarification!

Thanks, this was fun to read and highlighted several interesting posts I wouldn't have otherwise found!

On the vegan thing:

I'm not actively involved in EA but sometimes I read this forum and try to live a utilitarian lifestyle (or like to tell myself that at least). I hope to become mostly vegan at some point, but given the particular moment I am at in my career/life, it strikes me as a terrible idea for me to try to be vegan right now. I'm working 100+ hours per week with virtually no social life. Eating and porn are basically the only fun things I do. 

If I were to try to go vegan, it would take me a lot longer to eat meals because I'd have to force the food down, and I would probably not get full and would end up hungry and thus less productive throughout the day. I think I would also lose mental energy and willpower by removing fun from my day, and would be less productive. If I am productive now, I can eventually make a big impact on various things, including animal-related work or other causes.

 Is this just selfish rationalization? I don't think so, though there is some of that.

I try to look for good veggie/vegan dishes and restaurants and have ~2/3 of my meals vegan, but making the remaining meals vegan just doesn't seem even close to worth it right now. Since I have very little social contact and am not "important" yet, the signaling value is low.

I think it's great that people have made being vegan work for them, but I don't think it's right for everyone at every time in their lives.

I struggled a lot with it until I learned how to cook in that particular style (roughly: way more oil, MSG, nutritional yeast, two proteins in every recipe). Good luck!

If I had to make a criticism, it's that EA's ideas about improving morality only hold up if moral realism is true.

To define moral realism: I'm going to take it to mean moral rules that are crucially mind-independent, in the way that physical laws are mind-independent.

If it isn't true (which I put at 50% probability), then EA has no special claim to morality, although no one else does either. But moral realism is a big crux here, at least for universal EA.

I see this criticism a lot, but I don't understand where it cashes out. In the 50% case where moral realism is false, then the expected value of all actions is zero. So the expected value of our actions is determined only by what happens in the 50% case where moral realism is true, and shrinking the EV of all actions by 50% doesn't change our ordering of which actions have the highest EV. More generally than EV-based moralities, any morality that proposes an ordering of actions will have that ordering unchanged by a <100% probability that moral realism is false. So why does it matter if moral realism is false with probability 1% or 50% or 99%?
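
Here's a tiny sketch of that invariance point, with made-up numbers (purely illustrative): scaling every action's value by the probability that moral realism is true rescales the expected values but never reorders them.

```python
# Hypothetical values of each action conditional on moral realism being true.
values_if_realism = {"action_A": 10.0, "action_B": 3.0, "action_C": 7.5}

for p_realism in (0.99, 0.5, 0.01):
    # If realism is false, every action is worth 0, so EV = p * value + (1 - p) * 0.
    ev = {a: p_realism * v for a, v in values_if_realism.items()}
    ranking = sorted(ev, key=ev.get, reverse=True)
    print(p_realism, ranking)   # the ranking is the same for every p > 0
```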

Admittedly that is a good argument against the idea that moral realism actually matters too much, though I would say that the EV of your actions can be very different depending on your perspective (if moral realism is false).

Also, this is a case where non-consequentialist moralities fail badly at probability, because they ask for an infinite amount of evidence in order to update one's view away from the ordering, which is equivalent to asking for a mathematical proof that you're wrong.
