I keep seeing EA[1] accused of being "techno-utopian," which I think means something like, "They may not talk much about it, but ultimately the thing that's driving all their work is the dangerous/naive/selfish/capitalist/colonial/male vision of a spacefaring civilisation of happy sentient beings made possible by differential technological development."

If we likewise try to oversimplify their motives for a moment, what's their vision?

I often find myself assuming that it's either something like "Direct democracy everywhere"[2][3] or that there isn't really one (because critics are rarely expected to provide fleshed out alternatives to the thing criticised). But I haven't given it much thought and I'm curious to hear others' impressions.

I don't think a group needs to have confident consensus on a comprehensive vision of the future to have productive moral debate with others. But I do think it would be helpful to get a bit more clarity on what our respective visions might be, because they seem to be closer to where the main cruxes are than where the debate usually takes place.

 

  1. ^

    Perhaps "longtermism" or "core EA" would be more accurate, as I think I've seen EAs make this accusation of longtermist/core EAs a fair bit too.

  2. ^

    I.e. for all adult human beings alive today for all non-trivial decisions. Maybe with some attempt to represent the interests of domesticated nonhuman vertebrates or human beings in the next 100 years max.

  3. ^

    And then the hugely simplified picture in my mind of what's going on when EAs argue is one side saying, "But can we just agree that hell is overwhelmingly bad and heaven is overwhelmingly good?" and the other saying, "But can we just agree that that line of reasoning has a mixed-at-best track record even by its own lights?" over and over again.

There's not going to be a one-size-fits-all answer to this. EA (implicitly and explicitly) criticises how many other worldviews see the world, and as such we get a lot of criticism back. It is a topic I've thought a bit about, though, so here are some best guesses at the 'visions' of our critics, put into four groups. [Note: I wrote this up fairly quickly, so please point out any disagreements or mistakes, or suggest additional groups that I've missed]

1: Right-of-centre Sceptics: Critics from this school may think reasonably well of EAs' intentions, but believe we are naïve and/or hubristic, and place us in a tradition of thought that relies on central planning rather than market solutions. They'd argue that the most efficient interventions are the spread of markets and the rule of law rather than charities. Those on the more socially conservative end may also believe that social traditions capture cultural knowledge that can't be captured by quantification or first-principles reasoning. Example critic: Tyler Cowen

2: Super Techno-Optimistic Libertarians: This set thinks that EA has been captured by 'wokeness'/'AI doomers'/whatever libertarian boogeyman you can think of. In my experience they are generally dismissive of EAs and EA institutions, and not really willing to engage in object-level discussions. Their favoured interventions are probably cutting corporate taxes, removing regulations, and increasing funding for AI capabilities so we can move as fast as possible to reap the huge benefits they expect.

In a way, this group acts as a counterpoint to some other EA critics, who don't see a true distinction between us and this group, perhaps because many of them live in the Bay Area and are socially similar to/entangled with EAs there. Example critics: Perry Metzger/Marc Andreessen

3: Decentralised Democrats: There are some similarities to group 1 here, in the sense that critics in this group think that EAs are too technocratic. Sources of disagreement include pragmatic ones (they are likely to believe that social institutions are so poorly adapted to the modern world that fixing them is a higher priority than 'core EA' thinks), normative ones (they likely believe that decisions with a large impact on the future deserve the consent of as much of the world as possible, not just the acceptance of whatever EA thinks), and sociological ones (if I had to guess, I'd say they're more centre-left/liberaltarian than other EA critics). Very likely to think that distinguishing between EA-as-beliefs and EA-as-institutions is a false distinction, and very supportive of reforms to EA, including community democratisation. Example critics: E. Glen Weyl/Zoe Cremer

4: Radical Progressives/Anti-capitalists: This group is probably the one that you're thinking of in terms of 'our biggest critics', and they've been highly critical of EA since the beginning. They generally believe EA to be actively harmful, and usually ascribe this to either deliberate design or EA being blind to its support of oppressive ideologies/social structures. There's probably a lot of variation in what kind of world they do want, but it's likely to be a very radical departure, probably involving mass cultural and social change (perhaps revolutionary change), ending capitalism as it is currently constituted, and more money, power, and support being given to the State to bring about positive changes.

There is a lot of variation in this group, though you can pick up on some common themes (e.g. a more Hickel-esque view of human progress, compared to the more 'Pinkerite' view that EA might hold) and common calls to action (climate change is probably the largest/most important cause area here). I suggest you don't take my word for it and read them yourself,[1] but I think you won't find much in terms of practical policy suggestions, perhaps because that's seen as "working within a fatally flawed system", though some in this group are more moderate. Example critics: Alice Crary/Emile Torres/Jason Hickel

  1. ^

    Though I must admit, I find reading criticism from this group very demotivating - lots of it seems to me to be bad faith, shallowly researched, assuming bad intentions from EAs, or avoiding object-level debates on purpose. YMMV though.

This reply is really thorough, and I appreciate the clarity of the worldviews you describe (without strawmanning!), in addition to the examples of specific critics. Thank you!

I think this is generally right but misunderstands how 3 and 4 are often a continuum. I think the biggest change post-FTX is that people who are on the high-status left (e.g. Amia Srinivasan who wrote a critical but collegial critique in the LRB in 2015) now have switched to a more critical tack (e.g. the prelude to Crary's book). 

There's a version of the critique that is a soft-left critique of Effective Altruism being too friendly to capitalism and existing power structures versus a critique of EA as actively disingenuous and bad faith (e.g. Torres). 

If you find reading criticism from the last group demotivating and "bad faith", try this podcast with the great Habiba Banu on EA and the Left:

https://forum.effectivealtruism.org/posts/6NnnPvzCzxWpWzAb8/podcast-the-left-and-effective-altruism-with-habiba-islam

I think it does a great job pointing out both agreements and disagreements between EA and the Left.

This isn't answering the question you ask (sorry), but one possible response to this line of criticism is for some people within EA / longtermism  to more clearly state what vision of the future they are aiming towards. Because this tends not to happen, it means that critics can attribute particular visions to people that they don't have. In particular, critics of WWOTF often thought that I was trying to push for some particular narrow vision of the future, whereas really the primary goal, in my mind at least, is to keep our options open as much as possible, and make moral progress in order to figure out what sort of future we should try to create.

Here are a couple of suggestions for positive visions. These are what I'd answer if asked: "What vision of the future are you aiming towards?":

"Procedural visions"
(Name options: Viatopia, representing the idea of a waypoint and of keeping multiple paths open, though this mixes Latin and Greek roots; or Optiotopia, though it's a mouthful and also mixes Latin and Greek roots. Related ideas: existential security, the long reflection.)

These don't involve some vision of what we ultimately want to achieve. Instead they propose a waypoint that we'd want to reach, as a step on the path to a good future. That waypoint would involve: (i) ending all obvious grievous contemporary harms, like war, violence and unnecessary suffering; (ii) reducing existential risk to a very low level; (iii) securing a deliberative process for humanity as a whole, so that we make sufficient moral progress before embarking on potentially irreversible actions like space settlement.

The hope could be that almost everyone could agree on this as a desirable waypoint.

"Utopia for everyone"
(Name options: multitopia or pluritopia, though these mix Latin and Greek roots; polytopia, but this is the name of a computer game. Related idea: Paretopia.)

This vision is where a great diversity of different visions of the good are allowed to happen, and people have choice about what sort of society they want to live in. Environmentalists could preserve Earth's ecosystems; others can build off-world societies. Liberals and libertarians can create a society where everyone is empowered to act autonomously, pursuing their own goals; lovers of knowledge can build societies devoted to figuring out the deepest truths of the universe; philosophical hedonists can create societies devoted to joy, and so on.

The key insight, here, is that there's just a lot of available stuff in the future, and that scientific, social and moral progress will potentially enable us to produce great wealth with that stuff (if we don't destroy the world first, or suffer value lock-in). Plausibly, if we as a global society get our act together, the large majority of moral perspectives can get most of what they want. 

Like the procedural visions, spelling this vision out more could have great benefits today, via greater collaboration: if we could agree that this is what we'll aim for, at least in part, then we could reduce the chance of some person or group with a narrow view trying to grab power for themselves.

(I write a little about both of these ideas in a fictional short story, here.)

I'd welcome name ideas for these, especially the former. My best guesses so far are "viatopia" and "multitopia", but I'm not wedded to them and I haven't spent lots of time on naming. I don't think that the -topia suffix is strictly necessary.

What's wrong with the Long Reflection and Paretopia? I think they're great!

A name doesn't have to reference all key aspects of the thing - you can just pick one. And reflecting is what people will actually be doing, so it's a good one to pick. We can still talk about the need for the Long Reflection to be a time of existential security, keeping options open and ending unnecessary suffering.

And then Paretopia just sounds like a better version of Paretotopia. 

But if you're sure these won't work, I vote Pretopia and Potatopia.
