Thank you for your comments.
I wouldn't say that I believe engineered pandemics or AI mis-alignment or whatever are implausible. It’s simply that I think I’ll get a better handle on whether they are real threats by seeing if there’s a consensus view among respected experts that these things are dangerous than if I try to dive into the details myself. Nuclear weapons are a good example because everyone did agree that they were dangerous and even during the Cold War the superpowers co-operated to try to reduce the risks (hotline, arms treaties), albeit after a shaky start, as you say.
I also agree with you that there is no prohibition on considering really bad but unlikely outcomes. In fact, I think this is one of the good things EA has done – to encourage us to look seriously at the difference between very very bad threats and disastrous, civilisation-destroying threats. The sort of thing I have in mind is: “let’s leave some coal in the ground in case we need to re-do the Industrial Revolution”. Also, things like seed banks. These kinds of ‘insurance policies’ seem like really sensible – and also really conservative – things to think about. That’s the kind of ‘expect the best, prepare for the worst’ conservatism that I fully endorse. Just like I recommend you get life insurance if your family depend on your income, although I have no reason to think you won’t live to a ripe old age. Whatever the chances of an asteroid strike or nuclear war or an engineered pandemic are, I fully support having some defences against them and/or building capacity to come back afterwards.
I suppose I’d put it this way: I’m a fan of looking out for asteroids, thinking about how they could be deflected and preparing a space craft that can shoot them down. But I wouldn’t suggest we all move underground right now – and abandon our current civilisation – just to reduce the risk. I’m exaggerating for effect, but I hope you see my point.
That's an interesting point. There's a lot of thinking about how we judge the output of experts in other fields (and I'm not an expert in that), but I'll give you my thoughts. In short, I'm not sure you can engage with all the arguments on the object level. Couple of reasons:
(1) There are lots of people who know more about X than I do. If they are trying to fool me about X, they can; and if they are honestly wrong about X then I've got no chance. If some quantum physicist explains how setting up a quantum computer could trigger a chain reaction that could end human life, I've got no chance of delving into the details of quantum theory to disprove that. I've got to go with ... not just vibes, exactly, but a kind of human approach to the numbers of people who believe things on both sides of the argument, how plausible they are and so on. That's the way I deal with Flat Earth, Creationism and Global Warming arguments: there are guys out there who know much more than me, but I just don't bother looking at their arguments.
(2) People love catastrophes and apocalypses! Those guys who keep moving the doomsday clock so that we are 2 seconds to midnight or whatever; the guys who thought the Cold War was bound to end in a nuclear holocaust; all the sects who have thought the world is going to end and gathered together to await the Rapture or the aliens or whatever - there are just too many examples of prophets predicting disaster. So I think it's fair to discount anyone who says the End is Nigh. On the other hand, the civilisation we have behind us has got us to this state, which is not perfect, but involves billions of decently-fed people living long-ish lives, mostly in peace. There's a risk (a much less exciting one, which people don't get so worked up about) that if you make radical changes to that then you'll make things much worse.
Lots of good points here - thank you.
I'm happy to discuss moral philosophy. (Genuinely - I enjoyed that at undergraduate level and it's one of the fun aspects of EA.) Indeed, perhaps I'll put some direct responses to your points into another reply. But what I was trying to get at with my piece was how EA could make some rough and ready, plausibly justifiable, short cuts through some worrying issues that seemed to be capable of paralysing EA decision-making.
I write as a sympathiser with EA - someone who has actually changed his actions based on the points made by EA. What I'm trying to do is show the world of EA - a world which has been made to look foolish by the collapse of SBF - some ways to shortcut abstruse arguments that look like navel-gazing, avoid openly endorsing 'crazy train' ideas, resolve cluelessness in the face of difficult utilitarian calculations and generally do much more good in the world. Your comment "Somewhere, someone has to be doing the actual work" is precisely my point: the actual work is not worrying about mental bookkeeping or thinking about Nazis - the actual work is persuading large numbers of people and achieving real things in the real world, and I'm trying to help with that work.
As I said above, I don't claim that any of my points above are knock-down arguments for why these are the ultimately right answers. Instead I'm trying to do something different. It seems to me that EA is (or at least should be) in the business of gaining converts and doing practical good in the world. I'm trying to describe a way forward for doing that, based on the world as it actually is. The bits where I say 'that's how you get popular support' are a feature, not a bug: I'm not trying to persuade you to support EA - you're already in the club! - I'm trying to give EA some tools to persuade other people, and some ways to avoid looking as if EA largely consists of oddballs.
Let me put it this way. I could have added: "and put on a suit and tie when you go to important meetings". That's the kind of advice I'm trying to give.
Thanks for your comments!
What specifically is being recommended? Good question. I would say two things.
(1) Think about issues of recruitment, leadership, public messaging, public association with an eye to virtues such as statesmanship & good judgment. There’s no shortage of prophets in EA; it’s time for some kings.
But that’s really vague & unhelpful too! Ok, true. I’m no statesman but how about something like this:
(2) Choose one disease and eliminate it entirely. People will say that eliminating disease X is doing less good than reducing disease Y by 40% (or something like that). Ignore them. Eliminating disease X shows the world that EA is a serious undertaking that achieves uncontroversially good things. Maybe disease X would have mutated and caused something worse; maybe not – who knows! We’re clueless! But it would show that EA is not just earnest young people following a fad but a real thing that really matters. That’s the kind of achievement that could take EA to the next level.
(Obviously, don’t give up on the existential risks & low-chance/high-impact stuff. I just mean that concrete proof of effectiveness is a great recruiting & enthusing tool.)
On whether EA appeals enough to conservatives
(1) It’s not bad, but could be a lot better. Frankly, EA is a good fit with major world religions that encourage alms-giving (Christianity & Islam spring to mind) and ought to be much bigger there than it is.
(2) This anecdote from Tyler Cowen’s talk: “And after my talk, a woman went up to me and she came over and she whispered in my ear and she said, you know, Tyler, I actually am a social conservative and an effective altruist, but please don't tell anyone.” Hmm.
English-language novels are the best counter-example, I agree. A large part of that is the product of writers of Indian/sub-continental extraction. A Fine Balance, for example, is extremely good, Midnight's Children too (and of course there are many). I think I over-stated my case on that one - thank you. But, given the number of people involved nowadays - the whole literate population of India + the Commonwealth + the US, we surely have to accept that per capita output, even for novels, is way down on what it was.
Yes, this is a good point, but it points to a deeper one.
Much of the appeal of EA, in my view, is contingent on the circumstances we live in. These include, e.g., the fact that many people are rich enough to be able to live comfortable lives even after giving away sizeable amounts of money: if we were all subsistence farmers then EA just wouldn't appeal as a practical option. But the key circumstance for the purposes of your essay is the lack of plausible alternative ways of making a significant contribution to civilisation.
For whatever reason, the fact is that Western culture, right now, is not producing cultural achievements of lasting worth. If you were an intelligent, well-educated young person in 1650, 1750 or 1850 then there was a decent chance that you would be able to make a serious contribution to the accumulated cultural inheritance of mankind. But not now. You know, as I do, that no one has written a symphony of the standard that was common in the 18th century, or a novel of the standard common in the 19th, for a long time - and it's not going to happen anytime soon, no matter how many well-fed, literate and educated billions there are.
If you are a serious-minded young person now, hoping to do something worthwhile with your life, you're not going to become a composer or a poet. So what's left? Something to do with reducing suffering seems pretty good. Scientific/medical/social/logistic advances are still happening, unlike cultural ones, so that seems like a good way to spend your life.
Now, of course, relieving suffering is a very good way to spend one's life! But things would look very different to you if it looked as if you might be able to spend your life instead building another Chartres Cathedral or writing Beethoven's symphonies or painting Raphaels.
Or let me put the point the other way: we don't look back and criticise Beethoven because he spent too much time composing and not enough time distributing malaria nets. That's because utilitarianism (even "minus all the controversial bits") just doesn't seem like a sensible way of evaluating a civilisation in which Beethoven, Goethe, Byron, Blake, David, Goya, Rossini etc. were all working at the same time. The fact that utilitarianism appears at all plausible now demonstrates the lack of new excellence on display or reasonably attainable. A philosophy for swine? Maybe. But what if we are swine?
I tried to make some of these points before here: https://furtheroralternatively.blogspot.com/2022/07/on-effective-altruism.html .
I see that point. My take is that it is a low-hanging fruit (it's a fun kind of project that may attract many people who would be put off by something more like a new vaccine) that is likely overlooked because of the reputation that zebras have in the West. But one of my hopes in putting this forward is that someone might take it and run with it, perhaps adding it to something like your list of best policies for Sub-Saharan Africa.
Mine is much less radical than that one! But have you seen the film Downsizing? Highly recommend it. Intelligent (and entertaining) exploration of these issues. I reviewed it here: http://furtheroralternatively.blogspot.com/2018/05/four-film-reviews.html .
The comparison is between wild animals and animals in captivity. There is reason to think that the latter have better lives overall (the ready supplies of food and protection from predators provided by humans being obvious examples).
Some were treated well, some badly, pre-1900. But in the near future domesticated zebras would be essentially pets, as horses are today, and would have great lives. They are not needed for work now - we have mechanised transport - but they might be needed in the future.