My name is Bill, and I have been into effective altruism for a few years, dipping in and out of the community.
I work as a policy researcher for a think tank, where my work focuses on UK innovation, productivity and transportation. My educational background is a mix of philosophy, mathematics and data science.
This is a really neat idea. I used to debate a lot during my undergrad, and while I have quite a lot of negative feelings towards the sport from my own experience, I do agree that debaters are a very receptive audience for EA ideas for the reasons you mention.
One major challenge I see is getting debaters to take more than a superficial interest in ideas that sound interesting. Debates about AI, global health, animal rights and many other EA issues were fairly popular motions when I debated, but didn't necessarily lead to deeper consideration than simply asking "how can I win a debate about this issue?"
Obviously, the lecture series was useful for introducing EA to people who were unfamiliar with it, but hopefully it also prompted participants to think about the issues in a deeper way - not just in the context of debating the topics. I think combining debate with these additional engagement resources is a smart way to run these kinds of events going forward.
P.S. Hi Dan, I think I adjudicated a final with you back in Belgrade 2016 - very cool to see an old familiar name on here.
Comments about moral uncertainty and wild animal suffering are valid, but I think they're somewhat unnecessary. I don't think the argument works at all in its current form.
I think the argument is something like this:
If so, the conclusion doesn't follow. At most, the argument shows the world would be better on net if humans suddenly stopped existing. But there is something quite absurd about trying to protect animals from the risks of anthropogenic extinction... via anthropogenic extinction. The more obvious thing to do would be to reduce the risks of anthropogenic extinction.
So for the argument to work, you need to believe that it's not possible to significantly reduce anthropogenic risk (implausible, I think), but that it is possible to engineer a human extinction event that is, in expectation, much less risky to animal life than an accidental human extinction event. Engineering such an extinction might well be possible, but since you only get one shot, you would surely need an implausibly high level of confidence.
But how do we estimate the EV of estimating the EV of general intellectual progress?
On a less facetious note, it's about the average effect of intellectual progress on innovation right? What EV comes from general intellectual progress that is not a result of innovation?
So you try to causally estimate the effect of innovation on things you value (e.g. GDP), and you try to create measures of general intellectual progress to see how those causally impact innovation. That's obviously easier said than done.
We did not use it in a name-calling way but rather as a neutral term to describe the intellectual movement.
I have no doubt that the term was used in good faith. I apologise that my post was worded a bit poorly, so it sounded like I was accusing you of name-calling.
What's your basis for claiming that 'randomista' is a non-neutral term?
The '-ista' suffix sounds pejorative to me in English, like someone who is a zealous, dogmatic advocate. 'Corbynista' was the example I referred to, a term often used in the UK to bash the left.
Etymologically, it sounds like my suspicion was correct (see Hauke's post above). Of course these words often get reclaimed, and it appears that's happened here too, hence why I asked whether the RCT proponents call themselves that.
It's obviously not that important, and I don't want to start a battle over words, but David makes a good point about how you engage your critics.
Interesting post, very stimulating. A couple of thoughts:
Isn't factory farming a clear-cut case of injustice? A pretty standard view of justice is that you don't harm others, and if you are harming them then you should stop and compensate for the harm done. That seems to describe what happens to farmed animals. In fact, as someone who finds justice plausible, I think it creates a decent non-utilitarian argument to care about domestic animal suffering more than wild animal suffering.
As my last sentence suggests, I do think that justice views are likely to affect cause prioritisation. I think you're right that justice may lead you to different conclusions about inter-generational issues, and is worth a deeper look.
I think a high level of concern for wild animals is actually a bit of a defect in utilitarianism. A quite compelling reason for caring more about factory-farmed animals is that we are inflicting a massive injustice against them, and that isn't generally the case for wild animals. We do often feel moral obligations to wild animals when we are responsible for their suffering (think of oil spills, for example). That's not to say wild animals don't matter, but they might be further down our priority list for that reason.
I think the visualization is great. The exploding red dots are very powerful; they demonstrate just an immense amount of bloodshed.
Thank you for sharing this Holly. Have you read Strangers Drowning by Larissa MacFarquhar? It's a book full of stories of extraordinarily committed "do-gooders" (some effective altruists, some not), as well as some interesting analysis on the mixed reaction that they receive from society. I think there's a lot of overlap with some of what you've written and the experiences of the individuals in Strangers Drowning, so you're definitely not alone.
I suppose the extent to which anyone experiences any of these 8 challenges really depends on how motivated they are by morality. I think most people think it's important that they have a positive impact on the world (or at least, don't have a negative one), but they think it's less important to maximize their positive impact. Even being convinced of EA doesn't necessarily change this: it might just lead you to conclude that you can have a much greater positive impact on the world at little cost to yourself, so you might as well...
Personally, I think that in the abstract morality should be my most important motivator, but just looking at my behaviour, it clearly isn't in practice (at least right now). I suppose I'm glad that I don't find altruism very emotionally difficult, but I also feel slightly guilty about not feeling very guilty about not doing more.
I am a little confused about the purpose of this post, because surely meta-EA is just EA? I feel like the major innovation of EA is the idea that altruists can and should compare the value of different interventions (which you appear to consider meta-EA). In other words, EA is meta-altruism.
The content might be useful as a roadmap, but I think the terminology is a bit misleading. What these areas have in common is that they are indirect, as opposed to having some kind of abstract "meta" property.