Though I guess it somewhat filters for intelligence, which correlates a bit with those things
As someone who went to a top private school I would agree with this, although I admit it's not a perfect correlation.
Indeed I think we could do some targeting within top schools, as you get a variety of students with different interests. You will get those who want to go to debating society and discuss the big issues of the day. You will get those who are bored in every class, waiting to get to the sports pitch. You will get maths geniuses who are pretty consumed by pursuing pure maths without much thought about the impact they will have. And you will get some who don't really want to be there.
So potential targeting could look like: EA outreach for students who actually sign up for it (they will obviously be interested then), for those taking philosophy A-level, for those in the debating society (we could even arrange a debate on a relevant topic), for those participating in Maths Olympiads, etc. This targeting could bear more fruit than just 'outreach to Etonians'.
Interesting, what sort of content do your videos cover and how can I check them out?
Agreed but maybe there’s scope for spreading some of the core ideas of EA (a la Doing Good Better) as opposed to more specific career advice from 80K?
I'm not 100% sure, but we may be defining opportunity cost differently. I'm drawing a distinction between opportunity cost and personal cost. Opportunity cost relates to the fact that doing something may prevent you from doing something else that is more effective. Even if going vegan didn't have any opportunity cost (which is what I'm arguing in most cases), people may still not want to do it due to high perceived personal cost (e.g. thinking vegan food isn't tasty). I'm not claiming there is no personal cost; indeed, personal cost is why people don't go / stay vegan, although I do think personal costs are unfortunately overblown.
Without addressing all of your points in detail, I think a useful thought experiment might be to imagine a world where we are eating humans, not animals. E.g. say there are mentally-challenged humans with a comparable intelligence/capacity to suffer to non-human animals, and we farm them in poor conditions and eat them, causing their suffering. I'd imagine most people would judge this as morally unacceptable and go vegan on consequentialist grounds (although perhaps they would actually do so on deontological grounds?). If you would go vegan in the thought experiment but not in the real world, then you're probably speciesist to some degree, which I ultimately don't think can be defended.
I think the EA schtick is more like "we'll think things through really carefully and tell you what the most efficient ways to do good are". And so I think that if it's presented as "you want to be an EA now? great! how about ve*anism?"
EA is sometimes described as doing the most good (most common definition) or I suppose is sometimes described as finding the most effective ways to do good. These can be construed as two different things. I would say under the first definition that being vegan naturally becomes part of the conversation for the reasons I have mentioned (little to no opportunity cost).
Also, we may be fundamentally disagreeing on the scale of the benefits on consequentialist grounds of going vegan as well - I think they are quite considerable. Indeed "signalling caring" as you put it can then convince others to consider veganism in which case you can get a snowball of positive effects. But that's a whole other discussion.
P.S. I agree we can probably improve the way veganism is messaged in EA and it's possible I am part of the problem!
I almost feel cheeky responding to this as you've essentially been baited into providing a controversial view, which I am now choosing to argue against. Sorry!
I'd say that something doesn't have to be the most effective thing to do for it to be worth doing, even if you're an EA. If something is a good thing, and provided it doesn't really have an opportunity cost, then it seems to me that a consequentialist EA should do it even if it isn't the most effective option available.
To illustrate my point, one can say it's a good thing to donate to a seeing eye dog charity. In a sense it is, but an EA would say it isn't, because there is an opportunity cost: you could instead donate to the Against Malaria Foundation, for example, which is more effective. So donating to a seeing eye dog charity isn't really a good thing to do.
Choosing to follow a ve*an diet doesn't have an opportunity cost (usually). You have to eat, and you're just choosing to eat something different. It doesn't stop you doing something else. Therefore even if it realises a small benefit it seems worth it (and for the record I don't think the benefit is small).
Or perhaps you just think the personal cost to you of being ve*an is substantial enough to offset the harm to the animals. From a utilitarian view I'd imagine this is unlikely to be true. I happen to think avoiding the suffering of even one animal is significant, similarly to the fact that we think it would be highly significant to save just one human life. And following a vegan diet for a while will benefit way more than just one animal anyway.
Ah OK thanks, that makes sense. Certainly seems worthwhile to have more research into this
Thanks for this detailed reply! I appreciate these aren't questions with simple answers.
research into what type of activities have good long-term returns for longtermists
Do you mind elaborating slightly on what you mean here? To me this just reads as finding out the best activities to do if you're a longtermist, but given that you say it's a "small slice of our portfolio" I suspect it's more specific than that.
Would you currently prefer a marginal resource to be used by an impatient longtermist (i.e. to reduce existential risk) or by a patient longtermist (i.e. to invest for the future)? Assume both would spend their resource as effectively as possible.
Where do you think the impatient longtermist would spend their resource and where do you think the patient longtermist would spend their resource?
Finally, how do you think we should best proceed to answer these questions with more certainty?
P.S. there may well have been a much simpler way to formulate these questions, feel free to reformulate if you want to!
I think there's another difference between:
a) Thinking that a speaker shouldn't be allowed to speak at an event
b) Deciding not to attend an event with a confirmed speaker because you don't like their ideas
For the first half of your comment I thought you fell into camp b) but not camp a). However, your last paragraph seems to imply you fall into both camps.
Personally I would not want a person to speak at an EA event if I thought they were likely to cause reputational damage to EA. In this particular case I (tentatively) don't think Hanson would have. Sure, he's said some questionable things, but he was being invited to talk about tort law, and I fail to see how allowing that signals condoning his questionable ideas. Therefore I would probably have let him speak, and anyone who didn't want to hear him would obviously have been free to not attend.
It seems to me that people often imply that personally finding a speaker beyond the pale means that the speaker shouldn't be allowed to speak to anyone. I've always found this slightly odd.
Thanks. I agree that specific organisations must be doing some of their own outreach analysis. I do wonder if it would be helpful for someone to investigate whether there are any groups that EA as a whole needs to do more to reach out to. The analysis may not throw up anything revolutionary, but it could still be worth doing to find out. I may try something myself at some point.