I work as an economist in impact assessment and wellbeing, and previously worked in management consulting. I’m still figuring out how to do the most good with my career.
I’m yet to fully form my EA views and am eager to keep learning. In particular, I am looking forward to further progress being made on foundational questions in global priorities research, including longtermism. For now, I identify as an anti-speciesist, classical, total utilitarian (with slightly less confidence in the classical and total aspects). I am also an atheist.
I’m happy to have a chat sometime. Feel free to connect and message me on LinkedIn: https://www.linkedin.com/in/jack-malde/
Yes, I agree. I actually think this model could work well if we do multiple funnelling exercises, one for each type of cause area.
The only reason I was perhaps slightly forceful in my comment is that this post and the previous post (Big List of Cause Candidates) gave me the impression that there is going to be a single funnelling exercise that aims to directly compare shorttermist and longtermist areas, including on their 'scale'.
Nuno - I don't want to give the impression that I fundamentally dislike your idea, because I don't; I just think some care has to be taken.
You're right. "Personal" wasn't the best choice of word; I'm going to blame my 11pm brain again.
I sort of think you've restated my position, but worded it somewhat better, so thanks for that.
Yeah absolutely, this was my tired 11pm brain. I meant to refer to extinction risk whenever I said x-risk. I'll edit.
That's all fair. I would endorse that rewording (and potential change of approach).
My shorter (and less strong) comment concerns this:
Implementation stage: It has been determined that it's most likely a good idea to start a charity to work in a specific cause. Now the hard work begins of actually doing it.
I don't believe that every cause area naturally lends itself to starting a charity. In fact, many don't. For example, if one wants to estimate the philanthropic discount rate more accurately, one probably doesn't need to start a charity to do so. Instead, one may want to do an Econ PhD.
So I think viewing the end goal as charity incubation may not be helpful, and may in fact be harmful if it results in EA dismissing particular cause areas that don't perform well through this lens but may be high-impact through others.
Thanks for this. I have two initial thoughts, which I'll separate into different comments (one long, one short - guess which one this is).
OK so firstly, I think things get really tricky in your evaluation phase, and trickier than you've indicated. Basically, comparing a shorttermist cause area to a longtermist cause area in terms of scale seems to me to be insanely hard, and I don't think heuristics or CEAs (cost-effectiveness analyses) are going to help much, if at all. I think it really depends on where you stand on some tough, and often contentious, foundational questions that organisations such as GPI are trying to tackle. To give just a few examples:
Basically my point is that, depending on answers to questions such as the above, you may think a longtermist cause area is WAY better than a shorttermist cause area, or vice versa, and we haven't even gone near a CEA (which I'm not sure would help matters). I can't emphasise that 'WAY' enough.
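To make the size of that swing concrete, here's a toy back-of-the-envelope sketch in Python. Every number in it is made up purely for illustration (it isn't a real CEA of any cause area); it just shows how varying a single foundational parameter, the discount rate applied to far-future value, flips which side of the comparison looks overwhelmingly better:

```python
# Toy sketch (all numbers hypothetical, purely for illustration): how one
# foundational assumption - the discount rate on far-future value - can
# swing a longtermist-vs-shorttermist comparison by many orders of magnitude.

def present_expected_value(prob_success, value_if_success, discount_rate, years):
    """Expected value today of a payoff arriving `years` from now."""
    return prob_success * value_if_success / ((1 + discount_rate) ** years)

# Hypothetical shorttermist intervention: likely, modest, immediate payoff.
shorttermist = present_expected_value(0.9, 1e4, 0.0, 0)

# Hypothetical longtermist intervention: tiny success probability, vast
# payoff, realised 500 years out. Its ranking hinges on the discount rate.
for rate in (0.0, 0.001, 0.01, 0.05):
    longtermist = present_expected_value(1e-6, 1e16, rate, 500)
    print(f"discount rate {rate}: longtermist/shorttermist = "
          f"{longtermist / shorttermist:.3g}")
```

With these toy numbers the longtermist option goes from roughly a million times better at a zero discount rate to essentially negligible at 5% per year, with no other input changing. That's the kind of sensitivity I have in mind.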
To some significant extent, I just think choice of cause area is quite personal. Some people are longtermists, some aren't. Some people think it's good to reduce x-risk, some don't, etc. The question for you, if you're trying to apply a funnel to all cause areas, is: how do you deal with this issue?
Most research organisations deal with this issue by not trying to apply a funnel to all cause areas in the first place. Instead they focus on a particular type of cause area and prioritise within that, e.g. ACE focuses on near-term animal suffering, and GiveWell focuses on disease. Therefore, for example, GiveWell can make certain assumptions about those who are interested in their work - that they aren't worried by complex cluelessness, that they probably aren't (strong) longtermists, etc. They can then proceed on this basis. A notable exception may be 80,000 Hours, which has funnelled from all cause areas, landing on just longtermist ones.
So part of me thinks your project may be doomed from the start unless you're very clear about where you stand on these key foundational questions. Even in that case there's a difficulty, in that anyone who disagrees with your stance on these foundational questions would then have the right to throw out all of your funnelling work and just do their own.
I would be interested to hear your thoughts on all of this.
I wonder: is it worth cooperating with each other to ensure a decent number of the most promising 'EA-approved' ideas get submitted?
It isn't really clear whether a single person/team is allowed to submit more than one idea. If not, then cooperation could be particularly useful.
Chris Meacham? I'm starstruck!
In all seriousness, good point - I think you're right, but I would be interested to see what Arden/Michelle say in response.
Since both W1 and W2 will yield harm while W3 won’t, it looks like W3 will come out obligatory. I can see why one might worry about this.
I thought I'd take this opportunity to ask you: do you hold the person-affecting view you outlined in the paper and, if so, do you then in fact see ensuring extinction as obligatory?
Thanks for the clarifications in your previous two comments. It's helpful to get more insight into your thought process.
Just a few comments:
I'm hoping we're nearing a good enough understanding of each other's views that we don't need to keep discussing for much longer, but I'm happy to continue a bit if helpful.
Absolutely, every stage is important.
And reading back what I wrote, it was perhaps a little too strong. I would quite happily adopt MichaelA's suggested paragraph in place of my penultimate one!