jackmalde

I work as an economist in impact assessment and wellbeing, and previously worked in management consulting. I'm still figuring out how to do the most good with my career.

I’m yet to fully form my EA views and am eager to keep learning. In particular, I am looking forward to further progress being made on foundational questions in global priorities research, including longtermism. For now, I identify as an anti-speciesist, classical, total utilitarian (with slightly less confidence in the classical and total aspects). I am also an atheist.

I'm happy to have a chat sometime. Feel free to connect and message me on LinkedIn: https://www.linkedin.com/in/jack-malde/

Comments

A Funnel for Cause Candidates

Absolutely, every stage is important.

And reading back what I wrote, it was perhaps a little too strong. I would quite happily adopt MichaelA's suggested paragraph in place of my penultimate one!

A Funnel for Cause Candidates

Yes, I agree. I actually think this model could work well if we do multiple funnelling exercises, one for each type of cause area.

The only reason I was perhaps slightly forceful in my comment is that this post and the previous post (Big List of Cause Candidates) gave me the impression there is going to be a single funnelling exercise that aims to directly compare shorttermist and longtermist areas, including on their 'scale'.

Nuno - I don't want to give the impression that I fundamentally dislike your idea, because I don't; I just think some care has to be taken.

A Funnel for Cause Candidates

You're right. "Personal" wasn't the best choice of word; I'm going to blame my 11pm brain again.

I sort of think you've restated my position, but worded it somewhat better, so thanks for that.

A Funnel for Cause Candidates

Yeah absolutely, this was my tired 11pm brain. I meant to refer to extinction risk whenever I said x-risk. I'll edit.

A Funnel for Cause Candidates

That's all fair. I would endorse that rewording (and potential change of approach).

A Funnel for Cause Candidates

My shorter (and less strong) comment concerns this:

Implementation stage: It has been determined that it's most likely a good idea to start a charity to work in a specific cause. Now the hard work begins of actually doing it.

I don't believe that every cause area naturally lends itself to starting a charity. In fact, many don't. For example, if one wants to estimate the philanthropic discount rate more accurately, one probably doesn't need to start a charity to do so. Instead, one may want to do an Econ PhD. 

So I think viewing the end goal as charity incubation may not be helpful, and in fact may be harmful if it results in EA dismissing particular cause areas that don't perform well within this lens, but may be high-impact through other lenses.

A Funnel for Cause Candidates

Thanks for this. I have two initial thoughts which I'll separate into different comments (one long, one short - guess which one this is).

OK, so firstly: I think things get really tricky in your evaluation phase, and trickier than you've indicated. Basically, comparing a shorttermist cause area to a longtermist cause area in terms of scale seems insanely hard to me, and I don't think heuristics or CEAs are going to help much, if at all. I think it really depends on where you stand on some tough, and often contentious, foundational questions that organisations such as GPI are trying to tackle. To give just a few examples:

  • How seriously do you take the problem of complex cluelessness, and how should one respond to it? If you think it's a problem, you might give every 'GiveWell-type' cause area a scale rating of "who the hell knows", funnel them all out immediately, and then only consider cause areas that arguably don't run into the cluelessness problem - perhaps longtermist cause areas such as values spreading or x-risk reduction. (I acknowledge this is just one way to respond to the problem of cluelessness.)
  • Do you think influencing the long-run future is intractable? If so, you may want to funnel out all longtermist cause areas (possibly excluding extinction-risk cause areas).
  • Are you convinced by strong longtermism? If so, you may want to funnel out all shorttermist cause areas because they're just a distraction.
  • Do you hold a totalist view of population ethics? If you don't, you may want to funnel out all extinction-risk reduction charities.

Basically, my point is that depending on your answers to questions such as the above, you may think a longtermist cause area is WAY better than a shorttermist one, or vice versa - and we haven't even gone near a CEA (which I'm not sure would help matters). I can't emphasise that 'WAY' enough.

To a significant extent, I just think choice of cause area is quite personal. Some people are longtermists, some aren't. Some people think it's good to reduce x-risk, some don't, and so on. The question for you, if you're trying to apply a funnel to all cause areas, is: how do you deal with this issue?

Most research organisations deal with this issue by not trying to apply a funnel to all cause areas in the first place. Instead, they focus on a particular type of cause area and prioritise within it, e.g. ACE focuses on near-term animal suffering, and GiveWell focuses on disease. Therefore, for example, GiveWell can make certain assumptions about those who are interested in their work - that they aren't worried by complex cluelessness, that they probably aren't (strong) longtermists, etc. - and proceed on that basis. A notable exception may be 80,000 Hours, which has funnelled from all cause areas, landing on just longtermist ones.

So part of me thinks your project may be doomed from the start unless you're very clear about where you stand on these key foundational questions. Even then there's a difficulty: anyone who disagrees with your stance on these questions would have the right to throw out all of your funnelling work and just do their own.

I would be interested to hear your thoughts on all of this.

RISC at UChicago is Seeking Ideas for Improving Animal Welfare

I wonder: is it worth cooperating to ensure a decent number of the most promising 'EA-approved' ideas get submitted?

It isn't really clear if a single person/team is allowed to submit more than one idea. If not, then cooperation could be particularly useful.

Critical summary of Meacham’s "Person-Affecting Views and Saturating Counterpart Relations"

Chris Meacham? I'm starstruck!

In all seriousness, good point. I think you're right, but I would be interested to see what Arden/Michelle say in response.

Since both W1 and W2 will yield harm while W3 won’t, it looks like W3 will come out obligatory. I can see why one might worry about this.

I thought I'd take this opportunity to ask you: do you hold the person-affecting view you outlined in the paper and, if so, do you then in fact see ensuring extinction as obligatory?

Big List of Cause Candidates

Thanks for the clarifications in your previous two comments. Helpful to get more of an insight into your thought process.

Just a few comments:

  • I strongly doubt that starting a charity to work on philosophy in schools would be helpful, and I don't like that way of thinking about it. My suggestions were having prominent philosophers join (existing) advocacy efforts for philosophy in the curriculum, more people becoming philosophy teachers (if this might be their comparative advantage), trying to shift educational spending towards values-based education, and more research into values-based education (to name a few).
  • This is a whole separate conversation that I'm not sure we need to get into too deeply right now (I think I'd rather not), but I think there are severe issues with development economics as a field, to the extent that I would place it near the bottom of the pecking order within EA. Firstly, the generalisability of RCT results is highly questionable (for example, see Eva Vivalt's research). More importantly and fundamentally, there is the problem of complex cluelessness (see here and here). It is partly considerations of cluelessness that make me interested in longtermist areas such as moral circle expansion and broadly promoting positive values, along with x-risk reduction.

I'm hoping we're nearing a good enough understanding of each other's views that we don't need to keep discussing for much longer, but I'm happy to continue a bit if helpful.
