JackM

I am working as an economist at the Confederation of British Industry (CBI) and previously worked in management consulting.

I am interested in longtermism, global priorities research and animal welfare.

Please get in touch if you would like to have a chat sometime.

Feel free to connect with me on LinkedIn.

Comments

A guided cause prioritisation flowchart

Almost nobody? I'd imagine at least some people want to make an informed decision on cause area and would be interested in learning.

You might be right though. I'm not getting a huge amount of positive reception on this post (to put it lightly) so it may be that such a guided flowchart is a doomed enterprise.

EDIT: you could literally make it so that clicking on a box pops up the relevant guidance, so it could theoretically be very easy to engage with.

A guided cause prioritisation flowchart

Just to clarify, this was my first attempt with no outside review and it is far from final, so I'm open to the possibility that there are problems with the flowchart itself.

Also, as I have said to other commenters, my idea of a guided flowchart is that nuances and explanations would sit in the accompanying guidance, but not necessarily be alluded to in the flowchart itself, which is supposed to stay fairly high-level and simple.

On your specific question, my thinking was:

  • If we cannot become safe (achieve existential security) then we will hit an existential catastrophe eventually. In this case we can focus either on the near term or perhaps on the middle term. Focusing on the middle term (accepting that we cannot reduce x-risk) could entail speeding up sustainable economic growth, so tackling climate change would be a good thing to do. Focusing on the near term would actually send you to things like global health, animal welfare etc., so I'm now thinking it's already clear that my flowchart is very incomplete even from my point of view, as you may need further questions after "We can become safe?".
  • If you're not willing to bet on small probabilities of success, I think reducing x-risk is not for you, as there is a very small probability that our efforts will counterfactually avert an existential catastrophe. In this case it seems that tackling climate change is the next best longtermist option, as we can reliably reduce expected global warming, for example through green technology investment.

I guess my main point, though, is that this flowchart is far from final and there are certainly improvements that can be made! Also, accompanying guidance would be essential for such a flowchart.


 

A guided cause prioritisation flowchart

Thanks for this; you raise a number of useful points.

A widely used model that is not frequently updated could do a lot of damage by spreading outdated views. Unlike large collections of articles, a simple model in graphic form can spread really fast, and once it's out on the Internet it can't be taken back.

I guess this risk could be mitigated by ensuring the model is frequently updated and includes disclaimers. I think this risk is faced by many EA orgs, for example 80,000 Hours, but that doesn't stop them from publishing advice which they regularly update.

A model made by a few individuals or some central organisation may run the risk of deviating from the views of most EAs; instead a more "democratic" way (not too sure what this means exactly) of making the model might be favoured.

I like that idea and I certainly don't think my model is anywhere near final (it was just my preliminary attempt with no outside help!). There could be a process of engagement with prominent EAs to finalise a model.

Views in EA are really diverse, so one single model likely cannot capture all of them.

Also fair. However, it seems that certain EA orgs such as 80,000 Hours do adopt certain views, naturally excluding others (for which they have been criticised). Maybe it would make more sense for such a model to be owned by an org like 80,000 Hours, which is open about its longtermist focus for example, rather than CEA, which is supposed to represent EA as a whole.

e.g. What does it mean to agree that "humans have special status"? This could refer to many different positions (see below for examples), which probably lead to vastly different conclusions.

As I said to alexjrl, my idea for a guided flowchart is that nuances like this would be explained in the accompanying guidance, but not necessarily alluded to in the flowchart itself, which is supposed to stay fairly high-level and simple.

Yes-or-no answers usually don't serve as necessary and sufficient conditions.

I don't think a flowchart can be 100% prescriptive and final; there are too many nuances to consider. I just want it to raise key considerations for EAs. For example, I think it would be fine for an EA to end up at a certain point in the flowchart and then think to themselves that they should actually choose a different cause area, because there is some nuance that the flowchart didn't consider that means they ended up in the wrong place. That's fine - but it would still be good, in my opinion, to have a systematic process that ensures EAs consider some really key considerations.

e.g. I think "most influential time in future" is neither necessary nor sufficient for prioritizing "investing for the future".

Feedback like this is useful and could lead to updating the flowchart itself. I have to say I'm not sure why the most influential time being in the future wouldn't imply investing for that time, though; I'd be interested to hear your reasoning.

I think there have been some discussions going on about EA decoupling from consequentialism, which I consider worthwhile. Might be good to include non-consequentialist considerations too.

Fair point. As I said before, if an org like 80,000 Hours owned such a model, perhaps they wouldn't have to go beyond consequentialism. If CEA did, I suspect they should.

 

A guided cause prioritisation flowchart

Absolutely agree with that. 

My idea of a guided flowchart is that nuances like this would be explained in the accompanying guidance, but not necessarily alluded to in the flowchart itself, which is supposed to stay fairly high-level and simple. It may be, however, that that box could be reworded to something like "Are future people (even in millions of years) of non-negligible moral worth?".

Ideally someone would read the guidance for each box to ensure they are progressing through the flowchart correctly.

Important Between-Cause Considerations: things every EA should know about

FYI I have had a go at a new flowchart here. I'd be interested to hear your thoughts.

Nathan Young's Shortform

Anyone can comment on a post and upvote comments, so I don't see why a question would be better in that regard.

Also, the post contained a lot of information on potential megaprojects, which is not only quite interesting and educational but also prompts discussion.

Convergence thesis between longtermism and neartermism

A common criticism of economic growth and scientific progress is that they entail sped-up technological development, which could mean greater x-risk. This is why many EAs prefer differential growth/progress and focusing on specific risks.

On the other hand, there are arguments that economic growth and technological development could reduce x-risk and help us achieve existential security (e.g. here), and Will MacAskill alludes to a similar argument in his recent EA Global fireside chat at around the 7-minute mark.

Overall there seems to be disagreement amongst prominent EAs, and it's quite unclear.

With regard to IIDM, I don't see why that wouldn't be net positive.

Democratising Risk - or how EA deals with critics

e.g. The solution you propose of having some probability threshold below which we can ignore more speculative risks also has many issues. For instance, this would seem to invalidate many arguments for the rationality of voting or for political advocacy, such as canvassing for Corbyn or Sanders: the expected value of such activities is high even though the probability of making a difference is often very low (e.g. <1 in 10 million in most US states). Advocating for degrowth also seems extremely unlikely to succeed given the aims of governments across the world and the preferences of ordinary voters.

You seem to assume that voting and engaging in political advocacy are obviously important things to do, and that any argument that says don't bother doing them falls prey to a reductio ad absurdum, but it's not clear to me why you think that.

If all of these actions do in fact have such an incredibly low probability of a positive payoff that one feels one is in a Pascal's Mugging when doing them, then one might rationally decide not to do them.

Or perhaps you are imagining a world in which loads of people stop voting, such that democracy falls apart. At some point in this world, though, I'd imagine voting would stop being a Pascal's Mugging and would be associated with a reasonably high probability of having a positive payoff.
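To make the structure of this disagreement concrete, here is a rough expected-value sketch of the voting argument quoted above. Only the <1 in 10 million probability comes from the quote; the payoff figure is entirely hypothetical, chosen just to show the shape of the calculation:

$$
\mathbb{E}[\text{value of voting}] \;=\; p_{\text{decisive}} \times V_{\text{if decisive}} \;\approx\; \frac{1}{10{,}000{,}000} \times \$10{,}000{,}000{,}000 \;=\; \$1{,}000
$$

On made-up numbers like these the expected value is non-trivial despite the tiny probability, which is exactly the kind of action a hard probability threshold would rule out, and exactly the kind of action the Pascal's Mugging worry says one might rationally decline.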

Movie review: Don't Look Up

Yeah it seems quite polarising. I watched it with my flatmate (not EA, not American) and he absolutely loved it. I thought it was pretty mediocre.

Convergence thesis between longtermism and neartermism

On needing short feedback loops: I'd be interested to hear what longtermist work/interventions you think the community is doing that don't achieve these feedback loops. I'd then find it easier to evaluate your point.

I'm worried that relying on short feedback loops would mean not doing interventions to avoid existential catastrophes, because in a sense the only way to judge the effectiveness of such interventions is to observe their counterfactual impact on reducing the incidence of such catastrophes, which is pretty much impossible, since if one catastrophe happens we're done for.
