
asolomonr

34 karma · Joined May 2021

Comments (7)

Ah, I see now, thank you so much!

Thank you so much for putting this together; I really appreciate seeing this type of work get such serious attention! Sorry if I'm missing something obvious, but are the sources compiled somewhere? I thought they might be linked in the spreadsheet, but at least for me, whenever I click a hyperlink in the spreadsheet it just opens the same sheet in a different tab. Thanks!

Hi everyone! I've been reading the forum for over a year now but have only very recently joined. I'm an undergrad student in the US studying Public Health and Earth Science. I was first introduced to EA through my interest in animal welfare advocacy, and my current top focuses are now international stability, civilizational collapse/resilience, and biosecurity. I have experience with EA community building, working on issues related to food security and civilizational collapse, and paleoclimate research. I hope that joining the forum will be a way to connect with more people in the broader EA community and to explore current uncertainties in my cause prioritization and possible career trajectory.

I wonder if a heavy dose of skepticism about longtermist-oriented interventions wouldn't result in a somewhat similar mix of near-termist and longtermist prioritization in practice. Specifically, someone might reasonably start with a prior that most interventions aimed at affecting the far future (especially those that don't do so by tangibly changing something in the near term, so that there could be strong feedback loops) come out as roughly a wash. This would put a high burden of evidence on these interventions, so that only a few very well-founded ones would stand out above near-termist-oriented actions. In this view, supposed flow-through effects of near-termist interventions would also be regarded with strong skepticism, so their long-term impact might generally be judged to come out as a wash too, but you'd at least get the short-term benefit. One might then often favor near-term causes because gathering evidence on them is comparatively easy, while for longtermist interventions that are moderately well grounded, the standard reasoning favoring them would kick in. I think this is often roughly what happens, and it might be another explanation for the observation that even proponents of strong longtermism don't generally appear fanatical.

This is piggybacking a bit off of Darius_Meissner's earlier comment that distinguishes between the axiological and deontic claims of strong longtermism (to borrow the terminology of Greaves and MacAskill's paper). Many have pointed out that accepting the former doesn't have to lead to the latter, and this is just one particular line of reasoning for why. But I wonder why there is a need for a philosophical basis for what seems like a bottom line that could be reached in practice even by neglecting moral uncertainty and just embracing empirical uncertainty, incorporating Bayesian priors into EV thinking (as opposed to naive EV reasoning).

The Food Systems Handbook was started at the beginning of the pandemic as an EA-aligned organization (branching off of ALLFED). Not to speak too much for the org, but roughly, its initial focus was collating case studies, news articles, and other useful resources about the emerging (pandemic-induced) food crisis, targeting decision makers. Now much of the focus is on investigating future drivers of food insecurity (i.e. various aspects of climate change, trade restrictions, conflict, etc.), doing lit reviews, and talking with experts to see what policies they can advocate for to most effectively reduce future levels of malnutrition. There was a post about the ensuing humanitarian emergency from about a year ago, and here's a post about the FSH specifically.

(I'm putting this as a comment and not an answer to reflect that I have a few tentative thoughts here, but they're not well developed.)

A really useful source that explains a Bayesian method of avoiding Pascal's mugging is this GiveWell post. TL;DR: much of the variation in EV estimates for situations we know very little about comes from "estimate error", so we'd have very low credence in these estimates. Even if the most likely EV estimate for an action seems very positive, if there's extremely high variance due to having very little evidence on which to base that estimate, then we wouldn't be very surprised if the actual value is zero or even negative. The post argues that we should also incorporate some sort of prior over the probability distribution of impacts we can expect from actions. This basically makes us more skeptical the more outlandish the claim is. As a result, we're actually less persuaded to take an action motivated by an extremely high but unfounded EV estimate than one motivated by an equally unfounded but less extreme EV estimate, which falls closer to our prior about what is generally plausible. This seems to avoid Pascal's mugging. (This was my read of the post; it's entirely possible that I misunderstood something, and/or that persuasive critiques of this reasoning exist that I haven't encountered so far.)
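To make the shrinkage mechanics concrete, here's a minimal sketch of the kind of normal-normal Bayesian update the post describes. The specific numbers and the `posterior` helper are my own toy illustration, not taken from the GiveWell post; the point is just that an estimate with enormous estimate error barely moves the posterior off the prior, while a modest, well-evidenced estimate moves it substantially.

```python
# A minimal sketch (my own toy numbers, not from the GiveWell post) of a
# normal-normal Bayesian update: the posterior mean is a precision-weighted
# average of the prior mean and the EV estimate, so estimates with huge
# "estimate error" barely move us off the prior.

def posterior(prior_mean, prior_var, estimate, estimate_var):
    """Conjugate update: normal prior, normally distributed estimate error."""
    precision = 1 / prior_var + 1 / estimate_var
    mean = (prior_mean / prior_var + estimate / estimate_var) / precision
    return mean, 1 / precision  # posterior mean and variance

prior_mean, prior_var = 1.0, 4.0  # prior: typical actions have modest impact

# Outlandish claim: enormous face-value EV, almost no evidence behind it.
print(posterior(prior_mean, prior_var, 10_000.0, 1e8))  # ~ (1.0004, 4.0)

# Modest claim backed by solid evidence (small estimate error).
print(posterior(prior_mean, prior_var, 5.0, 4.0))       # ~ (3.0, 2.0)
```

On these made-up numbers, the claim with a face-value EV of 10,000 leaves our posterior almost exactly at the prior, while the claim of 5 moves it to 3: the wilder and less grounded the estimate, the less it persuades us.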

I think another point here is whether the very promising but difficult-to-empirically-verify claims you're talking about are made with consideration of a representative spectrum of the possible outcomes of an action.

As a bit of a toy example (I'm not criticizing any actual viewpoint here, this is just hopefully rhetorically illustrative): if you think that improving institutional decision making is really positive, your basic reasoning might look like this. Taking some action to teach decision makers about rationality has some small probability x of reaching a person, who then has some small probability y of being in a position to decide something with hugely positive impact z if decided with consideration of rational principles. Therefore the EV of taking this action is xy * z = a really big positive number. But this only considers the most positive direction in which the action could unfold, since it assumes that within the much bigger 1 - xy probability there are only basically neutral outcomes. It's at least plausible, however, that some of those outcomes are actually quite bad (say you teach a decision maker an incorrect principle, or you present the idea badly and through idea inoculation dissuade someone from becoming more rational, and this leads to some significant negative outcome). The likelihood of doing something bad is probably not that high, but if there's a probability k that your action leads to a very bad outcome of magnitude m, then the actual EV is (xy * z) - (k * m), which might be much lower than if we only considered the positive outcomes.

This might suggest that the EV estimates for the kinds of x-risk mitigation actions you're expressing some skepticism about could be forgetting to account for the possibility of negative impact, which could meaningfully lower their EV. Although people may already be factoring such considerations in and just not making that explicit.
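For concreteness, here is the toy arithmetic from the example above, with numbers that are entirely invented for illustration:

```python
# Toy numbers (entirely made up) for the example above: the headline EV
# counts only the upside branch, while even a small chance of a very bad
# outcome can cut the adjusted EV substantially.

x = 0.01   # probability the outreach reaches a future decision maker
y = 0.001  # probability they end up making the pivotal decision
z = 1e9    # value of the hugely positive outcome
k = 5e-6   # probability the action backfires (e.g., via idea inoculation)
m = 1e9    # magnitude of the very bad outcome

naive_ev = x * y * z              # 10,000.0: counts only the positive branch
adjusted_ev = x * y * z - k * m   # 5,000.0: half the naive estimate here
print(naive_ev, adjusted_ev)
```

With these invented numbers, a backfire probability five hundred times smaller than the success probability still halves the EV, simply because the bad outcome is as large in magnitude as the good one.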