All of bshumway's Comments + Replies

If you had to choose just three long-termist efforts as the highest expected value, which would you pick and why?

6
MichaelA
3y
(Just my personal views, as always) Roughly in line with Peter's statement that "I think the longtermist effort with the highest expected value is spending time trying to figure out what longtermist efforts we should prioritize", I recently argued (with some caveats and uncertainties) that marginal longtermist donations will tend to be better used to support "fundamental" rather than "intervention" research. On what those terms mean, I wrote: See that post's "Key takeaways" for the main arguments for and against that overall position of mine.

I think I'd also argue that marginal longtermist research hours (not just donations) will tend to be better used to support fundamental rather than intervention research. (But here personal fit becomes quite important.) And I think I'd also currently tend to prioritise "fundamental" research over non-research interventions, but I haven't thought about that as much and didn't discuss it in the post.

So the highest-EV-on-the-current-margin efforts I'd pick would probably be in the "fundamental research" category. Of course, these are all just general rules, and the value of different fundamental research efforts, intervention research efforts, and non-research efforts will vary greatly.

In terms of specific fundamental research efforts I'm currently personally excited about, these include analyses, from a longtermist perspective, of:

* totalitarianism/dystopias,
* world government (see also),
* civilizational collapse and recovery,
* "the long reflection", and/or
* long-term risks from malevolent actors

Basically, those things seem like variables that might (or might not!) matter a great deal, and (as far as I'm aware) haven't yet been looked into from a longtermist perspective much. So I expect there could be some valuable low-hanging fruit there. Maybe if I had to pick just three, I'd bundle the first two together, and then stamp my feet and say "But I want four!" (I have more thoughts on this that I may write

(Speaking for myself and not others on the team, etc)

 At a very high level, I think I have mostly "mainstream longtermist EA" views here, and my current best guess would be that AI Safety, existential biosecurity, and cause prioritization (broadly construed) are the highest EV efforts to work on overall, object-level. 

This does not necessarily mean that marginal progress on these things is the best use of additional resources, or that they are the most cost-effective efforts to work on, of course.

9
Peter Wildeford
3y
This is not a satisfying answer, but right now I think the longtermist effort with the highest expected value is spending time trying to figure out what longtermist efforts we should prioritize. I also think we should spend a lot more resources on figuring out if and how much we can expect to reliably influence the long-term future, as this could have a lot of impact on our strategy (such as becoming less longtermist, or more focused on broad longtermism, or more focused on patient longtermism, etc.). I don't have a third thing yet, but we are aiming to do both of these projects within Rethink Priorities.

I don't think that all flow-through effects are (or even can be) quantified by this sort of analysis. It is surely much easier to measure the growth from an investment account than to quantify the impact of a human life or the downstream effects of improving a human life. As far as I am aware, efforts to quantify this sort of effect are not yet developed to the level where I would feel very confident about a side-by-side comparison. If you are aware of something you believe allows for such a comparison, I would be very interested in reading it.

Also donating 10% to honor my GWWC pledge and investing the rest for future giving. This year was a little different from our planned giving, as I was able to find several unique opportunities to donate through a social worker contact in India. The economic impact of the pandemic initially created some crisis-type situations where migrant workers were unable to work or return home, and many of the most vulnerable in urban slums experienced truly critical conditions. Because of our direct contact to people doing work with those people, we were reasonably conf... (read more)

I find it interesting that "giving opportunities whose primary route to impact is making more financial or human resources available to be 'spent' on the highest-impact opportunities at a later point in time" were intentionally excluded. One might argue that, from a longtermist perspective, the primary route to impact of most EA interventions (including those typically viewed as short-termist) will manifest most of their impact via flow-through effects. The compounding effects of investing in the untapped human potential of the global poor now is... (read more)

Thanks for your comment. Please note though that most types of "flow-through effects" (including those in your example, if I understand you correctly) are included in the analysis.

Investment-like giving opportunities (as defined in the report) are only a very small subset of interventions with substantial flow-through effects, namely those whose gains are reprioritised towards the highest-impact opportunities at a later point in time. Giving to them is similar to investing to give in that both can benefit from exogenous learning.

I also think well-being is not the ideal metric for what type of development would reduce x-risk either. When I mention the Gross National Happiness metric, this is just one measure currently in use, and it actually includes things like good governance and environmental impact among many other things. My point was that growth in GDP is a poor measure of success, and that creating a better metric of success might be a crucial step in improving the current system. I think a measure which attempts to quantify some of the things you mention would be wonderful to

... (read more)
3
MichaelA
4y
Oh, ok. I knew of "gross national happiness" as (1) a thing the Bhutan government talked about, and (2) a thing some people mention as more important than GDP without talking precisely about how GNH is measured or what the consequences of more GNH vs more GDP would be. (Those people were primarily social science teachers and textbook authors, from when I taught high school social science.)

I wasn't aware GNH had been conceptualised in a way that includes things quite distinct from happiness itself. I don't think the people I'd previously heard about it from were aware of that either. Knowing that makes me think GNH is more likely to be a useful metric for x-risk reduction, or at least that it's in the right direction, as you suggest.

At the same time, I feel that, in that case, GNH is quite a misleading term. (I'd say something similar about the Happy Planet Index.) But that's a bit of a tangent, and not your fault (assuming you didn't moonlight as the king of Bhutan in 1979).

I already think technology is at a point where welfare does not have to depend on fossil fuel consumption. This is why the efforts to have low carbon or carbon neutral development like the Global Green New Deal and other international efforts are crucial. I don’t think the western world is a model to be followed as much as it is a warning of what not to do in many ways. But yeah, I think we are already at a place where development doesn’t have to require a larger carbon footprint, we may just lack the political willpower to implement those technologies at

... (read more)
2
MichaelA
4y
A useful concept here might be that of an "environmental Kuznets curve": There is both evidence for and against the EKC. I'm guessing the evidence varies for different aspects of environmental quality and between regions. I'm not an expert on this, but that Wikipedia section would probably be a good place for someone interested in the topic to start.

I think I broadly agree, but that it's also true that present-day welfare is cheaper if we use fossil fuels than low/no carbon fuels (if we're ignoring things like carbon taxes or renewables subsidies that were put in place specifically to address the externalities). I think carbon mitigation is well worth the price (including the price of enacting e.g. carbon taxes) when we consider future generations, and perhaps even when we consider present generations' entire lifespans (though I haven't looked into that). But there are some real tensions there, for people who are in practice focused on near-term effects.

I think my point is that we don’t know all that much about what that would look like. I have my own theories but they may be completely off base because this research is fairly uncommon and neglected. I think economic growth may not even be the best metric for progress but maybe some derivative of the Gross National Happiness Index or something of that nature. I do think that there may be bidirectional benefit from focusing at the intersection of x-risk and global development. I love the work ALLFED is doing BTW! 

2
MichaelA
4y
Epistemic status: I've only spent perhaps 15 minutes thinking about these specific matters, though I've thought more about related things.

I'd guess that happiness levels (while of course intrinsically important) wouldn't be especially valuable as a metric of how well a global health/development intervention is reducing existential risks. I don't see a strong reason to believe increased happiness (at least from the current margin) leads to better handling of AI risk and biorisk. Happiness may correlate with x-risk reduction, but if so, it'd probably be due to other variables affecting both of those variables.

Metrics that seem more useful to me might be things like:

* quality of reasoning and evidence used for key political and corporate decision-making
  * Though operationalising this is of course difficult
* willingness to consider not just risks but also benefits of technological and economic development
  * This is tricky because I think people often overestimate or overweight the risks from various developments (e.g., GMO crops), especially if our focus is on just the coming years or decades. So we'd want to somehow target this metric to the "actually" risky developments, or to "considering" risks in a reasonable way rather than just in general.
* levels of emissions
* levels of corruption

The last two of those metrics might be "directly" important for existential risk reduction, but also might serve as a proxy for things like the first two metrics or other things we care about.
2
Denkenberger
4y
Thanks!

Yes, the US pandemic response in particular is evidence that the wealth of a country does not seem to be the most important factor in effective response to threats. Also, the “boring apocalypse” scenario seems much more probable to me than any sort of “bang” or rapid extinction event, and I think there is a lot that could be done in the realm of global development to help create a world more robust to that kind of slow burn.

This is great! I’m glad these things are at least on the agenda. I will be following with interest to see what comes of this.

Yes, that is a major risk with this kind of thing, and I cited that article in the disclaimer. I think there is almost certainly a real convergence, but the strength of that convergence is debatable. Finding the areas where these cause areas intersect may really be a great opportunity, though, and one which EA is uniquely capable of researching and making a positive impact on. So it is good to be skeptical of the convenient-convergence phenomenon, but that shouldn't make us blind to ways in which real convergence may be a convenient opportunity to have an outsized impact.