All of jackva's Comments + Replies

My sense is that it is not a big priority.

However, I would also caution against the view that expected climate risk has increased over the past years.

Even if impacts are faster than predicted, most GCR-climate risk probably does not come from developments in the 2020s, but from emissions paths over this century.

And the big story there is that expected cumulative emissions have decreased substantially (see e.g. here).

As far as I know no one has done the math on this, but I would expect that the decrease in likelihood of high warming futures dominates somewhat high... (read more)

Even if one is skeptical of the detailed numbers of a cost effectiveness analysis like this (as I am), I think it is nonetheless pretty clear that this 1M spent was a pretty great bet:

  1. When I talked to ITIF in 2020, they were pretty clear about how transformative the Let's Fund campaign had been for their fundraising.
  2. Given the amount of innovation-related decision making that occurred in the run-up to and early Biden administration -- what became the IIJA, CHIPS, and IRA, probably the largest expansion of energy innovation activity in decades -- significan
... (read more)

"Pyramid scheme" has a new meaning.

I am also just beginning to think about this more, but some initial thoughts:

  • Path dependency from self-amplifying processes -- Thinking about model generations as forks where significant changes in the trajectory become possible (e.g. crowding in a lot more investment, as has happened with ChatGPT/GPT4, but also, as has also happened, a changed Overton window). I think overall this introduces a dynamic where the extremes of the scenario space become more likely, with social dynamics such as a strong increase in investment or, on the other side, stricter regul
... (read more)
17
jackva
22d

I agree with you that the 2018 report should not have been used as primary evidence for CATF cost-effectiveness for WWOTF (and, IIRC, I advised against it and recommended an argument based more on landscaping considerations with leverage from advocacy and induced technological change). But this comment is quite misleading with regards to FP's work, as we have discussed before:

  1. I am not quite sure what is meant with "referencing it", but this comment from 2022 in response to one of your earlier claims already discusses that we (FP) have not been using that e
... (read more)
9
MatthewDahlhausen
22d
(For those in the comments, you can track prior versions of these conversations in EA Anywhere's cause-climate-change channel.)
  1. Last time I checked, GG's still linked to FP's CATF BOTEC on nuclear advocacy. Yes, I understand FP no longer uses that estimate. In fact, FP no longer publishes any of its BOTECs publicly. However, that hasn't stopped you from continuing to assert that FP hits around $1/ton cost-effectiveness, heavily implying CATF is one such org, and its nuclear work being the likely example of it. The BOTEC remains in FP's control, and it has yet to include a disclaimer. Please stop saying you can hit $1/ton based on highly speculative EV calcs with numbers pulled out of thin air. It is not credible and is embarrassing to those of us who work on climate in EA.
  2. I never intended to assert that FP still endorses REDD+. Merely to point out that the 2018 FP analysis of REDD+ (along with CCS and nuclear advocacy) was a terrible basis for Will to use in WWOTF for the $1/ton figure. While FP no longer endorses REDD+, FP's recent reports contain all the same process errors that Lief points out about the 2018 report - lack of experience, over-reliance on orgs they fund, best guesses, speculation.

It seems like this number will increase by 50% once FLI (Foundation) fully comes online as a grantmaker (assuming they spend 10%/year of their USD 500M+ gift).

https://www.politico.com/news/2024/03/25/a-665m-crypto-war-chest-roils-ai-safety-fight-00148621

Interesting, thanks for clarifying!

Just to fully understand -- where does that intuition come from? Is it that there is a common structure to high impact? (e.g. if you think APs are good for animals you also think they might be good for climate, because some of the goodness comes from the evidence of modular scalable technologies getting cheap and gaining market share?)

8
Arepo
23d
Partly from a scepticism about the highly speculative arguments for 'direct' longtermist work - on which I think my prior is substantially lower than most of the longtermist community's (though I strongly suspect selection effects, and that this scepticism would be relatively broadly shared further from the core of the movement). Partly from something harder to pin down: that good outcomes do tend to cluster in a way that e.g. GiveWell seem to recognise, but AFAIK have never really tried to account for (in late 2022, they were still citing that post while saying 'we basically ignore these'). So if we're trying to imagine the whole picture, we need to have some kind of priors anyway. Mine are some combination of considerations like:
  • there are a huge number of ways in which people tend to behave more generously when they receive generosity, and it's possible the ripple effects of this are much bigger than we realise (small ripples over a wide group of people that are invisibly small per-person could still be momentous);
  • having healthier, more economically developed people will tend to lead to more economically developed regions (I didn't find John's arguments against randomistas driving growth persuasive - e.g. IIRC it looked at the absolute effect size of randomista-driven growth without properly accounting for the relative budgets vs other interventions. Though if he is right, I might make the following arguments about short-term growth policies vs longtermism);
  • having more economically developed countries seems better for global political stability than having fewer, which reduces the risk of global catastrophes;
  • having more economically developed countries seems better for global resilience to catastrophe than having fewer, which reduces the magnitude of global catastrophes;
  • even 'minor' (i.e. non-extinction) global catastrophes can substantially reduce our long-term prospects, so reducing their risk and magnitude is a potentially big deal

I don't think these examples illustrate that "bewaring of suspicious convergence" is wrong.

For the two examples I can evaluate (the climate ones), there are co-benefits, but there isn't full convergence with regards to optimality.

On air pollution, the most effective interventions for climate are not the most effective interventions for air pollution, even though decarbonization is good for both.
See e.g. here (where the best intervention for air pollution would be one that has low climate benefits, reducing sulfur in diesel; and I think if that chart were... (read more)

6
Arepo
23d
Hey Johannes :) To be clear, I think the original post is uncontroversially right that it's very unlikely that the best intervention for A is also the best intervention for B. My claim is that, when something is well evidenced to be optimal for A and perhaps well evidenced to be high tier for B, you should have a relatively high prior that it's going to be high tier or even optimal for some related concern C. Where you have actual evidence available for how effective various interventions are for C, this prior is largely irrelevant - you look at the evidence in the normal way. But when all interventions targeting C are highly speculative (as they universally are for longtermism), that prior seems to have much more weight.

Fascinating stuff!

I am curious how you think about integrating social and political feedback loops into timeline forecasts.

Roughly speaking, (a) when we remain in the paradigm of relatively predictable progress (in terms of amount of progress, not specific capabilities) enabled by scaling laws, (b) we put significant probability on being fairly close to TAI, e.g. within 10 years, (c) it remains true that model progress is clearly observable by the broader public,

then it seems that social and political factors might drive a large degree in the variance of e... (read more)

3
Zershaaneh Qureshi
22d
Hi Jack, thanks for your comment! I think you've raised some really interesting points here.  I agree that it would be valuable to consider the effect of social and political feedback loops on timelines. This isn't something I have spent much time thinking about yet - indeed, when discussing forecast models within this article, I focused far more on E1 than I did on E2. But I think that (a) some closer examination of E2 and (b) exploration of the effect of social/political factors on AI scenarios and their underlying strategic parameters - including their timelines! - are both within the scope of what Convergence's scenario planning work hopes to eventually cover. I'd like to think more about it! If you have any specific suggestions about how we could approach these issues and explore these dynamics, I'd be really keen to hear them. 

but that a priori we should assume diminishing returns in the overall spending, otherwise the government would fund the philanthropic interventions.

I think this is fundamentally the crux -- many of the most valuable philanthropic actions in domains with large government spending will likely be about challenging / advising / informationally lobbying the government in a way that governments cannot self-fund.

Indeed, when additional government funding does not reduce risk (does not reduce the importance of the problem) but is affectable, there can probably be cases where you should get more excited about philanthropic funding to leverage as public funding increases.

Yeah, that's true, though in Luke's treatment both are discussed and described as roughly equal -- there's no indication given that either should be more promising on priors and, as you say, they will often overlap.

(Last comment from me on this for time reasons)

  • I think if you look at philanthropic neglectedness, the total sums across types of capital are not a good proxy. E.g., as far as I understand the nuclear risk landscape, it is both true that government spending is quite large but also that there is almost no civil society spending. This means that additional philanthropic funding should be expected to be quite effective on neglectedness grounds. Many obvious things are not done.
  • The numbers on nuclear risk spending by 80k are entirely made up and not described
... (read more)
2
Vasco Grilo
2mo
Thanks for elaborating. I got this was your point, but I am not convinced it holds. I would be curious to understand which empirical evidence informs your views. Feel free to link to relevant pieces, but no worries if you do not want to engage further.

I do not think this necessarily qualifies as satisfactory empirical evidence that philanthropic neglectedness means high marginal returns. There may be non-obvious reasons for the obvious interventions not having been picked. In general, I am thinking that for any problem it is always possible to pick a neglected set of interventions, but that a priori we should assume diminishing returns in the overall spending, otherwise the government would fund the philanthropic interventions.

For reference, here is some more context on 80,000 Hours' profile: The spending of 4.04 G$ I mentioned is just 4.87 % (= 4.04/82.9) of the cost of maintaining and modernising nuclear weapons in 2022 of 82.9 G$.

Good point. I guess the quality-adjusted contribution from those sources is currently small, but that it will become very significant in the next few years or decades.

Agreed. I estimated a difference of 8 OOMs (factor of 59.8 M) in the nearterm annual extinction risk per funding.

Agreed. On the other hand, I would rather see discussions move from neglectedness towards cost-effectiveness analyses.

I can't open the GDoc on AI safety research.

But, in any case, I do not think this works, because philanthropic, private, and government dollars are not fungible, as all groups have different advantages and things they can and cannot do.

If looking at all resources, then 80M for AI safety research also seems an underestimate as this presumably does not include the safety and alignment work at companies?
 

2
Vasco Grilo
2mo
I think I should be considering all sources of funding. Everything else equal, I expect a problem A which receives little philanthropic funding, but lots of funding from other sources, to be less pressing than a problem B which receives little funding from both philanthropic and non-philanthropic sources. The difference between A and B will not be as large as naively expected because philanthropic and non-philanthropic spending are not fungible. However, if one wants to define neglectedness as referring to just the spending from one source, then the scale should also depend on the source, and sources with less spending will be associated with a smaller fraction of the problem. In general, I feel like the case for using the importance, tractability and neglectedness framework is stronger at the level of problems. Once one starts thinking about considerations within the cause area and increasingly narrow sets of interventions, I would say it is better to move towards cost-effectiveness analyses. Yet, given the above, I would say one should a priori expect efforts to decrease AI extinction risk to be more cost-effective at the current margin than ones to decrease nuclear extinction risk. Note: the sentence just above already includes the correction I will mention below.

Sorry! I have fixed the link now.

It actually did not include spending from for-profit companies. I thought it did because I had seen they estimated just a few tens of millions of dollars coming from them:

| Company name | Number of employees [1] | AI safety team size (estimated) | Median gross salary (estimated) | Total cost per employee (estimated) | Total funding contribution (estimated) |
|---|---|---|---|---|---|
| DeepMind | 1722 | 5-20 | $200k | $400k | $1.6-15m |
| OpenAI | 1268 | 5-20 | $290k | $600k | $2.9-20m |
| Anthropic | 164 | 10-40 | $360k | $600k | $6.2-32m |
| Conjecture | 21 | 5-15 | $150k | $300k | $1.2-5.5m |
| Total | | | | | $32m |

I have now modified the relevant bullet in my analysis to the following: My point remains qualitatively the same, as the spending on decreasing AI extinction risk only increa

Nuclear risk philanthropy is about 30M/y, it seems you are comparing overall nuclear risk effort to philanthropic effort for AI?

In terms of philanthropic effort AI risk strongly dominates nuclear risk reduction.

2
Vasco Grilo
2mo
Hi Johannes, My intention is to compare quality-adjusted spending on decreasing nuclear and AI extinction risk, accounting for all sources (not just philanthropic ones).

Sorry for not being super clear in my comment, it was hastily written. Let me try to correct:

I agree with your point that we might not need to invest in govt "do something" under your assumptions (your (1)).

I think the point I disagree with is the implicit suggestion that we are doing much of what would be covered by (1). I think your view is already the default view. 

  • In my perception, when I look at what we as a community are funding and staffing, > 90% of this is only about (2) -- think tanks and other Beltway type work that is focused on make ac
... (read more)

Thanks, Jamie! Indeed quite helpful to know that there's nothing obvious I am missing.

Yes, agree on the last point -- I am just surprised this has not been done as EA grant makers frequently face the decision, I think.

Is there a process for more time-sensitive grants (where a decision would be needed earlier)?

3
eleanor mcaree
2mo
Unfortunately not, we disburse grants from late June and throughout July. This is because we award grants by comparing all of the applications we receive (typically ~200) against each other, rather than assessing applications one by one as they are submitted. This process takes several months to complete. 

This seems right to me on labs (conditional on your view being correct), but I am wondering about the government piece -- it is clear and unavoidable that government will intervene (indeed, it already is), that AI policy will emerge as a field between now and 2030, and that decisions early on likely have long-lasting effects. So wouldn't it be extremely important, also on your view, to affect how government acts now?

4
Linch
2mo
I want to separate out:
  1. Actions designed to make gov'ts "do something" vs
  2. Actions designed to make gov'ts do specific things.
My comment was just suggesting that (1) might be superfluous (under some set of assumptions), without having a position on (2). I broadly agree that making sure gov'ts do the right things is really important. If only I knew what they are! One reasonably safe (though far from definitely robustly safe) action is better education and clearer communications:
> Conversely, we may be underestimating the value of clear conversations about AI that government actors or the general public can easily understand (since if they'll intervene anyway, we want the interventions to be good).

Thanks, good shout!

From what I've seen, their work does not quite fit what I am looking for -- they are not comparative and they are also more narrowly focused on left-leaning protest movements, which is more narrow than what I am trying to get at here.

I think it's useful to add some quantitative intuitions here:

Quick BOTEC:

  • In the US there's a 50% chance a candidate not committed to democracy will become President; in Germany, the chance that a party not committed to democracy becomes part of the next government is very low, certainly not more than 0.5% (>=100x difference). There are certainly other ways to think about this, but I think the basic captured intuition - despite all the turmoil, the threat to German democracy is low, in absolute terms and comparatively - seems correct (and, indeed, the recent protests see
... (read more)
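The headline comparison in the first bullet can be written out as a one-line calculation. This is a minimal sketch using only the probabilities stated above; everything here is illustrative rather than a precise estimate:

```python
# Stated probabilities from the BOTEC above (rough, illustrative numbers).
p_us = 0.50       # chance a candidate not committed to democracy becomes US President
p_germany = 0.005 # stated upper bound for an anti-democratic party joining Germany's next government

# Headline ratio of risks: ~100x higher in the US under these assumptions.
risk_ratio = p_us / p_germany
```

The point of writing it down is only that the conclusion ("threat to German democracy is comparatively low") follows directly from the two stated inputs, so disagreement has to be about those inputs rather than the arithmetic.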

In my experience, many of those arguments are bad and not cause-neutral, though to me your take seems too negative -- cause prioritization is ultimately a social enterprise and the community can easily vet and detect bad cases, and having proposals for new causes to vet seems quite important (i.e. the Popperian insight, individuals do not need to be unbiased, unbiasedness/intersubjectivity comes from open debate).

3
Joseph Lemien
2mo
You make a good point. I probably allow myself to be too affected by claims (such as "saving the great apes should be at the center of effective altruism"), when in reality I should simply allow the community sieve to handle them.

How do you think about the relevance of evidence from pre-1800/pre-industrialized societies to questions of whether climate change will induce civilizational collapse going forward?

I am always confused about why people do these studies, because society has clearly changed so dramatically that there seems to be very little to learn from how these societies responded to climate anomalies.

1
FJehn
2mo
That's the big question of contemporary history. I discussed this a bit more here: https://existentialcrunch.substack.com/p/lessons-from-the-past-for-our-global But in general I think that while our societies have changed a lot, many things have stayed the same. Especially, the food system and its importance for societies is still very similar. Also, if you read a lot of history, you come to realize that humans often tend to follow the same trajectories, in the sense of "history does not repeat, but it rhymes".  Views like the one from Lenton that I discussed in the post are also independent of the industrial revolution shift and if we could validate them, I think this would make a stronger case for the validity of historical comparisons. 

I don't think IPBES is relevant evidence here because ~no one in the US cares about biodiversity as a national policy issue. It has no salience whatsoever, it is not something that can be polarized.

1
niplav
2mo
Thank you! His name was somewhat hard to google, because of another (apparently more Google-famous) David Goldberg.

I agree on that. My point is more forward-looking and in terms of counterfactuals: when there is an opportunity to shape an issue now, making it institutionally look more like climate (with an IPCC equivalent) is risky given the current political environment.

5
KyleGracey
2mo
I'd agree that it doesn't seem that the IPCC caused political polarization. My observation is that it has been a victim or target of that polarization (as have most efforts at climate action become subject to polarization attempts). If the IPCC hadn't existed, I think there would still be just as much effort to polarize climate action. There would just be one less target/victim of that polarization. On the topic of the current political environment and whether it makes sense to create institutions today: It's worth noting that the IPCC isn't the only such institution, and some of these were created much more recently, in a time when, according to this argument, there was more polarization. For example, the Intergovernmental Science-Policy Platform on Biodiversity and Ecosystem Services (IPBES) has a similar structure to the IPCC, but was created in 2012. It has not been credibly accused, to my knowledge, of polarizing conversations on biodiversity. Rather, it is generally viewed as positively contributing to an understanding of the scientific consensus, and remaining uncertainties, around biodiversity and ecosystem services and the available policy options.

That's why I specified "inside-climate" -- yes, those considerations you mention are out of scope for stuff I can fund.

This is an aside, but I would not trust CCC on climate.

5
Vasco Grilo
3mo
I see; sorry for the misunderstanding. I was thinking that figuring out whether marginal emissions are good/bad would still be "within climate", whereas comparing climate interventions with ones in other areas would be "outside climate". I would be curious to know why. I know there are concerns around the founder, but would be keen to know about specific criticism of CCC's cost-benefit analyses of climate interventions.

Thanks, Danny!

I think this is a misunderstanding.

I am not saying the IPCC caused polarization by something they did but rather by what they represent:

  1. The IPCC and similar-style organizations can be used as targets by rising anti-globalist populists; referring to an international scientific body as a reason to do something seems politically very risky when the reputation of science and of international institutions is lower than it used to be and a significant part of the electorate actively resents those authorities.

  2. Insofar as the constraint on GCR risk-reductio

... (read more)
1
DannyBressler
3mo
I agree that what you’re saying could in principle be a problem, but I don’t think that’s how it’s actually played out in the case of the IPCC. I think there are many reasons why climate change is a politically polarized issue, and I personally don’t think that the IPCC played a material role in increasing polarization directly or indirectly (and IMO their impact went in the other direction, for the reasons I outlined above).

Are you interested in within-climate cruxes?

2
Vasco Grilo
3mo
Thanks for asking, Johannes! I would be curious to know more about the cruxes you have in mind, but I guess I would not have much interest. I suppose your cruxes within climate are about prioritising across interventions with the intention of decreasing emissions, whereas I mainly wonder about whether decreasing CO2eq emissions is good/bad at the margin. My best guess is that it is good, but it is not that resilient. I have argued that more global warming might be good, but I no longer endorse the premises of my analysis. It relied on minimising the existential risk from climate change and the food shocks caused by abrupt sunlight reduction scenarios (e.g. nuclear winter), but I would now say these pose astronomically low extinction risk. As a result, I think it makes more sense to analyse my question about emissions in terms of figuring out the optimum emissions trajectory to improve nearterm welfare or boost nearterm economic growth, as proxied by e.g. global disease burden until 2050 and real GDP in 2050. However, I assume I would not add much value here. Those types of questions are much less neglected than the ones I was trying to answer in my original post, and my sense is that there is already scepticism about current climate policies being optimal from these perspectives. For example:
  • David D. Friedman argues against a much higher social cost of CO2eq.
  • The Copenhagen Consensus Centre (CCC) estimated interventions to mitigate climate change are not that effective to boost economic growth. The prints below have the benefit-to-cost ratios for trade, health and climate change interventions[1].
  1. ^ I do not know whether the differences across areas would be smaller if one compared just top interventions, but a priori I would expect the CCC to provide similarly representative interventions across the various areas they considered.

Interesting idea, thanks for writing this!

How do you think about the risk of this kind of move, modeled loosely after the IPCC for climate change? In particular, that it will make GCR mitigation more like climate change politically?

1. The IPCC emerged / became strong (obviously IPCC was founded in 1988 before the end of the Cold War, but most of its success came after) at a time where there was a lot of appetite for global cooperation and scientific input in environmental policy-making and, despite that, it failed to meaningfully shape the trajectory of cl... (read more)

3
DannyBressler
3mo
Thanks for this comment! I am definitely in favor of country-level efforts to address GCRs and to produce reports like this one, the same way that the U.S. produces the National Climate Assessment despite there also being IPCC reports. In this case, I think those two efforts are more complementary than cannibalistic. E.g., folks that work on the National Climate Assessment in the US often also work on the IPCC, and doing the work of organizing/prepping for one helps with organizing/prepping for the other. And having an international IPGCR effort may also encourage countries to undertake their own national GCR efforts.

Also, my sense is that the IPCC tends to be more conservative in its findings/statements compared to the National Climate Assessment because it requires a level of buy-in and sign-off from its 195 member countries, whereas the National Climate Assessment is produced by a single country. My hypothesis is that a similar dynamic may end up being the case here, where the IPGCR may produce findings that are more conservative than what a single country might produce. But this is still helpful because it gives a sense of which sorts of policies may be palatable globally.

In terms of this effort being a potential cause of political polarization on GCRs, my sense is that in the climate case, the IPCC has not been a driver of the political polarization we've seen. Of course, there has been a lot of political division on climate action, but my sense is that the IPCC itself has played little role in causing this. On the contrary, I've seen the IPCC as playing a major role in effectively establishing a set of basic knowledge (and corresponding levels of confidence, as their findings are always given with a level of confidence) around climate change, that those skeptical of climate action find it hard to argue with. There is a lot more that is less certain, which those skeptical of climate action now debate over (e.g. the benefits and costs of climate action, ability

Thanks for the update, Will!

As you are framing the choice between work on alignment and work on grand challenges/non-alignment work needed under transformative AI, I am curious how you think about pause efforts as a third class of work. Is this something you have thoughts on?

We agree on economics, it's more that techno-economic analysis is quite different (just had someone on my team do techno-economic work that would be relevant to this list, but she is maximally far from a social scientist in skillset and self-identification).

I think for some parts of social psychology it might be considered a social science, though in general most social scientists would say the definition of social science is something like "the dependent variables are societal-level phenomena", by which economics, political science, sociology etc. are socia... (read more)

4
MichaelStJules
3mo
Social sciences are concerned with much more than societal level phenomena. Relationships and interactions between people count, too. FWIW, my experience in Canada has been that psychology is typically part of the faculty of social sciences (or a combined one with humanities and/or arts).

I love that you are doing this!

I think a broader title might be more helpful, though -- many of the questions you list are not really social science questions, but, for example, about consumer psychology (a behavioral science) or techno-economics (e.g. the alt protein R&D return questions).

I.e. there is a broader set of people who might help answer these questions, many of which would not understand themselves as social scientists.

6
MichaelStJules
3mo
I'd consider psychology and economics in general to be social sciences, and that would include consumer psychology and at least parts of techno-economics. However, almost all of the questions in Other questions are definitely not social science questions, and instead animal behaviour/ethology, animal cognition, zoology more generally, ecology and philosophy, although some approaches in common with social sciences might be useful. Also "What are the most tractable and cost-effective interventions to improve wild animal welfare?" seems more like a generalist and/or interdisciplinary research question, although it could involve some social science.

(I am pretty unsure I understood this correctly, so this comment might be a mistake, posting anyway as it might be clarifying for others as well if so)

It seems to me that there are two dimensions here:

(a) whether or not a statement is comparative, and (b) whether or not a statement is confounded by an unobservable.

Comparative statements can be confounded when the comparison standard is not made explicit, which seems to be your main critique. If I understand you correctly, you see the main response in non-comparative first-order evaluations.

But shouldn't, in many ... (read more)

Thanks, and sorry if I was too nitpicky then.

I am not a GHD expert, but I would expect someone who has a high school diploma in the richest country in Africa to be a lot better off than the typical GD recipient, who seems to be from the poorest strata of the poorest countries.

And so, yeah, I agree one would probably need a 50-100x expected multiplier to make this work. I am not saying this is not possible, I just thought the bar stated here was significantly too optimistic.

5
Karthik Tadepalli
4mo
I picked South Africa because Harambee works there, but the same issue - employers don't know who is good to hire so job seekers struggle to find jobs - is true across Africa and for much poorer populations than high school educated workers. But the point would have been better demonstrated with livelihood interventions for farmers.

I suspect that it shouldn't be too hard to find one where spending $1 generates more than $10 in income, which is roughly the bar for a GiveWell top charity.

 

This seems wrong to me in that both of your examples are constituencies that are quite a bit better off than GiveDirectly recipients, for whom that would hold, i.e. the actual multiplier would need to be a lot higher or apply to constituencies as poor as GD recipients.

2
Karthik Tadepalli
4mo
Yeah, hence the caveat with roughly. I actually don't think they're much better off - the former group are unemployed and thus have basically no income! - but I feel pretty sanguine about generating $50 or $100 in income per $1 spent if your intervention operates at scale, just because the unit costs of solving an information friction seem trivially small. (Also, business operators are better off but the potential to multiply business income is way higher.) The easiest way to get this would be through agricultural livelihood interventions. Farmers are the extreme poor, and they have tons of frictions to market transactions, so you are targeting the right population and also getting market-based leverage.
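One way to make the "bar scales with how well-off the beneficiaries are" point in this exchange concrete is a rough log-utility sketch. The richness factors below are made-up illustrative numbers, not estimates of any actual population:

```python
# GiveWell's rough bar: ~$10 of income generated per $1 spent,
# benchmarked at GiveDirectly-recipient levels of poverty.
givewell_bar = 10

def required_multiplier(richness_factor):
    """Income multiplier needed to clear the same welfare bar, assuming
    log utility: $1 of income to someone k times richer is worth ~1/k as much."""
    return givewell_bar * richness_factor

# Illustrative: a population 5-10x richer than GD recipients would need
# roughly a 50-100x income multiplier to clear the same welfare bar.
low, high = required_multiplier(5), required_multiplier(10)
```

The log-utility scaling is an assumption for illustration; it is one common way to cash out the intuition that the same dollar of income matters less to a better-off beneficiary, and it reproduces the 50-100x range discussed above.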

Hi Vasco,

Thanks for your thoughtful comment!

It took me a while to fully parse, but here are my thoughts, let me know if I misunderstood something.

I/ Re the 3000x example, I think I wasn't particularly clear in the talk and this is a misunderstanding resulting from that. You're right to point out that the expected uncertainty is not 3000x.

I meant this more to quickly demonstrate that if you put a couple of uncertainties together it quickly becomes quite hard to evaluate whether something meets a given bar, the range of outcomes is extremely large (if on reg... (read more)

Vasco Grilo · 4mo
Thanks for the clarifications, Johannes! I meant we should in theory just care about r = E("CE of A")/E("CE of B")[1], and pick A over B if the expected cost-effectiveness of A is greater than that of B (i.e. if r > 1), even if A was worse than B in e.g. 90 % of the worlds.

In practice, if A is better than B in 90 % of the worlds (in which case the 10th percentile of "CE of A"/"CE of B" would be 1), r will often be higher than 1, so focussing on r or E("CE of A")/E("CE of B") will lead to the same decisions.

If r is what matters, then to investigate whether one's decision to pick A over B is robust, the aim of the sensitivity analysis would be ensuring that r > 1 under various plausible conditions. So, instead of checking whether the CE of A is often higher than the CE of B, one should be testing whether the expected CE of A is often higher than the expected CE of B. In practice, it might be the case that:

* If r > 1 and A is better than B in e.g. 90 % of the worlds, then the conclusion that r > 1 is robust, i.e. we can be confident that A will continue to be better than B upon further investigation.
* If r > 1 and A is better than B in e.g. just 25 % of the worlds, then the conclusion that r > 1 is not robust, i.e. we cannot be confident that A will continue to be better than B upon further investigation.

How do you think about adaptation (e.g. economic growth, adoption of air conditioning, and migration)? I forgot to finish this sentence in my last comment.

[1] Note E(X/Y) is not equal to E(X)/E(Y).
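The footnote's point that E(X/Y) is not equal to E(X)/E(Y) is easy to check numerically. A minimal Monte Carlo sketch (the uniform distributions here are illustrative choices of mine, not estimates from this thread):

```python
import random

random.seed(0)
n = 100_000

# Independent illustrative "cost-effectiveness" draws for A (X) and B (Y).
xs = [random.uniform(1, 3) for _ in range(n)]
ys = [random.uniform(1, 3) for _ in range(n)]

def mean(values):
    return sum(values) / len(values)

# E(X/Y): average the per-world ratios. For these uniforms this
# converges to ln(3) ~ 1.10, not 1.
e_of_ratio = mean([x / y for x, y in zip(xs, ys)])

# E(X)/E(Y): ratio of the averages, which converges to 2/2 = 1.
ratio_of_means = mean(xs) / mean(ys)
```

Even with identically distributed X and Y, the two quantities differ by roughly 10 % here, and the gap grows the more mass the denominator's distribution puts near zero.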

Thanks, Luke!

Uncertainty
As we frequently point out, one should take the estimates with a grain of salt and consider the reported uncertainty (e.g. the old estimate had something like 0.1 USD/tCO2e to 10 USD/tCO2e) and, IIRC, the impact report also reports that these estimates are extremely uncertain and reports wide ranges.

As we discussed in our recent methodology-focused update, we think large uncertainty is unavoidable when operating in climate as a global decadal challenge with the most effective interventions inherently non-RCT-able (FWIW, I would thin... (read more)

It seems good to me if the forum team took more action here against this post, for example removing the section on Ben Pace, which can clearly be interpreted as retaliatory. I don't see why we would assume good faith for that part of the post.

The moderation team's reaction here seems a bit unbalanced.

Thanks, Vasco, for the great comment, upvoted! I am traveling for work right now, but we'll try to get back to you by ~mid-week.

AI might kill us all but in the meantime there will be some great summaries.

Thanks for doing this!

As a climate person trying to have a balanced perspective on this, I find that the framing of climate here does not come across as very balanced. @John G. Halstead might have more detailed comments, but it seems that examples are selectively chosen in one direction (motivating the severity of the risk).

[anonymous] · 5mo

I think it is a very hard area to provide an accurate outline of, and to do that you need to go beyond reading the abstracts of papers and look at the assumptions in those papers, which typically combine very pessimistic warming, very pessimistic economic growth, and limited or no adaptation. I think a lot of your analysis errs in a pessimistic direction.

  1. [edit: misread the first point]: "The IPCC’s 6th Assessment Report predicts that, even if we fail to undertake significant further action, it’s very unlikely that we’ll reach 3°C or more of warm
... (read more)

I am extremely pro alternative proteins (see e.g. here) but I think we still need to be more honest about the climate impacts of agriculture, both in terms of epistemic hygiene but also in terms of argumentative strategy (I don’t think we need to exaggerate the case for APs – the case is good already! – and by exaggerating some claims we are making the whole thing less believable).

In the beginning of the interview it is discussed as a huge, huge contributor to climate change, a major driver, without presenting any numbers.

The exact numbers would depen... (read more)

Thanks so much! I know the problem of late answers :)

I think even for something that seems quite certain at the intervention level (if you think that is true for malaria vaccines), one still needs to account for funding and activity additionality, which makes the estimate more uncertain and, relatively speaking, lowers it compared to GD, where the large size of the funding gap ensures funding and activity additionality near 1 (i.e. no discount).
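To make the additionality discount concrete, here is a toy sketch (all numbers are hypothetical placeholders, not estimates from this thread): a high headline cost-effectiveness multiple shrinks once you multiply in the probability that the marginal donation is actually additional, while a large funding gap keeps GiveDirectly's discounts near 1.

```python
def expected_multiple(headline, funding_additionality, activity_additionality):
    """Discount a headline cost-effectiveness multiple (vs. a GiveDirectly
    baseline of 1x) by funding and activity additionality factors in [0, 1]."""
    return headline * funding_additionality * activity_additionality

# Hypothetical leveraged intervention: 10x on paper, but only 50% likely to be
# funding-additional and 60% likely to be activity-additional -> ~3x expected.
leveraged = expected_multiple(10, 0.5, 0.6)

# GiveDirectly: large funding gap, so both additionality factors are ~1.
baseline = expected_multiple(1, 1.0, 1.0)
```

The point is not the specific numbers but the structure: additionality enters multiplicatively, so uncertainty about it bites hardest on interventions whose headline estimates rely on crowded funding landscapes.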

Given that Open Philanthropy seems to believe that typical GiveWell recommendations are dominated by more leveraged ones (e.g. using advocacy or induced technological change), at least for risk-neutral donors, I am a bit confused by the anchoring on GiveWell charities.

Even if GD were closer to AMF than GiveWell thinks, this would not put GD close to the best thing one can do to improve human welfare, unless one applies a very narrow frame (risk-aversion, restricted to what is highly scalable via existing charities right now).

Or, put a bit differently:

  • (1) We live in a world that
... (read more)
NickLaing · 6mo
Love this and I mostly agree with your points. I do think though that GiveWell is the easiest thing to compare to in this case, and that's probably fair enough. Comparing to very different and harder-to-measure policy and tech work is less easy to understand and feels a bit disparate.

My only tiiiiiiny nitpick would be your point 2 - I don't think it's that hard to positively shape the trajectory of malaria vaccines (although yes, trade policy and influencing development aid is hard-ish). The uncertainties are high, yes, but especially with malaria vaccines I would hazard a guess that even the lower end of the effect range might compete with GiveDirectly. Can't be bothered trying to calculate that right now though :D :D :D

Thanks for doing this and kudos for publishing results that are in tension with your (occasional) employer.

Vasco Grilo · 6mo
Hi Johannes, I have the impression you are quite honest about (not overestimating) the risk from climate change, so thanks for that too!

Interesting to see a clear statement by OP on the expected dominance of advocacy and other leveraged interventions over traditional direct delivery work.

(Full disclosure: I sometimes work out of the same coworking space as Justus and Vegard, and we occasionally have team lunches. Given that they were a potential grantee for some time (and indeed became a grantee for a small grant in 2023), I've avoided further socializing beyond those office contexts. They also don't know I am writing this.)

This is an exciting broadening of work!

I haven't always agreed with the underlying theory of change of the climate work, but I've consistently experienced the team of Future Matters as quite thoughtful about policy change and social movements and cultivating an expertise that is quite rare in EA and seems underprovided.

I think the idea of an energy descent is extremely far outside the expert consensus on the topic, as Robin discusses at length in his replies to that post.

This is nothing we need to worry about.
