All of StevenKaas's Comments + Replies

Since somebody was wondering if it's still possible to participate without having signed up through alignmentjam.com:

Yes, people are definitely still welcome to participate today and tomorrow, and are invited to head over to Discord to get up to speed.

Note that Severin is a coauthor on this post, though I haven't been able to find a way to add his EA Forum account on a crosspost from LessWrong.

We tried to write a related answer on Stampy's AI Safety Info:

How could a superintelligent AI use the internet to take over the physical world?

We're interested in any feedback on improving it, since this is a question a lot of people ask. For example, are there major gaps in the argument that could be addressed without giving useful information to bad actors?

Thanks for reporting the broken links. It looks like a problem with the way Stampy is importing the LessWrong tag. Until the Stampy page is fixed, following the links from LessWrong should work.

There's an article on Stampy's AI Safety Info that discusses the differences between FOOM and some other related concepts. FOOM seems to be used synonymously with "hard takeoff" or perhaps with "hard takeoff driven by recursive self-improvement"; I don't think it has a technical definition separate from that. At the time of the FOOM debate, it was taken more for granted that a hard takeoff would involve recursive self-improvement, whereas now there seems to be more emphasis by MIRI people on the possibility that ordinary "other-improvement" (scaling up and... (read more)

OK, thanks for the link. People can now use this form instead and I've edited the post to point at it.

Like you say, people who are interested in AI existential risk tend to be secular/atheists, which makes them uninterested in these questions. Conversely, people who see religion as an important part of their lives tend not to be interested in AI safety or technological futurism in general. I think people have been averse to mixing AI existential ideas with religious ideas, for both epistemic reasons (worries that predictions and concepts would start being driven by meaning-making motives) and reputational reasons (worries that it would become easier for cr... (read more)

Geoffrey Miller, 1y:
Hi Steven, fair points, mostly. It might be true at the moment that many religious people tend not to be interested in AI issues, safety, or AI X-risk. However, as the debate around these issues goes more mainstream (as it has been in the last month or so), enters the Overton window, and gets discussed more by ordinary citizens, I expect that religious people will start making their voices heard more often. I think we should brace for that, because it can carry both good and bad implications for EAs concerned about AI X-risk. Sooner or later, religious leaders will be giving sermons about AI to their congregations. If we have no realistic sense of what they're likely to say, we could easily be blindsided by a lot of new arguments, narratives, metaphors, ethical concerns, etc. that we haven't ever thought about before (given the largely-atheist composition of both AI research and AI safety subcultures).

Thank you! I linked this from the post (last bullet point under "guidelines for questioners"). Let me know if you'd prefer that I change or remove that.

Yonatan Cale, 1y:
I have a preference that you use your own form if you're ok with managing it (forms.new, and "don't collect email address")

As I understand it, overestimation of sensitivity tails has been understood for a long time, arguably longer than EA has existed, and sources like Wagner & Weitzman were knowably inaccurate even when they were published. Also, as I understand it, although it has gotten more so over time, RCP8.5 has been considered to be much worse than the expected no-policy outcome since the beginning despite often being presented as the expected no-policy outcome. It seems to me that referring to most of the information presented by this post as "news" fails to adequ... (read more)

[anonymous], 2y:
I think you are right that a lot of these points have been around in the scientific literature for a while. What has changed now is that they are definitely mainstream. The Sherwood et al paper has really helped to formalise the findings of the Annan and Hargreaves paper from years ago, and that has all now been recognised by the IPCC. James Annan told me that he did raise the point about priors with Weitzman a while ago but didn't get anywhere.  

One thing that has changed in recent years is that whereas the IEA and others used to estimate that RCP6 was the most likely emissions scenario, it now looks like RCP4.5 is the most likely scenario on current policy. And even that may be too pessimistic.

What does an eventual warming of six degrees imply for the amount of warming that will take place in (as opposed to due to emissions in), say, the next century? The amount of global catastrophic risk seems like it depends more on whether warming outpaces humanity's ability to adapt than on how long warming continues.

[anonymous], 4y:
I agree this is important. I'll try to get to this when I have a bit more time.

I was thinking e.g. of Nordhaus's result that a modest amount of mitigation is optimal. He's often criticized for his assumptions about discount rate and extreme scenarios, but neither of those is causing the difference in estimates here.

According to your link, recent famines have killed about 1M per decade, so for climate change to kill 1-5M per year through famine, it would have to increase the problem by a factor of 10-50 despite advancing technology and increasing wealth. That seems clearly wrong as a central estimate. The spreadsheet based o... (read more)

I think the upper end of Halstead's <1%-3.5% x-risk estimate is implausible for a few reasons:

1. As his paper notes and his climate x-risk writeup further discusses, extreme change would probably happen gradually instead of abruptly.

2. As his paper also notes, there's a case that issues with priors and multiple lines of evidence imply the tails of equilibrium climate sensitivity are much less fat than those used by Weitzman. As I understand it, ECS > 10 would imply paleoclimate estimates are highly misleading and estimates based on the inst... (read more)

Ah, it looks like I was myself confused by the "deaths/year" in line 20 and onward of the original, which represent an increase per year in the number of additional deaths per year. My apologies. At this point I don't understand the GWWC article's reasoning for not multiplying by years an additional time.

My prior was that, since economists argue over the relative value of mitigation (at least beyond low hanging fruit) and present consumption, and present consumption isn't remotely competitive with global health interventions, a cal... (read more)

mchr3k, 5y:
Do you have any particular sources in mind for this? My understanding is that economists are in strong agreement that action now is much cheaper than action in future. Re: 1, I think it's useful to consider concrete examples from history which have killed a large number of people. As per my writeup, in the 20th century, the largest famines killed 10-20M people/decade, so 1-2M people/year, all of which happened when the world had fewer than 4 billion people [source]. So if you think that 1-2M deaths per year is implausible, then you're saying that climate change isn't likely to cause the same kind of agricultural failures as we've previously faced even without serious climate issues.

(edit: I no longer endorse this comment)

We don’t expect to be able to recapture most emitted CO2, so a very conservative value to use would be to attribute 50 years of increased deaths to each emission. Hence, this increases the estimate of lives saved by a factor of 50x.

This seems to be the key disagreement between your estimate and GWWC's. As I understand it, if we reduce emissions for the year X by 1%, different things happen in the two calculations:

  • In GWWC's calculation, every year Y for decades, we prevent 1% of the deaths during the
... (read more)
mchr3k, 5y:
I'm sorry but I don't follow your argument. I'll try to explain my own logic and perhaps you can point out the key step where I'm going wrong.

The 2014 WHO paper provides an estimate for the number of climate-attributed deaths in 2030 and 2050. Let's imagine that these estimates were 30 deaths and 50 deaths. The GWWC approach then assumes a linear relationship between CO2 emissions and deaths, producing a straight line passing through these estimates. So 2030 sees 30 deaths, 2031 sees 31 deaths, 2032 sees 32 deaths, etc. The GWWC approach then subtracts the 2030 estimate from the 2050 estimate to give the change per year in the climate-attributed deaths. In this toy example, that would be a figure of 1 death/year/year.

Now imagine that global emissions drop to zero for a single year in 2030, and that the climate response is instantaneous. Then we'd expect to see 30 deaths in 2030, 30 deaths in 2031, 31 deaths in 2032, 32 deaths in 2033, etc. So over a 50-year period, we'd see 50 saved lives. However, the original GWWC spreadsheet simply takes a fraction of the deaths/year/year figure and declares that the resulting total is the number of deaths averted over all time.
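The toy numbers in this thread can be checked with a short sketch (these are the illustrative figures from the comment above, not real WHO estimates):

```python
# Toy model of the disagreement: a one-year emissions pause in 2030 shifts
# the whole deaths curve back by one year, saving one life per year for as
# long as the curve persists, rather than a one-off fraction of the slope.
def baseline_deaths(year):
    # Linear fit through the toy estimates: 30 deaths in 2030, 50 in 2050,
    # i.e. a slope of 1 death/year/year.
    return 30 + (year - 2030)

def deaths_with_one_year_pause(year):
    # Zero emissions in 2030 with an instantaneous climate response:
    # from 2031 onward the curve is delayed by one year.
    return baseline_deaths(year) if year == 2030 else baseline_deaths(year - 1)

# Lives saved over the 50 years following the pause (2031-2080 inclusive):
saved = sum(baseline_deaths(y) - deaths_with_one_year_pause(y)
            for y in range(2031, 2081))
print(saved)  # 50 — one life saved per year, for 50 years
```

This makes the crux concrete: whether the benefit of a one-year emissions reduction should be counted once, or once per year over the lifetime of the emitted CO2.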

A piece such as this should engage with the direct cost/benefit calculations that have been done by economists and EAs (e.g. Giving What We Can), which make it seem hard to argue that climate change is competitive with global health as a cause area.

How much it would take to stay under a mostly arbitrary probability of a mostly arbitrary level of temperature change is a less relevant statistic than how much future temperatures would change in response to reduced emissions.


mchr3k, 5y:
Okay, I've just posted an analysis of the four relevant impact/cost-effectiveness estimates that I'm aware of. You can see my conclusions here - https://forum.effectivealtruism.org/posts/ynRG6JBvARS2cHu63/review-of-climate-cost-effectiveness-analyses
mchr3k, 5y:
I’ll definitely take a look at the cost-effectiveness calculations and see if I can work references to these into my draft. In particular, I’m interested to find out what assumptions they are based on. The other blog post you shared looks to me to have a key flaw: it models emissions as having a sharp peak, going abruptly from rapid growth to rapid decline. This seems very unlikely to me, and the smoother curve, as growth slows and turns into decline, implies a greater area under the curve and hence a much greater final impact of delay.

My nonconfident best guess at an interpretation is that, according to these estimates, for every tonne of carbon:

Future Indians suffer damages utility-equivalent to the present population of India paying a total of $76

Future Americans suffer damages utility-equivalent to the present population of the USA paying a total of $48

Future Saudis suffer damages utility-equivalent to the present population of Saudi Arabia paying a total of $47

Next are China, Brazil, and the UAE, all with $24, and then a lot of other countries, and the sum of all these numbers is $4... (read more)
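Under this interpretation, the paper's headline figure is a sum of country-level costs. A sketch using only the six figures quoted above (the full country list and the final total are truncated here, so this is a partial sum, not the global figure):

```python
# Per-tonne damages (USD) for the countries quoted above; under the
# interpretation being discussed, the paper's global social cost of carbon
# is the sum of such country-level values over all countries.
country_scc = {
    "India": 76,
    "USA": 48,
    "Saudi Arabia": 47,
    "China": 24,
    "Brazil": 24,
    "UAE": 24,
}
partial_total = sum(country_scc.values())
print(partial_total)  # 243 — the remaining countries account for the rest
```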

I was about to say this and then saw your comment. My impression from the paper is the $417 is a sum of costs to different countries, and for each of them the cost is a present value to the people in that country, with discounting being applied based on the expected amount of economic growth in that country. So I don't think it's calibrated to present-day Americans, but I don't think it's calibrated to the world's poorest either, and I agree the argument doesn't go through.

There's another problem with the quoted claim, wh... (read more)

A nuclear exchange may have the potential to ... possibly lead to the extinction of life on Earth.

I haven't seen anyone seriously argue for this claim and I don't think it's true or true-adjacent.

Luisa_Rodriguez, 5y:
Thanks, that's fair. Edited to say 'possibly lead to human extinction.'

3. The goal for climate change mitigation should be getting to net zero emissions as fast as possible, since anything short of that still causes warming; this goal is absent from many EA write-ups, including the one from 80,000 Hours.

If there's already the goal of reducing emissions in general, with more reduction being better, is there any reason to add a goal about the zero level specifically? EA generally (and I think rightly) just cares about the expected amount of problem reduction, with exceptions where zero matters being things like diseases that can bounce back from a small number of cases.

DPiepgrass, 5y:
I think the zero-goal matters because (1) if you plan for, say, 50% reduction, or even 66%, you might end up with a very different course of action than if you plan for 100% reduction. Specifically, I'm concerned that a renewable-heavy plan may be able to reduce emissions 50% straightforwardly but that the final 25-45% will be very difficult, and that a course correction later may be harder than it is now; (2) most people and groups are focused on marginal emissions reductions rather than reaching zero, so they are planning incorrectly. I trust the EA/rationalist ethos more than any other, to help this community analyze this issue holistically, mindful of the zero-goal, and to properly consider S-risks and X-risks.

What is wrong with it?

If the claims made here from p.13 on are true, it seems like the model can't be reliable. This also disagrees. In general, it seems intuitively like it would be extremely hard to do this kind of statistics and extrapolate to the future with any serious confidence or rely on it for an estimate without a lot more thought. (I haven't tried to look for critiques of the critiques and don't claim to have a rigorous argument.)

Economic activity already goes to wherever it will be the most profitable. I don't see why we wo
... (read more)
kbog, 5y:
OK, CSS5 will address this by looking more broadly at the literature and the articles you cite, or maybe I will just focus more on the economist survey.

My understanding is there are two somewhat separate issues, one being the improper use of uniform priors and the other being a failure to give estimates that take all evidence (GCMs, recent temperatures, paleoclimate, etc) into account, with probability distributions from mostly-independent evidence sometimes having wrongly been taken as confirmation of the same uncertainty range instead of being combined into a narrower one. Do the estimates that you're eyeballing update on every line of evidence? Annan and Hargreaves under some assumptions find numb... (read more)

[anonymous], 5y:
Yes, I think you are in fact right that plausible priors do seem to exclude ECS above 5 degrees. You pick out a major problem in drawing conclusions about ECS: the IPCC does not explain how they arrive at their pdf of ECS, and the estimate seems to be produced somewhat subjectively from various current estimates from instrumental and paleoclimatic data and from their own expert judgement as to what weight to give to different studies. I think this means that they give some weight to pdfs with a very fat tail, which seems to be wrong, given their use of uniform priors. This might mean that their tail estimate is too high.

If the Burke et al. article that you're largely basing the 26% number on is accurate (which I strongly doubt), it seems like trying to cause economic activity to move to more moderate climates might be an extremely effective intervention.

kbog, 5y:
What is wrong with it? Economic activity already goes to wherever it will be the most profitable. I don't see why we would expect companies to predictably err. And, even if so, I don't share the intuition that it might be extremely effective.

This source suggests we’re on for 4.1-4.8C of warming by 2100, so it seems erroneous to assume 2-4C should be our baseline assumption.

It's hard to tell where this site is getting its numbers from, but my understanding is such claims are usually based on misrepresenting the RCP 8.5 emissions scenario as representative of business as usual even though it makes a number of pessimistic assumptions about other uncertainties and is widely considered as more like a worst case scenario than a median case scenario.

As far as I can tell, claims that extremely ... (read more)

tokugawa, 5y:
FYI, the CMIP6 models, to be used for the IPCC's AR6 reporting in 2021, are already producing preliminary results. Quote from the linked article: "Early results suggest ECS values from some of the new CMIP6 climate models are higher than previous estimates, with early numbers being reported between 2.8C (pdf) and 5.8C. This compares with the previous coupled model intercomparison project (CMIP5), which reported values between 2.1C to 4.7C. The IPCC’s fifth assessment report (AR5) assessed ECS to be “likely” in the range 1.5C to 4.5C and “very unlikely” greater than 6C. (These terms are defined using the IPCC methodology.)" The IPCC experts actually toned down the projected temperature range from the Coupled Model Intercomparison Project number 5 models. If they did so in a similar fashion in 2021, we'd get an IPCC AR6 ECS range of roughly 2.3 to 5.2 degrees Celsius, with a tail up to 7 degrees.
[anonymous], 5y:

On Bayesianism - this is an important point. The very heavy-tailed estimates all use a "zero information" prior with an arbitrary cut-off at e.g. 10 degrees or 20 degrees. (I discuss this in my write-up.) This is flawed, and more plausible priors are available which thin out the tails a lot.

However, I don't think you need this to get to there being substantial tail risk. Eyeballing the ECS estimates that use plausible priors, there's still something like a 1-5% chance of ECS being >5 degrees, which means that from 1.5 doublings of GHG concentrations, which seems plausible, there's a 1-5% chance of ~7 degrees of warming.
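The arithmetic behind that last figure can be sketched as follows (a simplification: equilibrium warming is roughly ECS times the number of GHG-concentration doublings, because radiative forcing is approximately logarithmic in concentration):

```python
import math

# Approximate equilibrium warming: ECS (degrees C per doubling of GHG
# concentration) times the number of doublings, since forcing is roughly
# logarithmic in concentration.
def equilibrium_warming(ecs_c_per_doubling, concentration_ratio):
    doublings = math.log2(concentration_ratio)
    return ecs_c_per_doubling * doublings

# 1.5 doublings (a concentration ratio of 2**1.5, about 2.83) at an ECS of
# 5 degrees C gives the "~7 degrees" quoted above:
print(equilibrium_warming(5.0, 2 ** 1.5))  # 7.5
```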