A significant share of greenhouse gas emissions comes from uncontrolled underground coal fires. I can't find any authoritative source on their share of global CO2 emissions; I've seen estimates of both 0.3% and 3% quoted for coal seam fires in China alone, which is perhaps the world's worst offender. Another rough calculation put 2-3% of global CO2 emissions down to coal fires. They also seem to have serious local health and economic effects, even compared to coal burning in a power plant (the smoke is totally unfiltered, though it's usually diffuse in rural areas).

There are some methods available now and on the horizon to try to put the fires out, and some have been used in practice - see the Wikipedia article. However, the continued presence of so many of these fires indicates a major problem to be solved with new techniques and/or funding for the use of existing ones.

Coal seam fires were not mentioned by Let's Fund or by GWWC. They were also not mentioned in the Founders Pledge climate change report, but we can plug them into the report's methodology. We can estimate that the world's coal fires will cause 30 gigatons of CO2e emissions by 2050 (based on 3% of current global emissions, and assuming the absolute amount stays fixed through 2050), which gives 4 points for importance. I can't find any examples of philanthropic funding dedicated to the issue, so it gets 16 points for philanthropic neglectedness.

Meanwhile, the US seems to face a little more than $1bn in expenses for all completed, current, and projected coal fire projects, which could mean a few tens or hundreds of millions of dollars per year. This op-ed from last year accuses environmentalists of wholly ignoring the problem of coal seam fires. Jay Inslee's 'Green New Deal' proposal (probably the most ambitious and detailed) does not explicitly mention the issue. So governments and the private sector across all countries combined can be presumed to spend less than $4bn on the problem, giving it 16 points for non-philanthropic neglectedness too. This yields a score of 4 + 0.5*(16+16) = 20 points, tying it for 4th place among 7 climate change efforts, though I don't think this methodology is very accurate.
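For transparency, the score works out as follows (a sketch of the arithmetic only; the point values are the ones I assigned above, and the 0.5 weighting on the two neglectedness axes is my reading of the Founders Pledge methodology):

```python
# Rough Founders Pledge-style score for coal seam fires.
# Point values are this post's assignments, not the report's own numbers.
importance = 4            # ~30 Gt CO2e expected by 2050
phil_neglect = 16         # no dedicated philanthropic funding found
non_phil_neglect = 16     # < $4bn presumed global gov/private spending

score = importance + 0.5 * (phil_neglect + non_phil_neglect)
print(score)  # 20.0
```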

Another approach is to directly estimate the cost-effectiveness. The various early failed and rejected attempts to extinguish the Centralia, PA fire each cost $80,000-$360,000 in today's money, according to the Wikipedia article. We might imagine that an early, quick $500,000 of extra funding would have extinguished it, but this relies on hindsight (knowing that the existing efforts would fail, and that the fire would grow so much), so let's assume instead that a strategy of going the extra mile on ten potentially big coal fires (=$5 million) would have saved Centralia. (The US has probably already updated to much higher standards, but India, China, and Indonesia may have plenty of similarly low-hanging fruit.) A proposed final solution of putting out the fire by literally excavating the whole thing was quoted at $660 million in 1984, which is about $1.6 billion today (CPI-adjusted).

I can't find estimates of Centralia's emissions, but the Mulga, AL fire seems to put out a mean flux of 3,400 grams of CO2 per square meter per day, and the Centralia fire covers 3,700 acres, which is about 15 million square meters. If the flux is the same, then 40 years of Centralia burning (note: it's actually expected to continue for centuries) creates roughly 745 million metric tonnes of CO2. Thus the gargantuan excavation project actually has a reasonably attractive cost-effectiveness of $2.15 per metric tonne of CO2e averted, whereas a proactive early strategy would have a remarkable $0.007 per metric tonne of CO2e averted. For comparison, effective climate change charities are estimated to avert a tonne of CO2e for between $0.12 and $1, the offset costs more widely quoted outside EA are mostly $3-10 per tonne, and the social cost of carbon (ignoring animal impacts) is usually estimated at $25-200 per tonne. With today's technology, I imagine we could do something much cheaper than excavating the entire burning area.

It's good to make sure that we have a causal story for why something would be inappropriately neglected, rather than just trusting numbers. In this case, extinguishing coal seam fires does not punish fossil fuel companies, which makes it less appealing to the public. In addition, the local victims are mining towns (poor and white in America; Chinese, Indian, or Indonesian abroad) which (a) get comparatively little sympathy from the predominantly urban progressive Western environmentalist movement and (b) are not very environmentalist themselves. Finally, it doesn't involve building cool-looking new technology or virtuous tree-planting.

In several of the articles on coal seam fires I have seen statements that a lack of money constrains efforts to extinguish them. However, as far as I can tell, there are no ready channels for donating money to this work. That gap would have to be overcome by philanthropic entrepreneurs and research funders. We could also ask our representatives to push a relevant bill (it seems like a measure that could win bipartisan support).

Comments (15)



I thought this post was interesting, thoroughly researched and novel. I don't really recall if I agree with the conclusion but I remember thinking "here's a great example of what the forum does well - a place for arguments about cause prioritisation that belong nowhere else"

This example of a potentially impactful and neglected climate change intervention seems like good evidence that EAs should put substantially more energy towards researching other such examples. In particular, I'm concerned that the neglect of climate change has more to do with the lack of philosophically attractive problems relative to e.g. AI risk, and less to do with marginal impact of working on the cause area.

My impression is that few people are researching new interventions in general, whether in climate change or other areas (I could name many promising ideas in global development that haven't been written up by anyone with a strong connection to EA).

I can't speak for people who individually choose to work on topics like AI, animal welfare, or nuclear policy, and what their impressions of marginal impact may be, but it seems like EA is just... small, without enough research-hours available to devote to everything worth exploring.

(Especially considering the specialization that often occurs before research topics are chosen; someone who discovers EA in the first year of their machine-learning PhD, after they've earned an undergrad CS degree, has a strong reason to research AI risk rather than other topics.)

Perhaps we should be doing more to reach out to talented researchers in fields more closely related to climate change, or students who might someday become those researchers? (As is often the case, "EAs should do more X" means something like "these specific people and organizations should do more X and less Y", unless we grow the pool of available people/organizations.)


An example of what I had in mind was focusing more on climate change when running events like Raemon's Question Answering hackathons. My intuition says that it would be much easier to turn up insights like the OP than insights of "equal importance to EA" (however that's defined) in e.g. technical AI safety.

Reducing global poverty, and improving farming practices, lack philosophically attractive problems (for a consequentialist, at least) - yet EAs work heavily on them all the same. And climate change does have some philosophical issues with model parameters like discount rates. Admittedly, they are a little more messy and applied in nature than talking about formal agent behavior.

> Reducing global poverty, and improving farming practices, lack philosophically attractive problems (for a consequentialist, at least) - yet EAs work heavily on them all the same.

I think this comes from an initial emphasis towards short-term, easily measured interventions (promoted by the $x saves a life meme, drowning child argument, etc.) among the early cluster of EA advocates. Obviously, the movement has since branched out into cause areas that trade certainty and immediate benefit for the chance of higher impact, but these tend to be clustered in "philosophically attractive" fields. It seems plausible to me that climate change has fallen between two stools: not concrete enough to appeal to the instinct for quantified altruism, but not intellectually attractive enough to compete with AI risk and other long-termist interventions.



This is too ad hoc, dividing three or four cause areas into two or three categories, to be a reliable explanation.

I don't have much to contribute but I appreciated this writeup – I like it when EAs explore cause areas like this.

I especially appreciate the "causal story" section of the post! I'm not sure I fully believe the explanation*, but it's always good to propose one, rather than handwaving away the reasons that a good cause would be so neglected (an error I frequently see outside of EA, and occasionally in EA-aligned work on other new cause areas).

*The part that rings truest to me is "no ready channels for donation". Ignorance seems more likely than deliberate neglect; I can picture many large environmental donors being asked about coal seam fires and reacting with "huh, never thought about it" or "is that actually a problem?"

Could someone start a business putting these fires out and make money selling carbon credits?

If we had a cap-and-trade system then presumably it could allow for that (no idea if they actually do, in the few countries where cap-and-trade is implemented).

There are also many companies that sell carbon credits to commercial and individual customers who are interested in lowering their carbon footprint on a voluntary basis. (Wikipedia)

This post was awarded an EA Forum Prize; see the prize announcement for more details.

My notes on what I liked about the post, from the announcement:

It’s not often that I hear about a potential EA cause area that I’d literally never thought about before, but kbog’s post on coal seam fires gave me — and, I expect, many other readers — that rare experience.

Some especially good features of the post:

  • Provides strong evidence for the claim that coal seam fires are neglected: they’ve never been mentioned by the EA research organizations that do the most work on climate change, and they receive relatively little funding from outside of EA.
  • Includes enough analysis to show that the intervention is plausibly more efficient than other means of emissions reduction, without trying to argue that it is, say, the “best” climate-related intervention.
  • Ends with a causal story for why the cause might be neglected. If you encounter an important idea that no one else seems to have picked up on, it’s good to go a step further and think: “Wait — how did this happen? Why hasn’t someone picked up the $20 bill already?”
    • Philanthropy is far from an efficient market, of course, but it’s still worth thinking about why a promising intervention in a hugely popular cause area might be overlooked.

Centralia is in Washington State, where Jay Inslee is the governor. He's billed himself as the climate change candidate, and has pushed for leading-edge anti-CC policy here. Might be worth really digging into the politics and budget of the state to look for explanations. It might be that he's informed by environmental lobbying groups like the Sierra Club. If coal seam fires are off their radar, then the issue might never get seen by state government.

Overall, I’d recommend thinking about cause of neglect both from the standpoint of public bias and institutional chain of transmission.

[This comment is no longer endorsed by its author]

A top Google hit for “extinguish coal seam fires” says the government paid $42 million to relocate Centralia residents when their early attempts to put the fire out failed. That suggests to me that they had a much higher estimate than yours of the cost of putting it out.

https://blog.globalforestwatch.org/fires/embers-under-the-earth-the-surprising-world-of-coal-seam-fires

[This comment is no longer endorsed by its author]