Christopher Chan

Research Associate @ Odyssean Institute
117 karma · Joined Jun 2022 · Working (0-5 years)



  • Research Associate @ Odyssean Institute
  • Community Organiser @ Effective Geoscientists
  • I used to work as an ESG Data Scientist

My academic background lies in urban remote sensing, CNN applications in remote sensing, and geostatistics. I aim to specialise in data pipeline engineering, management, and architecture. When I'm distracted, I enjoy a broad range of other topics, from philosophy to anthropology to various languages. Outside my professional life, I like to climb, travel, cook, and secretly write more code.

I hope to marry Earth Observation, geospatial tech, and other alternative data with traditional professional services to tackle challenges in global development and catastrophic resilience.


Sorry for being 2 years late to the discussion.

There were points I agreed and disagreed with after reading the post and the comment section. EA's tendency to quantify things and run BOTECs is probably the reason this post did not receive the attention it deserved. Some footnotes and caveats would have better expressed the author's opinion.

While I generally agree with the comments made by aogara and timunderwood, I have to highlight my agreement with Deborah on the "Do no harm" oath. However, I would frame it as a ratio of "doing loads of good / doing minimal harm", since, as the comments note, some externalities may be unavoidable, and LCA may also have blind spots in identifying all the non-obvious externalities. That "doing loads of good / doing minimal harm" framing is often missing as an epistemic status in EA thinking, in favour of quick and brash utilitarian BOTEC calculations rather than exploring the complexities examined in Doing EA Better.

I think the marginal damage of bednets chemically leaking, and its contribution to an extinction event, is probably under-assessed, although I have no expertise in chemical contamination [epistemic status: low]. One could argue that bednets are an intermediate solution before the arrival of the malaria vaccine, which per a 2023 update is coming soon; if so, I suspect AMF's efforts are justifiable despite the environmental externalities. In light of this, I hope that is true, and I would then redirect most of AMF's funding to rapidly accelerate vaccine trials and distribution, as we did during COVID, to minimise AMF's externalities. In a broader scope, we should want near-termist charities to eventually disappear, as that would be a sign of a progressing human civilisation: we want fewer charities, not more, since more charities indicate problems growing bigger.

I initially thought the point on insecticide resistance was a good one, and probably the most dangerous scenario, until I considered Guy's point: the comparison of the number of mosquitoes in contact with a bednet versus with spraying is fair.

I also think that excluding climate change from the x-risk funding scheme is epistemically mistaken, but given its lower neglectedness I can see why this was the case.

One contradiction in the movement when considering longtermism (I work on nuclear risks X AI) is that we use future humans as part of the moral justification for a flourishing future, yet we do not account for their suffering-risks, nor for future living organisms in the 6th mass extinction. Yes, nature hangs in the balance, and if you have ever visited a tropical forest you know it is a constant life-and-death battle, but surely the sheer drop in the number of species, and their potential offspring up to an equilibrium, has not been seen in our past BOTEC calculations. We only account for them when we talk about near-termist risk in animal farming. This seems misguided, and a more in-depth evaluation should be done here.

Hi Sarah,

Many thanks for pointing me to this. I had a brief look at the content and the comments, though not yet the preceding and successive posts. I generally remain in agreement with the ALLFED team on the neglectedness of the tail end, and on how solving it also solves for other naturally occurring ASRS scenarios (albeit at lower probabilities). Your argument reminds me of a perspective in animal welfare: if we improve the current condition of the billions of suffering animals, we have more of an excuse to slaughter them, which in turn empowers the meat companies and thus impedes our transition towards a cruelty-free world.

Now I don't think I have anything to add that wasn't covered by the others a year ago, but I want to take this opportunity to steelman your case: if the lower neglectedness and higher tractability of civil movements / policy for denuclearisation down to fewer than 300 nuclear weapons (an approximate threshold for not causing a nuclear winter) outweighs the higher neglectedness and lower tractability of physical interventions (resilient food and supply chain resilience plans), you might be correct!

The tractability can be assessed from the civilian organisations (in and out of EA) that have been working on this and their success rate in stockpile reduction per dollar spent.

But note that at least half of the nuclear weapons deployed are in the hands of authoritarian countries [Russia: 3,859; China: 410; North Korea: 30], which do not have a good track record of listening to civil society. While you could argue that Russia drastically reduced its stockpile at the end of the Cold War, many non-aligned countries [non-NATO, non-Russia bloc] have only increased their stockpiles in absolute terms. I suspect, with low confidence, that reducing stockpiles by a lot is tractable, but complete denuclearisation, squeezing out the last 10%, is an extremely hard uphill battle, if not impossible, as countries continue to look out for their interests. There has been a lot of recent talk that, in hindsight, Ukraine should not have given up its nuclear weapons, and Russia has just lifted its ban on testing them.

Interested in your thoughts here :)

I understand the desire to use cumulative probability to calculate the probability of nuclear war before 2050, but if interdependency of the base rate was not modelled (i.e. 0.0127 × 26 ≈ 0.33, which is equivalent to Metaculus), shouldn't we instead apply a conjugate Beta update to the base rate as each year passes?

- If a detonation does not happen: Beta(1, 79)

- If a detonation happens: Beta(2, 79)

- Annual probability ≈ 0.0127

- Cumulative probability of 21.843% by 2050
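To make the year-by-year conjugate update concrete, here is a minimal Python sketch. The Beta(1, 79) prior and the horizon out to 2050 are taken from above; treating each detonation-free year as one more pseudo-observation (incrementing the Beta's second parameter) is my assumption, and the exact cumulative figure will shift with the prior and horizon chosen, so this is an illustration rather than a reproduction of the 21.843% figure.

```python
# Sequential Beta update for the annual detonation probability.
# Prior: Beta(1, 79) over the per-year probability of a detonation.
alpha, beta = 1.0, 79.0
years = 26  # assumed horizon: roughly 2024 through 2050

p_no_detonation = 1.0
for _ in range(years):
    annual_p = alpha / (alpha + beta)   # posterior predictive for this year
    p_no_detonation *= 1.0 - annual_p   # survive another year without detonation
    beta += 1.0                         # record one more detonation-free year

cumulative_p = 1.0 - p_no_detonation
print(f"cumulative probability of >=1 detonation: {cumulative_p:.3%}")
# The product telescopes to 79/105, giving about 24.8% under these assumptions,
# versus roughly 28.3% if the 0.0127 rate is held fixed: 1 - (1 - 0.0127)**26.
```

The updating pulls the cumulative probability below the fixed-rate figure because each uneventful year shrinks the posterior mean of the annual rate.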

I saw you use a Beta distribution, with its CDF constrained to the probability of a large nuclear war as defined by the Metaculus question. I agree with this; I think it checks out. I also like that you give less weight to the Metaculus question that asks for a probability distribution, since it will be less accurate than taking the Beta distribution over 100 to 1000. I learnt something here about how to evaluate Metaculus questions:

There seem to be two sets of questions regarding nuclear impact and winter:

- The Nuclear Risk Horizon Project (no monetary incentive)

- Nuclear Risk Tournament ($2,685.50 reward, ends on 1st Feb 2024)

I want to understand how you calibrate for the monetary incentive and the limited time frame when weighing the two sets of questions in your research.

For example, contrasting these two questions, which you have addressed in your post:

  1. How many nuclear weapons will be detonated offensively by 2050, if at least one offensive detonation occurs? [HORIZON, non-monetary]
  2. How many non-strategic nuclear weapons will be deployed at the end of 2023? (No recency weighted)? [TOURNAMENT, monetary]

The deployment mean is an order of magnitude higher than the predicted detonations. Surely, even 100 weapons is a very contained regional-war scenario according to Hochmann et al (2021), and a very constrained exchange between Russia/China and NATO. I would think the former question's prediction is unrealistically low given how many tests NK alone has conducted recently. I think you have adequately modelled that with your Beta distribution, but it will be 3x higher than the latter question's unweighted results, which are about 112 weapons at the median and 161 at the 75th percentile (11 Tg soot); the 95th percentile of your calculation, 1.81k, is also 3x the latter question's distribution. Do you think there is a need to reconcile that?

How do you feel about taking the expected value of such numbers (4 billion × 0.45) when this seems far lower than the numbers proposed by more sophisticated modelling, especially the Rutgers team's? I am generally going on the heuristic that prediction markets probably have an upper hand in counting weapons and predicting deployment, and the number and location of detonations, but not in long drawn-out nuclear winter effects (crop yields, trade, famine numbers).

I still need time to engage with the soot-calculation literature, so I will probably write a follow-up on that next week or the week after, if that's okay; that will give me much more focus on asking the right questions and doing the right research.

I am not aware of any centralised EA meta backlog. I am also a little confused about the connection between working on this and the discrepancy between the number of highly engaged EAs and the number of subscribers on 80,000 Hours. Anecdotally, although 80,000 Hours is funded by CEA, many people from conventional NGOs actually use the job board without realising it is part of an EA project, which might inflate the count.

Will the open task backlog be meta-focused or cause-area focused? If cause-area focused, you might get more traction by joining a local org or a professional org, say High Impact Engineers.

Hi Emman,

While I am no expert in biosecurity, your posts on natural and bioengineered fungal pathogens piqued my interest, particularly on crop resilience. I think your posts add unique value to the Western-dominated EA world, and I love seeing more diversity on the EA Forum. Unfortunately, I have a feeling that the formatting of your posts might be undermining your very important message. Having reviewed several of your posts, I would consider:

- Adding a tl;dr to longer posts
- Formatting your posts with the headers available in the EA Forum draft editor
- Crosslinking to important and relevant posts, such as Max Görlitz's map of biosecurity posts
- Writing in more detail generally, to help non-biosecurity readers like me; set the reader up before getting to the main message
- Referencing peer-reviewed sources

Your posts deserve a lot more karma than many others, yet I feel they fall short on readership for the above reasons; these are just my 2 cents.

Thank you for the in-text citations and the quality post; it feels like quality posts backed by peer review have been absent from the EA Forum lately.

Where is catastrophic resilience from volcanic and nuclear risks, biosecurity, and pandemic preparedness?

[This comment is no longer endorsed by its author]

I would love to see more catastrophic resilience interviews after the string of AI safety interviews. Perhaps volcanologists and nuclear security folks:

Mike Cassidy: Associate Professor in Volcanology, authored the best fallout post here.

Follow up with ALLFED and David Denkenberger.

Emad Kiyaei and Kolfinna Tómasdóttir: setting up a civil consortium on AI risk in nuclear weapons...
