Hi, I’m Florian. I am enthusiastic about working on large scale problems that require me to learn new skills and extend my knowledge into new fields and subtopics. My main interests are climate change, existential risks, feminism, history, hydrology and food security.


Thanks David. I think the paper you are referring to might be the one I cited. Herrington also looked at the rate of change (Table 2), where you can see that the current trajectory and rate of change are most similar to the CT and BAU2 scenarios. CT is a scenario like you described (we innovated ourselves out of limits to growth), while BAU2 is a scenario where we are still on a collapse trajectory, but Earth's resources are twice those of the default limits to growth scenario. Therefore, I would argue that we still can't tell whether we simply had more resources on Earth than originally estimated or whether we solved our problem with innovation. But if you have another paper that discusses this as well, I'd be happy to read it.

Just out of curiosity: where does the word "schlep" originate from in the context of AI? I don't think I ever came across it before reading this post.

The food shock resulting from the Russian invasion of Ukraine ultimately turned out to be comparatively small. ALLFED mainly looks at food shocks of >10% of global calories. For events below that, especially regional ones, it is much more cost-efficient to trade grain globally. ALLFED's work is about what we could do if this current mechanism fails. Therefore, Ukraine and the resulting food problems are not really solvable with resilient foods; they are more of a political problem.

I agree with you that it would be great to test out many of the ALLFED solutions before a catastrophe. However, this would cost orders of magnitude more money than ALLFED currently has.

Just a thought here. I am not sure you can literally read this as EA being overwhelmingly left, as it depends a lot on your viewpoint and on how you define "left". EA exists both in the US and in Europe. Policy positions that are seen as left, and especially center-left, in the US would often fall closer to the center or center-right in Europe.

In my personal experience you always get downvotes/disagree votes for even mentioning any problems with gender balance/representation in EA, no matter what your actual point is. 

This is just another data point showing that the existential risk field (like most EA-adjacent communities) has a problem when it comes to gender representation. It fits really well with other evidence we have. See, for example, Gideon's comment under this post here: https://forum.effectivealtruism.org/posts/QA9qefK7CbzBfRczY/the-25-researchers-who-have-published-the-largest-number-of?commentId=vt36xGasCctMecwgi

On the other hand, there seems to be no evidence for your "men just publish more, but worse papers" hypothesis.

Yeah, good point. I'll probably do it differently if I revisit this next year.

Yeah, fair enough. I personally view the Robock et al. papers as the "let's assume everything happens according to the absolute worst case" side of things. From that perspective they can be quite helpful for understanding what might happen: not in the sense that it is likely, but in the sense of what is even remotely in the cards.

Just a side note. The study you mention as especially rigorous in 1) iii) (https://agupubs.onlinelibrary.wiley.com/doi/full/10.1002/2017JD027331) was done at Los Alamos National Laboratory, an organization whose job it is to make sure that the US has a large and working stockpile of nuclear weapons. It is financed by the US military and therefore has a very clear incentive to talk down the dangers of nuclear winter. For this reason, several well-connected people in the nuclear space I talked to have said this study should not be trusted.

An explanation of why it makes sense to talk down the risk of nuclear winter if you want to maintain a working deterrent is described here: https://www.jhuapl.edu/sites/default/files/2023-05/NuclearWinter-Strategy-Risk-WEB.pdf

What exactly confused you about the code? It just normalizes the author names and counts them.
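The original counting script isn't shown here, but the idea it describes (normalize each author name to a canonical form, then tally papers per name) can be sketched roughly like this. The `normalize_name` and `count_authors` helpers are hypothetical names for illustration, not the actual TERRA code:

```python
from collections import Counter

def normalize_name(name: str) -> str:
    """Reduce an author name to a rough canonical form:
    lowercase, drop commas/periods, keep surname + first initial.
    Assumes names are given surname-first, e.g. "Bostrom, N."."""
    parts = name.strip().lower().replace(",", " ").split()
    if not parts:
        return ""
    surname, rest = parts[0], parts[1:]
    initial = rest[0][0] if rest else ""
    return f"{surname} {initial}".strip()

def count_authors(author_lists):
    """Count how many papers each normalized author name appears on."""
    counts = Counter()
    for authors in author_lists:
        for name in authors:
            counts[normalize_name(name)] += 1
    return counts

# Two papers; the same author is spelled two different ways
papers = [["Bostrom, N.", "Torres, P."], ["bostrom, nick"]]
print(count_authors(papers))  # "bostrom n" -> 2, "torres p" -> 1
```

The point of the normalization step is that the same person spelled differently across database entries still collapses into one count; how aggressively you normalize directly affects the tallies.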

It makes sense that someone's publications are undercounted, given how TERRA works: likely not all publications are captured in the first place, and probably not all captured publications were considered existential risk relevant. When I look at Bostrom's papers, I see several that I would not count as directly x-risk relevant.

Where exactly did you find the number for Torres? On their own website (https://www.xriskology.com/scholarlystuff) they list 15 papers, and the list only goes to 2020. Since then, Torres has published several more papers, so this checks out.

I personally did not exclude any papers; I simply used the existing TERRA database. Interestingly, the database only contains one paper by Whittlestone. It seems the current keywords used by TERRA did not catch Whittlestone's work. So yes, this is an undercount.
