Community Director | EA Netherlands
I have a background in (moral) philosophy, risk analysis, and moral psychology. I also did some x-risk research.
"2. Judgement calibration test
The Judgement Calibration test is supposed to do two things: first, make sure that students have really read the material and know its content; and second, test whether they can properly calibrate their confidence regarding the truth of their own answers."
This is really cool Simon, and awesome that you actually got permission to give actual grades by this mechanism. Curious how it works out in practice!
On 2: I know very little about the Chernobyl meltdown and meltdowns in general, but those numbers seem to be referring to the actual consequences of the meltdown. My understanding is that there was a substantial emergency response that limited the severity of the meltdown. I'm not sure, but I can imagine a completely unmanaged meltdown being substantially worse?

Also on 1: I have no idea how hard it is to turn a nuclear power plant off, but I doubt that it's very easy for outsiders with no knowledge (and who are worried about survival, so don't have time to research how to do it safely?)
Sure, but the delta you can achieve with anything is small, depending on how you delineate an action. True, x-risk reduction is on the more extreme end of this spectrum, but I think the question should be "can these small deltas/changes add up to a big delta/change? (vs. the cost of that choice)" and the answer to that seems to be "yes."
Is your issue more along the following?
If so, I would reject 2, because I believe we shouldn't try to quantify things at those levels of precision. This does get us to your question "How does XR weigh costs and benefits?", which I think is a good question to which I don't have a great answer. It would be something along the lines of: "there's a grey area where I don't know how to make those tradeoffs, but most things do not fall into the grey area, so I'm not worrying too much about this. If I wouldn't fund something that supposedly reduces x-risk, it's either because I think it might increase x-risk, or because I think there are better options available for me to fund." Do you believe that many more choices fall into that grey area?
That sounds like a better title to me :) Kudos on the adaptation.
Thanks for the highly detailed post! Seems like it was a cool event.
Nitpicking: this is the second time I've seen an evaluation described as a "postmortem", and it throws me off. To me, "postmortem" suggests the project was overall a failure, while it clearly wasn't! "Evaluation" seems like a better word?
I wrote some thoughts on risk analysis as a career path in my shortform here, which might be somewhat helpful. I echo people's concern that this program focuses overly much on non-anthropogenic risk.
I also know an EA that did this course - I'll send her details in a PM. :)
Giving Green was fortunate enough to receive a grant from the EA Infrastructure Fund with the express purpose of addressing this criticism by bringing our methods closer in line with those of the EA community and implementing other suggestions.
This is really interesting! I am happy to see that the cooperative nature of that disagreement is being continued, and I look forward to the progress of the person who ends up taking this role. It sounds like a demanding set of qualifications (strong research, ops, communication, and management skills...), so I hope you're able to find someone!
I think it stands for "depersonalisation" and "derealisation"
This is a small write-up of when I applied for a PhD in Risk Analysis 1.5 years ago. I can elaborate in the comments!

I believed doing a PhD in risk analysis would teach me a lot of useful skills to apply to existential risks, and it might allow me to directly work on important topics. I worked as a Research Associate on the qualitative side of systemic risk for half a year. I ended up not doing the PhD because I could not find a suitable place, nor do I think pure research is the best fit for me. However, I still believe more EAs should study something along the lines of risk analysis, and it's an especially valuable career path for people with an engineering background.
Why I think risk analysis is useful:
EA researchers rely a lot on quantification, but use a limited range of methods (simple Excel sheets or Guesstimate models). My impression is also that most EAs don't understand these methods enough to judge when they are useful or not (my past self included). Risk analysis expands this toolkit tremendously, and teaches stuff like the proper use of priors, underlying assumptions of different models, and common mistakes in risk models.
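To make the contrast concrete, here is a toy Monte Carlo sketch in Python of the kind of tool risk analysis adds beyond point-estimate spreadsheets. All parameter ranges and the two-factor model are hypothetical, purely for illustration: the point is that propagating full distributions (rather than multiplying best guesses) reveals how skewed the resulting risk estimate can be.

```python
import random

# Toy risk model: annual probability of a bad outcome as the product of
# two uncertain factors. The priors below are made up for illustration.
random.seed(0)

def sample_annual_risk():
    # Log-uniform prior: we're uncertain about the order of magnitude.
    p_trigger = 10 ** random.uniform(-4, -2)      # annual chance of a triggering event
    # Uniform prior over a plausible range.
    p_escalation = random.uniform(0.01, 0.5)      # chance the event escalates
    return p_trigger * p_escalation

samples = sorted(sample_annual_risk() for _ in range(100_000))

mean = sum(samples) / len(samples)
median = samples[len(samples) // 2]
p95 = samples[int(0.95 * len(samples))]

print(f"mean: {mean:.2e}, median: {median:.2e}, 95th percentile: {p95:.2e}")
# The mean sits well above the median: with right-skewed uncertainty, a
# point estimate built from 'best guess' inputs understates expected risk.
```

This is essentially what tools like Guesstimate do under the hood; the value of formal risk-analysis training is knowing when such a model's assumptions (independence of factors, choice of priors) hold and when they quietly break.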
The field of Risk Analysis
Risk analysis is a pretty small field, and most of it focuses on risks of limited scope and risks that are easier to quantify than those EAs commonly look at. There is a Society for Risk Analysis (SRA), which publishes Risk Analysis (the main journal of the field). I found most of their study topics not so interesting, but it was useful to get an overview of the field, and there were some useful contacts to make (1). The EA-aligned org GCRI is active and well-established in the SRA, but no other EA orgs are.
Topics & advisers
I hoped to work on GCR/x-risk directly, which substantially reduced my options. It would have been useful to just invest in learning a method very well, but I was not motivated to research something not directly relevant. I think it's generally difficult to make an academic career as a general x-risk researcher, and it's easier to research one specific risk. However, I believe this leaves open a number of cross-cutting issues.

I have a shortlist of potential supervisors I considered/contacted/was in conversation with, including in public policy and philosophy. I can provide this list privately on request.

Best grad programs:
The best background for grad school seems to be mathematics or, more specifically, engineering (I did not have this, which excluded a lot of options). The following 2 programs seemed most promising, although I only investigated PRGS in depth:
(1) For example, I had a nice conversation with the famous psychology researcher Paul Slovic, who now does research into the psychology involved in mass atrocities. https://psychology.uoregon.edu/profile/pslovic/
Good points! I broadly agree with your assessment, Michael! I'm not at all sure how to judge whether Sagan's alarmism was intentionally exaggerated or the result of unintentionally poor methodology. That said, I think we need to admit that he was making the argument in a (supposedly) pretty impoverished research landscape on topics such as this. It's only to be expected that researchers in a new field make mistakes that seem naive once the field is further developed.
I stand by my original point to celebrate Sagan > Petrov, though. I'd rather celebrate (and learn from) someone who acted pretty effectively, even if imperfectly, in a complex situation, than someone who happened to be in the right place at the right time. I'm still incredibly impressed by Petrov though! It's just... hard to replicate his impact.