Community Director | EA Netherlands
I have a background in (moral) philosophy, risk analysis, and moral psychology. I also did some x-risk research.
This is a small write-up of when I applied for a PhD in Risk Analysis 1.5 years ago. I can elaborate in the comments! I believed doing a PhD in risk analysis would teach me a lot of useful skills to apply to existential risks, and it might allow me to work directly on important topics. I worked as a Research Associate on the qualitative side of systemic risk for half a year. I ended up not doing the PhD because I could not find a suitable place, nor do I think pure research is the best fit for me. However, I still believe more EAs should study something along the lines of risk analysis, and it's an especially valuable career path for people with an engineering background.
Why I think risk analysis is useful:
EA researchers rely a lot on quantification, but use a limited range of methods (simple Excel sheets or Guesstimate models). My impression is also that most EAs don't understand these methods well enough to judge when they are useful or not (my past self included). Risk analysis expands this toolkit tremendously, and teaches things like the proper use of priors, the underlying assumptions of different models, and common mistakes in risk models.
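To make the contrast concrete, here is a minimal sketch of the kind of uncertainty-aware estimate risk analysis encourages, as opposed to a single point estimate in a spreadsheet. All numbers and the intervention itself are invented for illustration; the lognormal priors simply stand in for expert judgment.

```python
import random

random.seed(0)

def sample_cost_effectiveness():
    # Hypothetical intervention: both cost and effect are uncertain.
    # Lognormal priors are a common choice for strictly positive,
    # skewed quantities; the parameters here are made up.
    cost = random.lognormvariate(4.0, 0.5)    # dollars per unit delivered
    effect = random.lognormvariate(0.0, 0.8)  # impact units per unit delivered
    return effect / cost                      # impact per dollar

samples = sorted(sample_cost_effectiveness() for _ in range(100_000))

mean = sum(samples) / len(samples)
median = samples[len(samples) // 2]
print(f"mean:   {mean:.5f}")
print(f"median: {median:.5f}")
# With skewed inputs the mean sits well above the median -- a single
# point estimate in a spreadsheet cell would hide this whole distribution.
```

The point is not the specific numbers but the workflow: stating priors explicitly, propagating them through the model, and reading off a distribution rather than one number.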
The field of Risk Analysis
Risk analysis is a pretty small field, and most of it focuses on risks of limited scope, and on risks that are easier to quantify than those EAs commonly look at. There is a Society for Risk Analysis (SRA), which publishes Risk Analysis (the main journal of the field). I found most of their study topics not so interesting, but it was useful to get an overview of the field, and there were some useful contacts to make (1). The EA-aligned org GCRI is active and well-established in SRA, but no other EA orgs are.
Topics & advisers
I hoped to work on GCR/x-risk directly, which substantially reduced my options. It would have been useful to just invest in learning a method very well, but I was not motivated to research something not directly relevant. I think it's generally difficult to make an academic career as a general x-risk researcher, and it's easier to research one specific risk. However, I believe this leaves open a number of cross-cutting issues. I have a shortlist of potential supervisors I considered/contacted/was in conversation with, including in public policy and philosophy. I can provide this list privately on request.
Best grad programs:
The best background for grad school seems to be mathematics or, more specifically, engineering (I did not have this, which excluded a lot of options). The following two programs seemed most promising, although I only investigated PRGS in depth:
(1) For example, I had a nice conversation with the famous psychology researcher Paul Slovic, who now does research into the psychology involved in mass atrocities. https://psychology.uoregon.edu/profile/pslovic/
Good points! I broadly agree with your assessment Michael! I'm not at all sure how to judge whether Sagan's alarmism was intentionally exaggerated or the result of unintentional poor methodology. And then, I think we need to admit that he was making the argument in a (supposedly) pretty impoverished research landscape on topics such as this. It's only expected that researchers in a new field make mistakes that seem naive once the field is further developed.
I stand by my original point to celebrate Sagan > Petrov though. I'd rather celebrate (and learn from) someone who acted pretty effectively, if imperfectly, in a complex situation, than someone who happened to be in the right place at the right time. I'm still incredibly impressed by Petrov though! It's just... hard to replicate his impact.
Ah yes, that makes sense, and I hadn't thought of that.
Have you considered running different question sets to different people (randomly assigned)?
It could expand the range of questions you can ask.
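For illustration, here is a minimal sketch of what such random assignment could look like (sometimes called a "planned missing data" design): everyone answers a core set, and each respondent is randomly assigned a subset of the optional modules. All question text and module names below are invented.

```python
import random

random.seed(42)

# Hypothetical question pool: a core set everyone answers, plus
# optional modules that are randomly assigned per respondent.
core = [
    "How long have you been involved?",
    "How satisfied are you overall?",
]
modules = {
    "careers": ["Has EA changed your career plans?"],
    "community": ["Do you attend local events?"],
    "giving": ["Did you donate in the last year?"],
}

def questionnaire_for(respondent_id: int) -> list[str]:
    # Each respondent gets the core set plus 2 of the 3 modules.
    # Every question is still asked of roughly 2/3 of the sample,
    # while each individual survey stays short.
    chosen = random.sample(sorted(modules), k=2)
    return core + [q for m in chosen for q in modules[m]]

print(questionnaire_for(1))
```

The trade-off is smaller per-question sample sizes, so this works best for questions where a random subsample is still informative.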
I have a concept of paradigm error that I find helpful.
A paradigm error is the error of approaching a problem through the wrong, or an unhelpful, paradigm. For example, trying to quantify the cost-effectiveness of a longtermist intervention when there is deep uncertainty.
Paradigm errors are hard to recognise, because we evaluate solutions from our own paradigm. They are best uncovered by people outside of our direct network. However, it is more difficult to productively communicate with people from different paradigms as they use different language.
It is related to what I see as
Paradigm errors are one level higher: they are the wrong type of model.
Relevance to EA
I think a sometimes-valid criticism of EA is that it approaches problems with a paradigm that is not well-suited for the problem it is trying to solve.
I agree with this: a lot of the argument (and related things in population ethics) depends on the zero-level of well-being. I would be very interested to see more work on figuring out what/where this zero-level is.
I have recently been toying with a metaphor for vetting EA-relevant projects: that of a mountain climbing expedition. I'm curious if people find it interesting to hear more about it, because then I might turn it into a post.
The goal is to find the highest mountains and climb them, and a project proposal consists of a plan + an expedition team. To evaluate a plan, we evaluate
To evaluate a team, we evaluate
Curious to hear what people think. It's got a bit of overlap with Cotton-Barratt's Prospecting for Gold, but I think it might be sufficiently original.
Great report! I have two questions for you:
1. On the following:
There are already many ongoing and upcoming high-quality studies on psychedelic-assisted mental health treatments, and there are likely more of those to follow, given the new philanthropic funding that has recently come into the area. (p. 45-46)
Based on the report itself, my impression is that high-quality academic research into microdosing and into flow-through effects* of psychedelic use is much more funding-constrained. Have you considered those?
2. Did you consider more organisations than Usona and MAPS? It seems a bit unlikely that these are the only two organisations lobbying for drug approval.
*The flow-through effects I'm most excited about are a reduction in meat consumption, creative problem solving, and an improvement in good judgment (esp. for high-impact individuals). Effects on long-term judgment seem very hard to research, though.
I was confused by the usage of the term "drug development", as it sounds to me like it's about the discovery/creation of new drugs, which clearly does not seem to be the high-value aspect here. But from the report:
Drug development is a process that covers everything from the discovery of a brand new drug for treatment to this drug being approved for medical use.
I speculate that the particulars of the psychedelic experience may drive rescaling like this in an intense way.
I also think that the psychedelic experience, as well as things like meditation, affect well-being in ways that might not be captured easily. I'm not sure if it's rescaling per se. I feel that meditation has not made me happier in the hedonistic sense, but I strongly believe it's made me optimize less for hedonistic well-being, and in addition given me more stability, resilience, better judgment, etc.