SiebeRozendal's Shortform


This is a small write-up of my experience applying for a PhD in Risk Analysis 1.5 years ago. I can elaborate in the comments!

I believed doing a PhD in risk analysis would teach me a lot of useful skills to apply to existential risks, and it might allow me to work directly on important topics. I worked as a Research Associate on the qualitative side of systemic risk for half a year. I ended up not doing the PhD because I could not find a suitable place, nor do I think pure research is the best fit for me. However, I still believe more EAs should study something along the lines of risk analysis, and it's an especially valuable career path for people with an engineering background.

Why I think risk analysis is useful:

EA researchers rely a lot on quantification, but use a limited range of methods (simple Excel sheets or Guesstimate models). My impression is also that most EAs don't understand these methods well enough to judge when they are or aren't useful (my past self included). Risk analysis expands this toolkit tremendously, and teaches things like the proper use of priors, the underlying assumptions of different models, and common mistakes in risk models.
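
To make the contrast concrete, here is a minimal sketch of the kind of Monte Carlo estimate a Guesstimate model amounts to: sample each uncertain input from a prior and look at the resulting output distribution rather than a single point estimate. All variable names, intervals, and numbers below are hypothetical, purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100_000

def lognormal_from_90ci(low, high, size):
    """Sample a lognormal whose 5th/95th percentiles are roughly (low, high)."""
    mu = (np.log(low) + np.log(high)) / 2
    sigma = (np.log(high) - np.log(low)) / (2 * 1.645)  # 1.645 ≈ z for 90% CI
    return rng.lognormal(mu, sigma, size)

# Hypothetical inputs, each given as a 90% credible interval.
cost_per_unit = lognormal_from_90ci(10, 100, N)    # dollars per unit
units_needed = lognormal_from_90ci(5, 50, N)       # units per beneficiary
effect_size = lognormal_from_90ci(0.01, 0.5, N)    # QALYs per beneficiary

# Propagate the uncertainty through the model.
cost_per_qaly = cost_per_unit * units_needed / effect_size

print(f"median: ${np.median(cost_per_qaly):,.0f} per QALY")
print(f"90% interval: ${np.percentile(cost_per_qaly, 5):,.0f} "
      f"to ${np.percentile(cost_per_qaly, 95):,.0f}")
```

Risk analysis training is largely about knowing when a model like this is appropriate at all: which priors are defensible, what the multiplicative structure assumes, and how such models typically go wrong.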

The field of Risk Analysis

Risk analysis is a pretty small field, and most of it focuses on risks of limited scope and risks that are easier to quantify than the ones EAs commonly look at. There is a Society for Risk Analysis (SRA), which publishes Risk Analysis (the field's main journal). I found most of the topics they study not so interesting, but it was useful to get an overview of the field, and there were some useful contacts to make (1). The EA-aligned org GCRI is active and well-established in SRA, but no other EA orgs are.

Topics & advisers

I hoped to work on GCR/x-risk directly, which substantially reduced my options. It would have been useful to just invest in learning a method very well, but I was not motivated to research something not directly relevant. I think it's generally difficult to make an academic career as a general x-risk researcher, and it's easier to research one specific risk. However, I believe this leaves open a number of cross-cutting issues.

I have a shortlist of potential supervisors I considered/contacted/was in conversation with, including in public policy and philosophy. I can provide this list privately on request.

Best grad programs:

The best background for grad school seems to be mathematics or, more specifically, engineering. (I did not have this background, which excluded a lot of options.) The following two programs seemed most promising, although I only investigated PRGS in depth:

-- 


(1) For example, I had a nice conversation with the famous psychology researcher Paul Slovic, who now does research into the psychology involved in mass atrocities. https://psychology.uoregon.edu/profile/pslovic/

Aww yes, people writing about their life and career experiences! Posts of this type seem to have one of the best ratios of "how useful people find this" to "how hard it is to write" -- you share things you know better than anyone else, and other people can frequently draw lessons from them.

I have a concept of paradigm error that I find helpful.

A paradigm error is the error of approaching a problem through the wrong, or an unhelpful, paradigm. For example, trying to quantify the cost-effectiveness of a longtermist intervention under deep uncertainty.

Paradigm errors are hard to recognise, because we evaluate solutions from our own paradigm. They are best uncovered by people outside of our direct network. However, it is more difficult to productively communicate with people from different paradigms as they use different language.

It is related to what I see as:

  • parameter errors (= inaccurate values for the model's parameters)
  • model errors (= wrong model structure, or wrong/missing parameters)

Paradigm errors are one level higher: they are the wrong type of model.
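
A toy sketch may make the first two levels concrete (all data here is invented; the "true" process is stipulated to be exponential growth):

```python
import numpy as np

t = np.arange(10)
true = 100 * 1.3**t                # the actual process: exponential growth

# Parameter error: right model structure, inaccurate parameter value
# (growth rate estimated as 1.1 instead of 1.3).
param_error = 100 * 1.1**t

# Model error: wrong model structure (a linear fit to exponential data).
slope, intercept = np.polyfit(t, true, 1)
model_error = slope * t + intercept

for name, pred in [("parameter error", param_error),
                   ("model error", model_error)]:
    print(f"{name}: mean abs. deviation = {np.mean(np.abs(pred - true)):,.1f}")

# A paradigm error sits one level higher still: e.g. doing curve-fitting at
# all when the situation calls for qualitative or scenario-based analysis.
# No choice of parameters or model structure within this script could fix it.
```

The point of the hierarchy is that fixes at a lower level cannot repair an error at a higher one: better parameter estimates don't help a mis-specified model, and a better model doesn't help a mis-chosen paradigm.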


Relevance to EA

I think a sometimes-valid criticism of EA is that it approaches problems with a paradigm that is not well-suited for the problem it is trying to solve.

I think I call this "the wrong frame".

"I think you are framing that incorrectly etc"

E.g. in the UK there is often discussion of whether LGBT lifestyles should be taught in school, and at what age. This framing makes them seem weird and risky. But it is the wrong frame: LGBT lifestyles are typical behaviour (for instance, there are more LGBT people than adherents of many major world religions). Instead the question is: at what age should you discuss, say, relationships in school? There is already an answer here; I guess children learn about "mummies and daddies" almost immediately. Hence, at the same time you talk about mummies and daddies, you talk about mummies and mummies, and single dads, and everything else.

By framing the question differently the answer becomes much clearer. In many cases I think the issue with bad frames (or models) is a category error.

I like this. I think I use the wrong models when trying to solve challenges in my life.