Next week for the 80,000 Hours Podcast I'll be interviewing Andreas Mogensen — Oxford philosopher, All Souls College Fellow and Assistant Director at the Global Priorities Institute.

He's the author of, among other papers:

  • Against Large Number Scepticism
  • The Paralysis Argument
  • Giving Isn’t Demanding
  • Do Evolutionary Debunking Arguments Rest on a Mistake About Evolutionary Explanations?
  • Is Identity Illusory?
  • Maximal Cluelessness
  • Moral Demands and the Far Future
  • Do not go gentle: why the Asymmetry does not support anti-natalism
  • The only ethical argument for positive δ? Partiality and pure time preference
  • Tough enough? Robust satisficing as a decision norm for long-term policy analysis
  • Staking our future: deontic long-termism and the non-identity problem

Somewhat unusually among philosophers working on effective altruist ideas, Andreas leans towards deontological approaches to ethics.

What should I ask him?


What kinds of procreation asymmetries would follow from plausible deontological views, if any? What would be their implications for cause prioritization?

  1. Which directions in global priorities research seem most promising?
  2. Has Andreas ever tried communicating deep philosophical research to politicians/CEOs/powerful non-academics? If so, how did they react to ideas like deontic long-termism? Does he think any of them made a big behavior change after hearing about these kinds of ideas?

Does he think the maximality rule from Maximal Cluelessness is hopelessly permissive, e.g. that between any two options it will practically never tell us which is better?

I have a few ideas for ways you might be able to get more out of the maximality rule that I'd be interested in his thoughts on, although this may be too technical for an interview:

  1. Portfolio approaches, e.g. hedging and diversification.
  2. More structure on your set of probability distributions, or just a smaller range of distributions you entertain. For example, you might not be willing to put precise probabilities on exclusive possibilities A and B, but you might be willing to say that A is more likely than B, which cuts down the set of distributions you need to entertain. Or you might have a sense that some distributions are more "likely" than others while still being unable to put precise probabilities on them, which also gives you more structure.
  3. Discounting sufficiently low probabilities or using bounded social welfare functions (possibly with respect to the difference you make, or aggregating from 0), so that extremely unlikely but extreme possibilities (or tiny differences in probabilities for extreme outcomes) don't tip the balance.
  4. Targeting outcomes with the largest expected differences in value, with relatively small indirect effects from actions targeting them, e.g. among existential risks.
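For what it's worth, the maximality rule's permissiveness is easy to see in a toy sketch (all the options, outcomes, and numbers below are made up for illustration): under maximality, an option is permissible unless some alternative has at least as high expected value under every distribution you entertain, and strictly higher under at least one.

```python
# Toy sketch of the maximality rule under imprecise probabilities.
# All distributions, options, and payoffs here are hypothetical.

# Imprecise credences: a set of candidate distributions over two world-states.
distributions = [
    {"good": 0.9, "bad": 0.1},  # optimistic
    {"good": 0.5, "bad": 0.5},  # neutral
    {"good": 0.2, "bad": 0.8},  # pessimistic
]

# Payoff of each option in each state (made-up numbers).
options = {
    "intervene": {"good": 10, "bad": -20},
    "do_nothing": {"good": 0, "bad": 0},
}

def expected_value(option, dist):
    return sum(dist[state] * options[option][state] for state in dist)

def dominates(a, b):
    """a dominates b iff a's expected value is >= b's under every
    distribution, and strictly greater under at least one."""
    evs = [(expected_value(a, d), expected_value(b, d)) for d in distributions]
    return all(x >= y for x, y in evs) and any(x > y for x, y in evs)

# Maximality: an option is permissible unless some alternative dominates it.
permissible = [o for o in options
               if not any(dominates(other, o) for other in options if other != o)]
print(permissible)  # both options are permissible: neither dominates
```

Here "intervene" has higher expected value under the optimistic distribution and lower under the other two, so neither option dominates and both come out permissible. Narrowing the set of distributions (the kind of extra structure suggested in point 2, e.g. keeping only the optimistic one) makes "intervene" dominate, which is one way shrinking the set restores comparability.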

What books or papers have been most important for Andreas? What books does he recommend EAs read?

These are just the first questions that came to mind, and may not necessarily overlap with Andreas's interests or knowledge:

  • Given his deontological leanings, is there something he would like to see people in the EA community doing less/more of?
  • What's the paper/line of investigation from GPI that has changed his view on practical priorities for EA the most?
  • How involved in philosophical discussions should the median EA be? (e.g. should we all read Parfit, or just muddle through with what we hear from informal discussions of ethics within the community?)
  • What's the thrust of his argument in "Against Large Number Scepticism"? How would he characterize the thinking of people who feel uncomfortable with arguments resting on large numbers?
  • Where does the interest in Decision Theory among EAs come from? Is it because it could have practical implications, or something else entirely? What would change if we had an answer to the top open questions in Decision Theory?