
Next weekend at EAGx Australia I'll be doing a live 80,000 Hours Podcast recording with philosopher Alan Hájek, who has spent his life studying the nature of probability, counterfactuals, Bayesianism, expected value and more.

What should I ask him?

He's the author of, among other papers:

  • Waging war on Pascal's wager
  • The reference class problem is your problem too
  • Interpretations of probability
  • Arguments for—or against—Probabilism?
  • Most counterfactuals are false
  • The nature of uncertainty

Topics he'd likely be able to comment on include:

  • problems with orthodox expected utility theory, especially involving infinite and undefined utilities or expectations
  • risk aversion, whether it’s justified, and how best to spell it out
  • how to set base rate priors for unknown quantities
  • his heuristics for doing good philosophy (about which he has lots to say) / how to spot bad philosophical arguments

See more about Professor Hájek here: https://philosophy.cass.anu.edu.au/people/professor-alan-h-jek

Answers

Are there any axioms of rationality he thinks are probably normatively correct/required? Are they all jointly consistent? What implications would they have for ethics, taken together or at least in consistent subsets?

Some more specific questions:

  1. Are we normatively required to:
    1. have utility functions?
    2. have bounded utility functions?
    3. satisfy the vNM independence axiom?
    4. satisfy the option-set independence of irrelevant alternatives axiom?
    5. satisfy the continuity/Archimedean axiom?
    6. have transitive preferences?
    7. have complete preferences?
    8. satisfy the sure-thing principle?
    9. satisfy a stochastic dominance axiom?
    10. satisfy some sequential dominance axiom?
    11. satisfy axioms over all possible (including choice-dependent) sequences of lotteries and choices, and not just separately for each individual choice?
    12. satisfy any axiom over possibilities that are unrealistic, e.g. avoiding sequential dominance/Dutch books/money pumps based on the manipulation of your subjective probability of P vs NP or your subjective probability of consciousness for typical adult fruit flies? (A minimal money-pump sketch follows this list.)
  2. How should we deal with the possibility that the universe is infinite and our impact could be infinite?
    1. What about different infinite cardinal numbers?
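To make the money-pump item concrete, here is a minimal sketch (Python, with an illustrative fee and goods, none of which are from the post) of how an agent with cyclic preferences A ≻ B ≻ C ≻ A can be exploited: each individual trade looks like an improvement to the agent, yet the sequence leaves it strictly poorer. Arguments of this shape are a standard way of motivating transitivity and the sequential axioms above.

```python
# Minimal money-pump sketch: cyclic strict preferences A > B > C > A.
# Goods, fee, and the trading loop are illustrative.
prefers_over = {"A": "B", "B": "C", "C": "A"}  # key is strictly preferred to value

def run_money_pump(start_good: str, fee: float, swaps: int) -> float:
    """Repeatedly offer the agent the good it strictly prefers to its
    current one, charging `fee` per swap; return total fees collected."""
    good, paid = start_good, 0.0
    for _ in range(swaps):
        better = next(g for g, worse in prefers_over.items() if worse == good)
        good, paid = better, paid + fee  # agent accepts: it prefers `better`
    return paid

# After 3 swaps the agent holds its original good but is 3 fees poorer,
# and the cycle can repeat indefinitely.
print(run_money_pump("B", fee=1.0, swaps=3))  # 3.0
```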

Many people - both in academia and policymaking - consider the concept of 'Knightian Uncertainty' (roughly, the absence of probabilities for decision-making) to be highly relevant (e.g. for spelling out precautionary principles). Does the concept make sense? If not, is it a problem that many people find it practically relevant?

Does he have a preferred resolution of the St. Petersburg Paradox?

Or, put another way, should a probabilist Benthamite be risk-neutral on social welfare? (On my mind because I just listened to the Cowen-SBF interview where SBF takes the risk-neutral position.)

As well as the St. Petersburg Paradox, I'd be interested in his thoughts on the Pasadena game.
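For concreteness, here's a quick numerical sketch (Python, purely illustrative) of what goes wrong in each game: the St. Petersburg game's expected value grows without bound as you allow longer runs of tosses, and the Pasadena game's expected value is the alternating harmonic series, which converges only conditionally, so rearranging the very same terms yields a different "value".

```python
import math

# St. Petersburg: the game pays 2^n if the first heads is on toss n.
# Each EV term is (1/2^n) * 2^n = 1, so truncating at N tosses gives
# EV = N, which diverges as N grows.
def st_petersburg_ev(max_tosses: int) -> float:
    return sum((0.5 ** n) * (2 ** n) for n in range(1, max_tosses + 1))

print([st_petersburg_ev(n) for n in (10, 100, 1000)])  # [10.0, 100.0, 1000.0]

# Pasadena: with probability 1/2^n the payoff is (-1)^(n+1) * 2^n / n,
# so the EV terms are (-1)^(n+1)/n -- the alternating harmonic series.
natural_order = sum((-1) ** (n + 1) / n for n in range(1, 200001))
print(natural_order, math.log(2))  # partial sums approach ln(2) ~ 0.693

# Rearranged (two positive terms per negative term), the same terms
# instead approach (3/2)*ln(2) ~ 1.040: the sum has no order-free value.
def rearranged(blocks: int) -> float:
    total, pos, neg = 0.0, 1, 2
    for _ in range(blocks):
        total += 1 / pos + 1 / (pos + 2) - 1 / neg
        pos, neg = pos + 4, neg + 2
    return total

print(rearranged(100000), 1.5 * math.log(2))
```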

Laplace's rule of succession is often used by forecasters to set base rates. What does he think of that? Is it a good rule of thumb?

Moreover, Laplace's rule gives different results depending on how finely you subdivide time (e.g. saying that there has been one year of global pandemic in the last 20 years will give different results than if you say there have been 12 months of pandemic in the last 20*12 months). How should we account for that inconsistency when applying Laplace's law?
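A worked example of that inconsistency (a Python sketch using the made-up pandemic numbers above): Laplace's rule estimates (s + 1) / (n + 2) after s successes in n trials, and the yearly and monthly framings of the same history imply quite different annual probabilities.

```python
def laplace(successes: int, trials: int) -> float:
    """Laplace's rule of succession: probability of success on the
    next trial, after `successes` in `trials`."""
    return (successes + 1) / (trials + 2)

# The same history at two granularities:
p_year = laplace(1, 20)       # 1 pandemic-year in 20 years      -> ~0.091
p_month = laplace(12, 240)    # 12 pandemic-months in 240 months -> ~0.054

# Annualising the monthly estimate (treating months as independent,
# itself a dubious assumption) gives a very different answer:
p_year_via_months = 1 - (1 - p_month) ** 12  # ~0.48, not ~0.09

print(p_year, p_month, p_year_via_months)
```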

How should one aggregate different forecasts? Is External Bayesianity a compelling criterion for an aggregation procedure? How about marginalization?
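For reference, a toy Python sketch of the two standard pooling rules in play (the numbers are made up): linear pooling averages probabilities, while geometric (logarithmic) pooling averages log-odds, and only the latter commutes with Bayesian updating, which is the External Bayesianity property. Conversely, linear pooling satisfies marginalization while geometric pooling generally doesn't, so the two criteria pull in opposite directions.

```python
def odds(p): return p / (1 - p)
def prob(o): return o / (1 + o)

def update(p, lr):
    """Bayes' rule in odds form; lr = P(E|H) / P(E|not-H)."""
    return prob(odds(p) * lr)

def linear_pool(p1, p2):    return (p1 + p2) / 2
def geometric_pool(p1, p2): return prob((odds(p1) * odds(p2)) ** 0.5)

p1, p2, lr = 0.9, 0.2, 3.0  # two forecasters, then shared evidence

# Geometric pooling is externally Bayesian:
# pool-then-update equals update-then-pool.
print(update(geometric_pool(p1, p2), lr))              # ~0.818
print(geometric_pool(update(p1, lr), update(p2, lr)))  # ~0.818

# Linear pooling is not:
print(update(linear_pool(p1, p2), lr))                 # ~0.786
print(linear_pool(update(p1, lr), update(p2, lr)))     # ~0.696
```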

I'm confused about separability, i.e., the assumption that my actions shouldn't depend on whether there is a separate world somewhere we can't reach.

For instance, if there was a separate Earth unreachably far away, I would intuitively be more in favor of taking riskier actions which would have a high upside (e.g., utopia) but also a chance of killing all humans—because there is a backup planet.

Am I just confused here? If not, are there any proofs of utilitarianism which don't rely on separability? (I think Harsanyi's does rely on it?)

Harsanyi's theorem doesn't start from any axiom I would call "separability". See this post for a non-technical summary. It also doesn't imply separability in different-number cases. For example, average utilitarianism is consistent with Harsanyi's theorem, but the average welfare level of unaffected individuals matters when choosing between options with different numbers of individuals. Under average utilitarianism, it's good to create an individual with higher than the average welfare without them and bad to create individuals with lower than the average welfare without them.

NunoSempere replied: Thanks Michael, seems that I was just a bit confused here.

Humans can't actually implement Bayesian probability (or can they?). For example, when encountering black swans, it's hard to perform an update when our prior probability for the event was ~0. Similarly, the amount of computation needed for formal Bayesian updates seems too high for normal humans in the course of their day.

What are your thoughts on what humans should do to better approximate Bayesian updating? Should humans even be trying to imitate Bayesian updating, rather than doing something else (e.g., relying on scenarios)?
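The black-swan half of this is easy to see in odds form (a toy Python sketch): Bayes' rule multiplies prior odds by a likelihood ratio, so a prior of exactly zero can never be moved by any finite evidence, while even a minuscule nonzero prior can. This is one motivation for Cromwell's rule: reserve probabilities 0 and 1 for logical truths and falsehoods.

```python
def posterior(prior: float, likelihood_ratio: float) -> float:
    """Bayes' rule in odds form: posterior odds = prior odds * LR."""
    if prior == 0.0:
        return 0.0  # a hard zero is immune to any finite evidence
    post_odds = (prior / (1 - prior)) * likelihood_ratio
    return post_odds / (1 + post_odds)

lr = 1e6  # evidence a million times likelier if the hypothesis is true

print(posterior(0.0, lr))   # 0.0    -- never updates
print(posterior(1e-9, lr))  # ~0.001 -- a tiny prior can still move
print(posterior(1e-3, lr))  # ~0.999
```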

What does taking utilitarianism seriously imply?

  • Would he pay the mugger in a Pascal's mugging? More generally, does he think fanaticism is an issue?
  • How does he think we should set a prior for the question of whether or not we are living at the most influential time? Uniform prior or otherwise?
  • What are his key heuristics for doing good philosophy, and how does he spot bad philosophical arguments?


 

Ask him about counterfactuals: do his views have any implications for our ideas of counterfactual impact?

Ask him whether relative expectations can help us get out of wagers like this one from Hayden Wilkinson's paper:

Dyson's Wager

You have $2,000 to use for charitable purposes. You can donate it to either of two charities. 

The first charity distributes bednets in low-income countries in which malaria is endemic. With an additional $2,000 in their budget this year, they would prevent one additional death from malaria. You are certain of this. 

The second charity does speculative research into how to do computations using ‘positronium’ - a form of matter which will be ubiquitous in the far future of our universe. If our universe has the right structure (which it probably does not), then in the distant future we may be able to use positronium to instantiate all of the operations of human minds living blissful lives, and thereby allow morally valuable life to survive indefinitely long into the future. From your perspective as a good epistemic agent, there is some tiny, non-zero probability that, with (and only with) your donation, this research would discover a method for stable positronium computation and would be used to bring infinitely many blissful lives into existence.
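To see why orthodox expected value theory struggles with this wager, here's a schematic Python sketch (all numbers are stand-ins, not from the paper): any nonzero probability of an infinite payoff gives the speculative charity infinite expected value, and worse, expected value can't even rank two such gambles against each other, which is the gap relative expectations are meant to fill.

```python
p_bednet, v_bednet = 1.0, 1.0          # one death averted, with certainty
p_positron = 1e-30                     # "some tiny, non-zero probability"
v_positron = float("inf")              # infinitely many blissful lives

ev_bednet = p_bednet * v_bednet        # 1.0
ev_positron = p_positron * v_positron  # inf: any p > 0 yields infinity

print(ev_bednet, ev_positron, ev_positron > ev_bednet)  # 1.0 inf True

# EV also can't distinguish between better and worse infinite gambles:
print(1e-30 * float("inf") == 1e-60 * float("inf"))  # True -- both just inf
```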
 
