Thanks, Michael!
Another opportunity that just came out is the Stanford Existential Risks Initiative's summer research program - people can see info and apply here. This summer, we're collaborating with researchers at FHI, and all are welcome to apply.
Also, here's a reading list on democratic backsliding, recently posted as a comment by Haydn Belfield.
Some additional arguments:
Thanks for this! I especially appreciated the recommendations for doing 2-person reading groups, and for having presentations include criticisms.
On top of your recommendations, here are a few additional ideas that have worked well for reading groups I've participated in or helped organize, in case others find them useful. (Credit to Stanford's The Precipice and AI Safety reading groups for a bunch of these!)
I see, thanks for clarifying these points.
I think this mostly (although not quite 100%) addresses the two concerns that you raise.
Could you expand on this? I see how policy makers could realize very long-term value with ~30 yr planning horizons (through continual adaptation), but it's not very clear to me why they would do so, if they're mainly thinking about the long-term interests of future [edit: current] generations. For example, my intuition is that risks of totalitarianism or very long-term investment opportunities can be decently addressed with decades-long time horizons for making plans (for the reasons you give), but will only be prioritized if policy makers use even longer time horizons for evaluating plans. Am I missing something?
[2/2]
I'm also curious:
Thanks!
[1/2]
Thanks for doing this! Do you have any advice for EA group organizers (especially university groups), based on your experience with operations at other kinds of organizations? Areas I'm curious about include:
Thanks, Thomas!
These generally seem like very relevant criteria, so I'm definitely surprised by the results.
The only part I can think of that might have contributed to lower predictiveness of engagement is the "experience" criterion--I'd guess there might have been people who were very into EA since before the fellowship, and that this made them both score poorly on this metric and get very involved with the group later on. I wonder what the correlations look like after controlling for experience (although it's probably not that different, since it was only one of seven criteria).
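In case it's easy to check with data you already have, here's a quick sketch of the comparison I have in mind--a partial correlation, computed by regressing the experience score out of both variables first. All the column names are hypothetical; I'm just assuming a table with one row per fellow.

```python
# Sketch of "correlation after controlling for experience".
# Column names are made up -- adjust to whatever the real data uses.
import numpy as np
import pandas as pd

def partial_corr(df: pd.DataFrame, x: str, y: str, control: str) -> float:
    """Correlation between x and y after regressing `control` out of both."""
    c = df[control].to_numpy(dtype=float)
    design = np.column_stack([np.ones(len(c)), c])  # intercept + control
    residuals = {}
    for col in (x, y):
        v = df[col].to_numpy(dtype=float)
        coef, *_ = np.linalg.lstsq(design, v, rcond=None)
        residuals[col] = v - design @ coef  # the part the control can't explain
    return float(np.corrcoef(residuals[x], residuals[y])[0, 1])

# e.g.: partial_corr(scores, "total_score", "engagement", "experience_score")
```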
I'm also curious: I'd guess that familiarity with EA-related books/blogs/cause areas/philosophies is a strong (positive) predictor of receptiveness to EA. Do you think this mostly factored into the scores as a negative contributor to the experience score, or was it also a big consideration for some of the other scores?
Thanks! Also interested in this.
This syllabus from a class on authoritarian politics might be useful. I'm still going through it, but I found these parts especially interesting (some are papers rather than books, but hopefully close enough):
Also:
[edit: after more discussion, I'm now pretty confused and think that this is pretty confused, especially my claim that expected value maximization is a proper and useful criterion of rightness.]
Sorry for my delay, and thank you for posting this!
I have two main doubts:
This piece focuses on criticizing the credence assumption, and you make compelling arguments against it. But I don't care that much about the credence assumption (at least not since some of our earlier conversations)--I'm much more concerned with the claim that our willingness to make bets should follow the laws of probability. In other words, I'm much more concerned with decision making than with the nature of belief. Hopefully this doesn't look too goalpost-shifty, since I've tried to emphasize my concern with decision theory/bets since our first discussions on this topic.
You make a good case that we can just reject the credence assumption, when it comes to beliefs. But I think you'll have a much harder time rejecting an analogous assumption that comes up in the context of bets:
1. We can have expected value theory without Bayesian epistemology: we can see maximizing expected value as a feature of good decisions without being committed to the claim that the probabilistic weights involved are our beliefs (admittedly, this makes "expected value" not a great name). So refuting the psychological aspect of Bayesian epistemology doesn't refute expected value theory, which longtermism (as I understand it) does depend on.
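To make that concrete, here's the criterion written out (the notation is mine, not the post's)--note that nothing in it requires reading the weights as degrees of belief:

```latex
% Expected value as a criterion of rightness: an act $a$ with possible
% outcomes $o_1, \dots, o_n$ is evaluated by
\[
  \mathrm{EV}(a) \;=\; \sum_{i=1}^{n} p_i \, u(o_i),
  \qquad p_i \ge 0, \quad \sum_{i=1}^{n} p_i = 1,
\]
% where $u$ is a utility function over outcomes and the $p_i$ are just
% nonnegative weights summing to 1. Where the $p_i$ come from is a
% separate (epistemological) question from whether maximizing EV is the
% right standard for decisions.
```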
2. None of the proposed alternatives to expected value maximization fills its important role: providing a criterion of rightness for decision making under uncertainty.
Maybe I should be more clear about what kind of alternative I'm looking for. Apologies for any past ambiguous/misleading communication from my end about this--my thinking has changed and gotten more precise.
A distinction that seems very useful here is the one between criteria of rightness and decision procedures. In short, perhaps as a refresher:
Why are criteria of rightness useful? Because they are grounds from which we can evaluate (criticize!) decision procedures, and thus figure out which decision procedures are useful in what circumstances.
A useful analogy might be to mountain climbing (not that I know anything about mountains). A good criterion for success might be the altitude you've reached, but that doesn't mean that "seek higher altitudes" is a very useful climbing procedure. Constantly thinking about altitude (I'm guessing) would be distracting at best. Does that mean the climber should forget about altitude? No! Keeping the criterion of altitude in mind--occasionally glancing to the top of the mountain--would be very useful for choosing good climbing tactics, even (especially) if you're not thinking about it all the time.
I'm bringing this up because I think this is one way in which we've been talking past each other:
As far as I can tell, this post proposes alternative epistemologies and decision procedures, but it doesn't propose an alternative criterion of rightness. So the important role of a criterion of rightness remains unfilled, leaving us with no principled grounds from which to criticize decisions made under uncertainty, or potential procedures for making such decisions.
Loose ends:
Loose end: problems vs paradoxes
Nice, this one made me laugh
Loose end: paradoxes
I'd argue things aren’t that bad.
Also, to make sure we're on the same page about this--many of these paradoxes (e.g. the Pasadena game) seem to be paradoxes about how probabilities are used rather than about how they're assigned. That doesn't invalidate your point, although maybe it makes the argument as a whole fit together less cleanly (since here you're criticizing "Bayesian decision making," if you will, while later you focus on criticizing Bayesian epistemology).
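To spell out the example (this is my reconstruction of the standard setup, in case it's useful): a fair coin is flipped until the first heads; if that happens on toss n, the payoff is (-1)^(n-1) 2^n / n. The probability assignments are perfectly coherent--it's the expectation built from them that misbehaves:

```latex
% Expected value of the Pasadena game: each term is
% (probability of first heads on toss n) x (payoff at toss n)
\[
  \sum_{n=1}^{\infty} \frac{1}{2^{n}} \cdot \frac{(-1)^{n-1}\, 2^{n}}{n}
  \;=\; \sum_{n=1}^{\infty} \frac{(-1)^{n-1}}{n},
\]
% the alternating harmonic series: it converges only conditionally, so
% rearranging its terms can make it sum to anything (or diverge), and
% there's no privileged order in which to sum the outcomes.
```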
Loose end: supposed alternative decision theories
Unless I'm missing something, none of these seem to be alternative decision theories, in the sense discussed above. To elaborate:
I have several doubts about this approach:
Fascinating stuff, but many steps away from an alternative approach to decision making.
You make a compelling case that these won’t help much.
Loose end: tools from critical rationalism
+1 for these as really useful things that expected value theory under-emphasizes (which is fine, because EV is a general criterion of rightness, not a general decision procedure).
Loose end: authoritarian supercomputers
Do you really buy this argument? A pretty similar argument could get people riled up against the vile tyranny of calculators.
Apparent problems with this thought experiment:
Loose end: evolutionary decision making
As I've tried to argue, my point is that, without EV maximization, we lack plausible grounds for criticizing decisions made under uncertainty. Such criticism must logically begin with premises about what kinds of decisions under uncertainty are good ones, and you've rejected popular premises of this sort without offering tenable alternatives.