Mauricio

Comments

Re. Longtermism: A response to the EA forum (part 2)

[edit: after more discussion, I'm now pretty confused and think this comment is pretty confused too, especially my claim that expected value maximization is a proper and useful criterion of rightness.]

Sorry for my delay, and thank you for posting this!

I have two main doubts:

  1. Longtermism, at least as I understand it, doesn't actually depend on the credence assumption.
  2. None of the proposed alternatives to expected value maximization fill its important role: providing a criterion of rightness for decision making under uncertainty.

 

  1. Longtermism, at least as I understand it, doesn't actually depend on the credence assumption.

This piece focuses on criticizing the credence assumption, and you make compelling arguments against it. But I'm not that attached to the credence assumption (at least not since some of our earlier conversations)--I'm much more concerned with the claim that our willingness to make bets should follow the laws of probability. In other words, I'm much more concerned with decision making than with the nature of belief. Hopefully this doesn't look too goalpost-shifty, since I've tried to emphasize my concern with decision theory/bets since our first discussions on this topic.

You make a good case that we can just reject the credence assumption, when it comes to beliefs. But I think you'll have a much harder time rejecting an analogous assumption that comes up in the context of bets:

  • Willingness to bet (i.e. the maximum ratio of potential losses to potential gains that one is willing to accept) should be a real-valued function of the proposition being bet on.
    • More on this below, when we get to two-valued representations of uncertainty.

We can have expected value theory without Bayesian epistemology: we can see maximizing expected value as a feature of good decisions without being committed to the claim that the probabilistic weights involved are our beliefs. (Admittedly, this makes "expected value" not a great name.) So refuting the psychological aspect of Bayesian epistemology doesn't refute expected value theory, which longtermism (as I understand it) does depend on.
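To make that concrete, here's a minimal sketch (in Python, with made-up actions and numbers) of expected value maximization as a bare ranking criterion: it scores actions by probability-weighted value, and nothing in it requires the weights to be anyone's beliefs.

```python
# Minimal sketch of expected value maximization as a criterion:
# it ranks candidate actions, however they were generated.
# The actions and numbers are made up for illustration.

def expected_value(action):
    """Probability-weighted sum of an action's possible outcomes."""
    return sum(p * v for p, v in action["outcomes"])

actions = [
    # Each outcome is a (weight, value) pair. Nothing here commits us
    # to the weights being beliefs -- they're just the numbers the
    # criterion uses.
    {"name": "safe bet",  "outcomes": [(1.0, 10)]},
    {"name": "long shot", "outcomes": [(0.01, 2000), (0.99, 0)]},
]

best = max(actions, key=expected_value)
print(best["name"], expected_value(best))  # -> long shot 20.0
```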

 

2. None of the proposed alternatives to expected value maximization fill its important role: providing a criterion of rightness for decision making under uncertainty.

Maybe I should be more clear about what kind of alternative I'm looking for. Apologies for any past ambiguous/misleading communication from my end about this--my thinking has changed and gotten more precise. 

A distinction that seems very useful here is the one between criteria of rightness and decision procedures. In short, perhaps as a refresher:

  • A criterion of rightness is a standard that an action/decision must meet to be good.
  • A decision procedure is an algorithm (perhaps a fuzzily defined one) for making decisions.

Why are criteria of rightness useful? Because they are grounds from which we can evaluate (criticize!) decision procedures, and thus figure out which decision procedures are useful in what circumstances. 

A useful analogy might be to mountain-climbing (not that I know anything about mountains). A good criterion for success might be the altitude you've reached, but that doesn't mean that "seek higher altitudes" is a very useful climbing procedure. Constantly thinking about altitude (I'm guessing) would be distracting at best. Does that mean the climber should forget about altitude? No! Keeping the criterion of altitude in mind--occasionally glancing toward the top of the mountain--would be very useful for choosing good climbing tactics, even (especially) if you're not thinking about it all the time.
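If it helps, here's a toy sketch of the same point (hypothetical terrain and procedures, my own framing): altitude works fine as a criterion for comparing climbing procedures even though "always step uphill" is a poor procedure.

```python
# Toy illustration: altitude as a criterion for evaluating climbing
# procedures, even though "constantly seek higher altitude" is a bad
# procedure. The terrain and both procedures are made up.

terrain = [0, 3, 5, 4, 2, 6, 9, 12, 8]  # altitude at each position

def greedy(pos):
    """Procedure 1: step to an adjacent position whenever it's higher."""
    while True:
        uphill = [p for p in (pos - 1, pos + 1)
                  if 0 <= p < len(terrain) and terrain[p] > terrain[pos]]
        if not uphill:
            return pos  # stuck at a local peak
        pos = max(uphill, key=lambda p: terrain[p])

def scout_then_climb(pos):
    """Procedure 2: survey the whole ridge first, then head for the top."""
    return max(range(len(terrain)), key=lambda p: terrain[p])

# The criterion (final altitude) is what lets us compare procedures:
for procedure in (greedy, scout_then_climb):
    print(procedure.__name__, "reaches altitude", terrain[procedure(0)])
# greedy gets stuck at the local peak (altitude 5);
# scout_then_climb reaches the summit (altitude 12).
```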

I'm bringing this up because I think this is one way in which we've been talking past each other:

  • I claim that expected value maximization is correct and useful as a criterion of rightness. When you suggest rejecting it, that leaves me looking for alternative criteria of rightness--looking for alternative answers to the question "what makes a decision right/good, if it's made under uncertainty?"
  • You've been pointing out that expected value maximization is terrible as a decision procedure, and you've been proposing alternative decision procedures.

As far as I can tell, this post proposes alternative epistemologies and decision procedures, but it doesn't propose an alternative criterion of rightness. So the important role of a criterion of rightness remains unfilled, leaving us with no principled grounds from which to criticize decisions made under uncertainty, or potential procedures for making such decisions.

 

Loose ends:

 

Loose end: problems vs paradoxes

Hence, paradoxes lurking outside bayesian epistemology are the reason one can never leave it, but paradoxes lurking inside are exciting research opportunities.

Nice, this one made me laugh.

 

Loose end: paradoxes

Other paradoxes within bayesian epistemology include the Necktie paradox, the St. Petersburg paradox, Newcomb’s paradox, Ellsberg Paradox, Pascal’s Mugging, Bertrand’s paradox, The Mere addition paradox (aka “The Repugnant Conclusion”), The Allais Paradox, The Boy or Girl Paradox, The Paradox Of The Absent Minded Driver (aka the “Sleeping Beauty problem”), and Siegel’s paradox.

I'd argue things aren’t that bad.

  • At least the St. Petersburg paradox, Newcomb’s problem, the “repugnant” conclusion, the Boy or Girl Paradox, and the Sleeping Beauty problem arguably have neat solutions:
  • The Ellsberg and Allais “paradoxes” refute the claim that people are in fact perfect expected value maximizers, but they don’t refute a different claim: that people should be--and often roughly are--expected value maximizers (while being subject to cognitive biases like ambiguity aversion). (See the quick arithmetic below.)
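For concreteness, here's the standard Allais setup with quick arithmetic (standard payoffs; the code is just my illustration): the common pattern of choices isn't what an expected-monetary-value maximizer would pick, which is all the paradox establishes descriptively.

```python
# The standard Allais gambles (payoffs in $ millions). Raw expected
# values show that the common choice pattern (1A and 2B) is not
# expected-monetary-value maximizing.

def ev(gamble):
    """Expected value of a list of (probability, payoff) pairs."""
    return sum(p * x for p, x in gamble)

gamble_1a = [(1.00, 1)]                        # $1M for sure
gamble_1b = [(0.89, 1), (0.10, 5), (0.01, 0)]  # probably $1M, maybe $5M
gamble_2a = [(0.11, 1), (0.89, 0)]
gamble_2b = [(0.10, 5), (0.90, 0)]

print(ev(gamble_1a), ev(gamble_1b))  # -> 1.0 and ~1.39: EV favors 1B
print(ev(gamble_2a), ev(gamble_2b))  # -> 0.11 and 0.5: EV favors 2B
# Most people choose 1A and 2B, so they aren't maximizing expected
# monetary value -- refuting the descriptive claim, not the normative one.
```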

Also, to make sure we’re on the same page about this - many of these paradoxes (e.g. the Pasadena game) seem to be paradoxes with how probabilities are used rather than with how they’re assigned. That doesn’t invalidate your point, although maybe it makes the argument as a whole fit together less cleanly (since here you’re criticizing “Bayesian decision making,” if you will, while later you focus on criticizing Bayesian epistemology).

 

Loose end: supposed alternative decision theories

Despite [expected value theory’s] many imperfections, what explicit alternative is there?

Here are some alternatives.

Unless I'm missing something, none of these seem to be alternative decision theories, in the sense discussed above. To elaborate:

A two-valued representation of uncertainty, like the Dempster-Shafer theory, lets one express uncertainty about both A and -A

I have several doubts about this approach:

  • It’s not an alternative decision theory
  • It doesn’t seem to resolve e.g. problems with positive credences in infinite values
  • To the extent that the values assigned to A and -A don’t sum to one, Dutch book arguments still apply.
    • You argue compellingly that one can just drop the credence assumption to avoid Dutch book arguments in the context of beliefs, but--as I’ve tried to argue--it’s harder (and more important?) to avoid Dutch books in the context of decisions/bets.
      • We don’t need to keep theorizing here. To resolve this, please tell me the odds at which you’re willing to buy/sell bets that this will happen, and the odds at which you’re willing to buy/sell bets that it won’t happen. Then we (and your wallet) get to find out whether the supposed laws of rationality depend on our assumptions :) (The sketch below spells out the construction.)
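To spell out what the wallet would discover, here's a minimal sketch of the standard Dutch book construction (made-up prices, $1-payoff bets): if your prices for A and -A don't sum to one, a bookie can trade against you and profit however A turns out.

```python
# Minimal Dutch book sketch: if someone's prices for "$1 if A" and
# "$1 if not-A" don't sum to 1, a bookie can trade against them for a
# guaranteed profit. The prices below are made up.

def bookie_profit(price_a, price_not_a, stake=1.0):
    """Bookie's guaranteed profit per unit stake against these prices."""
    total = price_a + price_not_a
    if total > 1:
        # Sell both bets: collect the two prices, pay out $1 on
        # whichever of A / not-A occurs.
        return stake * (total - 1)
    if total < 1:
        # Buy both bets: pay the two prices, collect $1 on
        # whichever of A / not-A occurs.
        return stake * (1 - total)
    return 0.0  # coherent prices: no sure profit either way

# Example: paying 0.6 for "$1 if A" and also 0.6 for "$1 if not-A".
# A bookie sells you both, collects 1.2, and pays out exactly 1
# whichever way A turns out -- a guaranteed 0.2 profit.
print(bookie_profit(0.6, 0.6))  # -> 0.2 (up to float rounding)
```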

Alternative logics one might use to construct a belief theory include fuzzy logics, three-valued logic, and quantum logic, although none of these have been developed into full-fledged belief theories. 

Fascinating stuff, but many steps away from an alternative approach to decision making.

Making Do Without Expectations: Some people have tried to rehabilitate expected utility theory from the plague of expectation gaps.

You make a compelling case that these won’t help much.

 

Loose end: tools from critical rationalism

For important decisions, beyond simple ones made while playing games of chance, I use tools I’ve learned from critical rationalism, which are things like talking to people, brainstorming, and getting advice.

+1 for these as really useful things that expected value theory under-emphasizes (which is fine, because EV is a general criterion of rightness, not a general decision procedure).

 

Loose end: authoritarian supercomputers

[...] So the question is: Would you always do what you’re told?

Do you really buy this argument? A pretty similar argument could get people riled up against the vile tyranny of calculators.

Apparent problems with this thought experiment:

  • It trivializes what’s at stake in our decision making--my feelings of autonomy and self-actualization are far less important than the people I could help with better decision making.
    • See this for a similar sentiment. (I’m not endorsing its arguments.)
  • I suspect much of my intuitive revulsion at this thought experiment comes from its (by design?) similarity to irrelevant real-world analogues, e.g. cult leaders, who aren’t exactly ideal decision makers.
  • Just because critical reasoning is being practiced by some reasoner other than me (by the supercomputer, in this hypothetical) doesn’t mean it’s not being practiced.

 

Loose end: evolutionary decision making

Yes, it’s completely true that decisions made according to this framework are made up out of thin air (so it is with all theories) - we can view this as roughly analogous to the mutation stage in Darwinian evolution. Then, once generated, we subject the ideas to as much criticism from ourselves and others as possible - we can view this as roughly analogous to the selection stage.

As I’ve tried to argue, without EV maximization we lack plausible grounds for criticizing decisions made under uncertainty. Criticism of such decisions must logically begin with premises about what kinds of decisions under uncertainty are good ones, and you’ve rejected the popular premises of this sort without offering tenable alternatives.

Notes on EA-related research, writing, testing fit, learning, and the Forum

Thanks, Michael!

Another opportunity that just came out is the Stanford Existential Risks Initiative's summer research program - people can see info and apply here. This summer, we're collaborating with researchers at FHI, and all are welcome to apply.

Books on authoritarianism, Russia, China, NK, democratic backsliding, etc.?

Also, here's a reading list on democratic backsliding, recently posted as a comment by Haydn Belfield.

What is the argument against a Thanos-ing all humanity to save the lives of other sentient beings?

Some additional arguments:

  • This one, arguing that humanity's long-term future will be good
  • These, arguing that we should be nice/cooperative with others' value systems
  • In practice, violence is often extremely counterproductive: it frequently fails and then brings about massive cultural/political/military backlash against whatever the perpetrator stood for.
    • Examples of violence that appear to have been counterproductive:
    • (Of course, violence doesn't always fail, but it seems to backfire often enough that people should be very wary of it even if they have no other moral qualms with it. And violence seems especially likely to fail and backfire when it's aimed at "Thanos-ing all humanity," since humans are pretty resilient.)

How to run a high-energy reading group

Thanks for this! I especially appreciated the recommendations for doing 2-person reading groups, and for having presentations include criticisms.

On top of your recommendations, here are a few additional ideas that have worked well for reading groups I've participated in/helped organize, in case others find them useful. (Credit to Stanford's The Precipice and AI Safety reading groups for a bunch of these!)

  • Break up a large group into small groups of ~3-4 people for discussion
    • This avoids large-group discussions, which are often bad (especially over Zoom).
  • Have readings be copied into Google Docs, with a few bolded lines at the top encouraging people to add a few comments in the doc.
    • This prompts people to generate thoughts on the material, and it adds a few interesting ideas to the reading.
  • Have participants vote on which questions to discuss: digitally share a Google Doc with potential discussion questions, then give people ~5 minutes to write "+1" next to all questions they'd like to discuss.
    • Small groups can do this to decide what to have as the focus point of their conversation.
    • Alternatively, organizers can use this to break a large group into small groups based on people's interests, like this (adapted for Zoom times):
      • The organizer encourages people to add & vote on questions for ~5 min.
      • The organizer identifies the most popular questions--enough of them that each small group could discuss a different one if they wanted to.
      • The organizer communicates to the group which questions were most popular, and labels each of these questions with a number.
      • The organizer encourages people who are especially interested in some of the questions to indicate this (e.g. by messaging a number to the Zoom chat).
      • The organizer creates groups of 3-4 people, trying to put together people who indicated interest in the same question.
  • When generating discussion questions, lean away from very vague or big-picture questions.
    • Very specific questions (which might be sub-questions of big-picture questions) seem to lead to much more fruitful discussion.

What Helped the Voiceless? Historical Case Studies

I see, thanks for clarifying these points.

I think this mostly (although not quite 100%) addresses the two concerns that you raise

Could you expand on this? I see how policy makers could realize very long-term value with ~30 yr planning horizons (through continual adaptation), but it's not very clear to me why they would do so, if they're mainly thinking about the long-term interests of future [edit: current] generations. For example, my intuition is that risks of totalitarianism or very long-term investment opportunities can be decently addressed with decades-long time horizons for making plans (for the reasons you give), but will only be prioritized if policy makers use even longer time horizons for evaluating plans. Am I missing something?

AMA: We Work in Operations at EA-aligned organizations. Ask Us Anything.

[2/2]

I'm also curious:

  • What makes collaborations with other kinds of organizations (non-EA orgs) successful at building connections/mutual support between orgs?
  • Other operations-related things you think might be useful for EA group organizers

Thanks!

AMA: We Work in Operations at EA-aligned organizations. Ask Us Anything.

[1/2]

Thanks for doing this! Do you have any advice for EA group organizers (especially university groups), based on your experience with operations at other kinds of organizations? Areas I'm curious about include:

  • How can EA groups grow their teams and activities while maintaining good team coordination and management?
  • What relatively low-cost things can leadership do, if any, that go far in improving new team members' (especially volunteers') morale/engagement/commitment/initiative?
  • How can experienced EA groups best provide organizational support for new/small ones?

Yale EA’s Fellowship Application Scores were not Predictive of Eventual Engagement

Thanks, Thomas!

These generally seem like very relevant criteria, so I'm definitely surprised by the results.

The only part I can think of that might have made the scores less predictive of engagement is the "experience" criterion--I'd guess there were people who had been very into EA since before the fellowship, and that this made them both score poorly on this metric and get very involved with the group later on. I wonder what the correlations look like after controlling for experience (although they're probably not that different, since it was only one of seven criteria).

I'm also curious: I'd guess that familiarity with EA-related books/blogs/cause areas/philosophies is a strong (positive) predictor of receptiveness to EA. Do you think this mostly factored into the scores as a negative contributor to the experience score, or was it also a big consideration for some of the other scores?

Books on authoritarianism, Russia, China, NK, democratic backsliding, etc.?

Thanks! Also interested in this.

This syllabus from a class on authoritarian politics might be useful. I'm still going through it, but I found these parts especially interesting (some are papers rather than books, but hopefully close enough):

  • "What Do We Know About Democratization After Twenty Years?" (Geddes, 1999)
    • Discusses the relative longevity of different kinds of authoritarian regimes
  • "Civil Society and the Collapse of the Weimar Republic" (Berman, 1997)
    • On how the Nazi Party used civic associations to expand its power in the Weimar Republic
  • Parts of Totalitarian and Authoritarian Regimes (Linz, 1975), especially from ch. 2:
    • Pp. 65-71 on definitions of totalitarianism
    • Pp. 129-136 on criticisms of the concept of totalitarianism
    • P. 137 has a list of earlier scholarly work on democratic backsliding (pretty old though)
  • Development as Freedom (Sen, 1999), especially pp. 178-88
    • On the fact that “There has never been a famine in a functioning multiparty democracy”

Also:

  • Economic Origins of Dictatorship and Democracy (Acemoglu and Robinson, 2005)
    • Historical case studies and model of transitions to (and from) authoritarianism
    • I really liked ch. 2 as an overview