MichaelStJules

Animal welfare research intern at Charity Entrepreneurship, organizer for Effective Altruism Waterloo, and deep learning researcher.

Earned to give for 2 years doing deep learning at a startup, donating to animal charities, and now trying to get into effective animal advocacy research. Curious about s-risks.

Suffering-focused, anti-speciesist, prioritarian, consequentialist. Also, I like math and ethics.

My shortform.

Comments

EA Relationship Status

I think these are all good points, and I agree that these are good reasons for marriage. I didn't intend my comment as a good reason to not get married.

One thing I had in mind is that if someone feels that there's a good chance they'll divorce their partner (and the base rate is high, so on an outside view, this seems true), then marriage vows ("till death do us part") might feel like lying or making a promise they know there's a good chance they won't keep, and they might have a strong aversion to this. Personally, I feel this way. If I make a promise that I expect to only keep with ~50% probability, then this feels like lying. 90% probability feels like lying to me, too. I'm not sure where it stops feeling like lying.

However, this doesn't mean they shouldn't get married anyway; interpreting or rewriting the vows as slightly less demanding (but still fairly demanding) is better and not necessarily lying. I had this thread (click "See in context" for more) in mind about the GWWC pledge. 

I don't know what's normally expected, but I'd rather be somewhat explicit that we can get divorced for any reason, as long as we make an honest effort to work through it first (say for a year, with counselling, etc.), with exceptions allowing immediate divorce for abuse or for one of us acting very harmfully toward the other or toward others.

Michael_Wiebe's Shortform

I think we can justify ruling out all options the maximality rule rules out, although it's very permissive. Maybe we can put more structure on our uncertainty than it assumes. For example, we can talk about distributional properties of the uncertain parameter without specifying an actual distribution for it, e.g. that it's more likely to be between 0.8 and 0.9 than between 0.1 and 0.2, although I won't commit to a probability for either.
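To make this concrete, here's a minimal sketch (the options, payoffs, and candidate values are invented, and I'm writing p for the uncertain parameter purely for illustration) of the maximality rule under this kind of imprecise uncertainty: an option is ruled out only if some other option is at least as good in expectation under every candidate value of p, and strictly better under some.

```python
# Minimal sketch of the maximality rule under imprecise probabilities.
# Options, payoffs and candidate values of p are made up for illustration.

# Each option's payoff depends on whether some event E occurs (probability p).
options = {
    "A": {"if_E": 10.0, "if_not_E": -2.0},
    "B": {"if_E": 1.0,  "if_not_E": 1.0},
    "C": {"if_E": 0.0,  "if_not_E": 0.5},
}

# We won't commit to a single p, only to a set of candidate values
# (e.g. we think p is more likely to be high than low, without putting a number on it).
candidate_ps = [0.1, 0.5, 0.8, 0.9]

def expected_value(option, p):
    return p * option["if_E"] + (1 - p) * option["if_not_E"]

def maximality_permissible(options, ps):
    """An option is ruled out only if some other option has at least as high
    expected value under every candidate p, and strictly higher under some p."""
    permissible = []
    for name, opt in options.items():
        dominated = any(
            all(expected_value(other, p) >= expected_value(opt, p) for p in ps)
            and any(expected_value(other, p) > expected_value(opt, p) for p in ps)
            for other_name, other in options.items() if other_name != name
        )
        if not dominated:
            permissible.append(name)
    return permissible

print(maximality_permissible(options, candidate_ps))
# C is ruled out (B is at least as good for every candidate p), while A and B
# both remain permissible, since neither beats the other under every candidate p.
```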

EA Relationship Status

Maybe some EAs would only want to stay married for as long as it maximizes utility, but feel that more is expected in a marriage and don't want to commit to more. I don't expect this accounts for much of the difference, though.

EA Relationship Status

Other ideas?

 

1. EAs are pickier about partners, e.g. looking for someone with similar values? The fact that EA skews so heavily male might make that harder, too.

2. EAs are more okay with long-term relationships (possibly including having and raising children) outside marriage? I guess this gets at your last suggestion about weirdness/standards.

 

  • EAs often highly prioritize their careers.
  • EAs are generally less interested in having children.

If I had to guess, these would play a big part. Career prioritization is my own main reason (I'm 27). Specifically, I don't know where I'll want to live over the next few years, and I'd be worried about a partner limiting my options (and I prefer non-remote work).

I'm also just personally not that interested in being in a relationship and am satisfied on my own, though I'm not specifically aromantic or asexual (although I'd expect the prevalence of both to be higher in EA, too).

I also second everything RyanCarey said here.

Michael_Wiebe's Shortform

if we don't know the mean of the distribution, is the problem simply intractable? Should we resort to maxmin utility?

It's possible in a given situation that we're willing to commit to a range of probabilities, e.g. an interval of values (without committing to any single number within it), so that we can check the recommendations for each value in that range (sensitivity analysis).

I don't think maxmin utility follows, but it's one approach we can take.
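Here's a toy sketch of that kind of sensitivity analysis (the payoffs and the committed range for p are invented), with the maxmin recommendation shown alongside for comparison:

```python
# Toy sensitivity analysis over a range of values for p, plus maxmin for comparison.
# Payoffs and the range for p are invented for illustration.

import numpy as np

# Payoff of each option as a function of p (the unknown probability/parameter).
payoffs = {
    "short_term": lambda p: 1.0,             # safe, constant value
    "long_term":  lambda p: 20.0 * p - 3.0,  # high upside, negative if p is small
}

p_grid = np.linspace(0.1, 0.9, 9)  # the range of p we're willing to commit to

# Sensitivity analysis: which option is best at each value of p?
for p in p_grid:
    best = max(payoffs, key=lambda name: payoffs[name](p))
    print(f"p = {p:.1f}: best option is {best}")

# Maxmin: pick the option whose worst case over the range is best.
worst_case = {name: min(f(p) for p in p_grid) for name, f in payoffs.items()}
maxmin_choice = max(worst_case, key=worst_case.get)
print("maxmin choice:", maxmin_choice)  # here: short_term (worst case 1.0 vs -1.0)
```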

what if we have a hyperprior over the mean of the distribution? Do we just take another level of expectations, and end up with the same solution?

Yes, I think so.

how does a stochastic dominance decision theory work here?

I'm not sure specifically, but I'd expect it to be more permissive and often allow multiple options for a given setup. I think the specific approach in that paper is like assuming that we only know the aggregate (not individual) utility function up to monotonic transformations, not even linear transformations, so that any action which is permissible under some degree of risk aversion with respect to aggregate utility is permissible generally. (We could also have uncertainty about individual utility/welfare functions, too, which makes things more complicated.)
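For intuition, here's a rough sketch (with made-up discrete outcome distributions, and not necessarily the exact formulation in that paper) of a first-order stochastic dominance check. First-order dominance is invariant under monotonic transformations of aggregate utility, and pairs where neither distribution dominates the other would both remain permissible:

```python
# Sketch: first-order stochastic dominance between two options' outcome distributions.
# Distributions are made up; each maps an aggregate-utility outcome to its probability.

def cdf(dist, x):
    """P(outcome <= x) for a discrete distribution given as {outcome: probability}."""
    return sum(p for outcome, p in dist.items() if outcome <= x)

def first_order_dominates(dist_a, dist_b):
    """A dominates B iff A's CDF is everywhere <= B's (A is never more likely
    to land below any threshold), with strict inequality somewhere."""
    points = sorted(set(dist_a) | set(dist_b))
    leq_everywhere = all(cdf(dist_a, x) <= cdf(dist_b, x) for x in points)
    strict_somewhere = any(cdf(dist_a, x) < cdf(dist_b, x) for x in points)
    return leq_everywhere and strict_somewhere

option_a = {0: 0.5, 10: 0.5}
option_b = {0: 0.6, 10: 0.4}
option_c = {-5: 0.1, 100: 0.9}

print(first_order_dominates(option_a, option_b))  # True: A shifts mass upward
print(first_order_dominates(option_a, option_c))  # False
print(first_order_dominates(option_c, option_a))  # False: A and C are incomparable,
# so both would remain permissible under a dominance-based rule.
```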

Formalizing longtermism

You could just restrict the set of options, or make each option the intention to follow through with the corresponding action, which may fail (and backfire, e.g. through burnout), and adjust your expectations with failure in mind.

Or attach some probability of actually doing each action, and hold that for any positive EV shorttermist option, there's a much higher EV longtermist option which isn't much less likely to be chosen (it could be the same one for each, but need not be).
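As a toy illustration of that second idea (all numbers invented): weight each option's expected value, conditional on following through, by the probability of actually following through, and then compare.

```python
# Toy illustration: adjust each option's expected value by the probability
# of actually following through with it. All numbers are invented.

options = [
    # (name, EV conditional on following through, P(actually follow through))
    ("short-termist option", 100.0, 0.9),
    ("long-termist option", 10_000.0, 0.5),
]

for name, ev_if_done, p_follow_through in options:
    adjusted = p_follow_through * ev_if_done  # ignoring the value of the failure case
    print(f"{name}: adjusted EV = {adjusted}")

# Even with a much lower chance of follow-through, the long-termist option's
# adjusted EV (5000) exceeds the short-termist option's (90) in this toy example.
```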

Formalizing longtermism

Christian Tarsney has done a sensitivity analysis for the parameters in such a model in The Epistemic Challenge to Longtermism for GPI.

It looks to me like it's unsolvable without some nonzero exogenous extinction risk

There's also the possibility that the space we would otherwise occupy if we didn't go extinct will become occupied by sentient individuals anyway, e.g. life re-evolves, or aliens arrive. These are examples of what Tarsney calls positive exogenous nullifying events, with extinction being a typical negative exogenous nullifying event.

There's also the heat death of the universe, although it's only a conjecture.

because otherwise there will be multiple parameter choices that result in infinite utility, so you can't say which one is best.

There are some approaches to infinite ethics that might allow you to rank some different infinite outcomes, although not necessarily all of them. See the overtaking criterion. These might make assumptions about the order of summation, though, which is perhaps undesirable for an impartial consequentialist; and without such assumptions, conditionally convergent series can be made to sum to anything, or to diverge, just by reordering their terms, which is not so nice.
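As a quick numerical illustration of that last point (a standard textbook fact, shown here with the alternating harmonic series): rearranging a conditionally convergent series changes its sum.

```python
# Rearranging a conditionally convergent series changes its sum
# (Riemann rearrangement), illustrated with the alternating harmonic series
# 1 - 1/2 + 1/3 - 1/4 + ... = ln(2).

import math

N = 200_000

# Standard order: partial sum approaches ln(2) ≈ 0.6931.
standard = sum((-1) ** (k + 1) / k for k in range(1, N + 1))

# A classic rearrangement (one positive term, then two negative terms:
# 1 - 1/2 - 1/4 + 1/3 - 1/6 - 1/8 + ...) converges to ln(2)/2 instead.
rearranged = 0.0
for m in range(1, N + 1):
    rearranged += 1 / (2 * m - 1) - 1 / (4 * m - 2) - 1 / (4 * m)

print(standard, math.log(2))        # ~0.6931
print(rearranged, math.log(2) / 2)  # ~0.3466
```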

Formalizing longtermism

I like the simpler/more general model, although I think you should also take expectations, and allow for multiple joint probability distributions over the outcomes of a single action to reflect our deep uncertainty (there's also moral uncertainty, but I would deal with that separately, on top of this). Knowing that almost all of the (change in) value happens in the longterm future isn't helpful if we can't predict which direction it will go.

Greaves and MacAskill define strong longtermism this way:

Let strong longtermism be the thesis that in a wide class of decision situations, the option that is ex ante best is contained in a fairly small subset of options whose ex ante effects on the very long-run future are best.

So this doesn't say that most of the value of any given action is in the tail of the rewards; perhaps you can find some actions with negligible ex ante longterm consequences, e.g. examples of simple cluelessness.

How to think about an uncertain future: lessons from other sectors & mistakes of longtermist EAs

If you want robust arguments for interventions you should look at those interventions. I believe there are robust arguments for work on e.g. AI risk, such as Human Compatible.

Thank you!

I feel like it's misleading to take a paper that explicitly says "we show that strong longtermism is plausible", does so via robust arguments, and conclude that longtermist EAs are basing their conclusions on speculative arguments.

I'm not concluding from that paper that longtermist EAs in general base their conclusions on speculative arguments, although this is my impression from a lot of what I've seen so far, which is admittedly not much. I'm not that familiar with the specific arguments longtermists have made, which is why I asked you for recommendations.

I think showing that longtermism is plausible also understates the goal of the paper, since that only really describes section 2, and the rest of the paper aims to strengthen the argument and address objections. My main concerns are with section 3, where they argue that specific interventions are actually better than a given shorttermist one. They consider objections to each of those interventions and propose the next intervention to get past them. However, they end with the meta-option in section 3.5 and speculation:

It would also need to be the case that one should be virtually certain that there will be no such actions in the future, and that there is no hope of discovering any such actions through further research. This constellation of conditions seems highly unlikely.

I think this is a Pascalian argument: it asks us to assign a probability of eventually identifying robustly positive longtermist interventions that is large enough to make the argument go through. How large, and why?

It seems to me that longtermists are very obviously trying to do both of these things. (Also, the first one seems like the use of "explicit calculations" that you seem to be against.)

I endorse the use of explicit calculations. I don't think we should depend on a single EV calculation (including one formed by taking weighted averages of models or other EV calculations; sensitivity analysis is preferable). I'm interested in other quantitative approaches to decision-making, as discussed in the OP.

My major reservations about strong longtermism include: 

  1. I think the (causally or temporally) longer the causal chains we construct, the more fragile they are and the more likely they are to miss other important effects, including effects that may go in the opposite direction. Feedback closer to our target outcomes and to what we value terminally reduces this issue.
  2. I think human extinction specifically could be a good thing (because of s-risks, or because we might otherwise spread suffering through space), so interventions that would non-negligibly reduce extinction risk are not robustly good to me (though not necessarily robustly negative, either). Of course, there are other longtermist interventions.
  3. I am by default skeptical of claims of strong causal effects without evidence, and I haven't seen good evidence for the major causal claims I've come across, but I've also only just started looking, and pretty passively.