Michel Justen

653 · Madison, WI, USA · Joined Oct 2020
eauw.org

Bio

Currently working on independent meta-EA projects and strategy. Previously interned at Global Challenges Project and founded EA University of Wisconsin–Madison.

Interested in EA community growth and health, EA & psychology, and avoiding doom with everything humanity's got. Also meditation. If you think we share an interest (we probably do), don't hesitate to reach out!

https://www.linkedin.com/in/michel-justen/

Comments (54)

Thanks for such a long annotated list! I think I'm going to start with How Change Happens: Why Some Social Movements Succeed While Others Don't and then move into this list. Overloaded at the moment but I'll be in touch if a chat seems valuable!  

I'll look into these – Thanks!

Thank you for such a thoughtful response! This helps clear up some confusion and gives me more to think about. The perks of accessible discourse with an academic philosopher ;) 

Wow, I found this post really thought-provoking.

Two thought experiments that made me realize I care more about grand aesthetics than I gave credit to:

  1. If two beings were equally happy, but one is wireheaded and one is living at a pinnacle of cultural excellence, which do I value more? 
    1. I'm pulled towards the latter, and I don't find a "well that's just your aesthetic preferences bro" rebuttal convincing grounds to entirely dismiss this preference.
  2. Would I want my children to live a life of intense happiness afforded by extreme comfort or a slightly less happy life afforded by a pursuit of deep, lasting meaning?
    1. I think the latter; I'm willing to trade some utility for grandeur. 

I could still reconcile these views with total utilitarianism if I say that grandeur and excellence are important elements of total utility (or at least the total utility I care about). But I'm wary of this move since (1) you could always just redefine total utility to dismiss any critique of total utility, and (2) it would make my definition of total utility different from many other people's definitions, which seem OK with experience-machine-style / extreme-comfort utility.

Also, a point on word choice: I'm wary of "cultural excellence" as the thing to care about under this critique. I'm worried "culture" invites critiques of specific-culture elitism (e.g., Western elitism) and is too transient across the grand time-scales we might care about. I'm more drawn to "civilizational excellence" or "enduring meaning" as phrases that capture something about the world I intrinsically care about beyond just total utility.

I think this is a great post and I’m glad you took the time to summarize your longer post!

In my experience, the longtermist/x-risk community has an implicit attitude of "we can do it better." "We're the only ones really thinking about this and we'll forge our own institutions and interventions." I respect this attitude a great deal, but I think it causes us to underestimate how powerful the political and economic currents around us are (and how reliant we are on their stability).

It just doesn’t seem that unlikely to me that we come up with some hard-won biosecurity policy or AI governance intervention, and then geopolitical turmoil negates all the intervention’s impact. Technical interventions are a bit more robust, but I’d claim a solid subset of those also require a type of coordination and trust that systemic cascading risks threaten.

Exciting to hear that there's a new Rational Altruism Lab starting at UCLA!

I think I still stand behind the sentiment in (3); I'm just not sure how best to express it.

I agree that 100% (naively) maximizing for something can quickly become counter-productive in practice. It is really hard to know what actions one should take if one is fully maximizing for X, so even if one wants to maximize X, it makes sense to take into account things like optics, burnout, systemic cascading effects, epistemic uncertainty, and whatever else gives you pause before maximizing.

This is the type of considerate maximization I was gesturing at when I said directionally more maximizing might be a good thing for some people (to the extent they genuinely endorse doing the most good), but I recognize that 'maximizing' can be understood differently here.

Caveat 1: I think there are lots of things it doesn't make sense to 100% maximize for, and you shouldn't tell yourself you are 100% maximizing for them. "Maximizing for exploration" might be such a thing. And even if you were 100% maximizing for exploration, it's not like you wouldn't take into account the cost of random actions, venturing into domains you have no context in, and the cost of spending a lot of time thinking about how to best maximize.

Caveat 2: It must be possible to maximize within one of multiple goals. I care a great deal about doing the most good I can, but I also care about feeling alive in this world. I'm lying to myself if I say that something like climbing is only instrumental towards more impact. When I'm working, I'll maximize for impact (taking into account the uncertainty around how to maximize). When I'm not, I won't.

[meta note: I have little experience in decision theory, formal consequentialist theory, or whatever else is relevant here, so might be overlooking concepts]. 

Maybe the linked post makes it mean something other than what I understand - but most people aren't going to read it.

Fair. Nate Soares talks about desperation as a 'dubious virtue' in this post, a quality that can "easily turn into a vice if used incorrectly or excessively." He argues though that you should give yourself permission to go all out for something, at least in theory. And then look around, and see if anything you care about – anything you're fighting for – is "worthy of a little desperation."

I'm really glad you wrote this post. Hearing critiques from prominent EAs promotes a valuable community norm of self-reflection and not just accepting EA as is, in my opinion. 

A few thoughts:

  1. It's important to emphasize how much maximization can be normalized in EA subpockets. You touch on this in your post: "And I’m nervous about what I perceive as dynamics in some circles where people seem to “show off” how little moderation they accept - how self-sacrificing, “weird,” extreme, etc. they’re willing to be in the pursuit of EA goals." I agree, and I think this is relevant to growing EA Hubs and cause-area silos. If you move to an EA hub that predominantly maximizes along one belief (e.g., AI safety in Berkeley), very natural human tendencies will draw you to also maximize along that belief. Maximizing will win you social approval and dissenting is hard, especially if you're still young and impressionable (like meee). If you agree with this post's reasoning, I think you should take active steps to correct for a social bias toward hard-core maximizing (see 2).
  2. If you're going to maximize along some belief, you should seriously engage with the best arguments for why you're wrong. Scout mindset baby. Forming true beliefs about a complicated world is hard and motivated reasoning is easy. 
  3. Maximizing some things is still pretty cool. I think some readers (of the post and my comment) could come away with a mistaken impression that more moderation in all aspects of EA is always a good thing. I think it's more nuanced than that: Most people who have done great things in the past have maximized much harder than their peers. I agree one should be cautious of maximizing things we are "conceptually confused about, can’t reliably define or measure, and have massive disagreements about even within EA." But maximizing some things that are good across a variety of plausibly true beliefs can be pretty awesome for making progress on your goals (e.g., maximizing early-career success and learning). And even if the extreme of maximization is bad, more maximization might be directionally good, depending on how much you're currently maximizing. We also live in a world that may end within the next 100 years, so you have permission to be desperate.

Just reached out to Peter about this. Had him on my radar but thanks for the nudge :) 
