michel

Events Associate @ CEA
1228 karma · Madison, WI, USA · Joined Oct 2020
eauw.org

Bio

Currently working as Events Associate at the Centre for Effective Altruism.

Previously I worked on independent meta-EA projects and strategy, and interned at Global Challenges Project. I also founded EA University of Wisconsin–Madison and scaled the EA Opportunity Board.

If you think we share an interest (we probably do), don't hesitate to reach out!

https://www.linkedin.com/in/michel-justen/

Comments (68)

Topic Contributions (1)

michel · 4mo · 40

I like the idea of featuring well-packaged research questions, but I don't want to flood the board with them.

I am currently hiring a new director to execute on product improvement and outreach projects as I step into a more strategic advisor role. I'll sync with the new hire about featuring these research questions.

michel · 4mo · 20

Seeing this late, but I appreciate the comment! You draw a valuable distinction that I had oversimplified. I've made some changes and will communicate this more clearly going forward.

michel · 5mo · 20

+1 to wanting to read GPI papers but never actually having read any, because at first glance they seem big and academic.

I have engaged with them through podcasts that felt more accessible, so maybe there's something there.

michel · 5mo · 41

Yup, I agree. This post was written as a more personal response to a disorienting situation.

Let the investigations and reevaluations ensue.

michel · 5mo · 20

Good point. It's worth noting that 'outreach' often appears in the examples rather than in the key consideration itself, and I think the key considerations whose examples mention outreach often influence more than outreach. For example, "Relative costs vs. benefits of placing greater emphasis on not-explicitly-EA brands" mentions outreach, but I think this is closely connected to how professional networks identify themselves and how events are branded.

I have a background in university community building, so I wouldn't be surprised if that biased me toward often making the examples about outreach.

michel · 5mo · 10

When I make a decision, I care more about how good the outcome of the decision is than how mathematically consistent my process for making it is. My decision-making, in practice, is fuzzy, time-constrained, and rarely formalized.

Do you think the alternatives you discuss in this post are more likely to lead to quicker, better answers? Or is this post more just calling out the deep mathematical foundations of typical decision-making processes, even if they're fine to use in practice?

Disclaimer: didn’t read much of the post.

michel · 5mo · 21

Yup, that seems like a fair critique. The taxonomies are messy, and I would expect examples to overlap without respect for categorization. (I like thinking of the failure-mode categorization more as overlapping clusters than discrete bins.)

I care more about the higher-level causes of failure within some cluster of failure, and "unwillingness to trade impact" still seems sufficiently different from "internal disenchantment," even if I'd expect certain actions to move the needle on both.

michel · 5mo · 20

Nice write-up! Hopefully a database of EA/EA-adjacent organizations and initiatives can help people here:

https://forum.effectivealtruism.org/posts/wLZExunpNhAnafJbg/a-database-of-ea-organizations-and-initiatives

(sorry, on mobile and can’t hyperlink)

michel · 5mo · 15

I agree that large-scale catastrophic failures are an important consideration. Originally I thought all global catastrophic risks would be downstream of some reservation failure (i.e., EA didn't do enough), but now I think this categorization overestimates EA's current capabilities (i.e., global catastrophic risks might occur despite the realistic ideal EA movement's best efforts).

In some sense I think large-scale catastrophic risks aren't super action-guiding, because we fall victim to them despite our best efforts, which is why I didn't include them. But now I'll counter my own point: large-scale catastrophic risks could be action-guiding in that they indicate the importance of thinking about things like recovering EA coordination post-catastrophe.

I'm now considering adding a fifth cluster of failure: uncaring universe failures, i.e., failures in which EA is crippled by something like a global catastrophic risk despite our best efforts. (I could also call them ruthless universe failures if I really care about my Rs.)

  • Agreement Upvote: Yeah do that
  • Disagreement Downvote: Nah

michel · 5mo · 20

Thank you for leaving this comment :)
