michel

1202 karma · Madison, WI, USA · Joined Oct 2020
eauw.org

Bio

Currently working on independent meta-EA projects and strategy. Previously interned at Global Challenges Project and founded EA University of Wisconsin–Madison. Also built and currently maintain the EA Opportunity Board.

I'm interested in EA community growth and health, EA & psychology, and avoiding doom with everything humanity's got. Also meditation. If you think we share an interest (we probably do), don't hesitate to reach out!

https://www.linkedin.com/in/michel-justen/

Comments
68

Topic Contributions
1

I like the idea of featuring well-packaged research questions, but I don't want to flood the board with them.

I am currently hiring a new director to execute on product improvement and outreach projects as I step into a more strategic advisor role. I'll sync with the new hire about featuring these research questions.

Seeing this late but appreciate the comment! I think this makes a valuable distinction I had oversimplified. Made some changes and will communicate this more clearly going forward.

+1 to wanting to read GPI papers but never having actually read any, because I perceive them to be big and academic at first glance.

I have engaged with them through podcasts that felt more accessible, so maybe there's something there.

Yup, I agree. This post was written as a more personal response to a disorienting situation.

Let the investigations and reevaluations ensue.

Good point. It’s worth noting that ‘outreach’ is often mentioned in the examples, not in the key consideration itself. I think the key considerations whose examples mention outreach often influence more than outreach. For example, “Relative costs vs. benefits of placing greater emphasis on not-explicitly-EA brands” mentions outreach, but I think this is closely connected to how professional networks identify themselves and how events are branded.

I have a background in university community building, so I wouldn’t be surprised if that biased me toward making the examples about outreach.

When I make a decision, I care more about how good the outcome is than about how mathematically consistent my process for reaching it is. My decision making, in practice, is fuzzy, time-constrained, and rarely formalized.

Do you think the alternatives you discuss in this post are more likely to lead to quicker, better answers? Or is this post more just calling out problems with the mathematical foundations of typical decision-making processes, even if they’re fine to use in practice?

Disclaimer: didn’t read much of the post.

Yup, that seems like a fair critique. The taxonomies are messy, and I would expect examples to overlap without respect for categorization. (I like thinking of the failure-mode categorization more as overlapping clusters than discrete bins.)

I care more about the higher-level causes of failure within some cluster of failure, and "unwillingness to trade impact" still seems sufficiently different to me from "internal disenchantment," even if I'd expect certain actions to move the needle on both.

Nice write-up! Hopefully a database of EA/EA-adjacent organizations and initiatives can help people here:

https://forum.effectivealtruism.org/posts/wLZExunpNhAnafJbg/a-database-of-ea-organizations-and-initiatives

(sorry, on mobile and can’t hyperlink)

I agree large-scale catastrophic failures are an important consideration. Originally I thought all global catastrophic risks would be downstream of some reservation failure (i.e., EA didn't do enough), but now I think this categorization overestimates EA's capabilities at the moment (i.e., global catastrophic risks might occur despite the realistic ideal EA movement's best efforts).

In some sense I think large-scale catastrophic risks aren't super action-guiding, because we're victims of them despite our best efforts, which is why I didn't include them. But now I counter my own point: large-scale catastrophic risks could be action-guiding in that they indicate the importance of thinking about things like how EA coordination recovers post-catastrophe.

I'm now considering adding a fifth cluster of failure: uncaring universe failures, i.e., failures in which EA is crippled by something like a global catastrophic risk despite our best efforts. (I could also call them ruthless universe failures if I really care about my Rs.)

  • Agreement Upvote: Yeah do that
  • Disagreement Downvote: Nah

Thank you for leaving this comment :)
