Saul Munn

@ Manifest, Manifund, OPTIC
731 karma · Joined · Pursuing an undergraduate degree · Working (0-5 years)
saulmunn.com

Comments (77)

fwiw i instinctively read it as the 2nd, which i think is caleb's intended reading

why do i find myself less involved in EA?

epistemic status: i timeboxed the below to 30 minutes. it's been bubbling for a while, but i haven't spent that much time explicitly thinking about this. i figured it'd be a lot better to share half-baked thoughts than to keep it all in my head — but accordingly, i don't expect to reflectively endorse all of these points later down the line. i think it's probably most useful & accurate to view the below as a slice of my emotions, rather than a developed point of view. i'm not very keen on arguing about any of the points below, but if you think you could be useful toward my reflecting processes (or if you think i could be useful toward yours!), i'd prefer that you book a call to chat more over replying in the comments. i do not give you consent to quote my writing in this short-form without also including the entirety of this epistemic status.

  • 1-3 years ago, i was decently involved with EA (helping organize my university EA program, attending EA events, contracting with EA orgs, reading EA content, thinking through EA frames, etc).
  • i am now a lot less involved in EA.
    • e.g. i currently attend uc berkeley, and am ~uninvolved in uc berkeley EA
    • e.g. i haven't attended a casual EA social in a long time, and i notice myself ughing in response to invites to explicitly-EA socials
    • e.g. i think through impact-maximization frames with a lot more care & wariness, and have plenty of other frames in my toolbox that i use to a greater relative degree than the EA ones
    • e.g. the orgs i find myself interested in working for seem to do effectively altruistic things by my lights, but seem (at closest) to be EA-community-adjacent and (at furthest) actively antagonistic to the EA community
  • (to be clear, i still find myself wanting to be altruistic, and wanting to be effective in that process. but i think describing my shift as merely moving a bit away from the community would be underselling the extent to which i've also moved a bit away from EA's frames of thinking.)
  • why?
    • a lot of EA seems fake
      • the stuff — the orientations — the orgs — i'm finding it hard to straightforwardly point at, but it feels kinda easy for me to notice ex-post
    • there's been an odd mix of orientations toward [ aiming at a character of transparent/open/clear/etc ] alongside [ taking actions that are strategic/instrumentally useful/best at accomplishing narrow goals... that also happen to be mildly deceptive, or lying by omission, or otherwise somewhat slimy/untrustworthy/etc ]
      • the thing that really gets me is the combination of an implicit (and sometimes explicit!) request for deep trust alongside a level of trust that doesn't live up to that expectation.
        • it's fine to be in a low-trust environment, and also fine to be in a high-trust environment; it's not fine to signal one and be the other. my experience of EA has been that people have generally behaved extremely well/with high integrity and with high trust... but not quite as well & as high as what was written on the tin.
      • for a concrete ex (& note that i totally might be screwing up some of the details here, please don't index too hard on the specific people/orgs involved): when i was participating in — and then organizing for — brandeis EA, it seemed like our goal was (very roughly speaking) to increase awareness of EA ideas/principles, both via increasing depth & quantity of conversation and via increasing membership. i noticed a lack of action/doing-things-in-the-world, which felt kinda annoying to me... until i became aware that the action was "organizing the group," and that some of the organizers (and higher up the chain, people at CEA/on the Groups team/at UGAP/etc) believed that most of the impact of university groups comes from recruiting/training organizers — that the "action" i felt was missing wasn't missing at all, it was just happening to me, not from me. i doubt there was some point where anyone said "oh, and make sure not to tell the people in the club that their value is to be a training ground for the organizers!" — but that's sorta how it felt, both on the object-level and on the deception-level.
      • this sort of orientation feels decently representative of the 25th percentile end of what i'm talking about.
    • also some confusion around ethics/how i should behave given my confusion/etc
      • importantly, some confusions around how i value things. it feels like looking at the world through an EA frame blinds me to things that i actually do care about, and blinds me to the fact that i'm blinding myself. i think it's taken me a while to know what that feels like, and i've grown to find that blinding & meta-blinding extremely distasteful, and a signal that something's wrong.
        • some of this might merely be confusion about orientation, and not ethics — e.g. it might be that in some sense the right doxastic attitude is "EA," but that the right conative attitude is somewhere closer to (e.g.) "embody your character — be kind, warm, clear-thinking, goofy, loving, wise, [insert more virtues i want to be here]. oh and do some EA on the side, timeboxed & contained, like when you're donating your yearly pledge money."
  • where now?
    • i'm not sure! i could imagine the pendulum swinging more in either direction, and want to avoid doing any further prediction about where it will swing for fear of that prediction interacting harmfully with a sincere process of reflection.
    • i did find writing this out useful, though!

Thanks for the clarification — I've sent a similar comment on the Open Phil post, to get confirmation from them that your reading is accurate :)

How will this change affect university groups currently supported by Open Philanthropy that are neither under the banner of AI safety nor EA? The category on my mind is university forecasting clubs, but I'd also be keen to get a better sense of this for e.g. biosecurity clubs, rationality clubs, etc.

(I originally posted this comment under the Uni Groups Team's/Joris's post (link), but Joris didn't seem to have a super conclusive answer, and directed me to this post.)

(also — thanks for taking the time to write this out & share it. these sorts of announcement posts don't just magically happen!)

[epistemic status: i've spent about 5-20 hours thinking by myself and talking with rai about my thoughts below. however, i spent fairly little time actually writing this, so the literal text below might not map to my views as well as other comments of mine.]

IMO, Sentinel is one of the most impactful uses of marginal forecasting money.

some specific things i like about the team & the org thus far:

  • nuno's blog is absolutely fantastic — deeply excellent; there are few i'd recommend more highly
  • rai is responsive (both in terms of time and in terms of feedback) and extremely well-calibrated across a variety of interpersonal domains
  • samotsvety is, far and away, the best forecasting team in the world
  • sentinel's weekly newsletter is my ~only news source
    • why would i seek anything but takes from the best forecasters in the world?
    • i think i'd be willing to pay at least $5/week for this, though i expect many folks in the EA community would be happy to pay 5x-10x that. it's currently free (!!)
    • i'd recommend skimming whatever their latest newsletter was to get a sense of the content/scope/etc
  • linch's piece sums up my thoughts around strategy pretty well

i have the highest crux-uncertainty and -elasticity around the following, in (extremely rough) order of impact on my thought process:

  • do i have higher-order philosophical commitments that swamp whatever Sentinel does? (for ex: short timelines, animal suffering, etc)
  • will Sentinel be able to successfully scale up?
  • conditional on Sentinel successfully forecasting a relevant GCR, will Sentinel successfully prevent or mitigate the GCR?
  • will Sentinel be able to successfully forecast a relevant GCR?
  • how likely is the category of GCRs that sentinel might mitigate to actually come about? (vs. no GCRs, or GCRs that are totally unpredictable/unmitigable)

i’ll add $250, with exactly the same commentary as austin :)

to the extent that others are also interested in contributing to the prize pool, you might consider making a manifund page. if you’re not sure how to do this or just want help getting started, let me (or austin/rachel) know!

also, you might adjust the “prize pool” amount at the top of the metaculus page — it currently reads “$0.”
