Saul Munn

@ Manifest, Manifund, OPTIC
804 karma · Joined · Pursuing an undergraduate degree · Working (0-5 years)
saulmunn.com

Comments (89)

some further & updated thoughts, written in ~30 min, are below. canonical version lives here.


Here’s a frame I’ve found helpful for thinking about effective altruism:

  • When I look inside myself, I notice that I care about a lot of things.
    • You could also reasonably replace “care” with “wanting,” “preferring,” “valuing,” “desiring,” “having goals,” etc, rather than “caring.” I’m okay being loose.
    • Some examples of things I care about:
      • I want my sister to have an excellent career.
      • I’m hungry, and want some food.
      • I want to be valued by people I respect.
      • I want my dogs to have enjoyable lives.
      • (And many, many more).
    • (It’s often useful to be introspective/clear-eyed about what you care about, what that ontology looks like, which values are instrumental to which other values, etc., but I won’t be doing that here, and indeed I think it might be anti-helpful in this particular frame at this particular time. Stay with me until the end.)
  • Sort-of by definition, I want more of the things I care about. I see my life as a difficult, high-level optimization problem aimed at making decisions which, given my resources at various times, increase my values across time.
  • Some of the things I care about — like wanting food because I’m hungry — are fundamentally oriented at myself. And I take actions to do better along these axes.
    • Some examples of actions:
      • Reading a book on tax strategies
      • Learning how to cook
      • Asking people for feedback on my sartorial choices
      • etc
    • And in general, I try to be effective at getting what I want, here — that is, I aim to achieve these kinds of goals/values/preferences to as great of a degree as possible.
  • But other things I care about — like wanting my sister to have an excellent career, or my dogs to have enjoyable lives — are fundamentally oriented at others-by-their-lights. And I take actions to do better along these axes, too.
    • These motivations often look starkly different across situations.
    • For some of these altruistic motivations, it just so happens that some lovely dynamics have coalesced such that there’s an existing group of people / infrastructure / etc who have worked & are working quite hard toward helping me get what I want w/r/t some of those things I care about that are oriented at others-by-their-lights. In particular, I haven’t found any community which is more effective at helping me achieve the things I care about that are oriented at others-by-their-lights than this one.

Why do I like this frame?

  • Because it’s apparent that I care about quite a few things. It becomes evident quickly that totalizing stances toward EA are just not worth it; a bad trade; just getting less of what I want.
    • In particular, I think this kind of frame can be validating toward folks who’ve gone quite far, and repressed the values that they in-fact have in other areas of their life. (I think I was in this camp ~two years ago.)
  • There are interesting subproblems that come into clearer view, e.g.:
    • How should my resources, on the margin, be allocated across the different things that I care about?
    • What actions would get me more access to the things that I want with greater robustness (i.e. getting me closer to many different things I want, all at once)?
    • etc

Started something sorta similar about a month ago: https://saul-munn.notion.site/A-collection-of-content-resources-on-digital-minds-AI-welfare-29f667c7aef380949e4efec04b3637e9?pvs=74

What, concretely, would that involve? / What, concretely, are you proposing?

I think affecting P(things go really well | no AI takeover) is pretty tractable!

What interventions are you most excited about? Why? What are they bottlenecked on?

PurpleAir collects data from a network of private air quality sensors. Looks interesting, and possibly useful for tracking rapid changes in air quality (e.g. from a wildfire).

(written v quickly, sorry for informal tone/etc)

i think that a happy medium is getting small-group conversations (that are useful, effective, etc) of size 3–4 people. this includes 1-1s, but the vibe of a Formal, Thirty Minute One on One is a very different vibe from floating through 10–15, 3–4-person conversations in a day, each that last varying amounts of time.

  • much more information can flow with 3-4 ppl than with just 2 ppl
  • people can dip in and out of small conversations more than they can with 1-1s
  • more-organic time blocks means that particularly unhelpful conversations can end after 5-10m, and particularly helpful ones can last the duration that would be good for them to last (even many hours!)
  • 3-4 person conversations naturally select for a good 1-1. once 1-2 people have left a 3-4 person conversation, the conversation is then just a 1-1 of the two people who've engaged in the conversation longest — which seems like some evidence of their being a good match for a 1-1.

however, i think that this is operationally much harder to do for organizers than just 1-1s. my understanding is that this is much of the reason EAGs (& other conferences) do 1-1s, instead of small group conversations.

  • i think Writehaven did a mediocre job of this at LessOnline this past year (but, tbc, it did vastly better than any other piece of software i've encountered).
  • i think Lighthaven as a venue forces this sort of thing to happen, since there are so so so many nooks for 2-4 people to sit and chat, and the space is set up to make 10+ person conversations less likely to happen.

i know that The Curve (from @Rachel Weinberg) created some "Curated Conversations": they manually selected people to have predetermined conversations for some set amount of time. iirc this was typically 3-6 people for ~1h, but i could be wrong on the details. rachel: how did these end up going, relative to the cost of putting them together?

[srs unconf at lighthaven this sunday 9/21]

Memoria is a one-day festival/unconference for spaced repetition, incremental reading, and memory systems. It’s hosted at Lighthaven in Berkeley, CA, on September 21st, from 10am through the afternoon/evening.

Michael Nielsen, Andy Matuschak, Soren Bjornstad, Martin Schneider, and about 90–110 others will be there — if you use & tinker with memory systems like Anki, SuperMemo, RemNote, MathAcademy, etc, then maybe you should come!

Tickets are $80 and include lunch & dinner. More info at memoria.day.

i thought this was excellent. thank you for writing it up!

Thank you for this! It can take a lot of effort to write, edit, and publish reports like this, but they (generally) create quite a bit of value. I found this one exceptionally concrete & clear to read — well done!
