This is a special post for quick takes by ChanaMessinger. Only they can create top-level comments. Comments here also appear on the Quick Takes page and All Posts page.

Effective giving quick take for giving season

This is quite half-baked because I think my social circle contains not very many E2G folks, but I have a feeling that when EA suddenly came into a lot more funding and the word on the street was that we were “talent constrained, not funding constrained”, some people earning to give ended up pretty jerked around, or at least feeling that way. They may have picked jobs and life plans based on the earn to give model, where it would be years before the plans came to fruition, and in the middle, they lost status and attention from their community. There might have been an additional dynamic where people who took the advice the most seriously ended up deeply embedded in other professional communities, so heard about the switch later or found it harder to reconnect with the community and the new priorities.

I really don’t have an overall view on how bad all of this was, or if anyone should have done anything differently, but I do have a sense that EA has a bit of a feature of jerking people around like this, where priorities and advice change faster than the advice can be fully acted on. The world and the right priorities really do change, though; I’m not sure what should be done except to be clearer about all this, but I suspect it’s hard to properly convey “this seems like the absolute best thing in the world to do, also next year my view could be that it’s basically useless” even if you use those exact words. And maybe people have done this, or maybe it’s worth trying harder. Another approach would be something like insurance.

A frame I’ve been more interested in lately (definitely not original to me) is that earning to give is a kind of resilience / robustness-add for EA, where more donors just means better ability to withstand crazy events, even if in most worlds the small donors aren’t adding much in the way of impact. Not clear that that nets out, but “good in case of tail risk” seems like an important aspect.

A more out-there idea, sort of cobbled together from a me-idea and a Ben West-understanding, is that, among the many thinking and working styles of EAs, one axis of difference might be "willing to pivot quickly, change their mind and their life plan intensely and often" vs "not as subject to changing EA winds" (not necessarily in tension, but illustrative). Staying with E2G over many years might be related to being closer to the latter; this might be an under-rated virtue and worth leveraging.


 

I think another example of the jerking-people-around thing could be the vibes from summer 2021 to summer 2022 that unless you were exceptionally technically competent, with the skills to work on object-level stuff, you should do full-time community building, like helping run university EA groups. And then that idea lost steam this year.

Yeah, I think EA just neglects the downside of career whiplash a bit. Another instance is how EA orgs sometimes offer internships where only a tiny fraction of interns will get a job, or hire and then quickly fire staff. In a more ideal world, EA orgs would value rejected and fired applicants much more highly than non-EA orgs do, and so low-hit-rate internships and rapid firing would be much less common in EA than outside it.

Hmm, this doesn't seem obvious to me – if you care more about people's success then you are more willing to give offers to people who don't have a robust resume etc., which is going to lead to a lower hit rate than usual.

It's an interesting point about the potential for jerking people around and alienating them from the movement and its ideals. It could also (maybe) have something to do with having a lot of philosophers leading the movement. It's easier to change from writing philosophically about short-termism ("Doing Good Better") to longtermism ("What We Owe the Future"), or to writing essays about talent constraints over money constraints, than it is to psychologically and practically (although still very possibly) switch from being a mid-career global health worker or earner-to-give to working on AI alignment.

This isn't a criticism, of course it makes sense for the philosophy driving the movement to develop, just highlighting the difference in "pivotability" between leaders and some practitioners and the obvious potential for "jerking people around" collateral as the philosophy evolves.

Also, having lots of young people in the movement who haven't committed years of their life to things can make changing tacks more viable for many, and make it seem more normal, while it is perhaps harder for those who have committed a few years to something. This "willingness to pivot quickly, change their mind and their life plan intensely and often" could be as much about stage of career as it is about personality.

Besides earning-to-give people being potentially "jerked around", there are some other categories worth considering too.

  1. Global health people, as global health's relative importance within the movement seems to have slowly faded.

  2. If (just possibilities) AI becomes far less neglected in general in the next 3 to 5 years, or it becomes apparent that policy work is far more important/tractable than technical alignment, then a lot of people who have devoted their careers to these paths may be left out in the cold.

Just some very low confidence musings!

Makes sense that there would be some jerk-around in a movement that focuses a lot on prioritization and re-prioritization, with folks who are invested in finding the highest-priority thing to do. Career capital takes time to build and can't be re-prioritized at the same speed. Hopefully as EA matures, there can be some recognition that diversification is also important - because our information and processes are imperfect, there should be a few viable strategies for doing the most good going at the same time. This is like your tail-risk point. And some diversity in thought will benefit the whole movement, and thoughtful people pursuing those strategies with many years of experience will result in better thinking, mentorship, and advice to share.

I don't really see a world in which earning to give can't do a whole lot of good, even if it isn't the highest priority at the moment... unless perhaps the negative impacts of the high-earning career in question haven't been thought through or weighed highly enough. 

Perhaps making a stronger effort to acknowledge and appreciate people who acted altruistically based on our guesses at the time, before explaining why our guesses are different now, would help? (And for this particular case, even apologizing to EtG people who may have felt scorned?)

I think there's a natural tendency to compete to be "fashion-forward", but that seems harmful for EA. Competing to be fashion-forward means targeting what others will approve of (or what others think others will approve of), as opposed to the object-level question of what actually works.

Maybe the sign of true altruism in an EA is willingness to argue for boring conventional wisdom, or willingness to defy a shift in conventional wisdom if you don't think the shift makes sense for your particular career situation. 😛 (In particular, we shouldn't discount switching costs and comparative advantage. I can make a radical change to the advice I give an aimless 20-year-old, while still believing that a mid-career professional should stay on their current path, e.g. due to hedging/diminishing marginal returns to the new hot thing.)

BTW this recent post made a point that seems important:

Burnout is extremely expensive, because it does not just cost time in and of itself but can move your entire future trajectory. If I were writing practical career tips for young EAs, my first headline would be "Whatever you do, don't burn out."

Plenty of people in the EA community have burned out. A small number of us talk about it. Most people, understandably, prefer to forget and move on. Beware this and other selection effects in (a) who is successful enough that you are listening to them in the first place and (b) what those people choose to talk about.

IMO, acknowledging and appreciating the effort people put in is the best way to prevent burnout. Implying that "your career path is boring now" is the opposite. Almost everyone in EA is making some level of sacrifice to do good for others; let's thank them for that!

Thank you, whoever's reading this!

Not all "EA" things are good 
 

just saying what everyone knows out loud (copied over with some edits from a twitter thread)

Maybe it's worth just saying aloud the thing people probably know but that isn't always salient, which is that orgs (and people) who describe themselves as "EA" vary a lot in effectiveness, competence, and values, and using the branding alone will probably lead you astray.


Especially for newer or less connected people, I think it's important to make salient that there are a lot of takes (positive and negative) on the quality of thought and output of different people and orgs, which from afar might blur into "they have the EA stamp of approval".


Probably a lot of thoughtful people think that whatever seems shiny in an "everyone supports this" kind of way is bad in a bunch of ways (though possibly net good!), and that granularity is valuable.

I think you should feel very free to ask around to get these takes and see what you find - it's been a learning experience for me, for sure. Lots of this is "common knowledge" to people who spend a lot of their time around professional EAs, so it doesn't even occur to people to say it, plus it's sensitive to talk about publicly. But I think "some smart people in EA think this is totally wrongheaded" is a good prior for basically anything going on in EA.

Maybe at some point we should move to more explicit and legible conversations about each others' strengths and weaknesses, but I haven't thought through all the costs there, and there are many. Curious for thoughts on whether this would be good! (e.g. Oli Habryka talking about people with integrity here)

I would like a norm of writing some criticisms on wiki entries.

I think the wiki entry is a pretty good place for this. It's "the canonical place" as it were. I would think it's important to do this rather fairly. I wouldn't want someone to edit a short CEA article with a "list of criticisms", that (believe you me) could go on for days. And then maybe, just because nobody has a personal motivation to, nobody ends up doing this for Giving What We Can. Or whatever. Seems like the whole thing could quickly prove to be a mess that I would personally judge to be not worth it (unsure). I'd rather see someone own editing a class of orgs and adding in substantial content, including a criticism section that seeks to focus on the highest impact concerns.

Features that contribute to heated discussion on the forum

From my observations. I recognize many of these in myself. Definitely not a complete list, and possibly some of these things are not very relevant; please feel free to comment to add your own.

Interpersonal and Emotional

  • Fear, on all sides (according to me, lots of debates are bravery debates: people on "both sides" feel in the minority, fighting against a more powerful majority (and often both are true, just in different ways), and this is really important for understanding the dynamics)
    • Political backlash
    • What other EAs will think of you
    • Just sometimes the experience of being on the forum
  • Trying to protect colleagues or friends
  • Speed as a reaction to having strong opinions, or worrying that others will jump on you
  • Frustration at having to rehash arguments / protect things that should go without saying
  • Desire to gain approval / goodwill from people who might hire/fund/etc. you in the future
  • Desire to sound smart
  • Desire to gain approval / goodwill from your friends, or people you respect
  • Pattern matching (correctly or not) to conversations you’ve had before and porting over the emotional baggage from them
    • Sometimes it helps to assume the people you’re talking to are still trying to win their last argument with someone else

Low trust environment

  • Surprise that something is even a question
  • I think there's a nasty feedback loop in tense situations with low trust. (This section by Ozzie Gooen)
    • People don't communicate openly their takes on things.
    • This leads to significant misunderstanding.
    • This leads to distrust of each other and assumptions of poor intent.
    • This leads to parties doing more zero-sum or adversarial actions to each other.
    • When any communication does happen, it's inspected with a magnifying glass (because of how rare it is). It's misunderstood (because of how little communication there has been).
    • The communicators then think, "What's the point? My communication is misunderstood and treated with hostility." So they communicate less.
  • Not tracking whether you're being scrupulously truthful, out of a desire to attract less criticism
  • Not feeling like part of the decision making process, opaqueness of the reasoning of EA leadership 
  • Not understanding how and why decisions that affect you are made
  • Feeling misunderstood by the public, sometimes feeling wilfully misunderstood
     

Something to protect / Politics

  • Trying to protect a norm you think matters
  • Trying to protect other people you think are being treated unfairly
  • Trying to create the EA you want by fiat / speech acts
  • Power / game theoretical desires to have power shift in EA towards preferred distribution  
  • Speed - a sense that the conversation will get away from you otherwise

Organizational politics

  • An interest in understanding the internals of organizations you’re not part of
  • An interest in not-sharing the internals of organizations you are part of

This is so perceptive, relevant and respectfully written, thank you.

people on "both sides" feel in the minority and fighting against a more powerful majority

I've noticed this too and I think another common dynamic is where "both sides" feel like the other side obviously "started it" and so feel justified in responding in kind.

I've also noticed in myself recently this additional layer of upset that sounds something like, "We're supposed to be allies!" I think I need to keep reminding myself that this is just what people do, namely fight with people very much like them but a little bit different*. I think EA's been remarkably good at avoiding much of this over the years and obviously I wish we weren't falling prey to it quite so much right now, but I don't think it's a reason to feel extra upset.

 

*Here's my favourite dramatisation of this phenomenon.

Thanks for sharing, I think this is a very useful overview of important factors, and I encourage you to share it as a normal post (I mostly miss shortforms like this).

But "everyone knows"!

A dynamic I keep seeing is that it feels hard to whistleblow or report concerns or make a bid for more EA attention on things that "everyone knows", because it feels like there's no one to tell who doesn't already know. It’s easy to think that surely this is priced in to everyone's decision making. Some reasons to do it anyway:

  • You might be wrong about what “everyone” knows - maybe everyone in your social circle does, but not outside. I see this a lot in Bay gossip vs. London gossip - what "everyone knows" is very different in those two places
  • You might be wrong about what "everyone knows" - sometimes people use a vague shorthand, like "the FTX stuff" and it could mean a million different things, and either double illusion of transparency (you both think you know what the other person is talking about but don’t) or the pressure to nod along in social situations means that it seems like you're all talking about the same thing but you're actually not
  • Just because people know doesn't mean it's the right level of salient - people forget, are busy with other things, and so on.
  • Bystander effect: People might all be looking around assuming someone else has the concern covered because surely everyone knows and is taking the right amount of action on it.

In short, if you're acting based on the belief that there’s a thing “everyone knows”, check that that’s true. 

Relatedly: Everybody Knows, by Zvi Mowshowitz

[Caveat: There's an important balance to strike here between the value of public conversation about concerns and the energy that gets put into those public community conversations. There are reasons to take action on the above non-publicly, and not every concern will make it above people’s bar for spending the time and effort to get more engagement with it. Just wanted to point to some lenses that might get missed.]

Really intrigued by this model of thinking from Predictable Updating about AI Risk.
 

Now, you could argue that either your expectations about this volatility should be compatible with the basic Bayesianism above (such that, e.g., if you think it reasonably likely that you'll have lots of >50% days in future, you should be pretty wary of saying 1% now), or you're probably messing up. And maybe so. But I wonder about alternative models, too. For example, Katja Grace suggested to me a model where you're only able to hold some subset of the evidence in your mind at once, to produce your number-noise, and different considerations are salient at different times. And if we use this model, I wonder if how we think about volatility should change.


 

About going to a hub
A response to: https://forum.effectivealtruism.org/posts/M5GoKkWtBKEGMCFHn/what-s-the-theory-of-change-of-come-to-the-bay-over-the

For people who consider taking or end up taking this advice, some things I'd say if we were having a 1:1 coffee about it:

  • Being away from home is by its nature intense, this community and its philosophy are intense, and some social dynamics here are unusual. I want you to go in with some sense of the landscape so you can make informed decisions about how to engage.
  • The culture here is full of energy and ambition and truth telling. That's really awesome, but it can be a tricky adjustment. In some spaces, you'll hear a lot of frank discussion of talent and fit (e.g. people might dissuade you from starting a project not because the project is a bad idea but because they don't think you're a good fit for it). Grounding in your own self worth (and your own inside views) will probably be really important.
  • People both are and seem really smart. It's easy to just believe them when they say things. Remember to flag for yourself things you've just heard vs. things you've discussed at length vs. things you've really thought about yourself. Try to ask questions about the gears of people's models; ask for credences and cruxes. Remember that people disagree, including about very big questions. Notice the difference between people's offhand hot takes and their areas of expertise. We want you to be someone who can disagree with high-status people, who can think for themselves, who is in touch with reality.
  • I'd recommend staying grounded with friends/connections/family outside the EA space. Making friends over the summer is great, and some of them may be deep connections you can rely on, but as with all new friends and people, you don't have as much evidence about how those connections will develop over time or with any shifts in your relationships or situations. It's easy to get really attached and connected to people in the new space, and that might be great, but I'd keep track of your level of emotional dependency on them.
  • We use the word "community", but I wouldn't go in assuming that if you come on your own you'll find a waiting, welcoming, pre-made social scene, or that people will have the capacity to proactively take you under their wing and look out for you and your well-being, especially if there are lots of people in a similar boat. I don't want you to feel like you've been promised anything in particular here. That might be up to you to make for yourself.
  • One thing that's intense is the way that the personal and professional networks overlap, so keep that in mind as you think about how you might keep your head on straight and what support you might need if your job situation changes, you have a bad roommate experience, you date and break up with someone (maybe get a friend's take on the EV of casual hookups or dating during this intense time, given that the emotional effects might last a while and play out in your professional life - you know yourself best and how that might play out for you).
  • This might be a good place to flag that just because people are EAs doesn't mean they're automatically nice or trustworthy, pay attention to your own sense of how to interact with strangers.
  • I'd recommend reading this post on power dynamics in EA.
  • Read C.S. Lewis's The Inner Ring.
  • Feeling lonely or ungrounded or uncertain is normal. There is lots of discussion on the forum about people feeling this way and what they've done about it. There is an EA peer support Facebook group where you can post anonymously if you want. If you're in more need than that, you can contact Julia Wise or Catherine Low on the community health team.
  • As per my other comment, some of this networking is constrained by capacity. Similarly, I wouldn't go in assuming you'll find a mentor or office space or all the networking you want. By all means ask, but also give people affordance to say no, and respect their time and professional spaces and norms. Given the capacity constraints, I wouldn't be surprised if weird status or competitive dynamics formed, even among people in a similar cohort. That can be hard.
  • Status stuff in general is likely to come up; there's just a ton of the ingredients for feeling like you need to be in the room with the shiniest people and impress them. That seems really hard; be gentle with yourself if it comes up. On the other hand, that would be great to avoid, which I think happens via emotional grounding, cultivating the ability to figure out what you believe even if high status people disagree and keeping your eye on the ball.
  • This comment and this post and even many other things you can read are not all the possible information, this is a community with illegibility like any other, people all theoretically interacting with the same space might have really different experiences. See what ways of navigating it work for you, if you're unsure, treat it as an experiment.
  • Keep your eye on the ball. Remember that the goal is to make incredible things happen and help save the world. Keep in touch with your actual goals, maybe by making a plan in advance of what a great time in the Bay would look like, what would count as a success and what wouldn't. Maybe ask friends to check in with you about how that's going.
  • My guess is that having or finding projects and working hard on them or on developing skills will be a better bet for happiness and impact than a more "just hang around and network" approach (unless you approach that as a project - trying to create and develop models of community building, testing hypotheses empirically, etc). If you find that you're not skilling up as much as you'd like, or not getting out of the Bay what you'd hoped, figure out where your impact lies and do that. If you find that the Bay has social dynamics and norms that are making you unhappy and it's limiting your ability to work, take care of yourself and safeguard the impact you'll have over the course of your life.

We all want (I claim) EA to be a high trust, truth-seeking, impact-oriented professional community and social space. Help it be those things. Blurt truth (but be mostly nice), have integrity, try to avoid status and social games, make shit happen.

Trust is a two-argument function

I'm sure this must have been said before, but I couldn't find it on the forum, LW or google

I'd like to talk more about trusting X in domain Y or on Z metric rather than trusting them in general. People/orgs/etc have strengths and weaknesses, virtues and vices, and I think this granularity is more precise and is a helpful reminder to avoid the halo and horn effects, and calibrates us better on trust.

A commonly used model in the trust literature (Mayer et al., 1995) is that trustworthiness can be broken down into three factors: ability, benevolence, and integrity.

RE: domain specific, the paper incorporates this under 'ability':

The domain of the ability is specific because the trustee may be highly competent in some technical area, affording that person trust on tasks related to that area. However, the trustee may have little aptitude, training, or experience in another area, for instance, in interpersonal communication. Although such an individual may be trusted to do analytic tasks related to his or her technical area, the individual may not be trusted to initiate contact with an important customer. Thus, trust is domain specific.

There are other conceptions but many of them describe something closer to trust that is domain specific rather than generalised.

...All of these are similar to ability in the current conceptualization. Whereas such terms as expertise and competence connote a set of skills applicable to a single, fixed domain (e.g., Gabarro's interpersonal competence), ability highlights the task- and situation-specific nature of the construct in the current model.

Thanks for this! Very interesting. 

I do want to say something stronger here, where "competence" sounds like technical ability or something, but I also mean a broader conception of competence that includes "is especially clear thinking here / has fewer biases here / etc"

Strongly agree. I'm surprised I haven't seen this articulated somewhere else previously.

A place for collecting thoughts on the concept of "epistemic hazards" - contexts in which you should expect your epistemics to be worse. Not fleshed out yet. Interested in whether this has already been written about - I assume so, maybe under a different framing.

From Habryka: "Confidentiality and obscurity feel like they worsen the relevant dynamics a lot, since they prevent other people from sanity-checking your takes (though this is also much more broadly applicable). For example, being involved in crimes makes it much harder to get outside feedback on your decisions, since telling people what decisions you are facing now exposes you to the risk of them outing you. Or working on dangerous technologies that you can't tell anyone about makes it harder to get feedback on whether you are making the right tradeoffs (since doing so would usually involve leaking some of the details behind the dangerous technology). "

I hope to flesh this out at some point, but I just want to put somewhere that, by default (from personal experience and experience as an instructor and teacher), I think sleepaway experiences (retreats, workshops, camps) are potentially emotionally intense for at least 20% of participants, even entirely setting aside content (CFAR has noted this as well): being away from your normal environment, a new social scene with all kinds of status stuff to figure out, less sleep, lots of late-night conversations that can be very powerful, romantic / sexual stuff in a charged environment, a lot of closeness happening very quickly because of being around each other 24/7, and less time and space to deal with anything stressful going on outside the environment. This can be valuable in the sense of giving people a chance to fully immerse themselves, but it's a lot, especially for younger people, and it is worth organizers explicitly noting this when organizing, talking about it to participants, providing time for chillness / regrounding and being off the clock, and having people around who are easy to talk to if you're going through a hard time.

Poke holes in my systematizing outreach apologism

Re: Ick at systematizing outreach and human interactions

There's a paradox I'm confused about. If someone from a group I'm not in - let's say Christians - came to me on a college campus and smiled at me and asked about my interests and connected all of them to Jesus, and then I found out I'd been logged in a spreadsheet as "potential convert" or something, and then found the questions they'd asked me in a blog post of "Christian evangelist top questions", I might very well feel extremely weird about that (though I think less so than others would; I kind of respect the hustle).

BUT, when I think about how one gets there, I think, ok:

  1. You're a christian, you care about saving other people from hell
  2. You want to talk to people about this and get a community together + persuade people via arguments you think are in fact persuasive
  3. Other people want to do the same, you discuss approaches
  4. Other people have framings and types of questions that seem better to you than yours, so you switch
  5. You're talking to a lot of people and it's hard to keep track of what each of them said and what they wanted out of a community or worldview, so you start writing it down
  6. You don't want people to get approached for the same conversations over and over again, so you share what you've written with your fellow Christian evangelists
  7. It doesn't seem useful to anyone to keep talking to people who don't seem interested in Christianity, so you let your fellow evangelists know which folks are in that category
  8. People who seem excited about Christianity would probably get a lot out of going to conferences or reading more about it, so you recommend conferences and books and try to make it as easy as possible for them to access those, without having annoying atheists who just want to cause trouble showing up.

This is probably too charitable, there is definitely a thing where you actively want to persuade people because you think your thing is important, and you might lose interest in people who aren't excited about what you're excited about, but those things also seem reasonable to me.

A process that seems bad:

  1. Want to maximize number of EAs 
  2. Use framings, arguments and examples that you don't think hold water but work at getting people to join your group [I don't think EAs do this, I'm gesturing at the extreme other end]
  3. Make people feel weird and bad for disagreeing with you, whether on purpose or not
  4. Encourage people to repress their disagreements
  5. Get energy and labor from people that they won't endorse having given in a few years, or if they knew things you knew

3-5 seem like the worst parts here. 1 seems like a reasonable implication of their beliefs, though I do think we all have to cooperate to not destroy the commons.

2 is complicated - when people have different cruxes than you is it dishonest to talk about what should convince them based on their cruxes?

3 and 4 are bad, also hard to avoid.

5 seems really bad, and something I'd like to strongly avoid via things like transparency and some other percolating advice I might end up endorsing for people new to EA, like not letting your feet go faster than your brain, figuring out how much deference you endorse, seeing avoiding resentment as a crucial consideration in your life choices, staying grounded, etc. 

I also think the processes can feel pretty similar from the inside (therefore danger alert!) but also look similar from the outside when they aren't. I certainly have systematically underestimated the moral seriousness and earnestness of many an EA.

What's the difference?

I think people are going to want to say something like "treating people as ends", but I don't know where that obligation stops. I think I want to say something like "are you acting in the interests of the people you're talking to", but that doesn't work either - I'm not! Becoming an EA has a decent chance of being less pleasant than the other thing they were doing, and either way it's not a crux. Ex: I endorse protecting the time and energy of other people by not telling everyone who I would talk to if I had a certain question or needed help in a certain way.

I do think it's more about whether you're doing things in such a way that if they knew why you were doing them, they'd mostly not be bothered (ie passing the red face test). But that doesn't really solve the problem that digital sentience is a weird reason to do a lot of things, and there are lots of things I endorse it being inappropriate to be too explicit about.

[This is separate from the instrumental reasons to act differently because it weirds people out etc.]

.......................................................................................................................
Later musings:

Presumably the strongest argument is that these feelings are tracking a bunch of the bad stuff that's hard to point at:

  • people not actually understanding the arguments they're making
  • people not having your best interests in mind
  • people being overconfident their thing is correct
  • people not being able to address your ideas / cruxes
  • people having bad epistemics

I do think it's more about whether you're doing things in such a way that if they knew why you were doing them, they'd mostly not be bothered (ie passing the red face test). But that doesn't really solve the problem that digital sentience is a weird reason to do a lot of things, and there are lots of things I endorse it being inappropriate to be too explicit about.

Of course this is a spectrum, and we shouldn't put up a public website listing all our beliefs including the most controversial ones or something like that (no one in EA is very close to this extreme). But the implicit jump from "some things shouldn't be explicit" to "digital sentience might weird some people out so there's a decent chance we shouldn't be that explicit about it" seems very non-obvious to me, given how central it is to a lot of longtermists' worldviews, and honestly I think it wouldn't turn off many of the most promising people (in the long run; in the short run, it might get an initial "huh??" reaction).

Oh, sorry, those were two different thoughts. "Digital sentience is a weird reason to do a lot of things" is one thing: it's not most people's crux and so maybe not the first thing you say, but agreed, it should definitely come up. Separately, "there are lots of things I endorse it being inappropriate to be too explicit about", like the granularity of assessment you might be making of a person at any given time (though possibly more transparency about the fact that you're being assessed in a bunch of contexts would be very good!).

I think steps 1 and 2 in your chain are also questionable, not just 3-5.

  1. Want to maximize number of EAs 

Why do we want to maximize the number of EAs? This seems very non-obvious to me. Some people would add much more to the community than others via epistemics, culture, direct talent, etc. If we added enough of certain types of people to the community, especially too quickly, it could easily be net negative.

2. Use framings, arguments and examples that you don't think hold water but work at getting people to join your group [I don't think EAs do this, I'm gesturing at the extreme other end]

[...]

2 is complicated - when people have different cruxes than you is it dishonest to talk about what should convince them based on their cruxes?

I think sometimes/often talking about people's cruxes rather than your own is good and fine. The issue is Goodharting via an optimal message to convert as many people to EA as quickly as possible, rather than messages that will lead to a healthy community over the long run.

I think there are two separate processes going on when you think about systematizing and outreach and one of them is acceptable to systematize and the other is not.

The first process is deciding where to put your energy.  This could be deciding whether to set up a booth at a college's involvement fair, buying ads, door-to-door canvassing, etc.  It could also be deciding who to follow up with after these interactions, from the email list collected, to whose door to go to a second time, to which places to spend money on in your second round of ad buys.  These things all lend themselves to systematization. They can be data driven, and you can make forecasts on how likely each person was to respond positively and join an event, then revisit those forecasts and update them over time.

The second process is the actual interaction/conversation with people.  I think this should not be systematized and should be as authentic as possible. Some of this is a focus on treating people as individuals.  Even if there are certain techniques/arguments/framings that you find work better than others, I'd expect there to be significant variation among people where some work better than others.  A skilled recruiter would be able to figure out what the person they are talking to cares about and focus on that more, but I think this is just good social skills.  They shouldn't be focusing on optimizing for recruitment. They should try to be a likeable person that others will want to be around and that goes a long way to recruitment in and of itself.

I see what you're pointing at, I think, but I don't know that this resolves all my edge cases. For instance, where does "I know this person is especially interested in animal welfare, so talk about that" fall?

I separately don't want to optimize for recruitment by the metric of number of people, because of my model of what good additions to the community look like (e.g. I want especially thoughtful people who have a good sense of the relevant ideas and arguments, what they buy, and what their uncertainties are) - maybe your approach comes from that? Or are you saying that even if one were trying to maximize numbers, they shouldn't systematize?

Thanks so much for writing this! I think it could be a top-level post, I'm sure many others would find it very helpful.

My 2 cents:

2 is complicated - when people have different cruxes than you is it dishonest to talk about what should convince them based on their cruxes?

I think it's definitely bad to "Use framings, arguments and examples that you don't think hold water but work at getting people to join your group". If I understand correctly it can cause point 5. Also "getting people to join your group" is rarely an instrumental goal, and "getting people to join your group for the wrong reasons" is probably not that useful in the long term.

Something that I think is very important that seems missing from this is that there's a significant probability that we're wrong about important things (i.e. EA as a question).
We could be wrong about the impact of bednets, wrong about AI being the most important thing, wrong about population ethics, etc. I think it's a huge difference from the "cult" mindset.

I think I want to say something like "are you acting in the interests of the people you're talking to", but that doesn't work either - I'm not! Being an EA has a decent chance of being less pleasant than the other thing they were doing, and either way it's not a crux.

The way I think about this, on first approximation, is that I want people to work on maximising their values (and not their wellbeing). If they think altruism is not important and are solipsistic egoists and only value their own wellbeing, I don't think EA can help them. If they value the wellbeing of others then EA can help them achieve their values better.
From my personal perspective this is strongly related to the point on uncertainty: I don't want to push other people to work on my values because from an outside view I don't think my values are more important than their values, or more likely to be "correct".
I don't know if it makes any sense, really curious to hear your thoughts, you have certainly thought about this more than I.

Thanks, Lorenzo!

I think it's definitely bad to "Use framings, arguments and examples that you don't think hold water but work at getting people to join your group". If I understand correctly it can cause point 5. Also "getting people to join your group" is rarely an instrumental goal, and "getting people to join your group for the wrong reasons" is probably not that useful in the long term.

Agree about the "not holding water", I was trying to say that "addresses cruxes you don't have" might look similar to this bad thing, but I'm not totally sure that's true.

I disagree about getting people to join your group - that definitely seems like an instrumental goal, though definitely "get the relevant people to join your group" is more the thing - but different people might have different views on how relevant they need to be, or what their goal with the group is.
 

Something that I think is very important that seems missing from this is that there's a significant probability that we're wrong about important things (i.e. EA as a question).

I kind of agree here; I think there are things in EA I'm not particularly uncertain of, and while I'm open to being shown I'm wrong, I don't want to pretend more uncertainty than I have.


The way I think about this, on first approximation, is that I want people to work on maximising their values (and not their wellbeing). If they think altruism is not important and are solipsistic egoists and only value their own wellbeing, I don't think EA can help them. If they value the wellbeing of others then EA can help them achieve their values better.

I've definitely heard that frame, but it honestly doesn't resonate for me. I think some people are wrong about what values are right and arguing with me sometimes convinces them of that. I've definitely had my values changed by argumentation! Or at least values on some level of abstraction - not on the level of solipsism vs altruism, but there are many layers between that and "just an empirical question".

I don't want to push other people to work on my values because from an outside view I don't think my values are more important than their values, or more likely to be "correct"

I incorporate an inside view on my values - if I didn't think they were right, I'd do something else with my time!

Transparency for undermining the weird feelings around systematizing community building

There's a lot of potential ick as things in EA formalize and professionalize, especially in community building. People might reasonably feel uncomfortable realizing that the intro talk they heard is entirely scripted, that interactions with them have been logged in a spreadsheet, or that the events they've been to are taking them through the ideas on a path from least to most weird (all things I've heard of happening, with a range of how confident I am in them actually happening as described here). I think there's a lot to say here about how to productively engage with this feeling (and things community builders should do to mitigate it), but I also think there's a quick trick that will markedly improve things (though it won't fix all problems): transparency.

(This is an outside take from someone who doesn't do community building on college campuses or elsewhere. I think that work is hard and filled with paradoxes, and it's also possible that this is already done by default, but in the spirit of stating the obvious:)

I've been updating over and over again over the last few years that earnestness is just very powerful, and I think there are ways (though maybe they require some social / communication skills that aren't universal) to say things like (conditional on them being true):

NB: I don't think these are the best versions of these scripts, this was a first pass to point at the thing I mean

  • "This EA group is one of many around the country and the world. There is a standard intro talk that contains framings we think are exceptionally useful and helps us make sure we don't miss any of the important ideas or caveats, so we are giving it here today. I am excited to convey these core concepts, and then for the group of people who come in subsequent weeks to figure out which aspects of these they're most interested in pursuing and customizing the group to our needs."
  • "Hey, I'm excited to talk to you about EA stuff. The organizers of this group are hoping to chat with people who seem interested and not be repetitive or annoying to you, would it be ok with you if I took some notes on our conversation that other organizers can see?"
  • "EA ideas span a huge gamut from really straightforward to high-context / less conventional. These early dinners start with the less weird ones because we think the core ideas are really valuable to the world whether or not people buy some of the other potential implications. Later on, with more context, we'll explore a wider range."
  • "I get that the perception that people only get funding or help if they seem interested in EA is uncomfortable / seems bad. From my perspective, I'm engaged in a particular project with my EA time / volunteer time / career / donations / life, and I'm excited to find people who are enthused by that same project and want to work together on it. If people find this is not the project for them, that's a great thing to have learned, and I'm excited for them to find people to work with on the things they care about most."

Not everything needs to be explicit, but this at least tracks whether you're passing the red face test.

I think that being transparent in this way requires:

  • Some communication skills to convey things like the above with nuance and grace
  • Being able to track when explicitness is bad or unhelpful
  • Some social skills in tracking what the other person cares about and is looking for in conversations
  • Non-self-hatingness: Thinking that you are doing something valuable, that matters to you, that you don't have to apologize for caring about, along with its implications
  • A willingness to be honest and earnest about the above.

Things I'd like to try more to make conversations better

  • Running gather towns / town halls when there's a big conversation happening in EA
    • I thought the Less Wrong gather town on FTX was good and I'm glad it happened
    • I've run a few internal-to-CEA gather towns on tricky issues so far and I'm glad I did, though they didn't tend to be super well attended. 
  • Offering calls / conversations when a specific conversation is getting more heated or difficult
    • I haven't done this yet myself, but it's been on my mind for a while, and I was just given this advice in a work context, which reminded me. 
    • If a back and forth on the forum isn't going well, it seems really plausible that having a face to face call will make that conversation go better and give space for people to be better understood. (Offering to mediate one for other people I'm more skeptical of, especially without having any experience in mediation, though I do think there can be value added here.)
  • An off-beat but related idea
    • Using podcasts where people can explain their thinking and decision making, including in situations where they wouldn't do the same thing again, since podcasts allow for longer, more natural explanations

Ambitious Altruism

When I was doing a bunch of explaining of EA and my potential jobs during my most recent job search to friends, family and anyone else, one framing I landed on I found helpful was "ambitious altruism." It let me explain why just helping one person didn't feel like enough without coming off as a jerk (i.e. "I want to be more ambitious than that" rather than "that's not effective").

It doesn't have the maximizing quality, but it doesn't not have it either, since if there's something more you can do with the same resources, there's room to be more ambitious.

Some experimental thoughts on how to moderate / facilitate sensitive conversations

Drawn from talking to a few people who I think do this well. Written in a personal capacity.

  1. Go meta - talk about where you’re at and what you’re grappling with. Do circling-y things. Talk about your goals for the discussion.
    1. Be super clear about the way in which your thing is or isn’t a safe space and for what
    2. Be super clear about what bayesian updates people might make
  2. Consider starting with polls to get calibrated on where people are
  3. Go meta on what the things people are trying to protect are, or what the confusion you think is at play is
  4. Aim first to create common knowledge
  5. Distinguish between what’s thinkable and what’s sayable in this space and why that distinction matters
  6. Reference relevant norms around cooperative spaces or whatever space you’ve set up here
    1. If you didn’t set up specific norms but want to now, apologize for not doing so until that point in a “no fault up to this point but no more” way
  7. If someone says something you wish they hadn’t:
    1. Do many of the above 
    2. Figure out what your goals are - who are you trying to protect / make sure you have their back
    3. If possible, strategize with the person/people who are hurt, get their feedback (though don’t precommit to doing what they say)
    4. Have 1:1s with people as able
  8. If you want to dissociate from someone or criticize them, explain the history and your connection to them, don't memoryhole stuff, give people context for understanding
    1. Display courage.
  9. Be specific about what you're criticizing
  10. Cheerlead and remind yourself and others of the values you're trying to hold to
  11. People are mad for reasonable and unreasonable reasons, you can speak to the reasonable things you overlap on with strength

In my experience, the most important parts of a sensitive discussion are to display kindness, empathy, and common ground.

It's disheartening to write something on a sensitive topic based on upsetting personal experiences, only to be met with seemingly stonehearted critique or dismissal. Small displays of empathy and gratitude can go a long way here, making people feel like their honesty and vulnerability have been rewarded rather than punished.

I think your points are good, but if deployed wrongly could make things worse.  For example, if a non-rationalist friend of yours tells you about their experiences with harassment,  immediately jumping into a bayesian analysis of the situation is ill-advised and may lose you a friend. 

(Written in a personal capacity) Yeah, agree, and your comment made me realize that some of these are actually my experimental thoughts on something like "facilitating / moderating" sensitive conversations. I don't know if what you're pointing at is common knowledge, but I'd hope it is, and in my head it's firmly in "nonexperimental", standard and important wisdom (as contained, I believe, in some other written advice on this for EA group leaders and others who might be in this position).

From my perspective, a hard thing is how much work is done by tone and presence - I know people who can do the "talk about a bayesian analysis of harassment" with non-rationalists with sensitivity, warmth, care, and people who do "displaying kindness, empathy and common ground" in a way that leaves people more tense than before. But that doesn't mean the latter isn't generally better advice, I think it probably is for most people - and I hope it's in people's standard toolkits.

Seems like there's room in the ecosystem for a weekly update on AI that does a lot of contextualization / here's where we are on ongoing benchmarks. I'm familiar with:
 

  • a weekly newsletter on AI media (that has a section on important developments that I like)
  • Jack Clark's substack, which I haven't read much of but seems more about going in depth on new developments (though it does have a "Why this matters" section). Also, I love this post in particular for the way it talks about humility and confusion.
  • Doing Westminster Better on UK politics and AI / EA, which seems really good but again I think goes in depth on new stuff
  • I could imagine spending time on aggregation of prediction markets for specific topics, which Metaculus and Manifold are doing better and better over time.

I'm interested in something that says "we're moving faster / less fast than we thought we would 6 months ago" or "this event is surprising because" and kind of gives a "you are here" pointer on the map. This Planned Obsolescence post called "Language models surprised us" I think is the closest I've seen.

Seems hard, and maybe not worth it enough to do; maybe it's already happening and I'm not familiar with it (would love to hear!). But it's what I'd personally find most useful, and I suspect I'm not alone.

I think I agree, but also want to flag this list in case you (or others) haven't seen it: List of AI safety newsletters and other resources

Another newsletter(?) that I quite like is Zvi's 

Wout Schellart, Jose Hernandez-Orallo, and Lexin Zhou have started an AI evaluation digest, which includes relevant benchmark papers etc. It's pretty brief, but they're looking for more contributors, so if you want to join in and help make it more comprehensive/contextualised, you should reach out!
https://groups.google.com/g/ai-eval/c/YBLo0fTLvUk

Less directly relevant, but Harry Law also has a new newsletter in the Jack Clark style, but more focused on governance/history/lessons for AI:
https://learningfromexamples.substack.com/p/the-week-in-examples-3-2-september

Been flagging more often lately that decision-relevant conversations work poorly if only A is sayable (including "yes we should have this meeting") and not-A isn't.

At the same time, I've been noticing the skill of saying not-A with grace and consideration, breezily and not with "I know this is going to be unpopular, but..." energy. It's an extremely useful skill.

Engaging seriously with the (nontechnical) arguments for AI Risk: One person's core recommended reading list
(I saw this list in a private message from a more well-read EA than me and wanted to write it up, it's not my list since I haven't read most of these, but I thought it was better to have it be public than not):

If still unconvinced, might recommend (as examples of arguments most uncorrelated with the above)

 

For going deeper:

Might as well put a list of skilling up possibilities (probably this has been done before)

Correct me if there are mistakes here

Template for EA Calls

Over the last six months, I've been having more and more calls with people interested in EA and EA careers. Sometimes I'm one of their first calls because they know me from social things, and sometimes I'm an introduction someone else (eg at 80k) has made. I've often found that an hour, my standard length for a call, feels very short. Sometimes I just chat, sometimes I try to have more of a plan.  Of course a lot depends on context, but I'm interested in having a bit of a template so that I can be maximally helpful to them in limited time (I can't be a career coach for everyone I'm introduced to) and with the specifics that I can give (not trying to replicate / replace eg 80k advising).

Posting so that people can give advice / help me with it and/or use it if it seems helpful.

Template

  • What's your relationship to EA?
    • I think I currently either spend no time on this or way too much time. I'm hoping this question (rather than "how did you get involved with EA?" or "what do you know about EA") will keep it short but useful. I'm also considering asking this more as a matter of course before the call.
  • What are your current options / thinking?
    • This is a place where, for people early in their thinking (which is most of the people I talk to), I tend to recommend a 5-minute timer to generate more options and advise taking a more exploratory attitude
    • Frequently recommend looking for small experiments to find out what they might like or are good at
    • I tend to recommend developing a view on which of the options are best by the metrics they care about, including impact
    • When relevant, I want to make a habit of recommending useful readings / podcasts
    • Sometimes trying to raise people's ambitions (https://forum.effectivealtruism.org/posts/dMNFCv7YpSXjsg8e6/how-to-raise-others-aspirations-in-17-easy-steps)
  • I want to find out what sets them apart / their skillset, but I don't currently have a good way of doing this that doesn't feel interview-y if I don't already know them
  • The way I think I can often be most helpful, especially for people really new to institutional EA, is to tell them about the landscape
    • giving an overview of orgs, foundations, and types of work
    • tell them what I know about who else is working on things they're excited about
    • sometimes that some of their interests aren't a focus of most EA work / money
    • asking about their views on longtermism
    • what people think are the main bottlenecks and do they have an interest in developing those skills
      • management
      • ops
      • vetting / grantmaking
  • If they're talking to me specifically about community building / outreach, I give my view on the landscape there: what's happening, what people are excited about, etc.
  • I also have given my thoughts on how to make 80k advising most helpful
    • Be honest about your biggest uncertainties and what you want help from them on
    • Really try to generate options
    • I wonder what else I can say here

It's possible I should ask more about the cause areas they care about - that feels like it's such a big conversation that it doesn't fit in an hour, but maybe it's really crucial. Don't know! Still figuring it out.

Scattered Takes and Unsolicited Advice (new ones added to the top)

  • If you care about being able to do EA work long-term, it's worth paying pretty significant costs to avoid resenting EA. Take that into account when you think about what decisions you're making and with what kind of sacrifice.
  • "Say more?" and "If your thoughts are in a pile, what's on top?" are pretty powerful conversational moves, in my experience
  • You can really inspire people to do a bunch in some cheap ways
  • A lot of our feelings and reactions come reactively / contextually / on the margins - people feel a certain way e.g. when they are immersed in EA spaces and sometimes have critiques, and when they are in non-EA spaces, they miss the good things about EA spaces. This seems normal and healthy and a good way to get multiple frames on something, but also good to keep in mind.
  • People who you think of as touchstones of thinking a particular thing may change their minds or not be as bought in as you'd expect
  • The world has so much detail
  • One of the most valuable things more senior EAs can do for junior EAs is contextualize: EA has had these conversations before, the thing you experienced was a 20th/50th/90th percentile experience, other communities do/don't go through similar things etc.
  • One of the best things we can all do for each other is push on expanding option sets, and ask questions that get us to think more about what we think and what we should do. 
  • About going to hubs to network
  • When you're new to EA, it's very exciting: Don't let your feet go faster than your brain - know what you're doing and why. It's not good for you or the world if in two years you look around and don't believe any of it and don't know how you got there and feel tricked or disoriented.
  • You're not alone in feeling overwhelmed or like an imposter
  • If you're young in EA: Don't go into community building just because the object level feels scarier and you don't have the skills yet
  • Networking is great, but it's not the only form of agency / initiative taking
  • Lots of ick feelings about persuasion and outreach get better if you're honest and transparent
  • Lots of ick feelings about all kinds of things are tracking a lot of different things at once: people's vibes, a sense of honesty or dishonesty, motivated reasoning, underlying empirical disagreements - it's good to track those things separately
  • Ask for a reasonable salary for your work; it's not as virtuous as you think to work for nothing
    • Sets bad norms for other people who can't afford to do that
    • Makes it more like volunteering so you might not take the work as seriously
  • Don't be self-hating about EA; figure out what you believe and don't feel bad about believing it and its implications and acting in the world in accordance with it
  • There are sides of spectra like pro-spending money or longtermism or meta work that aren't just "logic over feelings", they have feelings too.
  • Earnestness is shockingly effective - if you say what you think and why you think it (including "I read the title of a youtube video"), if you say when you don't know what to do and what you're confused about, if you say what you're confident in and why, if you say how you feel and why, I find things (at least in this social space) go pretty damn well, way better than I would have expected.

Habits of thought I'm working on

  • Answer questions specifically as asked, looping back into my models of the world
    • I sometimes have a habit of modelling questions more as moves in a game, and I play the move that supports the overall outcome of the conversation I'm going for, which doesn't support truth-seeking
    • I also sometimes say things using some heuristics, and answer other questions with other heuristics and it takes work to notice that they're not consistent
  • When I hear a claim, think about whether I've observed it in my life
  • Notice what "fads" I'm getting caught up in
  • Trying to be more gearsy, less heuristics-y. What's actually good or bad about this, what do they actually think not just what general direction are they pulling the rope, etc
  • Noticing when we're arguing about the wrong thing, when we e.g. should be arguing about the breakdown of what percent one thing versus another
  • Noticing when we're skating over a real object level disagreement
  • Noticing whether I feel able to think thoughts
  • Noticing when I'm only consuming / receiving ideas but not actually thinking
  • Listing all the things that could be true about something
  • More predictions / forecasts / models
  • More often looking things up / tracking down a fact rather than sweeping it by or deciding I don't know
  • Paraphrasing a lot and asking if I've got things right
  • "Is that a lot?" - putting numbers in context
  • If there's a weird fact from a study, you can question the study as well as the fact
  • Say why you think things, including "I saw a headline about this"

Habits of thought I might work on someday

  • Reversal tests: reversing every statement to see if the opposite also seems true

 

More I like: https://twitter.com/ChanaMessinger/status/1287737689849176065

Conversational moves in EA / Rationality that I like for epistemics
 

  • “So you are saying that”
  • “But I’d change my mind if”
  • “But I’m open to push back here”
  • “I’m curious for your take here”
  • “My model says”
  • “My current understanding is…”
  • “...I think this because…”
  • “...but I’m uncertain about…”
  • “What could we bet on?”
  • “Can you lay out your model for me?”
  • “This is a butterfly idea”
  • “Let’s do a babble”
  • “I want to gesture at something / I think this gestures at something true”

Everyone who works with young people should have two pieces of paper in their pockets: 

In one: "they look up to you and remember things you say years later"

In the other: "have you ever tried to convince a teenager of anything?"

Take out as needed.

Reference: https://www.gesher-jds.org/2016/04/15/a-coat-with-two-pockets/

A point I haven't seen is that the "Different Worlds" hypothesis implies that if you consistently have an experience, especially interpersonally, you should on the margin expect it to happen more often relative to what conventional wisdom says.

Example: If your reports or partners consistently get angry when you do X, then there's a decent chance that even if that isn't all that common, you're inadvertently selecting for people for whom it is, so don't update as far down on the likelihood of it happening again as you otherwise might.

I've been thinking about and promulgating EA as nerdsniping (https://chanamessinger.com/blog/ea-as-nerdsniping) as a good intro: it brings in curious people who are intellectually interested in the questions and can come up with their own ideas, often in contrast to presenting EA as an amazing moral approach. But EAGxOxford pushed me to update: a huge part of the appeal is that EA / rationality give seeking people a worldview that makes a lot of sense and is more consistent and honest than many others they encounter. That's good to know from an outreach perspective, and it has implications for how deeply people might get into EA / rationality if that's what they're looking for in particular, which might point to being wary if you don't want to be in some sense "too convincing".

My Recommended Reading About Epistemics

For the content, but also because the vibe it immerses me in, I think, makes me better

About going to a hub to do networking:
A response to: https://forum.effectivealtruism.org/posts/M5GoKkWtBKEGMCFHn/what-s-the-theory-of-change-of-come-to-the-bay-over-the

I think there's a lot of truth to the points made in this post.

I also think it's worth flagging that several of them (networking with a certain subset of EAs, asking for 1:1 meetings with them, being in certain office spaces) are at least somewhat zero sum, such that the more people take this advice, the less available these things will actually be to each person, and possibly less available on net if demand starts to overwhelm. (I can also imagine increasingly unhealthy or competitive dynamics forming, but I'm hoping that doesn't happen!)

Second flag is that I don't know how many people reading this can expect an experience similar to yours. They may, but they may not end up being connected in all the same ways, and I want people to go in knowing that risk and to decide whether it's worth it for them.

On the other side, people taking this advice can do a lot of great networking and creating a common culture of ambition and taking ideas seriously with each other, without the same set of expectations around what connections they'll end up making.

Third flag is that I have an un-fleshed-out worry that this advice funges against doing things outside Berkeley/SF that build more valuable career capital for the future, whether for doing EA-aligned things outside of EA or for bringing valuable skills and knowledge back to EA (like, will we wish in 5 years that EAs had more outside professional experience to bring domain knowledge and legitimacy to EA projects, rather than a resume full of EA things?). This concern would need to be fleshed out empirically and will vary a lot in applicability by person.

(I work on CEA's community health team but am not making this post on behalf of that team)
