
About going to a hub
A response to: https://forum.effectivealtruism.org/posts/M5GoKkWtBKEGMCFHn/what-s-the-theory-of-change-of-come-to-the-bay-over-the

For people who consider taking or end up taking this advice, some things I'd say if we were having a 1:1 coffee about it:

  • Being away from home is by its nature intense, this community and its philosophy are intense, and some social dynamics here are unusual. I want you to go in with some sense of the landscape so you can make informed decisions about how to engage.
  • The culture here is full of energy and ambition and truth-telling. That's really awesome, but it can be a tricky adjustment. In some spaces, you'll hear a lot of frank discussion of talent and fit (e.g. people might dissuade you from starting a project not because the project is a bad idea but because they don't think you're a good fit for it). Grounding in your own self-worth (and your own inside views) will probably be really important.
  • People both are and seem really smart. It's easy to just believe them when they say things. Remember to flag for yourself things you've just heard versus things you've discussed at length versus things you've really thought about yourself. Try to ask questions about the gears of people's models, and ask for credences and cruxes. Remember that people disagree, including about very big questions. Notice the difference between people's offhand hot takes and their areas of expertise. We want you to be someone who can disagree with high-status people, who can think for themselves, who is in touch with reality.
  • I'd recommend staying grounded with friends/connections/family outside the EA space. Making friends over the summer is great, and some of them may be deep connections you can rely on, but as with all new friends and people, you don't have as much evidence about how those connections will develop over time or with any shifts in your relationships or situations. It's easy to get really attached and connected to people in the new space, and that might be great, but I'd keep track of your level of emotional dependency on them.
  • We use the word "community," but I wouldn't go in assuming that if you come on your own you'll find a waiting, welcoming, pre-made social scene, or that people will have the capacity to proactively take you under their wing and look out for you and your well-being, especially if there are lots of people in a similar boat. I don't want you to feel like you've been promised anything in particular here. That might be up to you to make for yourself.
  • One thing that's intense is the way that the personal and professional networks overlap, so keep that in mind as you think about how you might keep your head on straight and what support you might need if your job situation changes, you have a bad roommate experience, or you date and break up with someone (maybe get a friend's take on the EV of casual hookups or dating during this intense time, given that the emotional effects might last a while and play out in your professional life - you know yourself best and how that might play out for you).
  • This might be a good place to flag that just because people are EAs doesn't mean they're automatically nice or trustworthy; pay attention to your own sense of how to interact with strangers.
  • I'd recommend reading this post on power dynamics in EA.
  • Read C.S. Lewis's The Inner Ring.
  • Feeling lonely or ungrounded or uncertain is normal. There is lots of discussion on the forum about people feeling this way and what they've done about it. There is an EA peer support Facebook group where you can post anonymously if you want. If you're in more need than that, you can contact Julia Wise or Catherine Low on the community health team.
  • As per my other comment, some of this networking is constrained by capacity. Similarly, I wouldn't go in assuming you'll find a mentor or office space or all the networking you want. By all means ask, but also give people the affordance to say no, and respect their time and professional spaces and norms. Given the capacity constraints, I wouldn't be surprised if weird status or competitive dynamics formed, even among people in a similar cohort. That can be hard.
  • Status stuff in general is likely to come up; there's just a ton of the ingredients for feeling like you need to be in the room with the shiniest people and impress them. That seems really hard; be gentle with yourself if it comes up. On the other hand, it would be great to avoid, which I think happens via emotional grounding, cultivating the ability to figure out what you believe even when high-status people disagree, and keeping your eye on the ball.
  • This comment and this post and even many other things you can read are not all the possible information; this is a community with illegibility like any other, and people all theoretically interacting with the same space might have really different experiences. See what ways of navigating it work for you, and if you're unsure, treat it as an experiment.
  • Keep your eye on the ball. Remember that the goal is to make incredible things happen and help save the world. Keep in touch with your actual goals, maybe by making a plan in advance of what a great time in the Bay would look like, what would count as a success and what wouldn't. Maybe ask friends to check in with you about how that's going.
  • My guess is that having or finding projects and working hard on them or on developing skills will be a better bet for happiness and impact than a more "just hang around and network" approach (unless you approach that as a project - trying to create and develop models of community building, testing hypotheses empirically, etc). If you find that you're not skilling up as much as you'd like, or not getting out of the Bay what you'd hoped, figure out where your impact lies and do that. If you find that the Bay has social dynamics and norms that are making you unhappy and it's limiting your ability to work, take care of yourself and safeguard the impact you'll have over the course of your life.

We all want (I claim) EA to be a high trust, truth-seeking, impact-oriented professional community and social space. Help it be those things. Blurt truth (but be mostly nice), have integrity, try to avoid status and social games, make shit happen.

I hope to flesh this out at some point, but I just want to put somewhere that by default (from personal experience and experience as an instructor and teacher) I think sleepaway experiences (retreats, workshops, camps) are potentially emotionally intense for at least 20% of participants, even entirely setting aside content (CFAR has noted this as well): being away from your normal environment, a new social scene with all kinds of status stuff to figure out, less sleep, lots of late-night conversations that can be very powerful, romantic / sexual stuff in a charged environment, a lot of closeness happening very quickly because of being around each other 24/7, and less time and space to deal with anything stressful going on outside the environment. This can be valuable in the sense of giving people a chance to fully immerse themselves, but it's a lot, especially for younger people, and it's worth organizers explicitly noting this when organizing, talking about it with participants, providing time for chillness / regrounding and being off the clock, and having people around who are easy to talk to if you're going through a hard time.

Poke holes in my systematizing outreach apologism

Re: Ick at systematizing outreach and human interactions

There's a paradox I'm confused about. If someone from a group I'm not in - let's say Christians - came to me on a college campus, smiled at me, asked about my interests and connected all of them to Jesus, and then I found out I'd been logged in a spreadsheet as "potential convert" or something, and then found the questions they'd asked me in a "Christian evangelist top questions" blog post, I might very well feel extremely weird about that (though I think less than others would; I kind of respect the hustle).

BUT, when I think about how one gets there, I think, ok:

  1. You're a Christian; you care about saving other people from hell
  2. You want to talk to people about this and get a community together + persuade people via arguments you think are in fact persuasive
  3. Other people want to do the same, you discuss approaches
  4. Other people have framings and types of questions that seem better to you than yours, so you switch
  5. You're talking to a lot of people and it's hard to keep track of what each of them said and what they wanted out of a community or worldview, so you start writing it down
  6. You don't want people to get approached for the same conversations over and over again, so you share what you've written with your fellow Christian evangelists
  7. It doesn't seem useful to anyone to keep talking to people who don't seem interested in Christianity, so you let your fellow evangelists know which folks are in that category
  8. People who seem excited about Christianity would probably get a lot out of going to conferences or reading more about it, so you recommend conferences and books and try to make it as easy as possible for them to access those, without having annoying atheists who just want to cause trouble showing up.

This is probably too charitable; there is definitely a thing where you actively want to persuade people because you think your thing is important, and you might lose interest in people who aren't excited about what you're excited about, but those things also seem reasonable to me.

A process that seems bad:

  1. Want to maximize number of EAs 
  2. Use framings, arguments and examples that you don't think hold water but work at getting people to join your group [I don't think EAs do this, I'm gesturing at the extreme other end]
  3. Make people feel weird and bad for disagreeing with you, whether on purpose or not
  4. Encourage people to repress their disagreements
  5. Get energy and labor from people that they won't endorse having given in a few years, or if they knew things you knew

3-5 seem like the worst parts here. 1 seems like a reasonable implication of their beliefs, though I do think we all have to cooperate to not destroy the commons.

2 is complicated - when people have different cruxes than you is it dishonest to talk about what should convince them based on their cruxes?

3 and 4 are bad, also hard to avoid.

5 seems really bad, and something I'd like to strongly avoid via things like transparency and some other percolating advice I might end up endorsing for people new to EA, like not letting your feet go faster than your brain, figuring out how much deference you endorse, seeing avoiding resentment as a crucial consideration in your life choices, staying grounded, etc. 

I also think the processes can feel pretty similar from the inside (therefore danger alert!) and can also look similar from the outside even when they aren't the same underneath. I certainly have systematically underestimated the moral seriousness and earnestness of many an EA.

What's the difference?

I think people are going to want to say something like "treating people as ends", but I don't know where that obligation stops. I think I want to say something like "are you acting in the interests of the people you're talking to", but that doesn't work either - I'm not! Being an EA has a decent chance of being less pleasant than the other thing they were doing, and either way it's not a crux. For example: I endorse protecting the time and energy of other people by not telling everyone who I would talk to if I had a certain question or needed help in a certain way.

I do think it's more about whether you're doing things in such a way that if they knew why you were doing them, they'd mostly not be bothered (ie passing the red face test). But that doesn't really solve the problem that digital sentience is a weird reason to do a lot of things, and there are lots of things I endorse it being inappropriate to be too explicit about.

[This is separate from the instrumental reasons to act differently because it weirds people out etc.]

.......................................................................................................................
Later musings:

Presumably the strongest argument is that these feelings are tracking a bunch of the bad stuff that's hard to point at:

  • people not actually understanding the arguments they're making
  • people not having your best interests in mind
  • people being overconfident their thing is correct
  • people not being able to address your ideas / cruxes
  • people having bad epistemics

I do think it's more about whether you're doing things in such a way that if they knew why you were doing them, they'd mostly not be bothered (ie passing the red face test). But that doesn't really solve the problem that digital sentience is a weird reason to do a lot of things, and there are lots of things I endorse it being inappropriate to be too explicit about.

Of course this is a spectrum, and we shouldn't put up a public website listing all our beliefs, including the most controversial ones, or something like that (no one in EA is very close to this extreme). But the implicit jump from "some things shouldn't be explicit" to "digital sentience might weird some people out so there's a decent chance we shouldn't be that explicit about it" seems very non-obvious to me, given how central it is to a lot of longtermists' worldviews, and honestly I think it wouldn't turn off many of the most promising people (in the long run; in the short run, it might get an initial "huh??" reaction).

Oh, sorry, those were two different thoughts. "Digital sentience is a weird reason to do a lot of things" is one thing - it's not most people's crux and so maybe not the first thing you say, but I agree it should definitely come up. And separately, "there are lots of things I endorse it being inappropriate to be too explicit about" - things like the granularity of the assessment you might be making of a person at any given time (though possibly more transparency about the fact that you're being assessed in a bunch of contexts would be very good!).

I think steps 1 and 2 in your chain are also questionable, not just 3-5.

  1. Want to maximize number of EAs 

Why do we want to maximize the number of EAs? This seems very non-obvious to me. Some people would add much more to the community than others via epistemics, culture, direct talent, etc. If we added enough of certain types of people to the community, especially too quickly, it could easily be net negative.

2. Use framings, arguments and examples that you don't think hold water but work at getting people to join your group [I don't think EAs do this, I'm gesturing at the extreme other end]

[...]

2 is complicated - when people have different cruxes than you is it dishonest to talk about what should convince them based on their cruxes?

I think sometimes/often talking about people's cruxes rather than your own is good and fine. The issue is Goodharting via an optimal message to convert as many people to EA as quickly as possible, rather than messages that will lead to a healthy community over the long run.

I think there are two separate processes going on when you think about systematizing and outreach and one of them is acceptable to systematize and the other is not.

The first process is deciding where to put your energy. This could be deciding whether to set up a booth at a college's involvement fair, buying ads, door-to-door canvassing, etc. It could also be deciding who to follow up with after these interactions, from the email list collected, to whose door to go to a second time, to which places to spend money on in your second round of ad buys. These things all lend themselves to systematization. They can be data-driven: you can make forecasts about how likely each person is to respond positively and join an event, then revisit those forecasts and update them over time.
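A minimal sketch of what that forecast-and-revisit loop could look like, just for concreteness (this is an illustration, not something from the comment above; the OutreachForecast record and the Brier-score check are one assumed way to do it):

```python
# Hypothetical sketch: log a forecast for each person, record the outcome
# after the event, and check calibration with a Brier score (lower is better).
from dataclasses import dataclass
from typing import Optional


@dataclass
class OutreachForecast:
    person: str                      # or an anonymized ID
    p_attends_event: float           # forecast: probability they attend the next event
    attended: Optional[bool] = None  # filled in after the event


def brier_score(forecasts: list[OutreachForecast]) -> float:
    """Mean squared error between forecasts and resolved outcomes."""
    resolved = [f for f in forecasts if f.attended is not None]
    if not resolved:
        raise ValueError("No resolved forecasts to score yet.")
    return sum((f.p_attends_event - float(f.attended)) ** 2 for f in resolved) / len(resolved)


log = [
    OutreachForecast("A", 0.8, attended=True),
    OutreachForecast("B", 0.3, attended=False),
    OutreachForecast("C", 0.6, attended=False),
]
print(f"Brier score so far: {brier_score(log):.3f}")
```

The same structure works for second-round ad buys or booth locations: the point is just that the forecasts get written down and scored, rather than living in someone's head.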

The second process is the actual interaction/conversation with people. I think this should not be systematized and should be as authentic as possible. Some of this is a focus on treating people as individuals. Even if there are certain techniques/arguments/framings that you find work better than others, I'd expect significant variation among people, where some work better than others. A skilled recruiter would be able to figure out what the person they are talking to cares about and focus on that more, but I think this is just good social skills. They shouldn't be focusing on optimizing for recruitment. They should try to be a likeable person whom others want to be around, and that goes a long way toward recruitment in and of itself.

I see what you're pointing at, I think, but I don't know that this resolves all my edge cases. For instance, where does "I know this person is especially interested in animal welfare, so talk about that" fall?

I separately don't want to optimize for recruitment in the metric of number of people, because of my model of what good additions to the community look like (e.g. I want especially thoughtful people who have a good sense of the relevant ideas and arguments, what they buy, and what their uncertainties are) - maybe your approach comes from that? Or are you saying that even if one were trying to maximize numbers, they shouldn't systematize?

Thanks so much for writing this! I think it could be a top-level post, I'm sure many others would find it very helpful.

My 2 cents:

2 is complicated - when people have different cruxes than you is it dishonest to talk about what should convince them based on their cruxes?

I think it's definitely bad to "Use framings, arguments and examples that you don't think hold water but work at getting people to join your group". If I understand correctly it can cause point 5. Also "getting people to join your group" is rarely an instrumental goal, and "getting people to join your group for the wrong reasons" is probably not that useful in the long term.

Something that I think is very important that seems missing from this is that there's a significant probability that we're wrong about important things (i.e. EA as a question).
We could be wrong about the impact of bednets, wrong about AI being the most important thing, wrong about population ethics, etc. I think it's a huge difference from the "cult" mindset.

I think I want to say something like "are you acting in the interests of the people you're talking to", but that doesn't work either - I'm not! Being an EA has a decent chance of being less pleasant than the other thing they were doing, and either way it's not a crux.

The way I think about this, on first approximation, is that I want people to work on maximising their values (and not their wellbeing). If they think altruism is not important and are solipsistic egoists and only value their own wellbeing, I don't think EA can help them. If they value the wellbeing of others then EA can help them achieve their values better.
From my personal perspective this is strongly related to the point on uncertainty: I don't want to push other people to work on my values because from an outside view I don't think my values are more important than their values, or more likely to be "correct".
I don't know if it makes any sense; really curious to hear your thoughts, as you have certainly thought about this more than I have.

Thanks, Lorenzo!

I think it's definitely bad to "Use framings, arguments and examples that you don't think hold water but work at getting people to join your group". If I understand correctly it can cause point 5. Also "getting people to join your group" is rarely an instrumental goal, and "getting people to join your group for the wrong reasons" is probably not that useful in the long term.

Agree about the "not holding water", I was trying to say that "addresses cruxes you don't have" might look similar to this bad thing, but I'm not totally sure that's true.

I disagree about getting people to join your group - that definitely seems like an instrumental goal, though "get the relevant people to join your group" is more the thing - but different people might have different views on how relevant they need to be, or what their goal with the group is.
 

Something that I think is very important that seems missing from this is that there's a significant probability that we're wrong about important things (i.e. EA as a question).

I kind of agree here; I think there are things in EA I'm not particularly uncertain of, and while I'm open to being shown I'm wrong, I don't want to pretend more uncertainty than I have.


The way I think about this, on first approximation, is that I want people to work on maximising their values (and not their wellbeing). If they think altruism is not important and are solipsistic egoists and only value their own wellbeing, I don't think EA can help them. If they value the wellbeing of others then EA can help them achieve their values better.

I've definitely heard that frame, but it honestly doesn't resonate for me. I think some people are wrong about what values are right and arguing with me sometimes convinces them of that. I've definitely had my values changed by argumentation! Or at least values on some level of abstraction - not on the level of solipsism vs altruism, but there are many layers between that and "just an empirical question".

I don't want to push other people to work on my values because from an outside view I don't think my values are more important than their values, or more likely to be "correct"

I incorporate an inside view on my values - if I didn't think they were right, I'd do something else with my time!

Transparency for undermining the weird feelings around systematizing community building

There's a lot of potential ick as things in EA formalize and professionalize, especially in community building. People might reasonably feel uncomfortable realizing that the intro talk they heard is entirely scripted, or that interactions with them have been logged in a spreadsheet, or that the events they've been to are taking them through the ideas on a path from least to most weird (all things I've heard of happening, with a range of how confident I am in them actually happening as described here). I think there's a lot to say here about how to productively engage with this feeling (and things community builders should do to mitigate it), but I also think there's a quick trick that will markedly improve things (though it won't fix all problems): transparency.

(This is an outside take from someone who doesn't do community building on college campuses or elsewhere. I think that work is hard and filled with paradoxes, and it's also possible that this is already done by default, but I'm writing in the spirit of stating the obvious.)

I've been updating over and over again over the last few years that earnestness is just very powerful, and I think there are ways (though maybe they require some social / communication skills that aren't universal) to say things like (conditional on them being true):

NB: I don't think these are the best versions of these scripts, this was a first pass to point at the thing I mean

  • "This EA group is one of many around the country and the world. There is a standard intro talk that contains framings we think are exceptionally useful and helps us make sure we don't miss any of the important ideas or caveats, so we are giving it here today. I am excited to convey these core concepts, and then for the group of people who come in subsequent weeks to figure out which  aspects of these they're most interested in pursuing and customizing the group to our needs."
  • "Hey, I'm excited to talk to you about EA stuff. The organizers of this group are hoping to chat with people who seem interested and not be repetitive or annoying to you, would it be ok with you if I took some notes on our conversation that other organizers can see?"
  • "EA ideas span a huge gamut from really straightforward to high-context / less conventional. These early dinners start with the less weird ones because we think the core ideas are really valuable to the world whether or not people buy some of the other potential implications. Later on, with more context, we'll explore a wider range."
  • "I get that the perception that people only get funding or help if they seem interested in EA is uncomfortable / seems bad. From my perspective, I'm engaged in a particular project with my EA time / volunteer time / career / donations / life, and I'm excited to find people who are enthused by that same project and want to work together on it. If people find this is not the project for them, that's a great thing to have learned, and I'm excited for them to find people to work with on the things they care about most"

Not everything needs to be explicit, but this at least tracks whether you're passing the red face test.

I think that being transparent in this way requires:

  • Some communication skills to convey things like the above with nuance and grace
  • Being able to track when explicitness is bad or unhelpful
  • Some social skills in tracking what the other person cares about and is looking for in conversations
  • Non-self-hatingness: Thinking that you are doing something valuable, that matters to you, that you don't have to apologize for caring about, along with its implications
  • A willingness to be honest and earnest about the above.

Ambitious Altruism

When I was explaining EA and my potential jobs to friends, family, and anyone else during my most recent job search, one framing I landed on that I found helpful was "ambitious altruism." It let me explain why just helping one person didn't feel like enough without coming off as a jerk (i.e. "I want to be more ambitious than that" rather than "that's not effective").

It doesn't have the maximizing quality, but it doesn't not have it either, since if there's something more you can do with the same resources, there's room to be more ambitious.

Template for EA Calls

Over the last six months, I've been having more and more calls with people interested in EA and EA careers. Sometimes I'm one of their first calls because they know me from social things, and sometimes I'm an introduction someone else (e.g. at 80k) has made. I've often found that an hour, my standard length for a call, feels very short. Sometimes I just chat, and sometimes I try to have more of a plan. Of course a lot depends on context, but I'm interested in having a bit of a template so that I can be maximally helpful to them in limited time (I can't be a career coach for everyone I'm introduced to) and with the specifics that I can give (not trying to replicate / replace e.g. 80k advising).

Posting so that people can give advice / help me with it and/or use it if it seems helpful.

Template

  • What's your relationship to EA?
    • I think I currently either spend no time on this or way too much time. I'm hoping this question (rather than "how did you get involved with EA?" or "what do you know about EA") will keep it short but useful. I'm also considering asking this more as a matter of course before the call.
  • What are your current options / thinking?
    • This is a place where, for people early in their thinking (which is most of the people I talk to), I tend to recommend a 5-minute timer to generate more options and advise taking a more exploratory attitude
    • Frequently recommend looking for small experiments to find out what they might like or are good at
    • I tend to recommend developing a view on which of the options are best by the metrics they care about, including impact
    • When relevant, I want to make a habit of recommending useful reading / podcasts
    • Sometimes trying to raise people's ambitions (https://forum.effectivealtruism.org/posts/dMNFCv7YpSXjsg8e6/how-to-raise-others-aspirations-in-17-easy-steps)
  • I want to find out what sets them apart / their skillset, but if I don't already know, I don't currently have a good way of doing this that doesn't feel interview-y
  • The way I think I can often be most helpful, especially for people really new to institutional EA, is to tell them about the landscape:
    • giving an overview of orgs, foundations, and types of work
    • tell them what I know about who else is working on things they're excited about
    • sometimes that some of their interests aren't a focus of most EA work / money
    • asking about their views on longtermism
    • what people think the main bottlenecks are, and whether they have an interest in developing those skills
      • management
      • ops
      • vetting / grantmaking
  • If they're talking to me specifically about community building / outreach, I give my view on the landscape there: what's happening, what people are excited about, etc.
  • I also have given my thoughts on how to make 80k advising most helpful
    • Be honest about your biggest uncertainties and what you want help from them on
    • Really try to generate options
    • I wonder what else I can say here

It's possible I should ask more about the cause areas they care about - that feels like it's such a big conversation that it doesn't fit in an hour, but maybe it's really crucial. Don't know! Still figuring it out.

Engaging seriously with the (nontechnical) arguments for AI Risk: One person's core recommended reading list
(I saw this list in a private message from a more well-read EA than me and wanted to write it up. It's not my list, since I haven't read most of these, but I thought it was better to have it be public than not):

If still unconvinced, might recommend (as examples of arguments most uncorrelated with the above)

 

For going deeper:

Might as well put a list of skilling up possibilities (probably this has been done before)

Correct me if there are mistakes here

Scattered Takes and Unsolicited Advice (new ones added to the top)

  • If you care about being able to do EA work long-term, it's worth pretty significant costs to avoid resenting EA. Take that into account when you think about what decisions you're making and with what kind of sacrifice.
  • "Say more?" and "If your thoughts are in a pile, what's on top?" are pretty powerful conversational moves, in my experience
  • You can really inspire people to do a bunch in some cheap ways
  • A lot of our feelings and reactions come reactively / contextually / on the margins - people feel a certain way when they're immersed in EA spaces, for example, and sometimes have critiques, and when they're in non-EA spaces, they miss the good things about EA spaces. This seems normal and healthy and a good way to get multiple frames on something, but it's also good to keep in mind.
  • People who you think of as touchstones of thinking a particular thing may change their minds or not be as bought in as you'd expect
  • The world has so much detail
  • One of the most valuable things more senior EAs can do for junior EAs is contextualize: EA has had these conversations before, the thing you experienced was a 20th/50th/90th percentile experience, other communities do/don't go through similar things etc.
  • One of the best things we can all do for each other is push on expanding option sets, and ask questions that get us to think more about what we think and what we should do. 
  • About going to hubs to network
  • When you're new to EA, it's very exciting: Don't let your feet go faster than your brain - know what you're doing and why. It's not good for you or the world if in two years you look around and don't believe any of it and don't know how you got there and feel tricked or disoriented.
  • You're not alone in feeling overwhelmed or like an imposter
  • If you're young in EA: Don't go into community building just because the object level feels scarier and you don't have the skills yet
  • Networking is great, but it's not the only form of agency / initiative taking
  • Lots of ick feelings about persuasion and outreach get better if you're honest and transparent
  • Lots of ick feelings about all kinds of things are tracking a lot of different things at once: people's vibes, a sense of honesty or dishonesty, motivated reasoning, underlying empirical disagreements - it's good to track those things separately
  • Ask for a reasonable salary for your work, it's not as virtuous as you think to work for nothing
    • Sets bad norms for other people who can't afford to do that
    • Makes it more like volunteering so you might not take the work as seriously
  • Don't be self-hating about EA; figure out what you believe and don't feel bad about believing it and its implications and acting in the world in accordance with it
  • There are sides of spectra - like being pro-spending-money, or longtermism, or meta work - that aren't just "logic over feelings"; they have feelings too.
  • Earnestness is shockingly effective - if you say what you think and why you think it (including "I read the title of a youtube video"), if you say when you don't know what to do and what you're confused about, if you say what you're confident in and why, if you say how you feel and why, I find things (at least in this social space) go pretty damn well, way better than I would have expected.

Everyone who works with young people should have two pieces of paper in their pockets: 

In one: "they look up to you and remember things you say years later"

 In the other: "have you ever tried to convince a teenager of anything?" 

Take out as needed.

Reference: https://www.gesher-jds.org/2016/04/15/a-coat-with-two-pockets/

Habits of thought I'm working on

  • Answer questions specifically as asked, looping back into my models of the world
    • I sometimes have a habit of modelling questions more as moves in a game, and I play the move that supports the overall outcome of the conversation I'm going for, which doesn't support truth-seeking
    • I also sometimes say things using some heuristics, and answer other questions with other heuristics and it takes work to notice that they're not consistent
  • When I hear a claim, think about whether I've observed it in my life
  • Notice what "fads" I'm getting caught up in
  • Trying to be more gearsy, less heuristics-y. What's actually good or bad about this, what do they actually think (not just what general direction they're pulling the rope), etc.
  • Noticing when we're arguing about the wrong thing, when we e.g. should be arguing about the breakdown of what percent one thing versus another
  • Noticing when we're skating over a real object level disagreement
  • Noticing whether I feel able to think thoughts
  • Noticing when I'm only consuming / receiving ideas but not actually thinking
  • Listing all the things that could be true about something
  • More predictions / forecasts / models
  • More often looking things up / tracking down a fact rather than sweeping it by or deciding I don't know
  • Paraphrasing a lot and asking if I've got things right
  • "Is that a lot?" - putting numbers in context
  • If there's a weird fact from a study, you can question the study as well as the fact
  • Say why you think things, including "I saw a headline about this"

Habits of thought I might work on someday

  • Reversal tests: reversing every statement to see if the opposite also seems true

 

More I like: https://twitter.com/ChanaMessinger/status/1287737689849176065

Conversational moves in EA / Rationality that I like for epistemics
 

  • “So you are saying that”
  • “But I’d change my mind if”
  • “But I’m open to push back here”
  • “I’m curious for your take here”
  • “My model says”
  • “My current understanding is…”
  • “...I think this because…”
  • “...but I’m uncertain about…”
  • “What could we bet on?”
  • “Can you lay out your model for me?”
  • “This is a butterfly idea”
  • “Let’s do a babble”
  • “I want to gesture at something / I think this gestures at something true”

I've been thinking about and promulgating EA as nerdsniping (https://chanamessinger.com/blog/ea-as-nerdsniping) as a good intro for bringing in curious people who are intellectually interested in the questions and can come up with their own ideas, often in contrast to EA as an amazing moral approach. But EAGx Oxford pushed me to update towards thinking that a huge part of the appeal is that EA / rationality gives seeking people a worldview that makes a lot of sense and is more consistent and honest than many others they encounter. That's good to know from an outreach perspective, and it has implications for how deeply people might get into EA / rationality if they're looking for that in particular, which might point to being wary if you don't want to be in some sense "too convincing".

My Recommended Reading About Epistemics

For the content, but also because the vibe it immerses me in, I think, makes me better.

About going to a hub to do networking:
A response to: https://forum.effectivealtruism.org/posts/M5GoKkWtBKEGMCFHn/what-s-the-theory-of-change-of-come-to-the-bay-over-the

I think there's a lot of truth to the points made in this post.

I also think it's worth flagging that several of them - networking with a certain subset of EAs, asking for 1:1 meetings with them, being in certain office spaces - are at least somewhat zero-sum, such that the more people take this advice, the less available these things will actually be to each person, and possibly less available on net if it starts to overwhelm capacity. (I can also imagine increasingly unhealthy or competitive dynamics forming, but I'm hoping that doesn't happen!)

Second flag is that I don't know how many people reading this can expect to have an experience similar to yours. They may, but they may not end up being connected in all the same ways, and I want people to go in knowing that they're taking that risk and deciding whether it's worth it for them.

On the other side, people taking this advice can do a lot of great networking and create a common culture of ambition and taking ideas seriously with each other, without the same set of expectations around what connections they'll end up making.

Third flag is that I have an un-fleshed-out worry that this advice funges against doing things outside Berkeley/SF that build more valuable career capital for the future, whether for doing EA things outside of EA or for bringing valuable skills and knowledge back to EA (like, will we wish in 5 years that EAs had more outside professional experience to bring domain knowledge and legitimacy to EA projects, rather than a resume full of EA things?). This concern will need to be fleshed out empirically and will vary a lot in applicability by person.

(I work on CEA's community health team but am not making this post on behalf of that team)