Obviously the answer to this will vary from person to person, and each individual EA should spend at least some time engaging with members of both of those very broad categories of people. But it still seems useful to ask questions like:
- In general, how much time should EAs spend engaging with other EAs vs with non-EAs?
- How does the amount of time EAs should spend engaging with other EAs vs with non-EAs differ based on various conditions?
- E.g., whether we're asking what a more vs less engaged EA should do, whether we're asking what a budding researcher vs budding policymaker should do, and whether this engagement is intended for social purposes vs gaining knowledge vs forming connections
- What are the main factors driving answers to those questions?
I've seen various interesting discussions of these sorts of questions, and thought it'd be useful to have one post which collects together (a) a bunch of links to prior statements on this and (b) new statements on this. So please answer with either links, your own views, or both. (I'll provide some links and views in an answer myself.)
Meta comments which you can skip:
- The catalyst for me making this post was Richard Ngo recently writing that he feels he "should have engaged more with people outside the EA community" in prior years.
- I'm not sure precisely what the best way of formulating these questions is, or whether various other questions should be included in their scope.
- In particular, this cluster of questions seems hard to fully separate from questions about how often and under what conditions EAs should work at EA vs non-EA orgs. But that's a big topic in its own right, and maybe it'd be worth having a separate post collecting links and views on that.
- For now, I suggest erring towards inclusion of things that seem potentially relevant.
- Feel free to make suggestions about subquestions, related questions, the scope, etc.
- I am not trying to imply that EAs should explicitly "optimise" their free time and social lives, that they should treat all interactions with other people in a naively consequentialist or instrumentalising way, or that there will be simple and widely applicable answers to how much EAs should engage with other EAs vs non-EAs.
- But there may be some general heuristics, some heuristics relevant to certain subsets of people/situations, some caveats to those heuristics, etc.
I wrote some quick thoughts on the value of getting a diversity of views here.