All of Sophia's Comments + Replies

I liked this commentary even if I disagreed with a lot of the bottom line conclusions. Since we have an inferential gap that could be quite large, I don't expect everything you say to make sense to me.

You are probably directionally correct so I have strong upvoted this to encourage you to continue writing.

I don't have the energy right now to get into the object-level but feel free to share future draft posts as your thoughts develop. If I have a spare moment, I'd be very happy to share any feedback I have on your future thoughts with you.

(all good humor tends to be pointing to some angle of the truth that needs time to become nuanced enough to be more widely legible)

I strongly agree. I think this question getting downvoted reveals everything wrong with the EA movement. I am thinking it might be time to start a new kind of revolution of compassion, patience and rationality. 🤣

What do you think?

I think that you are pointing to an important grain of truth.

I think that crossing inferential gaps is hard.

Academic writing is one medium. I think that facial expressions have a tonne of information that is hard to capture in writing but can be captured in a picture. To understand maths, writing is fine. To understand the knowledge in people's heads, higher-fidelity mediums than writing (like video) are better.

5 · Phil Tanny · 2y
Hi Sophia, I'm wondering if academics have a built-in bias for complexity and sophistication, because without it, they can't be experts. Many of the academic posts I see here and elsewhere are very articulate expressions of complexity, but they seem to be missing what is sometimes a quite simple bottom line. As an example, consider this claim: If we don't gain control of the knowledge explosion, nothing else really matters.

tl;dr:

  • I am not sure that the pressure on community builders to communicate all the things that matter is having good consequences.
  • This pressure makes people try to say too much, too fast.
  • Making too many points too fast makes reasoning less clear.
  • We want a community full of people who have good reasoning skills.
  • We therefore want to make sure community builders are demonstrating good reasoning skills to newcomers
  • We therefore want community builders to take the time they need to communicate the key points
  • This sometimes realistically means not getting
... (read more)

Cool. I'm curious, how would this feeling change for you if you found out today that AI timelines are almost certainly less than a decade?

I'm curious because my intuitions change momentarily whenever a consideration pops into my head that makes me update towards AI timelines being shorter.

I think my intuitions change when I update towards shorter AI timelines because legibility/the above outlined community building strategy has a longer timeline before the payoffs. Managing reputation and goodwill seem like good strategies if we have a couple of decades or... (read more)

3 · Rohin Shah · 2y
Idk, what are you trying to do with your illegible message? If you're trying to get people to do technical research, then you probably just got them to work on a different version of the problem that isn't the one that actually mattered. You'd probably be better off targeting a smaller number of people with a legible message. If you're trying to get public support for some specific regulation, then yes by all means go ahead with the illegible message (though I'd probably say the same thing even given longer timelines; you just don't get enough attention to convey the legible message). TL;DR: Seems to depend on the action / theory of change more than timelines.

Thanks for this analysis! I would be excited to see this cause area explored/investigated further.

Note: edited significantly for clarity the next day

Tl;dr: Weirdness is still a useful sign of sub-optimal community building. Legibility is the appropriate fix to weirdness. 

I know I used the terms "nuanced" and "high-fidelity" first but after thinking about it a few more days, maybe "legibility" more precisely captures what we're pointing to here?

Me having the hunch that the advice "don't be weird" would lead community builders to be more legible now seems like the underlying reason I liked the advice in the first place. However, you've very much con... (read more)

2 · Rohin Shah · 2y
Yeah I'm generally pretty happy with "make EA more legible".

I definitely appreciate the enthusiasm in this post, I'm excited about Will's book too. 

However, for the reasons Linch shared in their comment, I would recommend editing this post a little.

I think it is important to only recommend the book to people whom we know well enough to judge that they probably would get a lot out of a book like this one, and to whom we can legibly articulate why we think they'd get a lot out of the book.

A recommended edit to this post

I recommend editing the friends bit to something like this (in your own words of course,... (read more)

Goal of this comment: 

This comment fills in more of the gaps I see that I didn't get time to fill out above. It fleshes out more of the connection between the advice "be less weird" and "communicate reasoning over conclusions".

  • Doing my best to be legible to the person I am talking to is, in practice, what I do to avoid coming across as weird/alienating. 
  • there is a trade-off between contextualizing and getting to the final point
  • we could be in danger of never risking saying anything controversial so we do need to encourage people to still get to th
... (read more)

tl;dr: 

  • when effective altruism is communicated in a nuanced way, it doesn't sound weird.
  • rushing to the bottom line and leaving it unjustified means the person has neither a good understanding of the conclusion nor of the reasoning
  • I want newcomers to have a nuanced view of effective altruism
  • I think newcomers only understanding a rushed version of the bottom line without the reasoning is worse than them only understanding the very first step of the reasoning.
  • I think it's fine for people to go away with 10% of the reasoning.
  • I don't think it's fine for pe
... (read more)
3 · Rohin Shah · 2y
I agree that if the listener interprets "make EA sound less weird" as "communicate all of your reasoning accurately such that it leads the listener to have correct beliefs, which will also sound less weird", then that's better than no advice. I don't think that's how the typical listener will interpret "make EA sound less weird"; I think they would instead come up with surface analogies that sound less weird but don't reflect the underlying mechanisms, which listeners might notice then leading to all the problems you describe. I definitely don't think we should just say all of our conclusions without giving our reasoning. (I think we mostly agree on what things are good to do and we're now hung up on this not-that-relevant question of "should we say 'make EA sound less weird'" and we probably should just drop it. I think both of us would be happier with the advice "communicate a nuanced, accurate view of EA beliefs" and that's what we should go with.)
1 · Sophia · 2y
Goal of this comment: This comment fills in more of the gaps I see that I didn't get time to fill out above. It fleshes out more of the connection between the advice "be less weird" and "communicate reasoning over conclusions".

  • Doing my best to be legible to the person I am talking to is, in practice, what I do to avoid coming across as weird/alienating.
  • there is a trade-off between contextualizing and getting to the final point
  • we could be in danger of never risking saying anything controversial so we do need to encourage people to still get to the bottom line after giving the context that makes it meaningful
  • right now, we seem to often state an insufficiently contextualized conclusion in a way that seems net negative to me
  • we cause bad impressions
  • we cause bad impressions while communicating points I see as less fundamentally important to communicate
  • communicating reasoning/our way of thinking seems more important than the bottom line without the reasoning
  • AI risk can often take more than a single conversation to contextualize well enough for it to move from a meaningless topic to an objectionable claim that can be discussed with scepticism but still some curiosity
  • I think we're better off trying to get community builders to be more patient and jump the gun less on the alienating bottom line
  • The soundbite "be less weird" probably does move us in a direction I think is net positive

I suspect that this is what most community builders will do: lay the groundwork to more legibly support conclusions when given advice like "get to the point if you can, don't beat around the bush, but don't be weird and jump the gun and say something without the needed context for the person you are talking to to make sense of what you are saying".

I feel like making arguments about stuff that is true is a bit like sketching out a maths proof for a maths student. Each link in the chain is obvious if you do it well, at the level of the person with whom

Lucky people maybe just have an easier time doing anything they want to do, including helping others, for so many reasons.

I didn't go to an elite university but I am exceptionally lucky in so many extreme ways (extremely loving family, friends, citizen of a rich country, good at enough stuff to feel valued throughout my life including at work etc).

While there is a counterfactual world where of course I could have put myself in a much worse position, it would have been impossible for most people to have it as good as I have it even if they worked much harde... (read more)

The reputation of the effective altruism society on each campus seems incredibly important for the "effective altruism" brand among key audiences. E.g. future DeepMind team leaders could come out of MIT, Harvard, Stanford, etc.

Are we doing everything we could to leave people with an honest but still good impression? (whether or not they seem interested in engaging further)

Fair enough. 

tl;dr: I now think that EA community-builders should present ideas in a less weird way when it doesn't come at the expense of clarity, but maybe the advice "be less weird" is not good advice because it might make community-builders avoid communicating weird ideas that are worth communicating. 

You probably leave some false impressions either way

In some sense (in the sense I actually care about), both statements are misleading.  

I think that community builders are going to convey more information, on average, if they start with th... (read more)

[This comment is no longer endorsed by its author]

My feeling on this is that there is a distinction between how many people could become interested and how many people we have capacity for right now. The number of people who have the potential to become engaged, have a deep understanding of the ideas along with how they relate to existing conclusions and feel comfortable pushing back on any conclusions that they find less persuasive in a transparent way is much larger than the number of people we can actually engage deeply like this. 

I feel like a low barrier of entry is great when your existing memb... (read more)

Maybe I want silent upvoting and downvoting to be disincentivized (or commenting with reasoning to be more incentivized). Commenting with reasoning is valuable but also hard work. 

After 2 seconds of thought, I think I'd be massively in favour of a forum feature where any upvotes or downvotes count for more (e.g. double or triple the karma) once you've commented.[1]  

Just having this incentive might make more people try and articulate what they think and why they think it. This extra incentive to stop and think might possibly make people... (read more)

[This comment is no longer endorsed by its author]

Hot take: strong upvoting things that lack great reasoning and whose conclusions I disagree with could be good for improving epistemics. At least, I think this gives us an opportunity to demonstrate common thinking processes in EA and what reasoning transparency looks like to people newer to the community.[1]

My best guess is that it also makes it more likely that quality thinking which diverges from established ideas happens in EA community spaces like the EA forum.

 My reasoning is in a footnote in my comment here. 

  1. ^

    I'm aware that people on this t

... (read more)
1 · Sophia · 2y
Maybe I want silent upvoting and downvoting to be disincentivized (or commenting with reasoning to be more incentivized). Commenting with reasoning is valuable but also hard work.

After 2 seconds of thought, I think I'd be massively in favour of a forum feature where any upvotes or downvotes count for more (e.g. double or triple the karma) once you've commented.[1]

Just having this incentive might make more people try and articulate what they think and why they think it. This extra incentive to stop and think might possibly make people change their votes even if they don't end up submitting their comments.

  1. ^ Me commenting on my own comment shouldn't mean the default upvote on my comment counts for more though: only the first reply should give extra voting power (I'm sure there are other ways to game it that I haven't thought of yet but I feel like there could be something salvageable from the idea anyway).
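To make the proposed incentive concrete, here is a minimal sketch of one way the weighting rule could be expressed. Everything in it (the function name, parameters and the multiplier of 2) is an assumption for illustration, not part of the original comment or of any real forum's code:

```python
# Purely illustrative sketch of the proposed rule; all names and numbers are
# assumptions, not based on any actual forum codebase.

COMMENTER_MULTIPLIER = 2  # the comment suggests "double or triple"; 2 is arbitrary


def vote_weight(base_strength: int, voter_id: str, author_id: str,
                commenter_ids: set) -> int:
    """Karma contributed by a single vote under the proposed rule.

    base_strength: +/-1 for normal votes, larger magnitude for strong votes.
    commenter_ids: users who have written a comment on the post being voted on.
    """
    # Simplification of the footnote's caveat: a vote on your own content
    # (e.g. the default self-upvote) never gets the boost, so commenting
    # can't inflate your own score.
    if voter_id == author_id:
        return base_strength
    # Votes from people who have engaged in the comments count for more.
    if voter_id in commenter_ids:
        return base_strength * COMMENTER_MULTIPLIER
    return base_strength


# A normal upvote from someone who has commented counts double...
assert vote_weight(1, "alice", "bob", {"alice"}) == 2
# ...while a silent upvote from a non-commenter keeps its usual weight.
assert vote_weight(1, "carol", "bob", {"alice"}) == 1
```

The footnote's "only the first reply should give extra voting power" condition would need extra bookkeeping in a real implementation; the sketch only captures the core idea of a per-voter multiplier.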

Thank you very much for writing this. I think it is a valuable contribution to discussion.

I think there is something to a lot of the points you raised, though I think that this piece isn't quite there yet. I've put some quick thoughts on how to improve the tone (and why I didn't comment on anything substantive) in a footnote, as well as why I've strong upvoted anyway. [1]

I have to admit that I still find Holden's piece more compelling. Nonetheless, I think you posting this was a very valuable thing to do! 

I think the way new ideas develop into th... (read more)

I doubt anyone disagrees with either of our above two comments. 🙂

I just have noticed that when people focus on growing faster, they sometimes push for strategies that I think do more harm than good, because we all forget the higher-level goals mid-project.

I'm not against a lot of faster growth strategies than currently get implemented.

I am against focusing on faster growth because the higher level goal of "faster growth" makes it easy to miss some big picture considerations.

A better higher-level goal, in my mind, is to focus on fundamentals (like scope insens... (read more)

We don't need everyone to have a 4-dimensional take on EA.

Let's be more inclusive. No need for all the moral philosophy for these ideas to be constructive.

However, it is easy to give an overly simplistic impression. We are asking some of the hardest questions humanity could ask. How do we make this century go well? What should we do with our careers in light of this?

Let's be inclusive but slowly enough to give people a nuanced impression. And slowly enough to offer some social support to people questioning their past choices and future plans.

A shorter explainer on why focusing on fast growth could be harmful:

Focusing on fast growth means focusing on spreading ideas fast. Ideas that are fast to spread tend to be 1-dimensional.

Many 1-dimensional versions of the EA ideas could do more harm than good. Let's not risk doing more harm than good by spreading unhelpful, 1-dimensional takes on extremely complicated and nuanced questions.

Let's spread 2-dimensional takes on EA that are honest, nuanced and intelligent, where people think for themselves.

The 2d takes that include the fundamental concepts (scope insensitivity and ... (read more)

9 · Sophia · 2y
We don't need everyone to have a 4-dimensional take on EA. Let's be more inclusive. No need for all the moral philosophy for these ideas to be constructive. However, it is easy to give an overly simplistic impression. We are asking some of the hardest questions humanity could ask. How do we make this century go well? What should we do with our careers in light of this? Let's be inclusive but slowly enough to give people a nuanced impression. And slowly enough to offer some social support to people questioning their past choices and future plans.
2 · Zach Stein-Perlman · 2y
This all sounds reasonable. But maybe if we're clever we'll find ways to spread EA fast and well. In the possible worlds where UGAP or 80K or EA Virtual Programs or the EA Infrastructure Fund didn't exist, EA would spread slower, but not really better. Maybe there's a possible world where more/bigger things like those exist, where EA spreads very fast and well. 

Changing minds and hearts is a slow process. I unfortunately agree too much with your statement that there are no shortcuts. This is one key reason why I think we can only grow so fast.

Growing this community in a way that allows people to think for themselves in a nuanced and intelligent way seems necessarily a bit slow (so glad that compounding growth makes being enormous this century still totally feasible to me!).

I agree that focusing on epistemics leads to conclusions worth having. I am personally skeptical of fellowships unless they are very focused on first principles and, when discussing conclusions, great objections are allowed to take the discussion completely off-topic for three hours.

Demonstrating reasoning processes well and racing to a bottom line conclusion don't seem very compatible to me.

4 · Sophia · 2y
Changing minds and hearts is a slow process. I unfortunately agree too much with your statement that there are no shortcuts. This is one key reason why I think we can only grow so fast. Growing this community in a way that allows people to think for themselves in a nuanced and intelligent way seems necessarily a bit slow (so glad that compounding growth makes being enormous this century still totally feasible to me!).

If it's a question of giving people either a sense of this community's epistemics or the bottom line conclusion, I strongly think you are doing a lot more good if you choose epistemics.

Every objection is an opportunity to add nuance to your view and their view.

If you successfully demonstrate great epistemics and people keep coming back, your worldviews will converge based on the strongest arguments from everyone involved in the many conversations happening at your local group.

Focus on epistemics and you'll all end up with great conclusions (and if they are... (read more)

1 · Alejandro Acelas · 2y
Oh, I totally agree that giving people the epistemics is mostly preferable to handing them the bottom line. My doubts come more from my impression that forming good epistemics in a relatively unexplored environment (e.g. cause prioritization within Colombia) is probably harder than in other contexts. I know that at least our explicit aim with the group was to exhibit the kind of patience and rigour you describe and that I ended up somewhat underwhelmed with the results. I initially wanted to try to parse out where our differing positions came from, but this comment eventually got a little long and rambling. For now I'll limit myself to thanking you for making what I think is a good point.

You don't need to convince everyone of everything you think in a single event. 🙂 You probably didn't form your worldview in the space of two hours either. 😉

When someone says they think giving locally is better, ask them why. Point out exactly what you agree with (e.g. it is easier to have an in-depth understanding of your local context) and why you still hold your view (e.g. that there are such large wealth disparities between different countries that there are some really low hanging fruit, like basic preventative measures of diseases like malaria, that... (read more)

2 · Sophia · 2y
If it's a question of giving people either a sense of this community's epistemics or the bottom line conclusion, I strongly think you are doing a lot more good if you choose epistemics. Every objection is an opportunity to add nuance to your view and their view. If you successfully demonstrate great epistemics and people keep coming back, your worldviews will converge based on the strongest arguments from everyone involved in the many conversations happening at your local group. Focus on epistemics and you'll all end up with great conclusions (and if they are different to the existing commonly held views in the community, that's even better, write a forum post together and let that insight benefit the whole movement!).

I agree with that. 🙂

I consider myself a part of the community and I am not employed in an EA org, nor do I intend to be anytime soon so I know that having an EA job or funding is not needed for that.

 I meant the capacity to give people a nuanced enough understanding of the existing ideas and thinking processes as well as the capacity to give people the feeling that this is their community, that they belong in EA spaces, and that they can push back on anything they disagree with.

It's quite hard to communicate the fundamental ideas and how they link to... (read more)

I also think faster is better if the end size of our community stays the same. 👌🏼 I also think it's possible that faster growth increases the end size of our community too. 🙂 

Sorry if my past comment came across a bit harshly (I clearly have just been over-thinking this topic recently 😛)![1]

 I do have an intuition, which I explain in more detail below, that lots of ways of growing really fast could end up making our community's end size smaller. 😟

Therefore, I feel like focusing on fast growth is much less important than focusing on laying th... (read more)

I actually think being welcoming to a broad range of people and ideas is really about being focused on conveying to people who are new to effective altruism that the effective altruism project is about a question. 

If they don't agree with the current set of conclusions, that is fine! That's encouraged, in fact. 

People who disagree with our current bottom line conclusions can still be completely on board with the effective altruism project (and decide whether their effective altruism project is helped by engaging with the community for themselves)... (read more)

I agree that EA being enormous eventually would be very good. 🙂

However, I think there are plenty of ways that quick, short-term growth strategies could end up stunting our growth. 😓

I also think that being much more welcoming might be surprisingly significant due to compounding growth (as I explain below). 🌞

It sounds small, "be more welcoming", but a small change in angle between two paths can result in a very different end destination. It is absolutely possible for marginal changes to completely change our trajectory!

We probably don't want effective alt... (read more)

8 · Davidmanheim · 2y
Strongly agree that being more welcoming is critical! I focused more on the negatives - not being hostile to people who are potential allies, but I definitely think both are important. That said, I really hate the framing of "not having capacity for people" - we aren't, or should not be, telling everyone that they need to work at EA organizations to be EA-oriented. Even ignoring the fact that career capital is probably critical for many of the people joining, it's OK for EAs to have normal jobs and normal careers and donate - and if they are looking for more involvement, reading more, writing / blogging / talking to friends, and attending local meet-ups is a great start.

I actually think being welcoming to a broad range of people and ideas is really about being focused on conveying to people who are new to effective altruism that the effective altruism project is about a question. 

If they don't agree with the current set of conclusions, that is fine! That's encouraged, in fact. 

People who disagree with our current bottom line conclusions can still be completely on board with the effective altruism project (and decide whether their effective altruism project is helped by engaging with the community for themselves)... (read more)

2 · Zach Stein-Perlman · 2y
(I strongly agree that we should be nice and welcoming. I still think trying to make EA enormous quickly is good if you can identify reasonable such interventions.)


Also, I think conversations by the original authors of a lot of the more fleshed-out ideas are much more nuanced than the messages that get spread.  

E.g. on 4: 80k has a long list of potential highest priority cause areas that are worth exploring for longtermists, and Holden, in his 80k podcast episode and the forum post he wrote, says that most people probably shouldn't go directly into AI (and instead should build aptitudes). 

Nuanced ideas are harder to spread but also people feeling like they don't have permission in community spaces (in loc... (read more)

This was such a great articulation of such a core tension to effective altruism community building. 

A key part of this tension comes from the fact that most ideas, even good ideas, will sound like bad ideas the first time they are aired. Ideas from extremely intelligent people and ideas that have potential to be iterated into something much stronger do not come into existence fully-formed. 

Leaving more room for curious and open-minded people to put forward their butterfly ideas without being shamed/made to feel unintelligent means having room for... (read more)

1 · Sophia · 2y
Also, I think conversations by the original authors of a lot of the more fleshed-out ideas are much more nuanced than the messages that get spread.

E.g. on 4: 80k has a long list of potential highest priority cause areas that are worth exploring for longtermists, and Holden, in his 80k podcast episode and the forum post he wrote, says that most people probably shouldn't go directly into AI (and instead should build aptitudes).

Nuanced ideas are harder to spread but also people feeling like they don't have permission in community spaces (in local groups or on the forum) to say under-developed things means it is much less likely for the off-the-beaten-track stuff that has been mentioned but not fleshed out to come up in conversation (or to get developed further).

I think a big thing I feel after reading this is a lot more disillusioned about community-building. 

It is really unhealthy that people feel like they can’t dissent from more established (more-fleshed out?) thoughts/arguments/conclusions. 

Where is this pressure to agree with existing ideas and this pressure against dissent coming from? (some early thoughts to flesh out more 🤔)

This post isn’t the only thing that makes me feel that there is way too much pressure to agree and way too little room to develop butterfly ideas (that are never well-argued... (read more)

That was a really clarifying reply! 

tl;dr 

  • I see language and framing as closely related (which is why I conflated them in my previous comment)
  • Using more familiar language (e.g. less unfamiliar jargon) often makes things sound less weird
  • I agree that weird ideas often sound weirder when you make them clearer (e.g. when you are clearer by using language the person you are talking to understands more easily)
  • I agree that it is better to be weird than to be misleading
  • However, weird ideas in plain English often sound weird because there is missing conte
... (read more)
3 · Rohin Shah · 2y
I mostly agree with this. I think it sounds a lot less weird in large part because you aren't saying that the AI system might kill us all. "Really dangerous" could mean all sorts of things, including "the chess-playing robot mistakes a child's finger for a chess piece and accidentally breaks the child's finger". Once you pin it down to "kills all humans" it sounds a lot weirder. I still do agree with the general point that as you explain more of your reasoning and cover more of the inferential gap, it sounds less weird. I still worry that people will not realize the ways they're being misleading -- I think they'll end up saying true but vague statements that get misinterpreted. (And I worry enough that I feel like I'd still prefer "no advice".)

If few people actually have their own views on why AI is an important cause area to be able to translate them into plain English, then few people should be trying to convince others that AI is a big deal in a local group. 

 I think it is counterproductive for people who don't understand the argument they are making well enough to put the arguments into plain English to instead parrot off some jargon.

If you can't put the point you are trying to express in language the person you are talking to can understand, then there is no point talking to that

... (read more)
3 · Lukas_Gloor · 2y
I liked this comment! In particular, I think the people who are good at "not making EA seem weird" (while still communicating all the things that matter – I agree with the points Rohin is making in the thread) are also (often) the ones who have a deeper (or more "authentic") understanding of the content. There are counterexamples, but consider, for illustration, that Yudkowsky's argument style and the topics he focuses on would seem a whole lot weirder if he wasn't skilled at explaining complex issues. So, understanding what you talk about doesn't always make your points "not weird," but it (at least) reduces weirdness significantly. I think that's mostly beneficial. And a world where fewer people come into contact with EA ideas, but those who do hear them from exponents with a particularly deep, "authentic" understanding, seems like a good thing! Instead of (just) "jargon" you could also say "talking points."
6 · Rohin Shah · 2y
I strongly agree that you want to avoid EA jargon when doing outreach. Ideally you would use the jargon of your audience, though if you're talking to a broad enough audience that just means "plain English". I disagree that "sounding weird" is the same thing (or even all that correlated with) "using jargon". For example, This has no jargon, but still sounds weird to a ton of people. Similarly I think with AI risk the major weird part is the thing where the AI kills all the humans, which doesn't seem to me to depend much on jargon. (If anything, I've found that with more jargon the ideas actually sound less weird. I think this is probably because the jargon obscures the meaning and so people can replace it with some different less weird meaning and assume you meant that. If you say "a goal-directed AI system may pursue some goal that we don't want leading to a catastrophic outcome" they can interpret you as saying "I'm worried about AIs mimicking human biases"; that doesn't happen when you say "an AI system may deliberately kill all the humans".)

lol, yeah, totally agree (strong upvoted).

 I think in hindsight I might literally have been subconsciously indicating in-groupness ("indicating in-groupness" means trying to show I fit in 🤮 -- feels so much worse in plain English for a reason, jargon is more precise but still often less obvious what is meant, so it's often easier to hide behind it) because my dumb brain likes for people to think I'm smarter than I am. 

In my defense, it's so easy, in the moment, to use the first way of expressing what I mean that comes to mind. 

I am sure ... (read more)

I strong upvoted this because:
1) I think AI governance is a big deal (the argument for this has been fleshed out elsewhere by others in the community) and 
2) I think this comment is directionally correct beyond the AI governance bit even if I don't think it quite fully fleshes out the case for it (I'll have a go at fleshing out the case when I have more time but this is a time-consuming thing to do and my first attempt will be crap even if there is actually something to it). 

I think that strong upvoting was appropriate because:
1)  stati... (read more)

I think a necessary condition to us keeping a lot of the amazing trust we have in this community is that we believe that that trust is valuable. I get that grifters are going to be an issue. I also think that grifters are going to have a much easier time if there isn't a lot of openness and transparency within the movement. 

Openness and transparency, like we've seen historically, seems only possible with high degrees of trust. 

Posting a post on the importance of trust seems like a good starting point for getting people on board with the idea that... (read more)

I have written up a draft template post on the importance of trust within the community (and trust with others we might want to cooperate with in the future, e.g. the people who made that UN report on future generations mattering a tonne happen). 

Let me know if you would like a link, anyone reading this is also very welcome to reach out! 

Feedback to the draft content/points and also social accountability are very welcome.

A quick disclaimer: I don't have a perfect historical track record of always doing the things I believe are important so th... (read more)

1 · Sophia · 2y
I think a necessary condition to us keeping a lot of the amazing trust we have in this community is that we believe that that trust is valuable. I get that grifters are going to be an issue. I also think that grifters are going to have a much easier time if there isn't a lot of openness and transparency within the movement.  Openness and transparency, like we've seen historically, seems only possible with high degrees of trust.  Posting a post on the importance of trust seems like a good starting point for getting people on board with the idea that doing the things that foster trust are worth doing (I think the things that foster trust tend to foster trust because they are good signals/can help us tell grifters and trustworthy people apart so I think this sort of thing hits two birds with one stone).

I am very excited about this event (thanks, organisers, for putting it on).

It's just my general feeling on the forum recently that a few different groups of people are talking past each other sometimes and all saying valuable true things (but still, as always, people generally are good at finding common ground which is something I love about the EA community). 

Really, I just really want everyone reading to understand where everyone else is coming from. This vaguely makes me want to be more precise when other people are saying the same thing in plain English. It also makes me want to optimise for accessibility when everyone e... (read more)

Some of my personal thoughts on jargon and why I chose, pretty insensitively given the context of this post, to use some anyway

 I used the "second moment of a distribution" jargon here initially (without the definition that I later edited in) because I feel like sometimes people talk past each other. I wanted to say what I meant in a way that could be understood more by people who might not be sure exactly what everyone else precisely meant. Plain English sometimes lacks precision for the sake of being inclusive (inclusivity that I personally think is... (read more)

For what it's worth, I think the term "variance" is much more accessible than "second moment".

Variance is a relatively common word. I think in many cases we can be more inclusive without losing precision (another example is "how much I'm sure of this" vs "epistemic status")
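A brief aside on the two terms being compared, assuming "second moment" was meant in its central sense (as the substitution suggests): variance is the second central moment of a distribution,

$$\operatorname{Var}(X) = \mathbb{E}\big[(X - \mathbb{E}[X])^2\big] = \mathbb{E}[X^2] - (\mathbb{E}[X])^2,$$

so under that assumption swapping in "variance" loses no precision.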

1 · Sophia · 2y
It's just my general feeling on the forum recently that a few different groups of people are talking past each other sometimes and all saying valuable true things (but still, as always, people generally are good at finding common ground which is something I love about the EA community).  Really, I just really want everyone reading to understand where everyone else is coming from. This vaguely makes me want to be more precise when other people are saying the same thing in plain English. It also makes me want to optimise for accessibility when everyone else is saying something in technical jargon that is an idea that more people could get value from understanding.  Ideally I'd be a good enough at writing to be precise and accessible at the same time though (but both precision and making comments easier to understand for a broader group of readers is so time consuming so I often try to either do one or the other and sometimes I'm terrible and make a quick comment that is definitely neither 🤣). 

Thanks 😊. 

Yeah, I've noticed that this is a big conversation right now. 

My personal take

EA ideas are nuanced and ideas do/should move quickly as the world changes and our information about it changes too. It is hard to move quickly with a very large group of people. 

However, the core bit of effective altruism, something like "help others as much as we can and change our minds when we're given a good reason to", does seem like an idea that has room for a much wider ecosystem than we have. 

I'm personally hopeful we'll get better at strik... (read more)

Great (and also unsurprising, so I'm now trying to work out why I felt the need to write the initial comment).

I think I wrote the initial comment less because I expected anyone to reflectively disagree and more because I think we all make snap judgements that maybe take conscious effort to notice and question.

I don't expect anyone to advocate for people because they speak more jargon (largely because I think very highly of people in this community). I do expect it to be harder to understand someone who comes from a different cultural bubble and, therefore, h... (read more)

Hi Linch, I'm sorry for taking so long to reply to this! I mainly just noticed I was conflating several intuitions and I needed to think more to tease them out.

(my head's no longer in this and I honestly never settled on a view/teased out the threads but I wanted to say something because I felt it was quite rude of me to have never replied)

4 · Linch · 2y
Hi Sophia. Don't sweat it. :)

There are also limited positions in organisations as well as limited capacity of senior people to train up junior people but, again, I'm optimistic that 1) this won't be so permanent and 2) we can work out how to better make sure the people who care deeply about effective altruism who have careers outside effective altruism organisations also feel like  valued members of the community.

Will we permanently have low capacity? 

I think it is hard to grow fast and stay nuanced but I personally am optimistic about ending up as a large community in the long run (not next year, but maybe next decade) and I think we can sow seeds that help with that (e.g. by maybe making people feel glad that they interacted with the community even if they do end up deciding that they can, at least for now, find more joy and fulfillment elsewhere).

2 · Max_Daniel · 2y
Good question! I'm pretty uncertain about the ideal growth rate and eventual size of "the EA community", in my mind this is among the more important unresolved strategic questions (though I suspect it'll only become significantly action-relevant in a few years). In any case, by expressing my agreement with Linch, I didn't mean to rule out the possibility that in the future it may be easier for a wider range of people to have a good time interacting with the EA community. And I agree that in the meantime "making people feel glad that they interacted with the community even if they do end up deciding that they can, at least for now, find more joy and fulfillment elsewhere" is (in some cases) the right goal.
2 · Sophia · 2y
There are also limited positions in organisations as well as limited capacity of senior people to train up junior people but, again, I'm optimistic that 1) this won't be so permanent and 2) we can work out how to better make sure the people who care deeply about effective altruism who have careers outside effective altruism organisations also feel like  valued members of the community.

Yeah, I also find it very de-stabilizing and then completely forget my own journey instantly once I've reconciled everything and am feeling stable and coherent again. 

It's nice to hear I'm not the only one here who isn't 99.999 percentile stoically unaffected by this. 

 I think one way to deal with this is to mainly select for people with these weird dispositions who are unusually good at coping with this. 

I think an issue with this is that the other 99% of planet Earth might be good allies to have in this whole "save the world" project ... (read more)
