I strongly agree. I think this question getting downvoted reveals everything wrong with the EA movement. I am thinking it might be time to start a new kind of revolution of compassion, patience and rationality. 🤣
What do you think?
I think that you are pointing to an important grain of truth.
I think that crossing inferential gaps is hard.
Academic writing is one medium. I think that facial expressions carry a tonne of information that is hard to capture in writing but can be captured in a picture. To understand maths, writing is fine. To understand the knowledge in people's heads, higher-fidelity mediums than writing (like video) are better.
tl;dr:
Cool. I'm curious: how would this feeling change for you if you found out today that AI timelines are almost certainly less than a decade?
I'm curious because my intuitions change momentarily whenever a consideration pops into my head that makes me update towards AI timelines being shorter.
I think my intuitions change when I update towards shorter AI timelines because legibility/the above outlined community building strategy has a longer timeline before the payoffs. Managing reputation and goodwill seem like good strategies if we have a couple of decades or...
Thanks for this analysis! I would be excited to see this cause area explored/investigated further.
Note: edited significantly for clarity the next day
Tl;dr: Weirdness is still a useful sign of sub-optimal community building. Legibility is the appropriate fix to weirdness.
I know I used the terms "nuanced" and "high-fidelity" first but after thinking about it a few more days, maybe "legibility" more precisely captures what we're pointing to here?
Me having the hunch that the advice "don't be weird" would lead community builders to be more legible now seems like the underlying reason I liked the advice in the first place. However, you've very much con...
I definitely appreciate the enthusiasm in this post, I'm excited about Will's book too.
However, for the reasons Linch shared in their comment, I would recommend editing this post a little.
I think it is important to only recommend the book to people who we know well enough to judge that they probably would get a lot out of a book like this one, and to whom we can legibly articulate why we think they'd get a lot out of it.
I recommend editing the friends bit to something like this (in your own words of course,...
Goal of this comment:
This comment fills in more of the gaps I see that I didn't get time to fill out above. It fleshes out more of the connection between the advice "be less weird" and "communicate reasoning over conclusions".
tl;dr:
Lucky people maybe just have an easier time doing anything they want to do, including helping others, for so many reasons.
I didn't go to an elite university but I am exceptionally lucky in so many extreme ways (extremely loving family, friends, citizen of a rich country, good at enough stuff to feel valued throughout my life including at work etc).
While there is a counterfactual world where of course I could have put myself in a much worse position, it would have been impossible for most people to have it as good as I have it even if they worked much harde...
The reputation of the effective altruism society on each campus seems incredibly important for the "effective altruism" brand among key audiences. E.g. Future Deepmind team leaders could come out of MIT, Harvard, Stanford etc.
Are we doing everything we could to leave people with an honest but still good impression? (whether or not they seem interested in engaging further)
Fair enough.
tl;dr: I now think that EA community-builders should present ideas in a less weird way when it doesn't come at the expense of clarity, but maybe the advice "be less weird" is not good advice because it might make community-builders avoid communicating weird ideas that are worth communicating.
In some sense (in the sense I actually care about), both statements are misleading.
I think that community builders are going to convey more information, on average, if they start with th...
My feeling on this is that there is a distinction between how many people could become interested and how many people we have capacity for right now. The number of people who have the potential to become engaged, have a deep understanding of the ideas along with how they relate to existing conclusions and feel comfortable pushing back on any conclusions that they find less persuasive in a transparent way is much larger than the number of people we can actually engage deeply like this.
I feel like a low barrier of entry is great when your existing memb...
Maybe I want silent upvoting and downvoting to be disincentivized (or commenting with reasoning to be more incentivized). Commenting with reasoning is valuable but also hard work.
After 2 seconds of thought, I think I'd be massively in favour of a forum feature where any upvotes or downvotes count for more (e.g. double or triple the karma) once you've commented.[1]
Just having this incentive might make more people try and articulate what they think and why they think it. This extra incentive to stop and think might possibly make people...
Hot take: strong upvoting things without great reasoning that also have conclusions I disagree with could be good for improving epistemics. At least, I think this gives us an opportunity to demonstrate common thinking processes in EA and what reasoning transparency looks like to newer people to the community. [1]
My best guess is that it also makes it more likely quality divergent thinking from established ideas happens in EA community spaces like the EA forum.
My reasoning is in a footnote in my comment here.
I'm aware that people on this t
Thank you very much for writing this. I think it is a valuable contribution to discussion.
I think there is something to a lot of the points you raised, though I think that this piece isn't quite there yet. I've put some quick thoughts on how to improve the tone (and why I didn't comment on anything substantive) in a footnote, as well as why I've strong upvoted anyway. [1]
I have to admit that I still find Holden's piece more compelling. Nonetheless, I think you posting this was a very valuable thing to do!
I think the way new ideas develop into th...
I doubt anyone disagrees with either of our above two comments. 🙂
I have just noticed that when people focus on growing faster, they sometimes push for strategies that I think do more harm than good, because we all forget the higher level goals mid-project.
I'm not against a lot of faster growth strategies than currently get implemented.
I am against focusing on faster growth because the higher level goal of "faster growth" makes it easy to miss some big picture considerations.
A better higher level goal, in my mind, is focus on fundamentals (like scope insens...
We don't need everyone to have a 4 dimensional take on EA.
Let's be more inclusive. No need for all the moral philosophy for these ideas to be constructive.
However, it is easy to give an overly simplistic impression. We are asking some of the hardest questions humanity could ask. How do we make this century go well? What should we do with our careers in light of this?
Let's be inclusive but slowly enough to give people a nuanced impression. And slowly enough to be some social support to people questioning their past choices and future plans.
A shorter explainer on why focusing on fast growth could be harmful:
Focusing on fast growth means focusing on spreading ideas fast. Ideas that are fast to spread tend to be 1-dimensional.
Many 1d versions of the EA ideas could do more harm than good. Let's not spread unhelpful, 1-dimensional takes on extremely complicated and nuanced questions.
Let's spread 2 dimensional takes on EA that are honest, nuanced and intelligent where people think for themselves.
The 2d takes that include the fundamental concepts (scope insensitivity and ...
Changing minds and hearts is a slow process. I unfortunately agree too much with your statement that there are no shortcuts. This is one key reason why I think we can only grow so fast.
Growing this community in a way that allows people to think for themselves in a nuanced and intelligent way seems necessarily a bit slow (so glad that compounding growth makes being enormous this century still totally feasible to me!).
I agree that focusing on epistemics leads to conclusions worth having. I am personally skeptical of fellowships unless they are very focused on first principles and, when discussing conclusions, great objections are allowed to take the discussion completely off-topic for three hours.
Demonstrating reasoning processes well and racing to a bottom line conclusion don't seem very compatible to me.
If it's a question of giving people either a sense of this community's epistemics or the bottom line conclusion, I strongly think you are doing a lot more good if you choose epistemics.
Every objection is an opportunity to add nuance to your view and their view.
If you successfully demonstrate great epistemics and people keep coming back, your worldviews will converge based on the strongest arguments from everyone involved in the many conversations happening at your local group.
Focus on epistemics and you'll all end up with great conclusions (and if they are...
You don't need to convince everyone of everything you think in a single event. 🙂 You probably didn't form your worldview in the space of two hours either. 😉
When someone says they think giving locally is better, ask them why. Point out exactly what you agree with (e.g. it is easier to have an in-depth understanding of your local context) and why you still hold your view (e.g. that there are such large wealth disparities between different countries that there are some really low hanging fruit, like basic preventative measures of diseases like malaria, that...
I agree with that. 🙂
I consider myself a part of the community, and I am not employed in an EA org, nor do I intend to be anytime soon, so I know that having an EA job or funding is not needed for that.
I meant the capacity to give people a nuanced enough understanding of the existing ideas and thinking processes as well as the capacity to give people the feeling that this is their community, that they belong in EA spaces, and that they can push back on anything they disagree with.
It's quite hard to communicate the fundamental ideas and how they link to...
I also think faster is better if the end size of our community stays the same. 👌🏼 I also think it's possible that faster growth increases the end size of our community too. 🙂
Sorry if my past comment came across a bit harshly (I clearly have just been over-thinking this topic recently 😛)![1]
I do have an intuition, which I explain in more detail below, that lots of ways of growing really fast could end up making our community's end size smaller. 😟
Therefore, I feel like focusing on fast growth is much less important than focusing on laying th...
I actually think being welcoming to a broad range of people and ideas is really about being focused on conveying to people who are new to effective altruism that the effective altruism project is about a question.
If they don't agree with the current set of conclusions, that is fine! That's encouraged, in fact.
People who disagree with our current bottom line conclusions can still be completely on board with the effective altruism project (and decide whether their effective altruism project is helped by engaging with the community for themselves)...
I agree that EA being enormous eventually would be very good. 🙂
However, I think there are plenty of ways that quick, short-term growth strategies could end up stunting our growth. 😓
I also think that being much more welcoming might be surprisingly significant due to compounding growth (as I explain below). 🌞
It sounds small, "be more welcoming", but a small change in angle between two paths can result in a very different end destination. It is absolutely possible for marginal changes to completely change our trajectory!
We probably don't want effective alt...
Also, I think conversations by the original authors of a lot of the more fleshed-out ideas are much more nuanced than the messages that get spread.
E.g. on 4: 80k has a long list of potential highest-priority cause areas worth exploring for longtermists, and Holden, in his 80k podcast episode and the forum post he wrote, says that most people probably shouldn't go directly into AI (and instead should build aptitudes).
Nuanced ideas are harder to spread but also people feeling like they don't have permission in community spaces (in loc...
This was such a great articulation of such a core tension to effective altruism community building.
A key part of this tension comes from the fact that most ideas, even good ideas, will sound like bad ideas the first time they are aired. Ideas from extremely intelligent people and ideas that have potential to be iterated into something much stronger do not come into existence fully-formed.
Leaving more room for curious and open-minded people to put forward their butterfly ideas without being shamed/made to feel unintelligent means having room for...
I think a big thing I feel after reading this is a lot more disillusioned about community-building.
It is really unhealthy that people feel like they can’t dissent from more established (more-fleshed out?) thoughts/arguments/conclusions.
This post isn’t the only thing that makes me feel that there is way too much pressure to agree and way too little room to develop butterfly ideas (that are never well-argued...
That was a really clarifying reply!
tl;dr
...If few people actually understand their own views on why AI is an important cause area well enough to translate them into plain English, then few people should be trying to convince others that AI is a big deal in a local group.
I think it is counterproductive for people who don't understand the argument they are making well enough to put the arguments into plain English to instead parrot off some jargon.
If you can't put the point you are trying to express in language the person you are talking to can understand, then there is no point talking to that
lol, yeah, totally agree (strong upvoted).
I think in hindsight I might literally have been subconsciously indicating in-groupness ("indicating in-groupness" means trying to show I fit in 🤮 -- it feels so much worse in plain English for a reason; jargon may be more precise, but it's often less obvious what is meant, so it's easier to hide behind it) because my dumb brain likes for people to think I'm smarter than I am.
In my defense, it's so easy, in the moment, to use the first way of expressing what I mean that comes to mind.
I am sure ...
I strong upvoted this because:
1) I think AI governance is a big deal (the argument for this has been fleshed out elsewhere by others in the community) and
2) I think this comment is directionally correct beyond the AI governance bit even if I don't think it quite fully fleshes out the case for it (I'll have a go at fleshing out the case when I have more time but this is a time-consuming thing to do and my first attempt will be crap even if there is actually something to it).
I think that strong upvoting was appropriate because:
1) stati...
I think a necessary condition to us keeping a lot of the amazing trust we have in this community is that we believe that that trust is valuable. I get that grifters are going to be an issue. I also think that grifters are going to have a much easier time if there isn't a lot of openness and transparency within the movement.
Openness and transparency, like we've seen historically, seems only possible with high degrees of trust.
Posting a post on the importance of trust seems like a good starting point for getting people on board with the idea that...
I have written up a draft template post on the importance of trust within the community (and trust with others we might want to cooperate with in the future, e.g. the people who made that UN report on future generations mattering a tonne happen).
Let me know if you would like a link, anyone reading this is also very welcome to reach out!
Feedback to the draft content/points and also social accountability are very welcome.
A quick disclaimer: I don't have a perfect historical track record of always doing the things I believe are important so th...
It's just my general feeling on the forum recently that a few different groups of people are talking past each other sometimes and all saying valuable true things (but still, as always, people generally are good at finding common ground which is something I love about the EA community).
Really, I just really want everyone reading to understand where everyone else is coming from. This vaguely makes me want to be more precise when other people are saying the same thing in plain English. It also makes me want to optimise for accessibility when everyone e...
Some of my personal thoughts on jargon and why I chose, pretty insensitively given the context of this post, to use some anyway
I used the "second moment of a distribution" jargon here initially (without the definition that I later edited in) because I feel like sometimes people talk past each other. I wanted to say what I meant in a way that could be understood more by people who might not be sure exactly what everyone else precisely meant. Plain English sometimes lacks precision for the sake of being inclusive (inclusivity that I personally think is...
For what it's worth, I think the term "variance" is much more accessible than "second moment".
Variance is a relatively common word. I think in many cases we can be more inclusive without losing precision (another example is "how much I'm sure of this" vs "epistemic status")
Thanks 😊.
Yeah, I've noticed that this is a big conversation right now.
EA ideas are nuanced and ideas do/should move quickly as the world changes and our information about it changes too. It is hard to move quickly with a very large group of people.
However, the core bit of effective altruism, something like "help others as much as we can and change our minds when we're given a good reason to", does seem like an idea that has room for a much wider ecosystem than we have.
I'm personally hopeful we'll get better at strik...
Great (and also unsurprising, so I'm now trying to work out why I felt the need to write the initial comment)!
I think I wrote the initial comment less because I expected anyone to reflectively disagree and more because I think we all make snap judgements that maybe take conscious effort to notice and question.
I don't expect anyone to advocate for people because they speak more jargon (largely because I think very highly of people in this community). I do expect it to be harder to understand someone who comes from a different cultural bubble and, therefore, h...
Hi Linch, I'm sorry for taking so long to reply to this! I mainly just noticed I was conflating several intuitions and I needed to think more to tease them out.
(my head's no longer in this and I honestly never settled on a view/teased out the threads but I wanted to say something because I felt it was quite rude of me to have never replied)
There are also limited positions in organisations as well as limited capacity of senior people to train up junior people but, again, I'm optimistic that 1) this won't be so permanent and 2) we can work out how to better make sure the people who care deeply about effective altruism who have careers outside effective altruism organisations also feel like valued members of the community.
Will we permanently have low capacity?
I think it is hard to grow fast and stay nuanced but I personally am optimistic about ending up as a large community in the long-run (not next year, but maybe next decade) and I think we can sow seeds that help with that (eg. by maybe making people feel glad that they interacted with the community even if they do end up deciding that they can, at least for now, find more joy and fulfillment elsewhere).
Yeah, I also find it very de-stabilizing and then completely forget my own journey instantly once I've reconciled everything and am feeling stable and coherent again.
It's nice to hear I'm not the only one here who isn't in the 99.999th percentile of being stoically unaffected by this.
I think one way to deal with this is to mainly select for people with these weird dispositions who are unusually good at coping with this.
I think an issue with this is that the other 99% of planet Earth might be good allies to have in this whole "save the world" project ...
I liked this commentary even if I disagreed with a lot of the bottom line conclusions. Since we have an inferential gap that could be quite large, I don't expect everything you say to make sense to me.
You are probably directionally correct so I have strong upvoted this to encourage you to continue writing.
I don't have the energy right now to get into the object-level but feel free to share future draft posts as your thoughts develop. If I have a spare moment, I'd be very happy to share any feedback I have on your future thoughts with you.
(all good humor tends to be pointing to some angle of the truth that needs time to become nuanced enough to be more widely legible)