Comments

On funding, trust relationships, and scaling our community [PalmCone memo]

Yup, existing EAs do not disappear if we go bust in this way. But I'm pretty convinced it would still be very bad. Roughly, the community dies even if the people making it up don't vanish. Trust, discussion, and reputation dry up; the cluster of people who consider themselves "EA" is now very different from the current thing, and that cluster kinda starts doing different stuff on its own. Further community-building efforts just grow the new thing, not "real" EA.

I think in this scenario the best thing to do is for the core of old-fashioned EAs to basically disassociate from this new thing, come up with a different name/brand, and start the community-building project over again.

On funding, trust relationships, and scaling our community [PalmCone memo]

But I am also afraid that ... we will see a rush of ever greater numbers of people into our community, far beyond our ability to culturally onboard them


I've had a model of community building at the back of my mind for a while that's something like this:

"New folks come in, and pick up knowledge/epistemics/heuristics/culture/aesthetics from the existing group, for as long as their "state" (wrapping all these things up in one number for simplicity) is "less than the community average". But this is essentially a one way diffusion sort of dynamic, which means that the rate at which newcomers pick stuff up from the community is about proportional to the gap between their state and the community state, and proportional to the size of community vs number of relative newcomers at any given time."

The picture this leads to is kind of a blackjack situation. We want to grow as fast as we can, for impact reasons. But if we grow too fast, we can't onboard people fast enough, the community average starts dropping, and seems unlikely to recover (we go bust). On this view, figuring out how to "teach EA culture" is extremely important - it's a limiting factor for growth, and failure due to going bust is catastrophic while failure from insufficient speed is gradual.
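To make the "go bust" picture concrete, here's a minimal toy simulation of the one-way diffusion model. The functional form and every constant are made up on the spot, so don't read anything quantitative into it; it's only meant to show the qualitative dynamic.

```python
def simulate(growth_rate, steps=40):
    """Toy, agent-based version of the one-way diffusion model above.

    Each member has a scalar "culture" state. Newcomers arrive at 0 and
    move toward the community average at a rate proportional to the gap
    and to the ratio of established members to newcomers.
    """
    states = [1.0] * 100                                  # founding members
    for _ in range(steps):
        mean = sum(states) / len(states)
        # Anyone near or above the current average counts as "established".
        established = sum(1 for s in states if s >= 0.9 * mean)
        newcomers = len(states) - established
        ratio = established / max(newcomers, 1)
        k = min(1.0, 0.2 * ratio)                         # onboarding rate this step
        # One-way diffusion: members below the mean move toward it; nobody moves down.
        states = [s + k * max(mean - s, 0.0) for s in states]
        # Growth: new members arrive with state 0.
        states += [0.0] * int(growth_rate * len(states))
    return sum(states) / len(states)

for g in [0.02, 0.05, 0.10, 0.25]:
    print(f"growth {g:.0%}/step -> average culture after 40 steps: {simulate(g):.2f}")
```

The mechanism to notice: faster growth collapses the established-to-newcomer ratio, which collapses the onboarding rate, which drags the community average down much faster - that's the bust.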

Currently prototyping something at the Claremont uni group to try and accelerate this. Seems like you've thought about this sort of thing a lot - if you've got time to give feedback on a draft, that would be much appreciated.

"Big tent" effective altruism is very important (particularly right now)

I think the best remedy to looking dogmatic is actually having good, legible epistemics, not avoiding coming across as dogmatic by adding false uncertainty.

This is a great sentence, I will be stealing it :)

However, I think it's partially wishful thinking to expect that having good, legible epistemics is sufficient for not coming across as dogmatic. A lot of these first impressions are just going to be pattern-matching, whether we like it or not.

I would be excited to find ways to pattern-match better without actually sacrificing anything substantive. One thing I've found anecdotally is that a sort of "friendly transparency" works pretty well for this: be up front about what you believe and why, don't try to hide ideas that might scare people off, and be open about the optics - the ways you're worried things might come across badly, and why those bad impressions are misleading.

Emrik's Shortform

Hey, I really like this re-framing! I'm not sure what you meant to say in the second and third sentences, though :/

james.lucassen's Shortform

Question for anyone who has the interest/means/time to look into it: which topics on the EA Forum are overrepresented/underrepresented? I would be interested in comparisons of (posts/views/karma/comments) per (person/dollar/survey interest) across various cause areas. Mostly interested in the situation now, but seeing changes over time would be great!

My hypothesis [DO NOT VIEW IF YOU INTEND TO INVESTIGATE]:

I expect longtermism to be WILDLY, like 20x, overrepresented. If this is the case, I think it may be responsible for a lot of the recent angst about the relationship between longtermism and EA more broadly, and it would point to some concrete actions to take.
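If anyone does pick this up, here's a rough sketch of the kind of comparison I have in mind. Every number in it is an invented placeholder, not real data; the actual exercise would pull real post counts and survey/funding shares.

```python
# Sketch of the comparison I have in mind. All numbers are invented
# placeholders; a real version would use actual forum and survey data.

forum_post_share = {            # hypothetical share of recent forum posts
    "longtermism": 0.50,
    "global health": 0.20,
    "animal welfare": 0.15,
    "meta/community": 0.15,
}
survey_interest_share = {       # hypothetical share of survey-reported interest
    "longtermism": 0.25,
    "global health": 0.40,
    "animal welfare": 0.20,
    "meta/community": 0.15,
}

for cause, post_share in forum_post_share.items():
    ratio = post_share / survey_interest_share[cause]
    print(f"{cause}: {ratio:.1f}x (forum post share / survey-interest share)")
```

A ratio well above 1 would mean the cause area is overrepresented on the forum relative to that baseline; well below 1, underrepresented.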

Most students who would agree with EA ideas haven't heard of EA yet (results of a large-scale survey)

This is a thing I and a lot of other organizers I've talked to have really struggled with. My pet theory that I'll eventually write up and post (I really will, I promise!) is that you need Alignment, Agency, and Ability to have a high impact. Would definitely be interested in actual research on this.

Most students who would agree with EA ideas haven't heard of EA yet (results of a large-scale survey)

Nice work! Lots of interesting results in here that I think lead to concrete strategy insights.

only 7.4% of New York University students knew what effective altruism (EA) is. At the same time, 8.8% were extremely sympathetic to EA ... Interestingly, these EA-sympathetic students were largely ignorant about EA; only 14.5% knew about it before the survey.

This is a great core finding! I think I got a couple of important lessons from these three numbers alone. Outreach could probably be a few times bigger before the proportion of EA-sympathetic students who already know about EA gets close enough to 100% for sharply diminishing returns. Knowing what EA is seems like about 2:1 evidence in favor of being EA-sympathetic, which is useful, but not that huge. And getting the impression that "nobody at my school knows about EA :(" isn't actually very bad news - folks who are interested in EA do know about it at a meaningfully higher rate, but even at the ideal level of outreach, maybe only 50% of students would know what EA is.
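For the "2:1 evidence" bit, here's the back-of-envelope calculation behind it, taking the three quoted numbers at face value (and treating them as exact population proportions, which they aren't):

```python
# Back-of-envelope Bayes factor for "knows about EA" as evidence of being
# EA-sympathetic, using the three survey numbers quoted above.
p_knows = 0.074                 # P(knows what EA is)
p_sympathetic = 0.088           # P(extremely sympathetic to EA)
p_knows_given_symp = 0.145      # P(knows | sympathetic)

# Law of total probability gives P(knows | not sympathetic).
p_knows_given_not = (p_knows - p_knows_given_symp * p_sympathetic) / (1 - p_sympathetic)

likelihood_ratio = p_knows_given_symp / p_knows_given_not
print(f"P(knows | not sympathetic) = {p_knows_given_not:.3f}")  # ~0.067
print(f"likelihood ratio = {likelihood_ratio:.1f} : 1")         # ~2.2 : 1
```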

Strikingly, 47.9% of this group did not consider existential risk mitigation to be a global priority.

This seems relevant to the recent discussion of "longtermism" vs "X-risk" framings and of how important arguing for the size of the far future is: it suggests the X-risk-only framing may not be good for outreach. My impression is that the importance of X-risk, even without the "size of the future" piece of the argument, seems clear to someone who's been around EA for a while, but isn't obvious enough on its own to work well for outreach, where attention is a scarce resource. Accepting the argument that X-risk is high only leads to prioritizing it about half the time. I wonder how including a size-of-the-future argument would change this?

There were few robust demographic predictors of EA agreement. Neither gender, SAT scores, nor most study subjects significantly correlated with it.

This is a big update! I expected correlations on all three of those things. It suggests the current EA stereotype is due more to founder effects than to actual differences in affinity for the ideas, which is huge for outreach targeting.

Longtermist slogans that need to be retired

I'm unsure if I agree or not. I think this could benefit from a bit of clarification on the "why this needs to be retired" parts.

For the first slogan, it seems like you're saying that it is not a complete argument for longtermism - just because the future is big doesn't mean it's tractable, or neglected, or valuable at the margin. I agree that it's not a complete argument, and if I saw someone framing it that way I would object. But I don't think that means we need to retire the phrase, unless we see it constantly being used as a strawman or something. It's not complete, but it's a quick way to summarize a big part of the argument.

For the second one, it sounds like you're saying the slogan is misleading - it doesn't accurately represent the work being done, which is mostly on lock-in events rather than on affecting the long-term future broadly. This is true, but it takes only one extra sentence to say "but this is hard, so in practice we focus on lock-in". The slogan is a quick way to summarize the philosophical motivations, but it does seem pretty detached from practice.

I think my takeaway from thinking through this comment is this:

  • Longtermism is a complicated argument with a lot of separate pieces
  • We have slogans that summarize some of those pieces and leave out others
  • Those slogans are useful in a high-context environment, but can be misleading for those who don't already know all the context they implicitly rely on

The Effective Altruism culture

Yes, 100% agree. I'm just personally somewhat nervous about community-building strategy and the future of EA, so I want to be very careful. I tried to be neutral in my comment because I really don't know how inclusive/exclusive we should be, but I think I might have accidentally framed it in a way that reads as implicitly leaning exclusive, probably because I read the original post as implicitly leaning inclusive.

The Effective Altruism culture

This is good and I want to see explicit discussion of it. One framing that I think might be helpful:

It seems like the cause of a lot of the recent "identity crisis" in EA is that we're violating good heuristics. If you're trying to do the most good, a lot of the time that really does mean you should be very frugal, and inclusive, and wary of the in-group, and stuff like that.

However, it seems like we might live in a really unusual world. If we are in fact massively talent-constrained, and the majority of impact comes from really high-powered talent and "EA celebrities", then maybe we are just in one of the worlds where these heuristics lead us astray, despite being good heuristics overall.

Ultimately, I think it comes down to: "if we live in a world where inclusiveness leads to the highest impact, I want EA to be inclusive. If we live in a world where elitism leads to the highest impact, I want EA to be elitist". That feels really uncomfortable to say, which I think is good, but we should be able to overcome discomfort IF we need to.
