Chris Leong

Organiser @ AI Safety Australia and NZ
4494 karma · Joined Nov 2015 · Sydney NSW, Australia

Bio

Currently doing local AI safety movement building in Australia and NZ.

Comments (777)

One difference between our perspectives is that I don't take for granted that this process will occur unless the conditions are right. And the faster a movement grows, the less likely it is that lessons get passed on to those who are coming in. This isn't to dismiss these people; it's just how group dynamics work, and a reality of more experienced people having less time to engage.

I want to see EA grow fast. But there's some speed, I'm not sure exactly what, at which growth will most likely degrade our culture. All this said, I'm less concerned about this than before. As terrible as the FTX collapse and recent events have been, I wouldn't be surprised if we no longer have to worry about potentially growing too fast.

I always get confused about the difference between superposition and polysemanticity. It would be great if the article clarified this.

I guess a lot of the strange causes people explored weren't chosen in a top-down manner. Rather, someone just decided to start a project and seek funding for it.

This is probably changing now that Rethink is incubating new orgs and Charity Entrepreneurship is thinking further afield, but regardless, I expect most people who want EA to stay weird want people doing this kind of exploration.

This is exciting!

Do you have any thoughts on how the community should be following up on this?

I like your attempt to draw a distinction between two different ways of viewing community building; however, some parts of the table seem strange.

When people say that they want EA to stay weird, they mean that they want people exploring all kinds of crazy cause areas instead of just sticking to the main ones (which is in tension with your definition of cause-first).

Also: one of the central arguments for leaning more towards EA being small and weird is that you end up with a community more driven by principles, because a) slower growth makes it easier for new members to absorb knowledge from more experienced ones, rather than from people who don't yet understand the philosophy very well themselves, and b) lower expectations for growth make it easier to focus on people with whom the philosophy really resonates, rather than marginally influencing people who aren't that keen on it.

Another point: there are two different ways to build a member-first community:

  • The first is to try to build a welcoming community that best meets the needs of everyone who has an interest in the community.
  • The second is to build a community that focuses on the needs of the core members and relies on them to drive impact.

These two definitions lead to two very different types of community.

To build the first, you'd want to engage in broad outreach with diverse messaging. With the second, it would be more about finding the kinds of people who most resonate with your principles. With the first, you try to meet people where they are; with the second, you're more interested in people who will deeply adopt your principles. With the first, you want engagement with as many people as possible; with the second, you want engagement to be as deep as possible.

I guess my perspective is that all these revealed preferences show is that people prefer to maintain their social status (the benefit accrues to them personally) rather than support an unpopular change that is extremely unlikely to happen, and where their support is extremely unlikely to make a difference (the benefits are distributed).

So even if I accept this method of finding truth, it actually shows less than it might appear at first glance.

I agree with this up until:

If AI doomers think the expected harms of AI are too low to justify even temporary tweaks to US immigration policy, that suggests the risk of AI killing us all isn’t that high.

Focusing on immigration isn't a clear win, in that it would require the expenditure of political capital and lobbying resources, and could burn a lot of credibility with the Democrats.

But I think the deeper issue is that this doesn't seem like a good way of identifying the truth. Maybe you could argue that if the doom worldview implies we should make immigration changes, and people with that worldview irrationally reject them, then we can be a bit more skeptical of their reasoning abilities in general.

However, for basically any group, you can find one thing they're irrational about and then try to use it to discredit them. So this isn't a very reliable method of reasoning.

Sorry, I’m trying to reduce the amount of time I spend on the forum.

More weight on community opinions than you suggested.
