Community Organiser for EA UK - https://www.effectivealtruism.uk
Monthly Overload of EA - https://moea.substack.com/
I'm always happy to talk to anyone, so don't hesitate to reach out. Specific things we may want to chat about include:
Topics I enjoy discussing (not exclusively):
If you're thinking about becoming a community organiser, or are currently organising an EA-related group, I'd be happy to share ideas on strategy and community building.
I've been an organiser with EA UK since 2015, working part-time since 2017 and full-time since 2019. I've also had conversations with people setting up groups around the world, as well as career-, cause-, interest- and workplace-related groups.
I have also had quite a few career 1-1s with people in the UK and could be a good sounding board if you have career or project questions.
I might have missed this but can you say how many people took the survey, and how many people filled out the FTX section?
It might have increased recently, but even in 2015 one survey found that 44% of the American public considered AI an existential threat. It's now 55%.
I think EA would improve with more competition as well; what I'm suggesting is more competition in the 'larger' orgs category. If someone disagrees with how things are run at one of the bigger EA orgs, there are very few other places for them to go within EA.
I don't think the issues you linked to are caused by centralisation; there are lots of badly run small organisations, and without larger organisations there often isn't a way to hold bad actors accountable.
I'm talking about increasing the number of large organisations. I don't think I can do much about how many different types of funders we have, which is a separate question.
On the first point, if there had been more large EA organisations when FTX collapsed, it would have been easier to handle the fallout, with more places for smaller organisations and individuals to go to for support.
On the last point, it seems that was part of why DARPA had success: they ran lots of projects and focused on letting the best succeed rather than maintaining failing ideas.
I think every cause can be presented normally or weirdly depending on how you do it; it's just that in that example Kelsey was discussing global development. I think a lot of people in EA assume that more people are interested in global development because they are just looking outside their bubble into a slightly larger bubble.
I would agree that it's usually best to introduce people to ideas closer to their interests (in any cause area) before moving on to related ones. Although sometimes they'll be more interested in the 'weird' ideas before getting involved in EA, and EA helps them approach those ideas practically.
I'm not sure the metaphor holds up.
I imagine there are many more people interested in AI safety, biosecurity, or nuclear risks who would be put off if they had to start by learning about the GWWC pledge.
Kelsey Piper, writing about Vox analytics: 'Global poverty stuff doesn’t do very well. This is something that makes me very sad, and it makes my mother very sad. She reads all my articles, and she’s like, “The global poverty stuff is the best, you should do more of that.” I also would love to do more of that. I think it’s a really important topic, but it doesn’t get nearly as many views or as much attention as both the existential risk stuff and sort of the animal stuff and the weird big ideas sort of content.'
It seems to me that the new Community section is closer to what a traditional subforum looks like on a forum.
For the other subforums to have succeeded, they probably should have covered topics that were already getting lots of posts and comments.
That doesn't seem to match with EA being a front-cover story last year, and being shown in a positive light.