Building effective altruism
Growing, shaping, or otherwise improving effective altruism as a practical and intellectual project

Quick takes

I think working on improving the EA introductory program will eventually be important. It's an extremely valuable thing to do well, and I think it could be improved. I'm running a 6-week version right now, and I'll see if I feel the same way at the end.
I think that EA outreach can be net positive in a lot of circumstances, but there is one version of it that always makes me cringe: the targeting of really young people (for this quick take, anyone under 20). This would basically include any high school targeting and most early-stage college targeting. I dislike it for two reasons: 1) it feels a bit like targeting the young and naive in a way I wish we would not have to, given the quality of our ideas, and 2) these folks are typically far from making a real impact, and there is lots of time for them to lose interest or get lost along the way. Interestingly, this stands in contrast to my personal experience — I found EA in my early 20s and would have benefited significantly from hearing about it in my teenage years.
I'm the co-founder and one of the main organizers of EA Purdue. Last fall, we got four signups for our intro seminar; this fall, we got around fifty. Here's what's changed over the last year:

* We got officially registered with our university. Last year, we were an unregistered student organization, and as a result lacked access to opportunities like the club fair and were not listed on the official Purdue extracurriculars website. After going through the registration process, we were able to take advantage of these opportunities.
* We tabled at club fairs. Last year, we did not attend club fairs, since we weren't yet eligible for them. This year, we were eligible and attended, and we added around 100 people to our mailing list and GroupMe. This is probably the most directly impactful change we made.
* We had a seminar sign-up QR code at the club fairs. This actually changed between the club fairs, since we were a bit slow to get the seminar sign-up form created. A majority of our sign-ups came from the one club fair where we had the QR code, despite the other club fair being ~10-50x larger.
* We held our callout meeting earlier. Last year, I delayed the first intro talk meeting until the middle of the third week of school, long after most clubs finished their callouts. This led to around 10 people showing up, which was still more than I expected, but not as many as I had hoped. This year, we held the callout early in the second week of school, and ended up getting around 30-35 attendees. We also gave those attendees time to fill out the seminar sign-up form at the callout, and this accounted for most of the rest of our sign-ups.
* We brought food to the callout. People are more likely to attend meetings at universities if there is food, especially if they're busy and can skip a long dining court line by listening to your intro talk. I highly recommend bringing food to your regular meetings too - attendance at our general meetings doubled last year after I s
At this point, we need an 80k page on "What to do after leaving OpenAI":

1. Don't start another AI safety lab
I’m part of a working group at CEA that’s started scoping out improvements for effectivealtruism.org. Our main goals are:

1. Improve understanding of what EA is (clarify and simplify messaging, better address common misconceptions, showcase more tangible examples of impact, people, and projects)
2. Improve perception of EA (show more of the altruistic and other-directed parts of EA alongside the effective, pragmatic, results-driven parts; feature more testimonials and impact stories from a broader range of people; make it feel more human and up-to-date)
3. Increase high-value actions (improve navigation, increase newsletter and VP signups, make it easier to find actionable info)

For the first couple of weeks, I’ll be testing how the current site performs against these goals, then move on to the redesign, which I’ll user-test against the same goals.

If you’ve visited the current site and have opinions, I’d love to hear them. Some prompts that might help:

* Do you remember what your first impression was?
* Have you ever struggled to find specific info on the site?
* Is there anything that annoys you?
* What do you think could be confusing to someone who hasn't heard about EA before?
* What’s been most helpful to you? What do you like?

If you prefer to write your thoughts anonymously you can do so here, although I’d encourage you to comment on this quick take so others can agree- or disagree-vote (and I can get a sense of how much the feedback resonates).
David Rubinstein recently interviewed Philippe Laffont, the founder of Coatue (probably worth $5-10b). When asked about his philanthropic activities, Laffont basically said he’s been too busy to think about it, but wanted to do something someday. I admit I was shocked. Laffont is a savant technology investor and entrepreneur (including in AI companies), and it sounded like he literally hadn’t put much thought into what to do with his fortune. Are there concerted efforts in the EA community to get these people on board? Like, is there a Google Doc with a six-degrees-of-separation plan to get dinner with Laffont? The guy went to MIT and invests in AI companies. It just wouldn’t be hard to get in touch. It seems like increasing the probability he aims some of his fortune at effective charities would justify a significant effort here. And I imagine there are dozens or hundreds of people like this. Am I missing some obvious reason this isn’t worth pursuing or likely to fail? Have people tried? I’m a bit of an outsider here, so I’d love to hear people’s thoughts on what I’m sure seems like a pretty naive take! https://youtu.be/_nuSOMooReY?si=6582NoLPtSYRwdMe
I've been thinking a bunch about a fundamental difference between the EA community and the LessWrong community.

LessWrong is optimized for the enjoyment of its members. At any LessWrong event I go to, in any city, the focus is on "what will we find fun to do?" This is great. Notice that the community isn't optimized for "making the world more rational." It selects for people interested in rationality, and then, once those people are in the same room, it optimizes for FUN for them.

EA as a community is NOT optimized for the enjoyment of its members. It is optimized for making the world a better place. This is a feature, not a bug, and surely it should be net positive, since its goal is by definition net positive. When planning an EAG or EA event, you measure it on impact: say, professional connections made, or how many new high-quality AI alignment researchers you might have created on the margin. You don't measure it on how much people enjoyed themselves (or you do, but for instrumental reasons, to get more people to come so that you can continue to have impact).

As a community organizer in both spaces, I notice I more easily leave EA events I organized feeling burnt out and unfulfilled, compared to similar LW/ACX events. I think the fundamental difference mentioned before explains why. Dunno if I am pointing at anything that resonates with anyone. I don't see this discussed much among community organizers. Seems important to highlight. Basically, in LW/ACX spaces - specifically as an organizer - I more easily feel like a fellow traveller up for a good time. In EA spaces - specifically as an organizer - I more easily feel like an unpaid recruiter.
I quit. I'm going to stop calling myself an EA, and I'm going to stop organizing EA Ghent, which, since I'm the only organizer, means that in practice it will stop existing. It's not just because of Manifest; that was merely the straw that broke the camel's back. In hindsight, I should have stopped after the Bostrom or FTX scandal. And it's not just because they're scandals; it's because they highlight a much broader issue within the EA community regarding whom it chooses to support with money and attention, and whom it excludes. I'm not going to go to any EA conferences, at least not for a while, and I'm not going to give any money to the EA fund. I will continue working for my AI safety, animal rights, and effective giving orgs, but will no longer be doing so under an EA label. Consider this a data point on which choices repel which kinds of people, and whether that's worth it.

EDIT: This is not a solemn vow forswearing EA forever. If things change, I would be more than happy to join again.

EDIT 2: For those wondering what this quick take is reacting to, here's a good summary by David Thorstad.