You can give me anonymous feedback here: https://forms.gle/q65w9Nauaw5e7t2Z9
Great! Sounds interesting!
Two examples of newcomers whose presence seems positive or productive:
But this doesn't indicate what could happen to forum discussion after an extensive, large deployment of money.
It's prudent to think about bad scenarios for the forum (e.g. large coordinated outside response, or just ~100 outside people coming in, causing weeks of chatter).
The best scenarios probably involve a forum that encourages and filters for good discussion (because the hundreds of thousands of interested people can't all be accommodated, and just relying on self-selection from a smaller group of people who wander in probably results in adverse selection).
The best outcomes might include bringing in and hosting discussions with great policy expertise, getting EA candidates good exposure, and building understanding and expertise in political campaigning.
I guess a bad scenario is maybe 20-30% probable? I guess most scenarios are just sort of mediocre outcomes, with "streetlight" sorts of limitations in discussion, and selection for the loud voices with fewer outside options.
Very good scenarios seem unlikely without EA effort. Good scenarios may require active involvement and promotion of discussion.
TL;DR: The EA Forum (EA as a whole?) should get ready for attention/influx due to political money on about a 12-month horizon from this comment (so, 2023ish?). So designing/implementing structure or norms, e.g. encouraging high-quality discussion or using real names, may be good.
There is a news cycle going around that SBF will increase political spending for 2024.
So, are we going to talk about the awesome aesthetic design, like the five dimensional meta flower?
Can you write about the design choices and who did this work?
Is Aaron Gertler secretly this generation’s Beethoven?
Is Open Phil harboring an immense pool of artistic talent and will EA see a renaissance in design?
This was great!
Thanks for the approachable writing and specific anecdotes, it’s helpful.
This is probably my weird personal bias, but maybe consider writing slightly deeper stories, or even ornate lessons about the "system". It would be interesting to get your perspective.
I think resource constraints are different between the EA labor pool and young military labor pool (where driver training is a major problem, as you describe), so it’s harder for me to get wisdom from anecdotes that don’t “seem deep”.
Or maybe I’m being ignorant.
My guess is that a reply to my comment above would be:
"Ok, fine, but this rationalizes more effective, better managed, and larger teams for higher impact, since we're earlier on the "curve of returns to labor" than it appears and can accept more labor and capital.
Furthermore, I can, and have been, provisioning this management and setting these conditions, so the argument applies and strong EAs should join my awesome team.
Also, hasn't it been a few months since your last forum ban?"
This is true. One reply to this reply is that it implies EA organizations should be systematically larger than other organizations (due to the OP's reasoning about internalizing externalities and other conditions; see footnote 1 in the comment above).
This is interesting, but it doesn't immediately match our intuition or the general size of EA orgs, which are small (though maybe this is masked by the limited supply of high-quality EA talent).
If Ben West's logic in the OP holds, this might have other implications that are important or interesting (again this is relying on EAs differentially supporting externalities).
No one has said it, but the main critique is:
The post doesn't make an argument for large EA organizations, it makes an argument that EAs are just broadly more impactful than they appear.
The fact that EAs might be 50x more impactful does not imply that we should "add more people than is standard" to an EA organization, because exactly the same logic applies if those people joined a smaller organization, created a new one, or started a personal project.
Another way of saying this: Ben West is saying that EAs are worth (50x) more than they appear at CEA. But they are worth 50x to any EA org.
(I don't think this is true but 🏈Something something motivated reasoning🏈.)
This argument probably requires EA to be especially focused on internalizing externalities, otherwise it applies to for-profit orgs, e.g. making Reddit larger too.
This key condition is missing in the OP, but I think EAs do internalize externalities much much more, so it works.
From your comment, I understand that you believe the funding situation is strong and not limiting for TAI, and also that the likely outcomes of current interventions are not promising.
(Not necessarily personally agreeing with the above.) Given your view, I think one area that could still interest you is "s-risk". This is also relevant to your interest in alleviating massive suffering.
I think talking with CLR, or people such as Chi there might be valuable (they might be happy to speak if you are a personal donor).
Leadership development seems good in longtermism or TAI
(Admittedly it's an overloaded, imprecise statement, but) the common wisdom that AI and longtermism are talent-constrained seems true. The ability to develop new leaders or work is valuable and can give returns, even accounting for your beliefs being correct.
Prosaic animal welfare
Finally, you and other onlookers should be aware that animal welfare, especially the relatively tractable and "prosaic suffering" of farm animals, is one of the areas that has not received a large increase in EA funding.
Some information below should be interesting to cause-neutral EAs. Note that, based on private information:
This animal welfare work would benefit from money and expertise.
Notably, this is an area where EA has been able to claim significant tangible success (for the fraction of animals it has been able to help).
This isn't exactly the answer you wanted and is sort of ruthless:
One guess (which is very likely to be off because of lack of context) is that the person(s) who told you to run a fellowship told you to do this as a way of resolving a problem of not getting funding, and assumed that you were >80% resolved to continue.
This is because a fellowship would allow funders to observe inputs and outputs better and know the potential organizer better.
So I'm saying, maybe they viewed successful execution of a fellowship as a potential signal. For example, an organizer could demonstrate certain kinds of skill or connections (which are otherwise unknowable because the people making decisions are far away). At the same time, the absence of this success, or some other lack of promise for a fellowship (which most of the time doesn't imply a lack of ability or that someone is a "bad EA"), would prevent a potential organizer from using this signal to show ability. So a fellowship is a filter.
So, I'm basically saying the "fellowship answer" might have been an answer to a specific situation of someone not getting funding, and giving them a potential path to continue.
The answer you are reading might be beneficial because it points out that this advice might be very different from an "instruction" or "robustly good advice with guaranteed reward".