I invite you to ask anything you’re wondering about that’s remotely related to effective altruism. There’s no such thing as a question too basic.
Try to ask your first batch of questions by Monday, October 17 (so that people who want to answer questions know to set aside some time around then).
Everyone is encouraged to answer (see more on this below). There’s a small prize for questions and answers. [Edit: prize-winning questions and answers are announced here.]
This is a test thread — we might try variations on it later.[1]
How to ask questions
Ask anything you’re wondering about that has anything to do with effective altruism.
More guidelines:
- Try to post each question as a separate "Answer"-style comment on the post.
- There’s no such thing as a question too basic (or too niche!).
- Follow the Forum norms.[2]
I encourage you to treat asking questions that might seem “too basic” as a public service; if you’re wondering about something, others probably are, too.
Example questions
- I’m confused about Bayesianism; does anyone have a good explainer?
- Is everyone in EA a utilitarian?
- Why would we care about neglectedness?
- Why do people work on farmed animal welfare specifically, rather than on animal welfare in general?
- Is EA an organization?
- How do people justify working on things that will happen in the future when there’s suffering happening today?
- Why do people think that forecasting or prediction markets work? (Or, do they?)
How to answer questions
Anyone can answer questions, and many questions can (and should) get multiple answers. I encourage you to point people to relevant resources; you don’t have to write everything from scratch!
Norms and guides:
- Be generous and welcoming (no patronizing).
- Honestly share your uncertainty about your answer.
- Feel free to give partial answers or point people to relevant resources if you can’t or don’t have time to give a full answer.
- Don’t represent your answer as an official answer on behalf of effective altruism.
- Keep to the Forum norms.
Please feel free to vote on the answers (upvote the ones you like!). You can also add answers to questions that already have one, or reply to existing answers, especially if you disagree.
The (small) prize
This isn’t a competition, but just to help kick-start this thing (and to celebrate excellent discussion at the end), the Forum team will award $100 each to my 5 favorite questions, and $100 each to my 5 favorite answers (questions posted before Monday, October 17, answers posted before October 24).
I’ll post a comment on this post with the results, and edit the post itself to list the winners. [Edit: prize-winning questions and answers are announced here.]

[1] Your feedback is very welcome! We’re considering trying out themed versions in the future; e.g. “Ask anything about cause prioritization” or “Ask anything about AI safety.” We’re hoping this thread will help get clarity and good answers, counter some impostor syndrome that exists in the community (see 1 and 2), potentially rediscover some good resources, and generally make us collectively more willing to ask about things that confuse us.

[2] If I think something is rude or otherwise norm-breaking, I’ll delete it.
When I read the post “Critiques of EA that I want to read,” one section I found very concerning was “People are pretty justified in their fears of critiquing EA leadership/community norms.”
1) How seriously is this concern taken by those who are considered EA leadership, by major/public-facing organizations, or by those working on community health? (say, CEA, OpenPhil, GiveWell, 80,000 Hours, Forethought, GWWC, FHI, FTX)
2a) What plans or actions have been taken or considered?
2b) Do any of these solutions interact with the current EA funding situation and distribution? Why/why not?
3) Are there publicly available compilations of times where EA leadership or major/public facing organizations have made meaningful changes as a result of public or private feedback?
(Additional note: there were a lot of publicly supportive comments[1] on the post “Democratising Risk - or how EA deals with critics,” yet the overall impression seems to be that, despite these public comments, one of the authors was disappointed by what came out of it. It’s unclear whether the recent Criticism/Red-teaming contest was a result of these events, but it would be useful to know which organizations considered or adopted any of the suggestions listed[2] (or alternative strategies to mitigate the concerns raised), and what the process behind that consideration looked like. I use this as an example primarily because it was a higher-profile post that involved engagement from many who would be considered “EA leaders.”)
[1] 1, 2, 3, 4
[2] "EA needs to diversify funding sources by breaking up big funding bodies and by reducing each orgs’ reliance on EA funding and tech billionaire funding, it needs to produce academically credible work, set up whistle-blower protection, actively fund critical work, allow for bottom-up control over how funding is distributed, diversify academic fields represented in EA, make the leaders' forum and funding decisions transparent, stop glorifying individual thought-leaders, stop classifying everything as info hazards…amongst other structural changes."
Thanks for asking this. I can chime in, although obviously I can't speak for all the organizations listed, or for "EA leadership." Also, I'm writing as myself — not a representative of my organization (although I mention the work that my team does).
- I think the Forum team takes this worry seriously, and we hope that the Forum contributes to making the EA community more truth-seeking in a way that disregards status or similar phenomena (as much as possible). One of the goals for the Forum is to improve community norms and epistemics, and this (criticism…