I invite you to ask anything you’re wondering about that’s remotely related to effective altruism. There’s no such thing as a question too basic.
Try to ask your first batch of questions by Monday, October 17 (so that people who want to answer questions can know to make some time around then).
Everyone is encouraged to answer (see more on this below). There’s a small prize for questions and answers. [Edit: prize-winning questions and answers are announced here.]
This is a test thread — we might try variations on it later.[1]
How to ask questions
Ask anything you’re wondering about that has anything to do with effective altruism.
More guidelines:
- Try to post each question as a separate "Answer"-style comment on the post.
- There’s no such thing as a question too basic (or too niche!).
- Follow the Forum norms.[2]
I encourage everyone to view asking questions that you think might be “too basic” as a public service; if you’re wondering about something, others might, too.
Example questions
- I’m confused about Bayesianism; does anyone have a good explainer?
- Is everyone in EA a utilitarian?
- Why would we care about neglectedness?
- Why do people work on farmed animal welfare specifically vs just working on animal welfare?
- Is EA an organization?
- How do people justify working on things that will happen in the future when there’s suffering happening today?
- Why do people think that forecasting or prediction markets work? (Or, do they?)
How to answer questions
Anyone can answer questions, and there can (and should) be multiple answers to many of the questions. I encourage you to point people to relevant resources — you don’t have to write everything from scratch!
Norms and guides:
- Be generous and welcoming (no patronizing).
- Honestly share your uncertainty about your answer.
- Feel free to give partial answers or point people to relevant resources if you can’t or don’t have time to give a full answer.
- Don’t represent your answer as an official answer on behalf of effective altruism.
- Keep to the Forum norms.
You should feel free and welcome to vote on the answers (upvote the ones you like!). You can also give answers to questions that already have an answer, or reply to existing answers, especially if you disagree.
The (small) prize
This isn’t a competition, but just to help kick-start this thing (and to celebrate excellent discussion at the end), the Forum team will award $100 each to my 5 favorite questions, and $100 each to my 5 favorite answers (questions posted before Monday, October 17, answers posted before October 24).
I’ll post a comment on this post with the results, and edit the post itself to list the winners. [Edit: prize-winning questions and answers are announced here.]

[1] Your feedback is very welcome! We’re considering trying out themed versions in the future; e.g. “Ask anything about cause prioritization” or “Ask anything about AI safety.”
We’re hoping this thread will help get clarity and good answers, counter some impostor syndrome that exists in the community (see 1 and 2), potentially rediscover some good resources, and generally make us collectively more willing to ask about things that confuse us.
[2] If I think something is rude or otherwise norm-breaking, I’ll delete it.
Thanks for the link!
I think most examples in the post do not include the part about "as a result of public or private feedback", though I think I communicated this poorly.
My reasoning for going beyond a list of mistakes and changes to include a description of how they discovered the issue or the feedback that prompted it[1] is that doing so may be more effective at allaying people's fears of critiquing EA leadership.
For example, while mistakes and updates are documented, if you were concerned about, say, gender diversity (~75% men in senior roles) in the organization,[2] but you were an OpenPhil employee or someone receiving money from OpenPhil, would the contents of the post[3] you linked actually make you feel comfortable raising these concerns?[4] Or would you feel better if there was an explicit acknowledgement that someone in a similar situation had previously spoken up and contributed to positive change?
I also think curating something like this could be beneficial not just for the EA community, but also for leaders and organizations who have a large influence in this space. I'll leave the rest of my thoughts in a footnote to minimize derailing the thread, but would be happy to discuss further elsewhere with anyone who has thoughts or pushback about this.[5]
Anonymized as necessary
I am not saying that I think OpenPhil in fact has a gender diversity problem (is 3/4 men too many? What about 2/3? What about 3/5? Is this even the right way of thinking about this question?), nor am I saying that people working at OpenPhil or receiving their funding don't feel comfortable voicing concerns.
I am not using OpenPhil as an example because I believe they are bad, but because they seem especially important as both a major funder of EA and as folks who are influential in object-level discussions on a range of EA cause areas.
Specifically, this would be Holden's Three Key Issues I've Changed My Mind About
This also applies to CEA's "Our Mistakes" page, which includes the line "downplaying critical feedback from the community". Since the page does not explain why the feedback was downplayed or describe steps taken to address the causes of the downplaying, one might even update away from providing feedback after reading it.
(In this hypothetical, I am assuming that the original concern, "People are pretty justified in their fears of critiquing EA leadership/community norms," is true. This is an assumption, not something I personally know to be the case.)
Alongside the many bad criticisms of EA, I have also heard some good-faith criticisms of major organizations in the EA community (some from people directly involved) that I would consider fairly serious. I think having these criticisms circulating in the community through word of mouth might be worse than having a public compilation, because:
1) it contributes to distrust in EA leaders or their ability to steer the EA movement (which might make it harder for them to do so);
2) it means there are fewer opportunities for this to be fixed if EA leaders aren't getting an accurate sense of the severity of the mistakes they might be making, which might also further exacerbate this problem; and
3) it might make it more difficult to attract or retain the best people in the EA space.
I think it could be in the interest of organizations that play a large role in steering the EA movement to compile all the good-faith feedback and criticism they've received, along with a response that includes points of (dis)agreement and any updates made as a result of the feedback (possibly even a reward, if it has contributed to a meaningfully positive change).
If the criticism is misplaced, it provides an opportunity to offer a justification that might have been overlooked, and to minimize rumors or speculation about these concerns. To the extent that the criticism is not misplaced, it provides an opportunity for accountability and responsiveness that builds and maintains the community's trust. It also means that those who disagree at a fundamental level with the direction in which the movement is being steered can make better-informed decisions about the extent of their involvement with the movement.
This also means other organizations that might be subject to similar criticisms can benefit from the feedback and responses, without having to make the same mistakes themselves.
One final benefit of including responses to substantial pieces of feedback, and not just "mistakes", is that feedback can be relevant even if it is not in response to a mistake. For example, the post Red Teaming CEA’s Community Building Work claims that CEA's mistakes page has "routinely omitted major problems, significantly downplayed the problems that are discussed, and regularly suggests problems have been resolved when that has not been the case".
Part of Max's response here suggests some of these weren't considered "mistakes" but were "suboptimal". While I agree it would be unrealistic to include every inefficiency in every project, I can imagine two classes of scenarios where responses to feedback could capture important things that responses to mistakes do not.
The first class is when there's no clear seriously concerning event that one can point to, but the status quo is detrimental in the long run if not changed.
For example, if a leader of a research organization is a great researcher but bad at running a research organization, at what stage does this count as a "mistake"? If an organization lacks diversity, to whatever extent this is harmful, at what stage does this count as a "mistake"?
The second class is when the organization itself is perpetuating harm in the community but isn't subject to any formal accountability mechanisms. If an organization funded by CEA does harm, it can have its funding pulled. If an individual is harmful to the community, they can be banned. While there has been some form of accountability in what appears to be an informal, crowd-sourced list of concerns, this seemed to be prompted by egregious and obvious cases of alleged misconduct, and might not work for all organizations. Imagine an alternate universe where OpenPhil started actively contributing to harm in the EA community, and this harm grew slowly over time. How much harm would they need to be doing for internal or external feedback to be made public to the rest of the EA community? How much harm would they need to do for a similar crowd-sourced list of concerns to arise? How much harm would they need to do for the EA community to disavow them and their funding? Do we have accountability mechanisms and systems in place to reduce the risk here, or to notice it early?