I invite you to ask anything you’re wondering about that’s remotely related to effective altruism. There’s no such thing as a question too basic.
Try to ask your first batch of questions by Monday, October 17 (so that people who want to answer questions can know to make some time around then).
Everyone is encouraged to answer (see more on this below). There’s a small prize for questions and answers. [Edit: prize-winning questions and answers are announced here.]
This is a test thread — we might try variations on it later.[1]
How to ask questions
Ask anything you’re wondering about that has anything to do with effective altruism.
More guidelines:
- Try to post each question as a separate "Answer"-style comment on the post.
- There’s no such thing as a question too basic (or too niche!).
- Follow the Forum norms.[2]
I encourage you to view asking questions you think might be “too basic” as a public service; if you’re wondering about something, others probably are, too.
Example questions
- I’m confused about Bayesianism; does anyone have a good explainer?
- Is everyone in EA a utilitarian?
- Why would we care about neglectedness?
- Why do people work on farmed animal welfare specifically vs just working on animal welfare?
- Is EA an organization?
- How do people justify working on things that will happen in the future when there’s suffering happening today?
- Why do people think that forecasting or prediction markets work? (Or, do they?)
How to answer questions
Anyone can answer questions, and there can (and should) be multiple answers to many of the questions. I encourage you to point people to relevant resources — you don’t have to write everything from scratch!
Norms and guides:
- Be generous and welcoming (no patronizing).
- Honestly share your uncertainty about your answer.
- Feel free to give partial answers or point people to relevant resources if you can’t or don’t have time to give a full answer.
- Don’t represent your answer as an official answer on behalf of effective altruism.
- Keep to the Forum norms.
You should feel free to vote on the answers (upvote the ones you like!). You can also add answers to questions that already have one, or reply to existing answers, especially if you disagree.
The (small) prize
This isn’t a competition, but just to help kick-start this thing (and to celebrate excellent discussion at the end), the Forum team will award $100 each to my 5 favorite questions, and $100 each to my 5 favorite answers (questions posted before Monday, October 17, answers posted before October 24).
I’ll post a comment on this post with the results, and edit the post itself to list the winners. [Edit: prize-winning questions and answers are announced here.]

[1]
Your feedback is very welcome! We’re considering trying out themed versions in the future; e.g. “Ask anything about cause prioritization” or “Ask anything about AI safety.”
We’re hoping this thread will surface clear explanations and good answers, counter some of the impostor syndrome that exists in the community (see 1 and 2), resurface some good resources, and generally make us collectively more willing to ask about things that confuse us.
[2]
If I think something is rude or otherwise norm-breaking, I’ll delete it.
I agree with your approach to the question, but perhaps, if we really take the simulation hypothesis seriously (or at least consider it probable enough to concern us), the first step should be finding a way to tell whether or not we actually live in a simulation. Research in physics and astronomy could explicitly look for, and devise experiments to demonstrate, systematic inconsistencies in the fabric of our universe that could hint at the made-up nature of all its laws. This is, in a way, an indirect answer to your last question. If effective altruism is not an ideology just to be followed but a rational enterprise grounded in the actual nature of our universe, then it should also be concerned with improving our understanding of it, even if that eventually leads to a radical rethink of what effective altruism should be.