I invite you to ask anything you’re wondering about that’s remotely related to effective altruism. There’s no such thing as a question too basic.
Try to ask your first batch of questions by Monday, October 17 (so that people who want to answer questions can know to make some time around then).
Everyone is encouraged to answer (see more on this below). There’s a small prize for questions and answers. [Edit: prize-winning questions and answers are announced here.]
This is a test thread — we might try variations on it later.[1]
How to ask questions
Ask anything you’re wondering about that has anything to do with effective altruism.
More guidelines:
- Try to post each question as a separate “Answer”-style comment on the post.
- There’s no such thing as a question too basic (or too niche!).
- Follow the Forum norms.[2]
I encourage you to view asking questions you think might be “too basic” as a public service; if you’re wondering about something, others probably are, too.
Example questions
- I’m confused about Bayesianism; does anyone have a good explainer?
- Is everyone in EA a utilitarian?
- Why would we care about neglectedness?
- Why do people work on farmed animal welfare specifically vs just working on animal welfare?
- Is EA an organization?
- How do people justify working on things that will happen in the future when there’s suffering happening today?
- Why do people think that forecasting or prediction markets work? (Or, do they?)
How to answer questions
Anyone can answer questions, and there can (and should) be multiple answers to many of the questions. I encourage you to point people to relevant resources — you don’t have to write everything from scratch!
Norms and guides:
- Be generous and welcoming (no patronizing).
- Honestly share your uncertainty about your answer.
- Feel free to give partial answers or point people to relevant resources if you can’t or don’t have time to give a full answer.
- Don’t represent your answer as an official answer on behalf of effective altruism.
- Keep to the Forum norms.
Please feel free to vote on the answers (upvote the ones you like!). You can also give answers to questions that already have one, or reply to existing answers, especially if you disagree.
The (small) prize
This isn’t a competition, but just to help kick-start this thing (and to celebrate excellent discussion at the end), the Forum team will award $100 each to my 5 favorite questions, and $100 each to my 5 favorite answers (questions posted before Monday, October 17, answers posted before October 24).
I’ll post a comment on this post with the results, and edit the post itself to list the winners. [Edit: prize-winning questions and answers are announced here.]

[1] Your feedback is very welcome! We’re considering trying out themed versions in the future; e.g. “Ask anything about cause prioritization” or “Ask anything about AI safety.”
We’re hoping this thread will help get clarity and good answers, counter some impostor syndrome that exists in the community (see 1 and 2), potentially rediscover some good resources, and generally make us collectively more willing to ask about things that confuse us.
[2] If I think something is rude or otherwise norm-breaking, I’ll delete it.
I think the cost/benefit ratio for this kind of accuracy is very good. The downsides of inaccuracy are much, much larger than people realize or admit: it makes most of their conversations unproductive and prevents them from building high-quality or true knowledge that isn't already popular or standard (which leads to, e.g., some of EA's funding priorities being incorrect). Put another way, imprecision is a blocker to being persuaded of, and learning, many good ideas.
The straightforward costs of accuracy go down a lot with practice and automatization: if people tried, they'd get better at it. Not misquoting isn't really that hard once you get used to it. Copy/pasting quotes and then refraining from editing them is, in some sense, easy; people fail at it mainly because they're pursuing some other kind of benefit, not because the cost is too high (though there are details to learn, e.g. that Grammarly and spellcheck can silently alter quotes). I agree it's hard initially to change mindsets, e.g. to start caring about accuracy. Lots of ways of being a better thinker are hard initially, but I'd expect a rationality-oriented community like this one to have some interest in putting effort into becoming better thinkers, or at least into comparing this with other options for improvement.
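To make concrete how mechanical this check can be, here's a minimal sketch in Python (my own illustration; the helper quote_is_verbatim is hypothetical, not an existing tool). A quote is accurate only if it appears in the source character-for-character, so even a spellchecker silently swapping a straight apostrophe for a curly one makes it fail:

```python
# Hypothetical helper (my own example): check that a quote survived
# drafting unchanged, catching silent edits by tools like spellcheckers.

def quote_is_verbatim(quote: str, source: str) -> bool:
    """Return True only if `quote` appears in `source` character-for-character."""
    return quote in source

source = "It’s important that people get to choose their own words."
exact = "people get to choose their own words"
edited = "It's important that people get to choose their own words."  # straight apostrophe

print(quote_is_verbatim(exact, source))   # True: untouched copy/paste
print(quote_is_verbatim(edited, source))  # False: one character was "corrected"
```

The point isn't the code itself; it's that the standard being asked for is a simple, checkable property rather than a vague ideal.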
Also, unlike most imprecision, misrepresenting what people said is deeply violating. It's important that people get to choose their own words and speak for themselves. Putting words in someone's mouth (words of your choice, not theirs, without their consent) is treating them immorally. Thinking the words are close enough or similar enough doesn't make it OK; that's their judgment call to make, not yours. Assuming they won't disagree, and that you didn't make a mistake, shows a lack of humility, fallibilism, tolerance, and respect for ideas different from your own, and ignores the fact that different cultures and mindsets exist. (For example, before misquoting, you could consider that the person might be a precise or autistic thinker rather than someone more like you, and have enough respect for those kinds of people not to risk doing something to them that they wouldn't be OK with. And if the quote involves a concept that matters a lot to a subculture they're in but you're not, you risk making a change that means a lot to that subculture without realizing what you did.) Treating another human being immorally is a cost to take into account.

Misquoting is also especially effective at tricking your audience into forming inaccurate beliefs, because readers expect quotes to be accurate; that's a further cost. Most people don't believe they have to look up every quote in a primary source before trusting it; they believe quotes in general are trustworthy. The norm that quotes must be 100% accurate is widespread (and taught in schools), even though violations are widespread too.
There are other important factors. For example, the social pressure to speak with plausible deniability when trying to climb a social hierarchy is a reason to avoid becoming a precise thinker, even if more precise thinking and communicating would be less work on balance (due to, e.g., fewer miscommunications). And the mindset of a precise thinker can make it harder to establish rapport with some imprecise thinkers (one way people handle that is by dumbing themselves down).
Also, lots of people here can code or do math, so looking at text with character-level precision is a skill they've already developed significantly. There are many people in the world who would struggle to put a semicolon at the end of every line of code, or to learn and use markdown formatting rules correctly; better precision would have larger upfront learning costs for them. But I don't think those kinds of inabilities are what stops this forum from having higher standards.
I have a lot more I could say, and the issue of raising standards as a path to making EA more effective is one of the few topics I consider high-priority enough to try to discuss. Do you want to have a serious conversation about this? If so, I'd start a new topic for it. Also, it's hard to talk about complex topics with people who might stop responding at any moment with no explanation. That context makes it hard to decide how much to say, and hard to bring up things that might get no resolution, or be misunderstood and never clarified.