
Effective altruism is about figuring out how to do the most good, and then doing it. We want the EA Forum to be the online hub for figuring out how to do good.

Figuring out how to do good is important. Singer’s drowning child argument has redirected tens of millions of dollars towards effective charities. Bostrom’s work on existential risk has created a thriving field of people working to brighten the future of humanity. There are probably other important ideas that we haven’t found yet, and the Forum is designed to help us find them.

In order to figure out how to do good as a community, we need to coordinate and share ideas. Effective altruism needs a central hub for discussion, and a place where people can learn about all the movement’s best ideas, old and new. We want to make the EA Forum the hub for both of those things.

We think that everyone in the community can contribute to this project, and we encourage you to do so by posting, cross-posting, and commenting.

Why intellectual progress matters

Figuring out how to do good is important. We still don’t understand many important facts about the world, so to change the world we need to discover new things and gain more accurate beliefs. This allows us to do more in the future.

Indeed, the community is founded upon ideas and analysis, which are themselves examples of intellectual progress -- and often very recent examples. The drowning child argument is less than 50 years old. GiveWell's analysis of the effectiveness of global health charities began roughly a decade ago. Many of EA’s core ideas have developed within the last few years; it’s highly probable that some of the ideas we’ll think of as “core” ten years from now have yet to be proposed.

Our intellectual progress matters because it can significantly increase the amount of good we do as a community. Realizing that we should care about a wide range of beings (humans in other countries, animals in factory farms, people in the future) seems to increase our effectiveness by an order of magnitude or more. And improving our empirical understanding is likely to lead to similarly vast gains. So although intellectual progress is difficult, it yields significant benefits.

Making progress as a community

Much intellectual progress is done as part of a community: even when an individual has an important new insight, it tends to take many people to bring the idea to maturity. Academic science is perhaps the most obvious example of this.

Intellectual progress is not just something done by professional philosophers. It’s something that everyone in the community can contribute to.

Just as you might donate some of your income without working full-time for a charity, you can contribute intellectually without becoming a full-time researcher. For example, you could aggregate data on the number of invertebrates in the world, tweak GiveWell’s cost-effectiveness model, or analyze the history of an interesting social movement. And you can also help by writing up explanations or summaries of others’ work. Whilst you do so, you’ll be sharpening your thinking, understanding others’ ideas better, and learning about important topics.

This is why key ideas behind effective altruism have historically been hammered out on open forums -- for instance, Felicifia, LessWrong, SL4, and the EA Forum itself.

To make intellectual progress, the community needs, amongst other things, the right infrastructure:

  1. A place to share ideas (e.g. journals)
  2. A way of getting feedback on intellectual work (e.g. discussion groups, online forums)
  3. An easy way to search for existing work (e.g. literature reviews)
  4. Shared norms of discussion, and standards for work (e.g. karma voting)
  5. Common knowledge of core ideas, so that ideas can be built upon rather than constantly rediscovered (e.g. textbooks)
  6. A place for contributors to find important open questions in the field (e.g. research agendas)

The EA Forum intends to provide this infrastructure, and so become the central place where EAs make intellectual progress online.

The Forum is a place where anyone can share ideas (1) and get feedback on them (2). If most content is posted on the Forum, then its search function will be an easy way to find previous work (3). It will have shared norms of discussion (4), specifically those detailed in the moderation guidelines. In the next few months, and in consultation with the community, we aim to produce a core series of posts outlining the common knowledge that we can build on as a community (5). Listing open research questions (6) is an important problem that we are likely to work on in the next year.

Update (5/3/19): As a result of our leadership transition, we've deprioritized the creation of the core series of posts mentioned in point #5, and we can't commit to a release date.

So we think that we will make more progress as a community if we discuss ideas on the Forum. It’s a place to share rough notes and get feedback, a place to finally explain the concept that keeps coming up in conversation, and a place to cross-post and discuss the most interesting and important content that you find.

How can you contribute?

Write

Here are some reasons for posting to the EA Forum:

  • Learning
    • Writing posts is a great way to come up with important new ideas, as well as to practice your writing.
    • Getting feedback from other users can help you improve your thinking, and notice the errors you’re making.
  • Sharing
    • Writing a post for the Forum is an efficient way to explain an idea: you can do it once, then refer people to the post, rather than explaining anew each time. And you can explain it better than you would on the fly.
    • If you’ve written the idea up, others can also link to your explanation, which saves them time.
    • You can explain your ideas to someone who’s new to the community, or someone on the other side of the world, who you’ve never met.
    • When you write, you share not just your ideas, but your thinking patterns - others can learn from these, and can use that knowledge to coordinate more easily with you.
  • Self-promoting
    • Writing is a good way to make a name for yourself. Many researchers in the community got started by writing a blog, and it’s something that you can show to prospective employers.

Of course, there are also reasons that you should think carefully before posting on the Forum:

  • There might be other more useful things for you to do.
  • You might find it difficult to deal with criticism of your ideas.
  • Spreading ideas can cause harm: be careful to avoid unproductive discord, and try to check your reasoning so that you don’t accidentally mislead people (community members will also let you know if they are worried about this).

Comment

Commenting is useful for many of the reasons above. In particular:

  • Commenting allows you to share your expertise.
  • It also allows you to engage more deeply with others’ work and understand it better.
  • It is an important way to reinforce useful norms of discussion.

Cross-Post

The Forum is a place to discuss all content related to effective altruism. Cross-posting an interesting post from elsewhere can help share the post’s ideas, and also creates a space for moderated discussion about the post with other community members.

Learn more about the Forum’s features, and how to use them, on our about page.

Comments (9)



One thing which causes me, and probably many others, to avoid writing more on the forum is the feeling of writing posts which "spam" or lower the standards of the forum. This is not mentioned here as a reason not to post. I guess that the voting system and the option of writing on the personal blog solve most of this issue, and that we prefer to encourage more people to write more instead of focusing on quality for now?

That's right - one of the main goals of having posts sorted by karma (as well as having two sections) is to allow people to feel more comfortable posting, knowing that the best posts will rise to the top.

Of course there's a reverse incentive here, where getting downvoted feels bad, and therefore you may be even less likely to want to post unfinished thoughts than if posts were simply displayed in chronological order.

The problem is that if your post gets downvoted while displayed in chronological order, it will often attract even more downvotes (in part because people vote more harshly under chronological ordering, since they want to directly discourage bad content, and in part because your post's visibility doesn't drop, so more people have the opportunity to downvote it).

Yes, this is more an argument for "don't have downvotes at all", like Hacker News or a traditional forum.

Note that I think your team has made the correct tradeoffs so far; this was more playing devil's advocate.

Hacker News does have downvotes, though they are locked behind a karma threshold. Overall, I see more comments downvoted on HN than on LW or the EA Forum (you can identify them by the text being greyer and harder to read).

I'm not sure where to write this, so I'll write here for now.

It seems like Google doesn't properly index articles posted on this forum, which seems problematic. For example, it makes it harder for me to retrieve articles I've read, and it also makes it harder to discover new posts, although that problem is less visible.

Any query I do with "site:forum.effectivealtruism.org" never links to articles directly, but only to other pages like user pages.

Huh, that's particularly weird, because I don't have that problem with LessWrong.com, which runs on the same codebase. So it must be something unique to the EA Forum's situation.

Moderation notice: stickied on community.
