I'm responsible for maintaining effectivealtruism.org/impact.

Ideally, I'd like the page to be a really excellent summary that answers the questions:

  • "What has EA actually accomplished?"
  • "What kinds of things do people in EA actually work on?"

Without doing any of the following:

  • Taking credit for people who do EA-like things but have no formal connection to us, or whose total connection consists of "speaking at EA Global one time"
  • Heavily weighting one area over the rest (50 animal welfare suggestions = helpful, but I'm not going to include all of them)
  • Totally overwhelming visitors to the page (I plan to add a floating table of contents, but I still think this should be more like "greatest hits" than "full sum of everything everyone has ever done")

And I'd like the epistemics to be top-notch — "may" and "might have" where appropriate, not conflating the funding of a program with a problem actually getting solved, etc.

Obviously, the current page is far from optimal. I'd highly appreciate any suggestions for items to include or edits to make, as well as upvotes for items you agree with. 

No need to check all the other answers to see whether yours is redundant — if the same thing gets multiple suggestions, that's good to know!


6 Answers

Somewhat building on one that is currently mentioned on the page: advocates have secured thousands of corporate pledges for cage-free eggs globally since 2015. That's built global pressure for legislation; e.g., the European Commission, UK governments, and various US states have cited corporate progress as a major motivator for them to act. (I think as of the latest figures about ~100M (?) US hens were cage-free vs. about 20M in 2015, when the campaigns started ramping up.) In the US, the cage-free flock has grown dramatically these past few years. See, e.g., p.4.

Definitely one of my favorite examples, and one we're using now:

Still, that's exactly what makes this a good suggestion! If I'd forgotten to add the initiative, this would have been a critical reminder.

Much of the concrete life-saving and life improvement that GiveWell top charities have done with GiveWell-influenced donations.

80,000 Hours as a (very thorough) resource for individuals trying to do good/maximize their impact with their careers feels like a big accomplishment. I found EA when I googled "Highest impact careers/how to have the biggest impact with your career", and didn't find anything anywhere near as compelling as 80,000 Hours. I think their counterfactual impact is probably quite massive given how insufficient impact-oriented career advice is outside of 80K (and the broader communities/research/thinking/work that have led to 80K being what it is). 

Most of the impact is indirect, so I'm not sure how much this answers the original question. But 80K's impact from community building (e.g. being the most common entry point into EA nowadays, the podcast, etc.), career plan changes, and, maybe most importantly, being the best resource for impact-prioritizing people looking for career advice (especially students), feels very noteworthy.

Giving What We Can now has over 6,000 members (the accomplishments page says 5,000).

Apologies for a quick answer, rather than a thorough answer where I looked up all the links and details, but one potential source:

I believe Charity Entrepreneurship partly sees one of its key outputs as creating tangible achievements for the EA community. I guess a lot of it is still pretty new, but to the extent you can find any impressive achievements from CE-incubated orgs, those are pretty clearly attributable to EA. Fish Welfare Initiative has some impressive commitments from producers in India, I think, and my impression was that some of the global health charities have achieved quite a lot in a small space of time.

8 comments

Strictly speaking, a lot of the examples are outputs or outcomes, not impacts, and some readers may not like that. It could be good to make that more explicit at the top.

I also want to suggest using more imagery, graphs, etc. – more like visual storytelling and less like just a list of bullet points.

If I define impact as change and outcome as a result, then isn't every occurrence of an impact an outcome? Are you defining those words differently? 

$X donated to Y is an outcome, but not a real impact in a moral sense until Y does things that benefit moral patients in some way or another.

(I agree that Jonas could've been clearer).

Anyone have good sources on EA's role in establishing AI Safety as a research field? (Specifically, sources that readers who don't already trust the EA movement would find compelling.)

Some ideas:

  • The publication of "Superintelligence" by Nick Bostrom in July 2014, and its successful communication, have been hugely impactful in establishing the field of AI safety, notably by earning endorsements from Bill Gates, Stephen Hawking, and Elon Musk.
  • The Future of Life Institute's organization of the Beneficial AI conferences, including facilitating the signing of the Open Letter on Artificial Intelligence and the Asilomar conference, which produced a set of foundational AI principles
  • Probably the launching of several organizations with a focus on AI Safety. See more here (but these need prioritization and attribution to the EA movement).

Do you have a specific definition of AI Safety in mind? From my (biased) point of view, it looks like a large fraction of the work that is explicitly branded "AI Safety" is done by people who are at least somewhat adjacent to the EA community. But this becomes a lot less true if you widen the definition to include all work that could be called "AI Safety" (i.e., anything that could conceivably help with avoiding any kind of dangerous malfunction of AI systems, including small-scale and easily fixable problems).

Thanks for flagging this!

I didn't have a very specific definition in mind. I was roughly thinking of this cluster of traits:

  • calls itself "AI Safety"
  • is concerned with the alignment problem
  • is concerned with making AI systems safe in the long term

Using a narrower definition of the field at least seems consistent with how fields are usually defined. For example, the field that calls itself "Economics" is much smaller than all work that could conceivably be relevant to economics (which could include much of psychology, political science, sociology, history, statistics, math...).

Good framing of the question.