Today, a few Bay Area EAs and I are asking the question, "How can we measure whether the EA movement is winning?"

Intuitively, deciding on a win condition seems important for answering this question. Most social movements appear to have win conditions. These win conditions refer to a state of the world that looks different from its present state, and they are often implicit in the movement's name (e.g., abolitionism, animal rights).

What does winning look like for EA? And how do we know if we're winning?

Discuss!


Some possible criteria:

  • Number of GWWC members
  • Number of GWWC pledge signers
  • Number of EA facebook group members
  • Amount of traffic on this website
  • Amount of money moved by GiveWell
  • Number of 80,000 Hours career advice requests
  • Number of applications
  • Amount of media coverage
  • Amount of positive media coverage
  • Number of EA organizations and projects
  • Size and scale of EA organizations and projects
  • Number of job applications at EA organizations
  • Number of applications for EA funds, contests, and projects
  • Amount of money donated by EAs
  • Level of credibility EA holds in academia

I like this list. We could improve on it by establishing a hierarchy of metrics.

1st Tier: the most quantifiable and objective metrics, which are also most strongly tied to or correlated with direct impact.

  • Amount of money moved by GiveWell and/or other effective altruist organizations
  • Amount of money donated by effective altruists

2nd Tier: quantifiable metrics which aren't directly tied to increased impact, but are strongly expected to lead to increased impact. In this tier I include memberships which are expected to lead to more donations, and to overcome constraints on talent and human capital.

  • Number of GWWC members
  • Number of GWWC pledge signers
  • Number of 80,000 Hours career advice requests
  • Number of effective altruism organizations and projects
  • Number of job applications at effective altruism organizations
  • Number of applications for effective altruism funds, contests, and projects
  • Scale and scope of effective altruism organizations and projects

3rd Tier: metrics which are less direct, more subjective, and less quantifiable, and which track awareness more than expected impact.

  • Amount of traffic on this website
  • Amount of media coverage
  • Amount of positive media coverage
  • Level of credibility effective altruism holds in academia

I think it's possible for a metric to jump from one tier to another in terms of how much confidence we put in it. This can happen under dramatic circumstances: for example, we would have much more confidence in "media coverage" or "positive media coverage" as a sign of impact if effective altruism got a cover story in, e.g., TIME magazine.
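To make the tier idea concrete, here is a minimal sketch of one way such a hierarchy could be encoded. Everything in it — the tier weights, the `Metric` fields, and all the numbers — is an illustrative assumption of mine, not something proposed in the thread:

```python
from dataclasses import dataclass

# Illustrative tier weights (my assumption, not from the thread):
# 1st-tier metrics count fully, lower tiers are discounted.
TIER_WEIGHTS = {1: 1.0, 2: 0.5, 3: 0.2}

@dataclass
class Metric:
    name: str
    tier: int      # 1 = tied to direct impact ... 3 = awareness only
    value: float   # current measurement, in the metric's own units
    target: float  # value that would count as "full marks"

def composite_score(metrics: list[Metric]) -> float:
    """Sum of normalized metrics, discounted by tier weight."""
    return sum(
        TIER_WEIGHTS[m.tier] * min(m.value / m.target, 1.0)
        for m in metrics
    )

# Placeholder numbers, purely for demonstration.
metrics = [
    Metric("Money moved by GiveWell ($M/yr)", tier=1, value=30, target=100),
    Metric("GWWC pledge signers", tier=2, value=500, target=5000),
    Metric("Positive media stories per year", tier=3, value=12, target=50),
]
print(f"Composite score: {composite_score(metrics):.2f}")
```

In this framing, a metric "jumping tiers" after a dramatic event (say, a TIME cover story) just means revising its weight upward.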

I'm skeptical of explicit metrics like "number of GWWC pledge signers", "money moved", etc. Any metrics that get proposed will be imperfect and may fall prey to Goodhart's law: once a measure becomes a target, it ceases to be a good measure.

To me, careful analysis and thoughtful discussion are the most important aspects of EA. Good intentions are not enough. (After you read the previous article, imagine if an earlier EA movement had focused on "money moved to Africa" as its success metric.)

The default case is for humans to act altruistically in order to look good, not do good. It's very important for us to resist the pull of this attractor for as long as possible.

Turning the current negative feedback loop (donors give based on "warm glow", not impact -> charities are disincentivized to gather/provide meaningful impact info -> donors who want impact info can't find it and give based on warm glow) into a positive feedback loop (donors give based on impact -> charities are incentivized to achieve/measure/report impact -> easier for donors to conduct better analysis).

More generally, drastically shifting incentives people face re: EA behavior (giving effectively, impact-based career decisions, keeping robots from killing us, etc.)

A sustainable flourishing world!

I was reading Lifeblood by Alex Perry (it tells the story of malaria bed nets). Early on, the book criticizes many aid organizations: Perry argues that the aim of aid should be "for the day it's no longer needed". E.g., the goal of the Canadian Cancer Society should be to aim for the day when cancer research is unnecessary because we've already figured out how to beat it. What aid organizations actually do, however, is expand to fill a whole range of other needs, which is somewhat suboptimal.

In this case, EA is really no exception. Suppose that in the future we've tackled global poverty, animal welfare, and climate change/AI risk/etc. We would just move on to the next most important thing in EA. Of course, EA is separate from classical aid organizations, because it's closer to a movement/philosophy than to a single aid effort. Nevertheless, I still think it might be useful to define "winning" as "alleviating a need for something". This could be something like "to reach a day when we no longer need to support GiveDirectly [because we've already eliminated poverty/destitution, or because we've reached a level of wealth redistribution such that nobody is living below X dollars a year]".

On that note, for effective altruist organizations, I imagine that "not being needed" means "not continuing to be the best use of our resources", or "having faced significant diminishing marginal returns to additional work". That said, the condition for an organization to rationally end is different from its success condition.

One obvious point: most organizations/causes have multiple, increasingly large success conditions. There's not one "success condition", but a progressive set of improvements. We won't "win" in the abstract. I don't think Martin Luther King Jr. would have said that he "won"; he accomplished a lot, but things were complicated at the end and there was still a lot to be done. Needless to say, though, he did quite well.

A better set of questions may be "What are some reasonable goals to aim for?" and then "How can we measure how far we are from those specific goals?"

In completely pragmatic matters, I think that the best goal for us is not legislation, but monetary donations to EA-related causes.

Goal 1: $100M/year
Goal 3: $1B/year
Goal 4: $10B/year
etc.

The ultimate goal for all of us may be a positive singularity, though that is separate from effective altruism itself and harder to measure. Also, of course, the money above would have to be adjusted for the quality of the EA org relative to the best.
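To make that adjustment concrete, here is one way it could be written (a sketch; the notation is mine, not the commenter's):

```latex
% m_i: money donated/moved via org i.
% q_i in [0,1]: estimated impact per dollar of org i,
%               relative to the best available org (q = 1).
M_{\mathrm{eff}} = \sum_i q_i \, m_i ,
\qquad
q_i = \frac{\text{impact per dollar of org } i}{\text{impact per dollar of the best org}}
```

The goals above would then be stated in effective dollars $M_{\mathrm{eff}}$ rather than raw dollars.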

There is, of course, still the question of how good the interventions are and how good the intervention-deciding mechanisms are. However, measuring or estimating those is quite a bit more challenging, and it presents a distinct, largely orthogonal challenge from raising money. For instance, growing the movement and convincing people at large would be an "EA popularity goal", which would be measured in money, while finding new research to understand effectiveness would be more of an "EA research goal". Two very different things.

Hitting sharply diminishing returns on QALYs/$

Currently you can buy decades and decades of QALYs for a year's salary or less. And that's just with straightforward, low-variance, uncontroversial purchases. If you cast your net wider (far-future concerns), you could potentially be purchasing trillions of QALYs in expectation. I'll consider EA to have won once those numbers drop to something reasonable.
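As a rough sanity check on "decades and decades" (all figures here are illustrative assumptions of mine, roughly in line with commonly cited top-charity estimates, not numbers from the comment):

```latex
% Assume ~$5,000 to avert a death and ~35 QALYs per death averted.
\frac{\$50{,}000 \text{ (one year's salary)}}{\$5{,}000 \text{ per death averted}}
\times 35 \ \frac{\text{QALYs}}{\text{death averted}}
\approx 350 \text{ QALYs}
```

On this criterion, "winning" means the marginal price of a QALY rising until bargains like that no longer exist.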

Clippy wants to point out that this goal could easily be achieved through a deadly virus that wipes out the human race, planetwide nuclear winter, etc. :P

Yep, that's fundamental. Also, we don't want to give the impression that our obligations are limited to opportunities that land in our lap. If we seem to be hitting diminishing returns, it's time to try looking for awesome opportunities in different domains.

I would think that winning, at least if it is to be sufficiently concrete, is likely to depend sharply on cause area, or at least on particular assumptions that are not agreed upon in the EA community. Most EAs could probably agree that a world where utility (or some fairly similar metric or optimization function) is maximized is a win. But which world realizes this depends on views about the value of nonhuman animals, the value of good vs. bad experiences, and other issues where I've seen quite a bit of disagreement in the EA community.
