
What topics do you think the EA community should actually focus on, if we were being our best selves?


19 Answers

Animal welfare is far more effective per $ than Global Health. 

Edit:

How about "The marginal $100 mn on animal welfare is 10x the impact of the marginal $100 mn on Global Health"

I think this is a good topic, but including the word "far" kind of ruins the debate from the start: it seems like the person positing it may already have made up their mind, and it introduces unnecessary bias.

MichaelStJules
Ya, we could just use a more neutral framing: Is animal welfare or global health more cost-effective?
Nathan Young
What do you think is the 50/50 point, where half of people believe more and half believe less?
MichaelStJules
Not sure. We could replace the agree/disagree slider with a cost-effectiveness ratio slider. One issue could be that animal welfare has more quickly diminishing returns than GHD.
Nathan Young
Maybe but let's not overcomplicate things.
Toby Tremlett🔹
Late to this conversation, but I like the debate idea. A simple way to get a cost-effectiveness slider might be just to have the statement be "On the current margin $100m should go to:" and the slider go from 100% animal welfare to 100% global health, with a mid-point being 50/50. 
Nathan Young
Sure, then quantify it, right?
NickLaing
Sure, but 10x seems a weird place to start; surely start with "more cost-effective" before applying arbitrary multipliers...
Nathan Young
1x is an arbitrary multiplier too. I would want to put the number at the 50th percentile belief on the forum.
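One way to operationalize Nathan's suggestion of anchoring the multiplier at the 50th-percentile belief, as a minimal Python sketch with entirely hypothetical survey responses:

```python
# Minimal sketch (hypothetical data): set the debate statement's multiplier at
# the 50th-percentile belief, so half the forum believes more and half less.
from statistics import median

# Hypothetical responses: each member's believed cost-effectiveness ratio of
# marginal animal welfare funding vs. marginal global health funding.
believed_ratios = [0.1, 0.5, 2, 3, 10, 25, 100]

midpoint = median(believed_ratios)  # the "50/50 point" asked about above
print(f"Statement: 'The marginal $100 mn on animal welfare is {midpoint}x "
      f"the impact of the marginal $100 mn on Global Health'")
```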

Does this basically just reflect how much people value human lives relative to animal lives? If Alex values a chicken WALY at 0.00002 of a human WALY, and Bob values a chicken WALY at 0.5 of a human WALY, then global health comes out more effective for Alex and less effective for Bob; the cost-effectiveness question largely reduces to the moral-weight question.
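To make that concrete, here is a minimal sketch with made-up placeholder figures (only the two moral weights come from the comment above; everything else is an illustrative assumption):

```python
# Illustrative sketch with made-up placeholder numbers (not anyone's actual
# estimates): the verdict can hinge almost entirely on the assumed
# chicken-to-human moral weight.
CHICKEN_WALYS_PER_DOLLAR_AW = 8.2   # hypothetical: chicken welfare-years improved per $
HUMAN_WALYS_PER_DOLLAR_GH = 0.01    # hypothetical: human welfare-years gained per $

for name, moral_weight in [("Alex", 0.00002), ("Bob", 0.5)]:
    aw_in_human_walys = CHICKEN_WALYS_PER_DOLLAR_AW * moral_weight
    winner = ("animal welfare" if aw_in_human_walys > HUMAN_WALYS_PER_DOLLAR_GH
              else "global health")
    print(f"{name}: {aw_in_human_walys:.5f} vs {HUMAN_WALYS_PER_DOLLAR_GH} "
          f"human-WALY/$ -> {winner}")
```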

Thanks for suggesting that, Nathan! For context:

I arrived at a cost-effectiveness of corporate campaigns for chicken welfare of 15.0 DALY/$ (= 8.20*2.10*0.870), assuming:

  • Campaigns affect 8.20 chicken-years per $ (= 41*1/5), multiplying:
    • Saulius Šimčikas’ estimate of 41 chicken-years per $.
    • An adjustment factor of 1/5, since OP [Open Philanthropy] thinks “the marginal FAW [farmed animal welfare] funding opportunity is ~1/5th as cost-effective as the average from Saulius’ analysis [which is linked just above]”.
  • An improvement in chicken welfare per ti

... (read more)
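For anyone wanting to check the arithmetic, a minimal sketch reproducing the product quoted above; the factors are as stated in the comment, and the explanation of the 0.870 term falls in the truncated portion:

```python
# Reproducing the arithmetic quoted above (figures as stated in the comment).
chicken_years_per_dollar = 41 * (1 / 5)  # Saulius' 41 chicken-years/$ times OP's 1/5 adjustment
welfare_improvement = 2.10               # "improvement in chicken welfare per time" factor
remaining_factor = 0.870                 # third factor in the quoted product (explained in truncated text)

cost_effectiveness = chicken_years_per_dollar * welfare_improvement * remaining_factor
print(f"chicken-years per $: {chicken_years_per_dollar:.2f}")  # 8.20
print(f"cost-effectiveness: {cost_effectiveness:.1f} DALY/$")  # 15.0
```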

Why just compare to Global Health here? Surely it should be "Animal welfare is far more effective per $ than other cause areas"?

I think they are natural to compare because they both have interventions that cash out in short-term measurable outcomes, and can absorb a lot of funding to churn out these outcomes.

Comparing e.g. AI safety and Global Health brings in a lot more points of contention which I expect would make it harder to make progress in a narrowly scoped debate (in terms of pinning down what the cruxes are, actually changing people's minds etc).

JWS 🔸
I think I'd rather talk about the important topic even if it's harder. My concern is, for example, that the debate happens, people agree, and they start to pressure for moving $ from GHD to AW. But this ignores a third option: move $ from 'longtermist' work to fund both. Feels like a 'looking under the streetlight because it's easier' kind of phenomenon. If longtermist/AI safety work can't even begin to cash out measurable outcomes, that should be a strong case against it. This is EA; we want the things we're funding to be effective.

I would like a discussion week once a month-ish.

I think we could give that a go, but it might make sense to have a vote after three months about whether it was too much.

I'd like them to be regular, but a little bit less frequent. Maybe once every two months? Once every six weeks?

How can we best find new EA donors?

I have a lot of respect for OP, but I think it's clear that we could really use a larger funding base. My guess is that there should be a lot more thinking here.

This is a great one

Should Global Health comprise more than 15% of EA funding? 

Hi Nathan,

I wonder whether it may be better to frame the discussion around personal donations. Open Philanthropy accounts for the vast majority of what I guess you are calling EA funding, and my impression is that they are not very amenable to changing the allocation across their 3 major areas (global catastrophic risks, farmed animal welfare, and human global health and wellbeing) based on EA Forum discussions.

Feels like this is maybe part of a broader discussion about how much EA should focus on longtermist vs. neartermist interventions.

Where do we want EA to be in ~20 years?

I'd like there to be more envisioning of what sorts of cultures, strengths, and community we want to aim for. I think there's not much attention here now.

AI safety advocates have been responsible for founding over half of the leading AI companies. We don't take that seriously enough.

Who, if anyone, should be leaders within Effective Altruism?

I think that OP often actively doesn't want much responsibility. CEA is the more obvious fit, but they can often only do so much, and they arguably represent OP's interests more than those of EA community members (just look at where their funding comes from, or the fact that there's no way for EA community members to vote on their board).

I think that there's a clear responsibility gap and would like to see more understanding here, along with ideally plans of how things can improve.

Epistemics/forecasting should be an EA cause area

I'd like a debate week once every 2 months-ish.

Worldview diversity isn't a coherent concept and mainly exists to manage internal OpenPhil conflict.

Seems needlessly provocative as a title, and almost purposefully designed to generate more heat than light in the resulting discussion.

Decision making is a personal favorite cause area of mine, and I'd like to see a lot more discussion around it than there is right now, especially because it seems to hold immense potential.

Sensemaking of AI governance: what do people think is most promising, and what are their cruxes?

Besides posts, I would like to see some kind of survey that quantifies and graphs people's beliefs.

I really liked the discussion week on PauseAI. I'd like to see another one on this topic, taking into account new developments in the arguments and evidence.

When?
Probably there are other topics that haven't had a week yet, so they should be prioritized. I think PauseAI is one of the most important topics. So, maybe in the next 3-9 months?

While existential risks are widely acknowledged as an important cause area, some EAs, like William MacAskill, have argued that "trajectory change" may be highly contingent even if x-risk is solved, and so may be just as important for the long-term future. I would like to see this debated as a cause area.

Wild animal welfare and longtermist animal welfare versus farmed animal welfare? 

Non-consequentialist effective altruism/animal welfare/cause prio/longtermism

We still have not had satisfactory answers about why the FTX Future Fund was sending cheques via strange bank accounts.

Definitely not worth spending a whole week debating vs. someone just writing a post if they feel strongly that this hasn't been sufficiently discussed.

My quick guess is that the answer is pretty simple and boring. Like, "things were just a mess at the Future Fund level, and they were expecting things to get better over time." I'd expect that there are like 5 people who really know the answer, and speculation by the rest of us won't help much.
