
What topics do you think the EA community should actually focus on if we were being our best selves?


19 Answers

Animal welfare is far more effective per $ than Global Health. 

Edit:

How about "The marginal $100 mn on animal welfare is 10x the impact of the marginal $100 mn on Global Health"

I think this is a good topic, but including the word "far" kind of ruins the debate from the start, as it seems like the person proposing it may already have made up their mind, and it introduces unnecessary bias.

MichaelStJules
Ya, we could just use a more neutral framing: Is animal welfare or global health more cost-effective?
Nathan Young
What do you think is the 50/50 point? Where half of people believe more, half less.
MichaelStJules
Not sure. We could replace the agree/disagree slider with a cost-effectiveness ratio slider. One issue could be that animal welfare has more quickly diminishing returns than GHD.
Nathan Young
Maybe but let's not overcomplicate things.
Toby Tremlett🔹
Late to this conversation, but I like the debate idea. A simple way to get a cost-effectiveness slider might be just to have the statement be "On the current margin $100m should go to:" and the slider go from 100% animal welfare to 100% global health, with a mid-point being 50/50. 
Nathan Young
Sure, then quantify it, right?
NickLaing
Sure, but 10x seems a weird place to start; surely start with "more cost effective" before applying arbitrary multipliers...
Nathan Young
1x is an arbitrary multiplier too. I would want to put the number at the 50th percentile belief on the forum.

Does this basically just reflect how much people value human lives in relation to animal lives? If Alex values a chicken WALY at 0.00002 that of a human WALY, and Bob values a chicken WALY at 0.5 of a human WALY, then whether global health comes out more effective follows almost directly from that choice.

Thanks for suggesting that, Nathan! For context:

I arrived at a cost-effectiveness of corporate campaigns for chicken welfare of 15.0 DALY/$ (= 8.20*2.10*0.870), assuming:

  • Campaigns affect 8.20 chicken-years per $ (= 41*1/5), multiplying:
    • Saulius Šimčikas’ estimate of 41 chicken-years per $.
    • An adjustment factor of 1/5, since OP [Open Philanthropy] thinks “the marginal FAW [farmed animal welfare] funding opportunity is ~1/5th as cost-effective as the average from Saulius’ analysis [which is linked just above]”.
  • An improvement in chicken welfare per time of 2.10 times the intensity of the mean human experience, as I estimated for moving broilers from a conventional to a reformed scenario based on Rethink Priorities’ median welfare range for chickens of 0.332[6].
  • A ratio between humans’ healthy and total life expectancy at birth in 2016 of 87.0 % (= 63.1/72.5).

In light of the above, corporate campaigns for chicken welfare are 1.51 k (= 15.0/0.00994) times as cost-effective as TCF [GiveWell's Top Charities Fund].
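For anyone who wants to sanity-check the arithmetic, here is a minimal Python sketch that just reproduces the multiplication above. The figures are the ones quoted in this comment; the variable names are mine.

```python
# Sketch of the cost-effectiveness arithmetic quoted above (figures from this comment).
chicken_years_per_dollar = 41 * (1 / 5)      # Saulius' 41 chicken-years/$ times OP's ~1/5 marginal adjustment -> 8.2
welfare_gain = 2.10                          # improvement per time, relative to the intensity of the mean human experience
healthy_life_fraction = 63.1 / 72.5          # healthy / total life expectancy at birth in 2016 -> ~0.870

campaigns_daly_per_dollar = chicken_years_per_dollar * welfare_gain * healthy_life_fraction
tcf_daly_per_dollar = 0.00994                # GiveWell Top Charities Fund benchmark used in the comment

print(f"Campaigns: {campaigns_daly_per_dollar:.1f} DALY/$")                      # ~15.0
print(f"Ratio vs TCF: {campaigns_daly_per_dollar / tcf_daly_per_dollar:.0f}x")   # ~1.5k
```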

Why just compare to Global Health here? Surely it should be "Animal welfare is far more effective per $ than other cause areas"?

I think they are natural to compare because they both have interventions that cash out in short-term measurable outcomes, and can absorb a lot of funding to churn out these outcomes.

Comparing e.g. AI safety and Global Health brings in a lot more points of contention which I expect would make it harder to make progress in a narrowly scoped debate (in terms of pinning down what the cruxes are, actually changing people's minds etc).

JWS 🔸
I think I'd rather talk about the important topic even if it's harder? My concern is, for example, that the debate happens and let's say people agree and start to pressure for moving $ from GHD to AW. But this ignores a third option: move $ from 'longtermist' work to fund both. Feels like a 'looking under the streetlight because it's easier' kind of phenomenon. If longtermist/AI safety work can't even begin to cash out measurable outcomes, that should be a strong case against it. This is EA; we want the things we're funding to be effective.

I would like a discussion week once a month-ish.

I think we could give that a go, but it might make sense to have a vote after three months about whether it was too much.

I'd like them to be regular, but a little bit less frequent. Maybe once every two months? Once every six weeks?

How can we best find new EA donors?

I have a lot of respect for OP, but I think it's clear that we could really use a larger funding base. My guess is that there should be a lot more thinking here.

This is a great one

Should Global Health comprise more than 15% of EA funding? 

Hi Nathan,

I wonder whether it may be better to frame the discussion around personal donations. Open Philanthropy accounts for the vast majority of what I guess you are calling EA funding, and my impression is that they are not very amenable to changing the allocation across their 3 major areas (global catastrophic risks, farmed animal welfare, and human global health and wellbeing) based on EA Forum discussions.

Feels like maybe this should be a broader discussion about how much EA should focus on longtermist vs near-termist interventions.

Where do we want EA to be in ~20 years?

I'd like there to be more envisioning of what sorts of cultures, strengths, and community we want to aim for. I think there's not much attention here now.

AI Safety Advocates have been responsible for over half of the leading AI companies. We don't take that seriously enough.

Who, if anyone, should be leaders within Effective Altruism?

I think that OP often actively doesn't want much responsibility. CEA is the more obvious fit, but they often can only do so much, and also they arguably represent OP's interests much more than those of EA community members (just look at where their funding comes from, or the fact that there's no way for EA community members to vote on their board or anything).

I think that there's a clear responsibility gap and would like to see more understanding here, along with ideally plans of how things can improve.

Epistemics/forecasting should be an EA cause area

I'd like a debate week once every 2 months-ish.

Worldview diversity isn't a coherent concept and mainly exists to manage internal OpenPhil conflict.

Seems needlessly provocative as a title, and almost purposefully designed to generate more heat than light in the resulting discussion.

Decision making is a personal favorite cause area of mine and I'd like to see a lot more discussion around it than there is right now, especially because it seems to hold immense potential.

Sensemaking of AI governance. What do people think is most promising, and what are their cruxes?

Besides posts, I would like to see some kind of survey that quantifies and graphs people's beliefs.

I really liked the discussion week on PauseAI. I'd like to see another one on this topic, taking into account new developments in the arguments and evidence.

When?
Probably there are other topics that didn't have a week, so they should be prioritized. I think PauseAI is one of the most important topics. So, maybe in the next 3 - 9 months?

While existential risks are widely acknowledged as an important cause area, some EAs, like William MacAskill, have argued that “Trajectory Change” may be highly contingent even if x-risk is solved, and so may be just as important for the long-term future. I would like to see this debated as a cause area.

Wild animal welfare and longtermist animal welfare versus farmed animal welfare? 

Non-consequentialist effective altruism/animal welfare/cause prio/longtermism

We still have not had satisfactory answers for why the FTX Future Fund was sending cheques via strange bank accounts.

Definitely not worth spending a whole week debating vs. someone just writing a post if they feel strongly that this hasn't been sufficiently discussed.

My quick guess is that the answer is pretty simple and boring. Like, "things were just a mess at the Future Fund level, and they were expecting things to get better over time." I'd expect that there are like 5 people who really know the answer, and speculation by the rest of us won't help much.
