All of ElliotTep's Comments + Replies

Despite being the one who wrote the original post, I did think while writing it that trying to figure out whether one cause is underfunded compared to another is a really difficult question to answer. Part of my motivation to write this was to see if anyone had any insights as to whether my claims were right or not.

I agree that EA funds shouldn't be distributed democratically, nor do I think that "EA leaders" or survey participants are necessarily the right allocators. Do you think that the current resource allocation is being made by experts with "judgment, track record, and depth of thinking about cause prioritization"?

If I had to guess, I would say it is a combination of this and other factors: EA UHNW donor preferences, a cause's ability to attract funding from other sources, etc.

Ideally we would survey some of the best grantmaking experts on cause prio, but I still found the EA survey and MCF survey to be a useful proxy, albeit flawed.

1
Noah Birnbaum
I guess that I am just not very confident that this is a good proxy. The allocation is probably not what it should be, but I'm not yet convinced that animal stuff is being underfunded. 

Ohh I like this. I think this articulates the phenomenon well. Thanks.

I agree re the career problem. I wonder how much additional money would fix the problem vs other issues like the cultures of the two movements/ecosystems, status of working in the spaces, etc.

Glad to hear. Welcome to the community Karen!

One take is that the movement cares more about animal welfare as a cause area over time, but that care and concern for AI safety/x-risk reduction has increased even more, so people are shifting their limited time and resources towards those cause areas. This leads to the dynamic of the movement wanting animal advocacy efforts to win, but not being the ones to dedicate their donations or careers to the effort.

Thanks for sharing your thoughts Tyler. I tend to think that 2 & 3 tend to account for funding discrepancies.

I do think at the same time there might be a discrepancy between the ideal and actual allocation of talent, with so many EAs focused on working in AI safety/x-risk reduction. To be clear, I think these are incredibly important, but maybe a few EAs who are on the fence should work in animal advocacy instead.

I definitely think this should happen too, but reducing uncertainty about cause prio beyond what has already been done to date is a much, much bigger and harder ask than 'share your best guess of how you would allocate a billion dollars'.

I think one of the challenges here is that the people who are respected/have a leadership role on cause prioritisation seem to have been reluctant to weigh in here, perhaps to the detriment of Anthropic folks trying to make a decision one way or another.

Even more speculative: Maybe part of what's going on here is that the charity comparison numbers GiveWell produces, or comparing charities within a cause area in general, is one level of crazy and difficult. But the moment you get to cross-cause comparisons, these n... (read more)

4
Vasco Grilo🔸
I see some value in this. However, I would be much more interested in how they would decrease the uncertainty about cause prioritisation, which is super large. I would spend at least 1 %, 10 M$ (= 0.01*1*10^9), decreasing the uncertainty about comparisons of expected hedonistic welfare across species and substrates (biological or not). Relatedly, RP has a research agenda about interspecies welfare comparisons more broadly (not just under expectational total hedonistic utilitarianism).

Oh, this is nice to read as I agree that we might be able to get some reasonable enough answers about Shrimp Welfare Project vs AMF (e.g. RP's moral weights project). 

Some rough thoughts: It's when we get to comparing Shrimp Welfare Project to AI safety PACs in the US that I think the task goes from crazy hard but worth it to maybe too gargantuan a task (although some have tried). I also think here the uncertainty is so large that it's harder to defer to experts in the way that one can defer to GiveWell if they care about helping the world's poorest p... (read more)

3
Vasco Grilo🔸
Hi Elliot and Nathan.  I think being able to compare the welfare of shrimps and humans is far from enough. I do not know about any interventions which robustly increase welfare in expectation due to dominant uncertain effects on soil animals. I would be curious to know your thoughts on these. I believe there is a very long way to robust results from Rethink Priorities' (RP's) moral weight project, and Bob Fischer's book about comparing welfare across species, which contains what RP stands behind now. For example, the estimate in Bob's book for the welfare range of shrimps is 8.0 % that of humans, but I would say it would be quite reasonable for someone to have a best guess of 10^-6, the ratio between the number of neurons of shrimps and humans.

I think the moment you try and compare charities across causes, especially for the ones that have harder-to-evaluate assumptions like global catastrophic risk and animal welfare, it very quickly becomes clear how impossibly crazy any solid numbers are, how much they rest on uncertain philosophical assumptions, and how wide the error margins are. I think at that point you're either left with worldview diversification or some incredibly complex, as-yet-not-very-well-settled cause prioritisation.

My understanding is that all of the EA high net... (read more)

Naaaah, seems cheems. Seems worth trying. If we can't then fair enough. But it doesn't feel to me like we've tried.

Edit, for specificity. I think that shrimp QALYs and human QALYs have some exchange rate, we just don't have a good handle on it yet. And I think that if we'd decided that difficult things weren't worth doing we wouldn't have done a lot of the things we've already done.

Also, hey Elliot, I hope you're doing well.

 It's great to hear that being on the front foot and reaching out to people with specific offers has worked for you.

I actually want to push back on your advice for many readers here. I think for many people who aren't getting jobs, the reason is not that the jobs are too competitive, but that they're not meeting the bar for the role. This seems more common for EAs with little professional experience, as many employers want applicants who have already been trained. In AI Safety, it also seems like for some parts of the problem, an exceptional level... (read more)

As someone who just participated in a name change recently, I can assure you the pros and cons of this name versus other contenders were probably discussed ad nauseam by the team involved, and they decided on this name despite the nerdy and clunky vibe.

Answer by ElliotTep

Approx how much absorbency/room for more funding is there in each cause area? How many good additional opportunities are there over what is currently being funded? How steep are the diminishing returns for an additional $10m, $50m, $100m, $500m?

Thanks for writing this. As someone who feels more at home in EA spaces, I do sometimes feel like EAs are pretty critical of rationalist sub-culture (often reasonably) but take for granted the valuable things rationalism has contributed to EA ideas and norms.

Hi David, if I've understood you correctly, I agree that one reason to return home is for other priorities that have nothing to do with impact. I personally did not return home for the extra happiness or motivation required to stay productive, but because I valued these other things intrinsically, which Julia articulates better here: https://forum.effectivealtruism.org/posts/zu28unKfTHoxRWpGn/you-have-more-than-one-goal-and-that-s-fine

Ah man I feel you. To be honest I've been avoiding the abyss recently with some recent career vs family dilemmas. Lemme know if you want to have a chat sometime.

For sure. I think Chana does a good job of talking about some of the downsides of living in a hub similar to what you mention: https://forum.effectivealtruism.org/posts/ZRZHJ3qSitXQ6NGez/about-going-to-a-hub-1

Wow, that's gotta be one of the fastest forum-post-to-plan-change turnarounds on record. I'm glad to hear this resolved what sounds like a big and tough question in your life. As I mentioned in the post, I do think stints in hubs can be a great experience.

I do think the messaging is a little gentler than it used to be, such as the 80k content and a few forum posts emphasising that there are a lot of reasons to make life choices besides impact, and that that is ok. This is hard in general with written content aimed at a broad audience because some people probably need to hear the message to sacrifice a little more, and some a little less.

This is a good question. I'm honestly not sure what I would have done differently overall. My guess is I would have gone back a little sooner, and invested a little more in maintaining friendships in Melbourne while away. 

Thinking about this sooner also might have changed how I approached dating while in London, if I had known in advance I was always heading home.

For anyone wondering whether to subscribe, I've been subscribed for a month and it's an excellent newsletter. A once-a-week email covering what's happening in the news, with forecasts, reasoning, and a focus on what actually matters. It's great.

Thanks for sharing Lucas. I appreciate the fact that I'm reading a post about you stepping back before burnout or something similarly difficult to recover from, and moving down to a level that feels sustainable long term.

One thing that strikes me as interesting when I think about my own experience and my impression of the people around me is that it can be hard to tell what my own reasons are when I might distance myself from EA. I might describe myself as EA adjacent and this could be some combination of:

  1. Seeing the 'typical' EA as someone who is much more hardcore and believes in all of it.
  2. Some part of my brain is always unconsciously tracking status.
  3. I am worried about the impact it will have on my ability to get jobs in the future.
  4. I might be more persuasive or likeable t
... (read more)

Just wanted to express that I really appreciate your newsletter as a source of thought leadership. Often I find myself wondering about one aspect of the animal advocacy movement or another, and then a few months later you provide a thoughtful summary of the state of play and next steps. 

Thanks! I think that was actually Matt Reardon's line. The man has dropped golden paragraphs into 2 of my posts now.

Glad to hear. The goal was very much to write the kind of post people want to reference when chatting to friends who are making this mistake.

Yeah absolutely. There's so much noise and chaos and uncertainty in the world. I sometimes like the (arguably depressing) frame that the EA project is trying to increase your chance of doing good from 51% to 52%, and that this is totally worth fighting for, while also being clear on how hard it is to know the long-term effects of any action.

Hmm I'm not sure if I have a very considered answer to this question, except for the main argument that I think it's much harder for people to see animals as having rights/moral value since they look different, are different species, and often act in foreign ways that make us more likely to discount their capacity to feel and think (e.g. fish don't talk, scream, or visibly emote). 

On some level I think the answer is always the same, regardless of the headwinds or tailwinds: you do what you can with your limited resources to improve the world as much as you can. In some sense I think slowing the growth of factory farming in a world where it was growing is the same as a world where it is stagnant and we reduce the number of animals raised. In both worlds there's a reduction in suffering. I wrote a creative piece on this exact topic here if that is at all appealing.

I also think on the front of factory farming we focus too much on the e... (read more)

Thanks for the comment Alene. I think I agree with all of it and that it does a great job of articulating things I didn't get to or think of.

Hi Sam, I'm finding it hard to respond to your request because IMO the scenarios are too vague. To use your basketball metaphor, a specific player is something that I can integrate meaningfully into a prediction, but executing the strategy flawlessly is much more nebulous. Do you have specific ideas in mind of what scenario 3 might look like? How much increased funding is there? I think to make a good conditional prediction, it would need to be something where we could clearly decide whether or not we achieved it. "Raised an extra $50m for the movement" has a clear yes/no, whereas "achieve maximum coordination and efficiency" seems very subjective to me.

Thanks for the answers. Sounds like a big crux for us is that I am sadly much more cynical about (a) how much optimism can shift probabilities. I think it can make a difference, but I don't think it can change probabilities from 10% to 70%. And (b) I am just much more cynical on our chances of ending factory farming by 2060. I'd probably put the number at around 1-5%. 

2[anonymous]
My point was that the 10-70% range reflects different outcomes depending on the actions we take, not just optimism as a feeling or a belief. Optimism can certainly motivate us to act, but without those actions, it has little to no impact on the actual probabilities. It seems our main disagreement lies in how we view these probabilities. I see them as dynamic and heavily influenced by the actions we take as a movement, while you seem to view them as more static and inherent to the situation, essentially outside of our control. I think that's an extremely important distinction because it fundamentally shifts how we approach this challenge. If we believe the odds are fixed, we become passive observers, resigned to whatever fate has in store. But if we recognise our power to influence those odds through strategic action, technological innovation, and effective execution, we become active participants in creating the future we want. This empowers us to take responsibility, to strive for optimal solutions, and to push beyond the limitations of the status quo. To better understand your perspective, could you provide your estimated probabilities for ending factory farming by 2060, considering these scenarios:

  • Scenario 1: Complacency. We maintain the status quo, with minimal changes to our current approach. We fail to identify and effectively target the key pressure points within the system, and we don't create the necessary feedback loops to amplify our impact and hinder the growth of industrial animal agriculture.

  • Scenario 2: Moderate Improvement. We make incremental progress in our strategies and adoption of new technologies, but we don't fully capitalize on opportunities for exponential growth. We achieve some success in identifying and influencing key pressure points, but our efforts are not comprehensive or optimally coordinated.

  • Scenario 3: Optimal Execution. We proactively identify and exploit every opportunity for exponential growth within the movement.

Edit: Just re-read this and realised the tone seemed off and more brisk than I meant it. Apologies, don't comment much and was trying to get out a comment quickly.

  1. Thanks for the response, and for the detailed answer. Sorry, I don't want to be a stickler here, but can you give me your best-guess probability? The reason I ask is that it seems like if one of these scenarios is much more likely than the other, then this is relevant, no? Like if we think there's a less than 99% vs 1% chance that we continue with current strategies, this seems relevant n
... (read more)
1[anonymous]
  1. It's important to emphasise how much our actions as a movement influence those odds. We're not just bystanders; our strategies, dedication, and execution all play a role. That's why I've given a range of probabilities based on the actions we take as a movement. Each individual member has the power to decide their level of contribution—0% or 100%—so that probability is something everyone can decide for themselves.

  2. You're right about the balance needed with optimism. Over-optimism about one solution, like cultivated meat, can hinder exploration of other options. But optimism about our overall goal is a different story. It leads to self-fulfilling prophecies: if we don't believe ending factory farming is possible, we automatically decrease the probability. But if we believe it's possible and that our actions determine the outcome, we massively increase our chances of success.

  3. Yes, I use an LLM for almost everything I write. I usually draft my ideas and then refine them through a conversation with the LLM, making them clearer and easier to understand. This saves me time and improves the quality of my writing. I'm also autistic and sometimes find it challenging to get the right tone across in my writing, and LLMs help me with that too.

Hi Sam, I'm wondering how much of our difference in optimism is in our beliefs about the likelihood of ending factory farming in our lifetimes vs what is the best framing. You say in your blog post that there's "a realistic chance of ending this system within our lifetimes". Do you care to define a version of 'ending this system', pick a year and put a percentage number on 'realistic chance'? If you pick a year and definition of ending factory farming, I can put a percentage chance on it too and see where the difference lies. 

These numbers can be very rough of course, not asking for a super well calibrated prediction, more of just putting a number on an intuition.  

1[anonymous]
By "ending factory farming," I mean a 95% reduction in animals raised in intensive industrial farming operations globally by 2060. Predicting the likelihood of this is complex, but I'd estimate it as: 10-30% if we continue with current strategies and resource allocation. * Pros: Growing veganism, plant-based options, investment in alternatives, and public awareness are all positive signs. * Cons: Cultural habits around meat consumption, powerful industry lobbies, and potential increased meat consumption in developing countries pose serious challenges. 35-50% if we effectively leverage exponential technologies and focus on strategic leverage points. * Pros: AI-powered advocacy shows promise, and targeting key global hubs can create ripple effects. The movement itself appears to be entering a phase of rapid growth, and history suggests society tends to expand its moral circle over time. * Cons: AI advocacy may need a strong global movement to be effective across diverse cultural contexts. The identified leverage points may not be as influential as predicted, and preventing animal agriculture from leveraging those same technologies to compete is crucial. 55-70% if we achieve exceptional movement coordination and execute optimally on key interventions. * Pros: Combining exponential social movement growth with technological advancement creates immense potential for rapid change. AI advocacy is proving effective, and a systems-level approach generates powerful feedback loops. * Cons: Achieving and sustaining global coordination is incredibly difficult. Unforeseen consequences, potential loss of momentum, and adaptation by the industry are all risks. These are rough estimates, and many unknowns could influence the outcome. It's easy to get caught up in predictions, but the future of factory farming rests in our hands. Our strategies, dedication, and ability to overcome challenges will ultimately determine success or failure. Instead of fixating on a fixed pro

Yeah I agree with this and wish I had been clearer from the get-go.

I think for the folks in the 'ending factory farming' camp who (IMO) are not being realistic, this can lead to adopting specific theories about how all of society will change their minds. This could include claims about meat being financially unviable if we just got the meat industry to internalise their externalities (the word 'just' is doing a lot of lifting here), or theories about tipping points where once 25% of people believe something everyone else will follow, so we need to focus on consciousness-raising (I've butchered this argument, sorry to the folks who understand it better).

Good point. I feel weird admitting it, but it does seem like some cows probably have net-positive lives right now.

1
Hazo
I agree with this, although I'm not an expert on cattle rearing. It seems to me like cows on grazeland generally have net positive lives, and cows on feedlots have arguably net negative lives (although it still seems way less bad than a pig or chicken CAFO). The longer a cow spends on pasture the more likely they had a net positive life, e.g. 100% grass-fed cows in the US might have pretty decent lives. 
1
Vasco Grilo🔸
Agreed. I guess farmed animals have positive lives under the conditions required by the Naturland standard.

Hi Matthew,

I think my analogy isn't claiming that we shouldn't try to end malaria because it will always be with us, but rather that we shouldn't view ending malaria as making a small dent in the real fight of ending preventable deaths; we should view it as a big win on its own merits. In fact I think ending cages for hens in at least Europe and the US is a realistic goal.

I think we might never eradicate factory farming. I think it's plausible that we end factory farming with some combination of cultivated meat, moral circle expansion, new ... (read more)

Hi Lucas, I like your point about being careful about celebrating small wins too much. To me the big difference between going from -100 to -90 and going from -90 to 0 is that the expected value calculations look very different: the first (going cage-free) is clearly quite tractable, whereas the second (reducing egg consumption?) seems really hard, and it's unclear how to pursue it.

I definitely think there should be some effort that goes towards 'ending factory farming' type work. But I'm also quite skeptical of many proposed solutions. Or a... (read more)

1
Lucas Lewit-Mendes
Hey Elliot, sorry for the slow response on this.  Yeah for sure, it's hard to know how the EV calculation pans out. Using my made-up numbers, the interventions that end factory farming would need to have >10% as much chance of success to be better - I think that is plausible but there's so much uncertainty here.  Agree these are big questions that are hard to discuss online, but let's chat when we get a chance in person! 
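A minimal sketch of the expected-value arithmetic implied here, using the illustrative -100/-90/0 scale from the comment above (the gains of 10 for cage-free and 90 for ending factory farming are assumptions for illustration, not Lucas's actual made-up numbers):

\[ p_{\text{end}} \times 90 > p_{\text{cage}} \times 10 \iff \frac{p_{\text{end}}}{p_{\text{cage}}} > \frac{10}{90} \approx 11\% \]

On these assumed values, 'ending factory farming' work needs a bit more than a tenth of the chance of success of cage-free work to come out ahead in expectation, which is roughly consistent with the '>10%' figure above.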

Good question, I wasn't sure how much to err on the side of brevity vs thoroughness.

To phrase it differently, I think sometimes advocates start their strategy with the final line 'and then we end factory farming', and then try to develop a strategy for how we get there. I don't think it is reasonable to assume this is going to happen, and I think this leads to overly optimistic theories of change. From time to time I see a claim about how meat consumption will be drastically reduced in the next few decades based on a theory that is far too optimistic a... (read more)

1
Keyvan Mostafavi
Thanks for your reply Elliot. I was specifically asking about your views on why the problem animal advocates are trying to solve is much harder than, and disanalogous to, the problem the emancipation and the gay marriage movements were trying to solve.
3
Jacob_Peacock
Hi Elliot, I cite a couple of studies similar to that in my review Price-, Taste-, and Convenience-Competitive Plant-Based Meat Would Not Currently Replace Meat; I suspect you're thinking of Malan 2022.

So this involves a bit of potentially tenuous evolutionary psychology, but I think part of what is going on here is that people are judging moral character based on what would have made sense to judge people on 10,000 years ago: is this person loyal to their friends (i.e. me), empathetic, willing to help the person in front of them without question, etc.

I think it's important to distinguish morality (what is right and wrong) from moral psychology (how people think about what is right and wrong). On this account, buying animal products tells you that a person is a normal member of society, and hitting an animal tells you someone is cruel, not to be trusted, potentially psychopathic, etc.

1[anonymous]
Okay, sounds like we indeed agree on the object level. I guess it's just not intuitive to me to refer to things like 'will this person be loyal to me' as 'moral character'.

Hi Quila,

If I understand you correctly I think we broadly agree that people tend to use how someone acts to judge moral character. I think though this point is underappreciated in EA, as evidenced by the existence of this forum post. The question is 'why do people get so much more upset about hitting one horse than the horrors of factory farming', when clearly in terms of the badness of an act, factory farming is much worse. The point is that when people view a moral/immoral act, psychologically they are evaluating the moral character of the person, not the act in and of itself.

1[anonymous]
My point was that purchasing animal products usually suggests a bad 'moral character' trait: the willingness to cause immense individual harm when this is normative/convenient. I'm saying that average people's judgements of others' characters are not best described as 'moral' per se, because if they were, they would judge each other harshly for consuming animals.

I think it was the first one. Well done for finding it!

I can't recall the paper, but I remember reading a paper in moral psychology that argues that on a psychological level, we think of morality in terms of 'is this person moral', not 'is this act moral'. We are trying to figure out if the person in front of us is trustworthy, loyal, kind, etc.

In the study, participants do say that a human experiencing harm is worse than an animal experiencing harm, but view a person who hits a cat as more immoral than a person who hits their spouse. I think what people are implicitly recoiling at is that the person who hits ... (read more)

1[anonymous]
I think this, as written, is not explanatory, because one could regard another to be of immoral character on the basis that they perform immoral acts. I'm not sure what else 'moral character' could mean, other than "their inner character would endorse acting in {moral or immoral way}". I think it would be correct to say that average humans act on various non-moral judgements in ways we think should be reserved for moral judgements. Hmm, I might share this view (I'm unsure which evidences the more bad character), but I don't think it comes from something irrational. It's more like: inferring underlying principles they might have in some deep, unconscious level. E.g., someone who hits a cat might have a deep attitude of finding it okay to hurt the weak. But someone hitting a spouse is also evidence of different bad 'deep attitudes'. This way of thinking about the question is compatible with my consequentialism, because how those individuals act is a result of these 'deep attitudes'.

Perhaps Uhlmann et al. (2015) or Landy & Uhlmann (2018)?

From the latter:

Evidence for this assertion comes from studies involving two jilted lovers (Tannenbaum et al., 2011, Studies 1a and 1b).  Participants were presented with information about two men who had learned that their girlfriends were cheating on them.  Both men flew into a rage, and one beat up his unfaithful girlfriend, while the other beat up her cat.  Participants judged the former action as more immoral, but judged the catbeater as having worse character (specifically, as b

... (read more)
5
Denis
That's really interesting, and makes a lot of sense. Thanks for sharing! 

I agree this is an important point I probably didn't discuss enough. Value drift is real, as is getting used to a high salary. 

I suspect that a strong community is one way to reduce this, but might be easier said than done depending on where someone lives.

Can you elaborate on why? Is it the career capital, direct impact, or something else altogether?

Yes, I was thinking all of those:

  • Career capital generally seems good for a variety of jobs in think tanks. You could also take a high-paying job as a lobbyist and earn-to-give. (Obviously you still want to be choosy about what you lobby for, so as to not do actual harm with your job.)

  • I think the direct impact is underrated, especially if you can get to the Legislative Director level or something similarly senior. It does seem like some staff get a surprising amount of autonomy to pursue the policies they care most about, and a lot of good policy is bottlenecked on having someone to champion it and aggressively push for it.

Farmed Animal Funders (FAF) is hiring an Operations & Community Manager

We are accepting applications until Monday, May 20, 2024. The role is remote (United States), full time, and compensation is $70,000-$80,000.

In short: the Operations and Community Manager will focus mostly on building and running internal operations, supporting FAF's programs for members and prospective funders, and playing a leadership role in delivering a variety of excellent events.

Farmed Animal Funders (FAF) is a donor network whose members give $250K+ annuall... (read more)

+1 as the person who writes the EA Australia newsletter

As one of the people who attended the course I can say it was really really good! It (hopefully) shouldn't come as a surprise that a course on how to facilitate better was very well facilitated. The sessions were practical, engaging, and I learned a lot. 

This is my way of saying if you have the opportunity to attend the course, or have Mike and Zan run it, I highly recommend you do! 
