I agree that EA funds shouldn't be distributed democratically, and that "EA leaders" or survey participants aren't necessarily the right allocators. Do you think the current resource allocation is being made by experts with "judgment, track record, and depth of thinking about cause prioritization"?
If I had to guess, I would say it is a combination of this and other factors: EA UHNW donor preferences, a cause's ability to attract funding from other sources, etc.
Ideally we would survey some of the best grantmaking experts on cause prio, but I still found the EA survey and MCF survey to be a useful proxy, albeit flawed.
One take: the movement has come to care more about animal welfare as a cause area over time, but care and concern for AI safety/x-risk reduction has increased even more, so people are shifting their limited time and resources towards those cause areas. This leads to the dynamic of the movement wanting animal advocacy efforts to win, but not being the ones to dedicate their donations or careers to the effort.
Thanks for sharing your thoughts, Tyler. I tend to think that 2 & 3 account for most of the funding discrepancies.
I do think at the same time there might be a discrepancy between the ideal and actual allocation of talent, with so many EAs focused on working in AI safety/x-risk reduction. To be clear, I think these are incredibly important causes, but maybe a few EAs who are on the fence should work in animal advocacy instead.
I think one of the challenges here is that the people who are respected or hold a leadership role on cause prioritisation seem to have been reluctant to weigh in, perhaps to the detriment of Anthropic folks trying to make a decision one way or another.
Even more speculative: maybe part of what's going on here is that the charity comparison numbers GiveWell produces, or comparing charities within a cause area in general, is one level of crazy and difficult. But the moment you get to cross-cause comparisons, these n...
Oh, this is nice to read as I agree that we might be able to get some reasonable enough answers about Shrimp Welfare Project vs AMF (e.g. RP's moral weights project).
Some rough thoughts: It's when we get to comparing Shrimp Welfare Project to AI safety PACs in the US that I think the task goes from crazy hard but worth it to maybe too gargantuan a task (although some have tried). I also think here the uncertainty is so large that it's harder to defer to experts in the way that one can defer to GiveWell if they care about helping the world's poorest p...
I think the moment you try and compare charities across causes, especially for the ones that have harder-to-evaluate assumptions like global catastrophic risk and animal welfare, it very quickly becomes clear how impossibly crazy any solid numbers are, and how much they rest on uncertain philosophical assumptions, and how wide the error margins are. I think at that point you're either left with worldview diversification or some incredibly complex, as-yet-not-very-well-settled, cause prioritisation.
My understanding is that all of the EA high net...
Naaaah, seems cheems. Seems worth trying. If we can't then fair enough. But it doesn't feel to me like we've tried.
Edit, for specificity. I think that shrimp QALYs and human QALYs have some exchange rate, we just don't have a good handle on it yet. And I think that if we'd decided that difficult things weren't worth doing we wouldn't have done a lot of the things we've already done.
Also, hey Elliot, I hope you're doing well.
It's great to hear that being on the front foot and reaching out to people with specific offers has worked for you.
I actually want to push back on your advice for many readers here. I think for many people who aren't getting jobs, the reason is not that the jobs are too competitive, but that they're not meeting the bar for that role. This seems more common for EAs with little professional experience, since many employers want applicants who have already been trained. In AI Safety, it also seems like for some parts of the problem, an exceptional level...
Hi David, if I've understood you correctly, I agree that a reason to return home is for other priorities that have nothing to do with impact. I personally did not return home for the extra happiness or motivation required to stay productive, but because I valued these other things intrinsically, which Julia articulates better here: https://forum.effectivealtruism.org/posts/zu28unKfTHoxRWpGn/you-have-more-than-one-goal-and-that-s-fine
I do think the messaging is a little gentler than it used to be, such as the 80k content and a few forum posts emphasising that there are a lot of reasons to make life choices besides impact, and that that is ok. This is hard in general with written content aimed at a broad audience because some people probably need to hear the message to sacrifice a little more, and some a little less.
This is a good question. I'm honestly not sure what I would have done differently overall. My guess is I would have gone back a little sooner, and invested a little more in maintaining friendships in Melbourne while away.
Thinking about this sooner also might have changed how I approached dating while in London, if I had known in advance I was always heading home.
One thing that strikes me as interesting when I think about my own experience and my impression of the people around me is that it can be hard to tell what my own reasons are when I might distance myself from EA. I might describe myself as EA adjacent and this could be some combination of:
Yeah absolutely. There's so much noise, chaos, and uncertainty in the world that I sometimes like the (arguably depressing) frame that the EA project is trying to increase your chance of doing good from 51% to 52%, and that this is totally worth fighting for, while also being clear on how hard it is to know the long-term effects of any action.
Hmm I'm not sure if I have a very considered answer to this question, except for the main argument that I think it's much harder for people to see animals as having rights/moral value since they look different, are different species, and often act in foreign ways that make us more likely to discount their capacity to feel and think (e.g. fish don't talk, scream, or visibly emote).
On some level I think the answer is always the same, regardless of the headwinds or tailwinds: you do what you can with your limited resources to improve the world as much as you can. In some sense I think slowing the growth of factory farming in a world where it was growing is the same as a world where it is stagnant and we reduce the number of animals raised. In both worlds there's a reduction in suffering. I wrote a creative piece on this exact topic here if that is at all appealing.
I also think on the front of factory farming we focus too much on the e...
Hi Sam, I'm finding it hard to respond to your request because IMO the scenarios are too vague. To use your basketball metaphor, a specific player is something that I can integrate meaningfully into a prediction, but executing the strategy flawlessly is much more nebulous. Do you have specific ideas in mind of what scenario 3 might look like? How much increased funding is there? I think to make a good conditional prediction it would need to be something where we could clearly decide whether or not we achieved it. "Raised an extra $50m for the movement" has a clear yes/no, whereas "achieve maximum coordination and efficiency" seems very subjective to me.
Thanks for the answers. Sounds like a big crux for us is that I am sadly much more cynical about (a) how much optimism can shift probabilities. I think it can make a difference, but I don't think it can change probabilities from 10% to 70%. And (b) I am just much more cynical about our chances of ending factory farming by 2060. I'd probably put the number at around 1-5%.
Edit: Just re-read this and realised the tone seemed off and more brisk than I meant it. Apologies, don't comment much and was trying to get out a comment quickly.
Hi Sam, I'm wondering how much of our difference in optimism is in our beliefs about the likelihood of ending factory farming in our lifetimes vs what is the best framing. You say in your blog post that there's "a realistic chance of ending this system within our lifetimes". Do you care to define a version of 'ending this system', pick a year and put a percentage number on 'realistic chance'? If you pick a year and definition of ending factory farming, I can put a percentage chance on it too and see where the difference lies.
These numbers can be very rough of course; I'm not asking for a super well calibrated prediction, more just putting a number on an intuition.
I think for the folks in the 'ending factory farming' camp that (IMO) are not being realistic, this can lead to adopting specific theories about how all of society will change their minds. This could include claims about meat being financially unviable if we just got the meat industry to internalise their externalities (the word just is doing a lot of lifting here), or theories about tipping points where once 25% of people believe something everyone else will follow, so we need to focus on consciousness-raising (I've butchered this argument, sorry to the folks who understand it better).
Hi Matthew,
I think my analogy isn't claiming that we shouldn't try to end malaria because it will always be with us, but rather that we shouldn't view ending malaria as making a small dent in the real fight of ending preventable deaths; we should view it as a big win on its own merits. In fact, I think ending cages for hens in at least Europe and the US is a realistic goal.
I think we might never eradicate factory farming. I think it's plausible that we end factory farming with some combination of cultivated meat, moral circle expansion, new ...
Hi Lucas, I like your point about being careful about celebrating small wins too much. To me the big difference between going from -100 to -90 and going from -90 to 0 is that the expected value calculations are very different: the first (going cage-free) is clearly quite tractable, whereas the second (reducing egg consumption?) seems really hard, and it's unclear how to pursue it.
I definitely think there should be some effort that goes towards 'ending factory farming' type work. But I'm also quite skeptical of many proposed solutions. Or a...
Good question, I wasn't sure how much to err on the side of brevity vs thoroughness.
To phrase it differently, I think sometimes advocates start their strategy with the final line 'and then we end factory farming', and then try to develop a strategy for how to get there. I don't think it is reasonable to assume this is going to happen, and I think this leads to overly optimistic theories of change. From time to time I see a claim that meat consumption will be drastically reduced in the next few decades based on a theory that is far too optimistic a...
So this involves a bit of potentially tenuous evolutionary psychology, but I think part of what is going on here is that people judge moral character based on what would have made sense to judge people on 10,000 years ago: is this person loyal to their friends (i.e. me), empathetic, willing to help the person in front of them without question, etc.
I think it's important to distinguish morality (what is right and wrong) from moral psychology (how people think about what is right and wrong). On this account, buying animal products tells you that a person is a normal member of society, and hitting an animal tells you someone is cruel, not to be trusted, potentially psychopathic, etc.
Hi Quila,
If I understand you correctly, I think we broadly agree that people tend to use how someone acts to judge moral character. I think, though, that this point is underappreciated in EA, as evidenced by the existence of this forum post. The question is 'why do people get so much more upset about hitting one horse than the horrors of factory farming', when clearly, in terms of the badness of the act, factory farming is much worse. The point is that when people view a moral/immoral act, psychologically they are evaluating the moral character of the person, not the act in and of itself.
I can't recall the paper, but I remember reading a paper in moral psychology that argues that on a psychological level, we think of morality in terms of 'is this person moral', not 'is this act moral'. We are trying to figure out if the person in front of us is trustworthy, loyal, kind, etc.
In the study, participants do say that a human experiencing harm is worse than an animal experiencing harm, but view a person who hits a cat as more immoral than a person who hits their spouse. I think what people are implicitly recoiling at is that the person who hits ...
Perhaps Uhlmann et al. (2015) or Landy & Uhlmann (2018)?
From the latter:
...Evidence for this assertion comes from studies involving two jilted lovers (Tannenbaum et al., 2011, Studies 1a and 1b). Participants were presented with information about two men who had learned that their girlfriends were cheating on them. Both men flew into a rage, and one beat up his unfaithful girlfriend, while the other beat up her cat. Participants judged the former action as more immoral, but judged the catbeater as having worse character (specifically, as b
Yes, I was thinking all of those:
Career capital generally seems good for a variety of jobs in think tanks. You could also take a high-paying job as a lobbyist and earn-to-give. (Obviously you still want to be choosy about what you are a lobbyist for, so as to not do actual harm with your job.)
I think the direct impact is underrated, especially if you can get to the Legislative Director level or something similarly senior. Some staff seem to get a surprising amount of autonomy to pursue the policies they care most about, and a lot of good policy is bottlenecked on having someone to champion it and aggressively push for it.
Farmed Animal Funders (FAF) is hiring an Operations & Community Manager. We are accepting applications until Monday, May 20, 2024. The role is remote (United States), full time, and compensation is $70,000-$80,000.
In short: the Operations and Community Manager will focus mostly on building and running internal operations, supporting FAF's programs for members and prospective funders, and playing a leadership role in delivering a variety of excellent events.
Farmed Animal Funders (FAF) is a donor network whose members give $250K+ annuall...
As one of the people who attended the course I can say it was really really good! It (hopefully) shouldn't come as a surprise that a course on how to facilitate better was very well facilitated. The sessions were practical, engaging, and I learned a lot.
This is my way of saying if you have the opportunity to attend the course, or have Mike and Zan run it, I highly recommend you do!
Despite being the one who wrote the original post, I did think while writing it that figuring out whether one cause is underfunded compared to another is a really difficult question to answer. Part of my motivation for writing this was to see whether anyone had any insights as to whether my claims were right or not.