Farmed animal welfare is one of the most important cause areas out there. Though we’ve written about animal welfare broadly before, we recently published a dedicated piece on farmed animals specifically. Given how often this cause area shows up on our job board and throughout our content, we thought it deserved its own standalone overview, which covers:
* How different farmed animals are treated, including fish, crustaceans, and insects.
* Promising approaches already reducing suffering at scale.
* Why farmed animal welfare remains highly neglected despite its enormous scale.
* Concrete ways you can get involved, whether through your career or otherwise.
It’s intended as an approachable introduction to the cause area; if you're already familiar with farmed animal welfare, especially through other EA content, you probably won't be surprised by much here. But if you're new to the topic or looking for a solid overview to share with others, you might find it useful.
You can read the full article here.
Whenever I talk about Effective Altruism (EA) to someone new, I distinguish between EA-the-Movement and EA-the-Philosophy. EA-the-Movement draws a specific kind of person (quantitative, techy, philosophical) and has selected a few causes it has determined to be the most effective. EA-the-Philosophy is about asking whether our donations and volunteering are going to the places that get the most bang for our buck; those questions can be applied to anything we care about.
It's a way of easing people into our way of thinking without insisting that they join our particular group or adopt our priorities. I find it's especially useful when someone finds the quantitative style or strong recommendations of EA-the-Movement off-putting, or when they have negative prior associations with the movement. I think it's worth making people who are doing good in some way more effective, even if it doesn't end up getting them to do what we'd consider the most good. And if someone spends enough time thinking with the EA Philosophy, it might end up leading them straight back to the EA Movement.
PSA: regression to the mean/mean reversion is a statistical artifact, not a causal mechanism.
So mean regression says that children of tall parents are likely to be shorter than their parents, but it also says parents of tall children are likely to be shorter than their children.
Put differently, mean regression goes in both directions.
This is well-understood enough here in principle, but imo enough people get this wrong in practice that the PSA is worthwhile nonetheless.
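A quick simulation makes the symmetry concrete. This is a toy sketch of my own (an illustrative model, not real genetics): parent and child "heights" are each a shared component plus independent noise, so they're correlated but neither causes the other, and reversion shows up in both directions anyway.

```python
import random

random.seed(0)

# Toy model (illustrative assumption): parent and child heights share a
# common component plus independent noise. They are correlated, but
# nothing here is a causal mechanism pulling anyone toward the mean.
n = 100_000
parents, children = [], []
for _ in range(n):
    shared = random.gauss(0, 1)
    parents.append(shared + random.gauss(0, 1))
    children.append(shared + random.gauss(0, 1))

def mean(xs):
    return sum(xs) / len(xs)

# Children of unusually tall parents (top 10%) sit closer to the mean...
cutoff_p = sorted(parents)[int(0.9 * n)]
tall_p = [(p, c) for p, c in zip(parents, children) if p > cutoff_p]
avg_tall_parents = mean([p for p, _ in tall_p])
avg_their_children = mean([c for _, c in tall_p])

# ...and, symmetrically, parents of unusually tall children do too.
cutoff_c = sorted(children)[int(0.9 * n)]
tall_c = [(p, c) for p, c in zip(parents, children) if c > cutoff_c]
avg_tall_children = mean([c for _, c in tall_c])
avg_their_parents = mean([p for p, _ in tall_c])

print(avg_tall_parents, avg_their_children)  # children revert toward the mean
print(avg_tall_children, avg_their_parents)  # so do parents, by symmetry
```

In both selections, the extreme group averages well above zero while their relatives average noticeably closer to zero, despite the model being perfectly symmetric between generations.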
Is there any possibility of the forum having an AI-writing detector in the background which perhaps only the admins can see, but could be queried by suspicious users? I really don't like AI writing and have called it out a number of times but have been wrong once. I imagine this has been thought about and there might even be a form of this going on already.
That said, my first post on LessWrong was scrapped because they identified it as AI-written, even though I have NEVER used AI in my online writing, not even for checking/polishing. So that system obviously isn't perfect.
I.
Once upon a time, there was an EA named Alice. EA made a lot of sense to Alice, and she believed that some niche problems/causes were astronomically bigger than others. But she eventually decided that (1) the theories of change were confusing/suspicious and (2) there's substantial evidence that a bunch of EA work is net-negative. So she decided to become a teacher or doctor or something.
II.
Alice made a mistake! If she thinks that some problems/causes are astronomically bigger than others, and she's skeptical of certain approaches, she should look for better approaches, not give up on those problems/causes! For example, she could:
* Find an intervention (in the great problems/causes) that she believes in, and do that
* Defer to people who she really respects on the topic
* Try to understand the problem and possible interventions; do strategy/prioritization/deconfusion work (for herself or maybe benefitting the whole community)
* Develop relevant skills and/or save up money, and set herself up to notice if there's more clarity or great opportunities in the future
* Accept sign-uncertainty and do positive-EV stuff
III.
This is actually about my friend Bob, who's sometimes like "I work on AI safety but I feel clueless about whether we're actually helping, and I see that farmed animal suffering is a huge problem, and I want to go work on farmed animal welfare." If Bob still believes that the AI stuff is astronomically more important than the animal stuff, Bob is making the same mistake as Alice!
A week for posting incomplete, scrappy, or otherwise draft-y posts.