Farmed animal welfare is one of the most important cause areas out there. Though we’ve written about animal welfare broadly before, we recently published a dedicated piece on farmed animals specifically. Given how often this cause area shows up on our job board and throughout our content, we thought it deserved its own standalone overview, which covers:
* How different farmed animals are treated, including fish, crustaceans, and insects.
* Promising approaches already reducing suffering at scale.
* Why farmed animal welfare remains highly neglected despite its enormous scale.
* Concrete ways you can get involved, whether through your career or otherwise.
It’s intended as an approachable introduction to the cause area; if you're already familiar with farmed animal welfare, especially through other EA content, you probably won't be surprised by much here. But if you're new to the topic or looking for a solid overview to share with others, you might find it useful.
You can read the full article here.
This might feel obvious, but I think it's under-appreciated how much disagreement on AI progress just comes down to priors (in a pretty specific way) rather than object-level reasoning.
I was recently arguing the case for shorter timelines with a friend who leans longer. We kept disagreeing on a surprising number of object-level claims, which was strange, because we usually agree much more on the kind of questions we were arguing about.
Then I realized what I think was going on: she had a strong prior against what I was saying, and that prior is abstract enough that there's no clear mechanism by which I can push against it. So whenever I made a good object-level case, she'd simply take the other side, not necessarily because her reasons were better, but because the prior was doing the work underneath without either of us really noticing.
There's something genuinely rational here that's hard to get a grip on. If you have a strong prior, and someone makes a persuasive argument against it, but you can't identify the specific mechanism by which their argument defeats it, you should probably update toward the counterarguments being better than they appear, even if you can't articulate them yet. From the outside, this looks just like motivated reasoning (and often is), but I think it can be importantly different.
The reason this is so hard to disentangle is that (unless your web of beliefs is extremely clear to you, which seems practically impossible) it's enormously complicated. Your prior on timelines isn't an isolated thing: it's load-bearing for a bunch of downstream beliefs all at once. So the resistance isn't obviously irrational, it's more like... the system protecting its own coherence.
I think this means that people should try their best to disentangle whether an object-level argument they're having stems from real object-level beliefs or from fairly abstract priors (in which case, arguing at the object level seems less worthwhile).
I'm surprised that there hasn't been an attempt (as far as I know) to fund/create a competitor to Epoch.ai.
It wouldn't have to compete on all benchmarks, but it would be good to have a forecasting organisation that could be trusted with potentially dual-use insights into capabilities trajectories. I don't believe this would require uniformity of views; it would just require people with a proper sense of responsibility.
I also think that the poor judgement displayed by some of their employees casts doubt on some of their research (emphasis on some, particularly the more subjective elements; Epoch is still my go-to source in many cases). Unfortunately, there's a difference between being intelligent and being wise, and one common way this distinction plays out is that some quite intelligent people follow the incentive gradient towards being excessively and reflexively contrarian. To be clear, I'm not trying to attack their research, just noting that while a second opinion would always have been valuable, the fact that I trust them less on the margin makes the need for one feel more pressing.
On producing high-quality research: Epoch has done many things well, but has also made a few missteps that I would, perhaps controversially, call clear mistakes.
I'm also pretty sure that there's sufficient talent in the space now to create a second such effort. It could also start small and funders could help it scale if it proves itself.
[Crossposted from social media, in the spirit of Draft Amnesty Week]
After a lot of thinking, I am updating my Giving What We Can 🔸 10% donation allocation, shifting about a third of my donation portfolio to the Center for Land Economics 🔰.
There are several reasons why I am excited about this donation opportunity.
I believe that Georgism has the potential to radically transform our economy and society. 'Land is a Big Deal', as they say. Raising public funds without deadweight costs is a big part of this. But more fundamentally, by reducing the costs of living and the role of rent-seeking, I hope that it could shift our society from scarcity and zero-sum thinking to abundance and positive-sum collaboration.
Within this cause area, I believe that CLE is the most cost-effective donation opportunity. In their first year, they have achieved much more tangible benefits than I would have anticipated, and seeing this change has made me much more optimistic about the prospects for Georgist reform today than I was a year ago. They combine an incremental approach of giving legislators and tax assessors the tools necessary to improve the situation on the ground, with movement building and consistent high-quality public outreach through the Progress & Poverty Substack. And they have done this with a small but dedicated team, with only 1 funded FTE.
This means that my donations, as a small private donor, will actually constitute a few percentage points of their annual budget. It is rare to have the opportunity to make such a counterfactual difference. The most impactful donation opportunities are often found in areas where we have access to idiosyncratic information not yet recognized by the wider 'donation market'. In my case, I think the world severely under-appreciates the potential of Georgist reform generally, and the work of CLE specifically.
However, such idiosyncratic information can often be connected to unusual interests, which often comes w
Whenever I talk about Effective Altruism (EA) to someone new, I talk about EA-the-Movement and EA-the-Philosophy. EA-the-Movement draws a specific kind of person (quantitative, techy, philosophical) and has selected a few causes it has determined to be the most effective. EA-the-Philosophy is about asking whether our donations and volunteering are going to places that get the most bang for our buck and those questions can be applied to anything we care about.
It's a way of easing people into our way of thinking without insisting that they join our particular group or adopt our priorities. I find it especially useful when someone finds the quantitative approach or strong recommendations of EA-the-Movement off-putting, or when they have negative prior associations with the movement. I think it's worth making people who are doing good in some way more effective, even if it doesn't end up getting them to do what we'd consider the most good. And if someone spends enough time thinking with the EA Philosophy, it might lead them straight back to the EA Movement.
A week for posting incomplete, scrappy, or otherwise draft-y posts. Read more.