Thanks for this. I already had some sense that historical productivity data varied, but this prompted me to look at how large those differences are and they are bigger than I realised. I made an edit to my original comment.
TL;DR: Current productivity people mostly agree about. Historical productivity they do not. Some sources, including those in the previous comment, think Germany was more productive than the US in the past, which makes being less productive now more damning than it would be under the view that this has always been the case.
***
For s...
Just to respond to a narrow point because I think this is worth correcting as it arises: Most of the US/EU GDP growth gap you highlight is just population growth. In 2000 to 2022 the US population grew ~20%, vs. ~5% for the EU. That almost exactly explains the 55% vs. 35% growth gap in that time period on your graph; 1.55 / 1.2 * 1.05 = 1.36.
This shouldn't be surprising, because productivity in the 'big 3' of US / France / Germany track each other very closely and have done for quite some time. (Edit: I wasn't expecting this comment to blow up, and it seem...
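The population adjustment above can be checked in a couple of lines (a sketch using the rounded figures quoted above):

```python
# Population-adjusted growth comparison from the comment above.
us_gdp_growth = 1.55   # US GDP grew ~55% over 2000-2022
us_pop_growth = 1.20   # US population grew ~20%
eu_pop_growth = 1.05   # EU population grew ~5%
eu_gdp_growth = 1.35   # EU GDP grew ~35%

# GDP growth the US would have seen with the EU's population growth:
adjusted = us_gdp_growth / us_pop_growth * eu_pop_growth
print(round(adjusted, 2))  # 1.36 -- almost exactly the EU's actual 1.35
```

In other words, once you hold population growth fixed, the headline GDP growth gap nearly vanishes.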
This is weird because other sources do point towards a productivity gap. For example, this report concludes that "European productivity has experienced a marked deceleration since the 1970s, with the productivity gap between the Euro area and the United States widening significantly since 1995, a trend further intensified by the COVID-19 pandemic".
Specifically, it looks as if, since 1995, the GDP per capita gap between the US and the eurozone has remained very similar, but this is due to a widening productivity gap being cancelled out by a shrinking employ...
Re. 2, that maths is in the right ballpark if saving, but if donating I do want to remind people that UK donations are tax-deductible, and this deduction is not capped the way I gather it is in some countries like the US.
So you wouldn’t be paying £95k in taxes if donating a large fraction of £250k/yr. Doing quick calcs, if living off £45k then the split ends up being something like:
Income: 250k
Donations: 185k
Tax: 20k
Personal: 45k
(I agree with the spirit of your points.)
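A minimal sketch of that quick calc, assuming the full donation reduces taxable income (a simplification of how Gift Aid relief actually works via the Self Assessment) and using the 2023/24 income tax bands, ignoring National Insurance:

```python
def income_tax(taxable):
    """Simplified UK income tax, 2023/24 bands, ignoring allowance taper."""
    allowance = 12_570
    # (band width, rate): basic to 50,270, higher to 125,140, additional above.
    bands = [(37_700, 0.20), (74_870, 0.40), (float("inf"), 0.45)]
    remaining = max(taxable - allowance, 0)
    tax = 0.0
    for width, rate in bands:
        taxed = min(remaining, width)
        tax += taxed * rate
        remaining -= taxed
        if remaining <= 0:
            break
    return tax

income, donations, tax, personal = 250_000, 185_000, 20_000, 45_000
assert donations + tax + personal == income  # the split is internally consistent

print(income_tax(income - donations))  # ~13,400 income tax on the remaining 65k
```

Income tax alone comes out around £13.4k; employee National Insurance takes the total toward the £20k figure in the split above.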
PS: Are donations tax-deductible in the UK (besides Gift Aid)? I've been operating on the assumption that they aren't, but if they were, I could give more.
I think the short answer is 'depends what you mean?'. Longer answer:
Stylistically, some commenters don't seem to understand how this differs from a normal cause prioritisation exercise. Put simply, there's a difference between choosing to ignore the Drowning Child because there are even more children in the next pond over, and ignoring the drowning children entirely because they might grow up to do bad things. Most cause prioritisation is the former, this post is the latter.
As for why the latter is a problem, I agree with JWS's observation that this type of 'For The Greater Good' reasoning leads to great harm when applied ...
there's a difference between choosing to ignore the Drowning Child because there are even more children in the next pond over, and ignoring the drowning children entirely because they might grow up to do bad things.
This is a fantastic summary of why I feel much more averse to this argument than to statements like "animal welfare is more important than human welfare" (which I am neutral-to-positive on).
I appreciate you writing this up at the top level, since it feels more productive to engage here than on one of a dozen comment threads.
I have substantive and 'stylistic' issues with this line of thinking, which I'll address in separate comments. Substantively, on the 'Suggestions' section:
...At the very least, I think GiveWell and Ambitious Impact should practice reasoning transparency, and explain in some detail why they neglect effects on farmed animals. By ignoring uncertain effects on farmed animals, GiveWell and Ambitious Impact are implicitly ass
I think my if-the-stars-align minimum is probably around £45k these days. But then it starts going up once there are suboptimal circumstances like the ones you mention. In practice I might expect it to land at 125% to 250% of that figure depending how the non-salary aspects of the job look.
I'm curious about the motivation of the question; FWIW my figure here is a complicated function of my expenses, anticipated flexibility on those expenses, past savings, future plans, etc. in a way that I wouldn't treat it as much of a guide to what anyone else would or should say.
It does indeed depend a lot. I think the critical thing to remember is that the figure should be the minimum of what it costs to get a certain type of talent and how valuable that talent is. Clean Water is worth thousands of dollars per year to me, but if you turned up on my doorstep with a one-year supply of water for $1k I'd tell you to stop wasting my time because I can get it far more cheaply than that.
When assessing the cost of acquiring talent, the hard thing to track is how many people aren't in the pool of applicants at all due to funding con...
I got very lucky that I was born in a city that is objectively one of the best places in the world to do what I do, so reasons to move location are limited.
More generally I don't feel like I'm doing anything particularly out of the ordinary here compared to a world where I am not donating; I like money, more of it is better than less of it, but there are sometimes costs to getting more money that outweigh the money. Though I would say that as you go up the earnings curve it gets easier and easier to mitigate the personal costs, e.g. by spending money to sa...
This really depends how broadly I define things; does reading the EA Forum count? In terms of time that feels like it's being pretty directly spent on deciding, my sense is ~50 hours per year. That's roughly evenly split between checking whether the considerations that inform my cause prioritisation have changed - e.g. has a big new funder moved into a space - and evaluating individual opportunities.
I touched on the evaluation question in a couple of other answers.
My views have not changed directionally, but I do feel happier with them than I did at the time for a couple of reasons:
It has varied. Giving both of us half the budget is in some ways most natural but we quickly noticed it was gameable to the extent we can predict each other's actions, similar to what is described here. At the moment we're much closer to 'discuss a lot and fund based on consensus'.
Even with attempts to prevent it, I think annual risk of value drift for me is greater than the annual expected real return on equities, which tends to defeat the usual argument for giving later.
Another exercise I've done occasionally is to look at my donations from say 5-10 years ago and muse on whether I would rather have invested the money and given now. So far that hasn't been close to true, and that's in spite of an impressive bull market in stocks over the last decade. Money was just so much more of an issue back then. I thought this from Will ...
I sometimes think about whether we have or should have language for a mental health equivalent of Second-Impact syndrome. At the time I burned out I would say I was dealing with four ~independent situations or circumstances that most people would recognise as challenging, but my attitude to each one was 'this is fine, I can handle this'. Taken one at a time that was probably true, all at once was demonstrably false.
Somehow I needed to notice that I was already dealing with one or two challenging situations and strongly pivot to a defensive posture to...
This was a surprising question to me, because that's not how I think about my donations. I think there are a few things going on there:
EA's relationship with earn-to-givers is weird.
On the one hand, my post from last year is currently the 2nd-highest-upvoted post of all time on this Forum. People in EA are mostly nice about what I do, especially online. And when EA comes in for criticism, I often feel like my donations are effectively being wheeled out as a defense. To be clear, in many ways this is reasonable; I probably wouldn't have donated anything like as much if it weren't for EA.
On the other hand, I'm sometimes reminded of the observation that it is 'necessary to get be...
Would you say currently, the median EA should consider trying some E2G (or at least non-EA work while giving significantly) early on in their career?
That's quite a cautious phrasing! Let me strengthen it a bit then respond to that:
As of 2024, should the median EA try some E2G (or at least non-EA work while giving significantly) early on in their career?
My thoughts on this now depend a fair bit on where you draw the boundaries of 'EA'.
For the median EA survey taker, I pretty strongly lean 'yes' here. Full disclosure that I am moderately influenced by ...
How do you balance your earning to give/effective giving commitments with your family commitments? (e.g. in my own experience, one's partner may disapprove of or be stressed out by you giving >=10%, and of course with a mortgage/kids things get even tougher)
To your last observation, I actually think this has gotten easier over the years. When I was younger I had so much uncertainty about my life/career trajectory that I found it difficult to understand the marginal value of both spending and saving. What if I save too little and then turn down an ...
That sounds plausible. I do think of ACX as much more 'accelerationist' than the doomer circles, for lack of a better term. Here's a more recent post from October 2023 informing that impression; the excerpt below probably does a better job than I can of adding nuance to Scott's position.
https://www.astralcodexten.com/p/pause-for-thought-the-ai-pause-debate
...Second, if we never get AI, I expect the future to be short and grim. Most likely we kill ourselves with synthetic biology. If not, some combination of technological and economic stagnation, rising totalitar
Slightly independent to the point Habryka is making, which may well also be true, my anecdotal impression is that the online EA community / EAs I know IRL were much bigger on 'we need to beat China' arguments 2-4 years ago. If so, simple lag can also be part of the story here. In particular I think it was the mainstream position just before ChatGPT was released, and partly as a result I doubt an 'overwhelming majority of EAs involved in AI safety' disagree with it even now.
Example from August 2022:
https://www.astralcodexten.com/p/why-not-slow-ai-progress
...So
For the record, I have a few places I think EA is burning >$30m per year, not that AW is actually one of them. Most EAs I speak to seem to have similarly-sized bugbears? Though unsurprisingly they don't agree about where the money is getting burned.
So from where I stand I don't recognise your guess of how people respond to that situation. A few things I believe that might help explain the difference:
If you weigh desires/preferences by attention or their effects on attention (e.g. motivational salience), then the fact that intense suffering is so disruptive and would take priority over attention to other things in your life means it would matter a lot.
Recently I failed to complete a dental procedure because I kept flinching whenever the dentist hit a particularly sensitive spot. They needed me to stay still. I promise you I would have preferred to stay still, not least because what ended up happening was I had to have it redone and endured more pain ov...
Hi Michael,
Sorry for putting off responding to this. I wrote this post quickly on a Sunday night, so naturally work got in the way of spending the time to put this together. Also, I just expect people to get very upset with me here regardless of what I say, which I understand - from their point of view I'm potentially causing a lot of harm - but naturally causes procrastination.
I still don't have a comprehensive response, but I think there are now a few things I can flag for where I'm diverging here. I found titotal's post helpful for establishing th...
Ah, gotcha, I guess that works. No, I don't have anything I would consider strong evidence, I just know it's come up more than anything else in my few dozen conversations over the years. I suppose I assumed it was coming up for others as well.
they should definitely post these and potentially redirect a great deal of altruistic funding towards global health
FWIW this seems wrong, not least because as was correctly pointed out many times there just isn't a lot of money in the AW space. I'm pretty sure GHD has far better places to fundraise from.
To...
I think there has been very little argument for animals not counting at the post level because, quite simply, the argument that (in expectation) they do count is just far stronger.
I'm confused how this works, could you elaborate?
My usual causal chain linking these would be 'argument is weak' -> '~nobody believes it' -> 'nobody posts it'.
The middle step fails here. Do you have something else in mind?
FWIW, I thought these two comments were reasonable guesses at what may be going on here.
First, want to flag that what I said was at the post level and then defined stronger as:
the points I'm most likely to hear and give most weight to when discussing this with longtime EAs in person
You said:
I haven’t heard any arguments I’d consider strong for prioritizing global health which weren’t mentioned during Debate Week
So I can give examples of what I was referring to, but to be clear we're talking somewhat at cross purposes here:
(Just wanted to say that your story of earning to give has been an inspiration! Your episode with 80k encouraged me to originally enter quant trading.)
Given your clarification, I agree that your observation holds. I too would have loved to hear someone defend the view that "animals don't count at all". I think it's somewhat common among rationalists, although the only well-known EA-adjacent individuals I know who hold it are Jeff, Yud, and Zvi Mowshowitz. Holden Karnofsky seems to have believed it once, but later changed his mind.[1]
As @JackM pointed out, ...
I'm surprised this is the argument you went for. FWIW I think the strongest argument might be that global health wins due to ripple effects and is better for the long-term future (but I still think this argument fails).
On animals just "not counting" - I've been very frustrated with both Jeff Kauffman and Eliezer Yudkowsky on this.
Jeff because he doesn't seem to have provided any justification (from what I've seen) for the claim that animals don't have relevant experiences that make them moral patients. He simply asserts this as his view. It's not eve...
Thanks for this post, I was also struggling with how scattered the numbers seemed to be despite many shared assumptions. One thing I would add:
...Another thing I want to emphasise: this is an estimate of past performance of the entire animal rights movement. It is not an estimate of the future cost effectiveness of campaigns done by EA in particular. They are not accounting for tractableness, neglectedness, etc of future donations....
In the RP report, they accounted for this probable drop in effectiveness by dropping the effectiveness by a range of 20%-60%. T
Yeah I think there's something to this, and I did redraft this particular point a few times as I was writing it for reasons in this vicinity. I was reluctant to remove it entirely, but it was close and I won't be surprised if I feel like it was the wrong call in hindsight. It's the type of thing I expect I would have found a kinder framing for given more time.
Having failed to find a kinder framing, one reason I went ahead anyway is that I mostly expect the other post-level pro-GH people to feel similarly.
I can try, but honestly I don't know where to start; I'm well-aware that I'm out of my depth philosophically, and this section just doesn't chime with my own experience at all. I sense a lot of inferential distance here.
Trying anyway: That section felt closer to empirical claim that 'we' already do things a certain way than an argument for why we should do things that way, and I don't seem to be part of the 'we'. I can pull out some specific quotes that anti-resonate and try to explain why, with the caveat that these explanations are much closer to '...
For you this works in favor of global health, for others it may not.
In theory I of course agree this can go either way; the maths doesn't care which base you use.
In practice, Animal Welfare interventions get evaluated with a Global Health base far more than vice-versa; see the rest of Debate Week. So I expect my primary conclusion/TL;DR[1] to mostly push one way, and didn't want to pretend that I was being 'neutral' here.
...For starters I have a feeling that many in the EA community place higher credence on moral theories that would lead to prioritizing
Hi Michael, just quickly: I'm sorry if I misinterpreted your post. For concreteness, the specific claim I was noting was:
I argue that we should fix and normalize relative to the moral value of human welfare, because our understanding of the value of welfare is based on our own experiences of welfare, which we directly value.
In particular, the bolded section seems straightforwardly false for me, and I don't believe it's something you argued for directly?
Thanks for taking the time to respond.
I think we’re pretty close to agreement, so I’ll leave it here except to clarify that when I’ve talked about engaging/engagement I mean something close to ‘public engagement’; responses that the person who raised the issue sees or could reasonably be expected to see. So what you’re doing here, Zach elsewhere in the comments, etc.
CEA discussing internally is also valuable of course, and is a type of engagement, but is not what I was trying to point at. Sorry for any confusion, and thanks for differentiating.
Thanks for sharing your experience of working on the Forum Sarah. It's good to hear that your internal experience of the Forum team is that it sees feedback as vital.
I hope the below can help with understanding the type of thing which can contribute to an opposing external impression. Perhaps some types of feedback get more response than others?
If you take one thing away from my comment, please remember that we love feedback - there are multiple ways to contact us listed here, including an anonymous option.
AFAICT I have done this twice, once asking a ...
That's fair, I didn't really explain that footnote. Note the original point was in the context of cause prioritisation, and I should probably have linked to this previous comment from Jason which captured my feeling as well:
...A name change would be a good start.
By analogy, suppose there were a Center for Medical Studies that was funded ~80% by a group interested in just cardiology. Influenced by the resultant incentives, the CMS hires a bunch of cardiologists, pushes medical students toward cardiology residencies, and devotes an entire instance of its flagsh
Note: I had drafted a longer comment before Arepo's comment, given the overlap I cut parts that they already covered and posted the rest here rather than in a new thread.
...it also presupposes that CEA exists solely to serve the EA community. I view the community as CEA’s team, not its customers. While we often strive to collaborate and to support people in their engagement with EA, our primary goal is having a positive impact on the world, not satisfying community members
I agree with Arepo that both halves of this claim seem wrong. Four of CEA's five prog...
I'm sorry you hear it that way, but that's not what it says; I'm making an empirical claim about how norms work / don't work. If you think the situation I describe is tenable, feel free to disagree.
But if we agree it is not tenable, then we need a (much?) narrower community norm than 'no donation matching', such as 'no donation matching without communication around counterfactuals', or Open Phil / EAF needs to take significantly more flak than I think they did.
I hoped pointing that out might help focus minds, since the discussion so far had focused on the weak players not the powerful ones.
A question I genuinely don’t know the answer to, for the anti-donation-match people: why wasn’t any of this criticism directed at Open Phil or EA funds when they did a large donation match?
I have mixed feelings on donation matching. But I feel strongly that it is not tenable to have a general community norm against something your most influential actors are doing without pushback, and checking the comments on the linked post I’m not seeing that pushback.
Relatedly, I didn’t like the assertion that the increased number of matches comes from the ‘fundraising’...
I wasn't an enormous fan of the LTFF/OP matching campaign, but I felt it was actually a reasonable mechanism for the exact kind of dynamic that was going on between the LTFF and Open Phil.
The key component that for me was at stake in the relationship between the LTFF and OP was to reduce Open Phil influence on the LTFF. Thinking through the game theory of donations that are made on the basis of future impact and how that affects power dynamics is very messy, and going into all my thoughts on the LTFF/OP relationship here would be far too much, but wi...
As Michael says, there was discussion of it, but it was in a different thread and I did push back in one small place against what I saw as misleading phrasing by an EA fund manager. I don't fully remember what I was thinking at the time, so anything else I say here is a bit speculative.
Overall, I would have preferred that OP + EA Funds had instead done a fixed-size exit grant. This would have required much less donor reasoning about how to balance OP having more funding available for other priorities vs these two EA funds having more to work with. How I...
But I feel strongly that it is not tenable to have a general community norm against something your most influential actors are doing without pushback, and checking the comments on the linked post I’m not seeing that pushback.
I hear this as "you can't complain about FarmKind, because you didn't complain about OpenPhil". But:
FWIW, I had started a thread on the EA Funds fundraising post here about Open Phil's counterfactuals, because there was no discussion of it.
I'm not in the anti-donation-match camp, though.
Thanks for clarifying. I agree that Gift Aid eligibility is the key question; HMRC does not expect me to have insight into the administration of every charity I donate to, and it’s not like they care if charities don’t take the ‘free’ money they are entitled to! In other words, whether CEA claims does not matter but whether it could claim does.
However, in order for the charity to be entitled a Gift Aid declaration must be completed:
How sure are you about this? The boxes on the UK Self Assessment Tax Return (link below, it’s on page 6) where I declare my donations ask for things like “Gift Aid Payments made in the year…”. So I wouldn’t include non-Gift-Aid payments there and I’m not sure where else they would go.
In general, the core tax concept for various reliefs in the UK is Adjusted Net Income. The page defining it (linked below) explicitly calls out Gift Aid donations as reducing it but not anything else.
I’d appreciate a link if I’m wrong about this.
Thanks Arden. I suspect you don't disagree with the people interviewed for this report all that much then, though ultimately I can only speak for myself.
One possible disagreement that you and other commenters brought up, which I meant to respond to in my first comment but forgot: I would not describe 80,000 hours as cause-neutral, as you try to do here and here. This seems to be an empirical disagreement, quoting from second link:
...We are cause neutral[1] – we prioritise x-risk reduction because we think it's most pressing, but it’s possible we co
Meta note: I wouldn’t normally write a comment like this. I don’t seriously consider 99.99% of charities when making my donations; why single out one? I’m writing anyway because comments so far are not engaging with my perspective, and I hope more detail can help 80,000 hours themselves and others engage better if they wish to do so. As I note at the end, they may quite reasonably not wish to do so.
For background, I was one of the people interviewed for this report, and in 2014-2018 my wife and I were one of 80,000 hours’ largest donors. In recent years it...
I think it is very clear that 80,000 hours have had a tremendous influence on the EA community... so references to things like the EA survey are not very relevant. But influence is not impact... 80,000 hours prioritises AI well above other cause areas. As a result they commonly push people off paths which are high-impact per other worldviews.
Many of the things the EA Survey shows 80,000 Hours doing (e.g. introducing people to EA in the first place, helping people get more involved with EA, making people more likely to remain engaged with EA, introduc...
Just want to say here (since I work at 80k & commented abt our impact metrics & other concerns below) that I think it's totally reasonable to:
CEA has now confirmed that Miri was correct to understand their budget - not EVF's budget - as around $30m.
In terms of things that would have helped when I was younger, I'm pretty on board with GWWC's new community strategy,[1] and Grace's thoughts on why a gap opened up in this space. I was routinely working 60-70 hour weeks at the time, so doing something like an EA fellowship would have been an implausibly large ask and a lot of related things seem vibed in a way I would have found very offputting. My actual starting contact points with the EA community consisted of no-obligation low-effort socials and prior versions of EA Global.
In terms of things now,...
I even explicitly said I am less familiar with BP as a debate format.
The fact that you are unfamiliar with the format, and yet are making a number of claims about it, is pretty much exactly my issue. Lack of familiarity is an anti-excuse for overconfidence.
The OP is about an event conducted in BP. Any future events will presumably also be conducted in BP. Information about other formats is only relevant to the extent that they provide information about BP.
I can understand not realising how large the differences between formats are initially, and so a...
Finally, even after a re-read and showing your comment to two other people seeking alternative interpretations, I think you did say the thing you claim not to have said. Perhaps you meant to say something else, in which case I'd suggest editing to say whatever you meant to say. I would suggest an edit myself, but in this case I don't know what it was you meant to say.
I've edited the relevant section. The edit was simply "This is also pretty common in other debate formats (though I don't know how common in BP in particular)".
...By contrast, criticisms I think
You did give some responses elsewhere, so a few thoughts on your responses:
But this is really far from the only way policy debate is broken. Indeed, a large fraction of policy debates end up not debating the topic at all, but end up being full of people debating the institution of debating in various ways, and making various arguments for why they should be declared the winner for instrumental reasons. This is also pretty common in other debate formats.
(Emphasis added). This seems like a classic case for 'what do you think you know, and how do you think yo...
Note that a world where Insect suffering is 50% to be 10,000x as important as human suffering, and 50% to be 0.0001x as important as human suffering, is also a world where you can say exactly the same thing with humans and insects reversed.
That should make it clear that the ‘in expectation, [insects are] 5000x more important’ claim that follows is false, or more precisely requires additional assumptions.
This is the type of argument I was trying to eliminate when I wrote this:
https://forum.effectivealtruism.org/posts/atdmkTAnoPMfmHJsX/multiplier-arguments-are-often-flawed
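One way to see the symmetry: under those 50/50 odds, the naive expected ratio comes out the same in both directions (a sketch of the two-envelope-style problem, not anyone's actual moral weights):

```python
# With a 50/50 mix of ratio r and ratio 1/r, the naive expected ratio
# exceeds 1 in BOTH directions, so "insects are ~5000x more important in
# expectation" and its exact reverse would both be "true" simultaneously.
r = 10_000  # hypothetical insect-to-human importance ratio in one scenario

e_insect_over_human = 0.5 * r + 0.5 * (1 / r)
e_human_over_insect = 0.5 * (1 / r) + 0.5 * r  # same terms, labels reversed

print(e_insect_over_human)  # ~5000.00005
print(e_human_over_insect)  # ~5000.00005 -- the multiplier cuts both ways
```

Both "expected multipliers" are ~5000x at once, which is the contradiction: taking the expectation of a ratio depends on which quantity you put in the denominator, so the naive '5000x in expectation' claim needs extra assumptions before it supports either side.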