All of AGB 🔸's Comments + Replies

Note that a world where insect suffering has a 50% chance of being 10,000x as important as human suffering, and a 50% chance of being 0.0001x as important, is also a world where you can say exactly the same thing with humans and insects reversed.

That should make it clear that the ‘in expectation, [insects are] 5000x more important’ claim that follows is false, or more precisely requires additional assumptions.
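To make the symmetry concrete, here is a quick sketch (purely illustrative, using the 50/50 numbers from the comment) showing that naive expected-value reasoning certifies both "insects are ~5,000x more important" and "humans are ~5,000x more important" at the same time:

```python
# Two-envelope-style symmetry, using the illustrative 50/50 numbers above.
# Possible ratios of insect importance to human importance, each with p = 0.5.
ratios_insect_over_human = [10_000, 0.0001]

# Naive expectation of the insect:human ratio -> ~5,000x in favour of insects.
ev_insect_over_human = sum(0.5 * r for r in ratios_insect_over_human)

# Flip the perspective: the human:insect ratios are the reciprocals,
# with the same probabilities.
ev_human_over_insect = sum(0.5 * (1 / r) for r in ratios_insect_over_human)

# Both come out to ~5,000, so "X is ~5,000x more important in expectation"
# holds in both directions -- the contradiction the comment points at.
```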

This is the type of argument I was trying to eliminate when I wrote this:

https://forum.effectivealtruism.org/posts/atdmkTAnoPMfmHJsX/multiplier-arguments-are-often-flawed

1
nonn
I think this is a good point about precise phrasing, but I think the argument still basically goes through that insects should be treated as extremely important in expectation. You can eliminate the two-envelope problem either by making the numbers fixed/concrete, or by using conditional probabilities. Namely, "50% to be 10,000x as important as human suffering | insect suffering matters" = 50% chance there are huge stakes in the world, far more than we thought. "50% to be 0.0001x as important as human suffering | insect suffering doesn't matter at all" = 50% chance the stakes are much smaller, in line with what we thought. Which makes it clear the first world should be prioritized.

More intuitively: suppose you thought there was a 50% chance you prevent a holocaust-level (10,000,000 lives) event happening to humans, but a 50% chance that this intervention would be completely useless. Alternatively, you could do a normal intervention to save 1,000 lives. You could say "the normal intervention has a 50% chance to be ~infinitely more valuable than the holocaust-prevention thing", but it's obvious you should do the holocaust-prevention thing, because here it's more obvious what the comparative/conditional stakes are. In one possible world, the 'world you can affect' is vastly larger, and that world should be prioritized.

Caveats: this ignores longtermist arguments, and the probability insects matter is << 50%.
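The intervention example above can be sketched in fixed units (a hedged illustration using the comment's numbers):

```python
# Once the stakes are in fixed units (lives), the expected-value comparison
# is well-defined and the two-envelope ambiguity disappears.
p_success = 0.5
ev_prevention = p_success * 10_000_000 + (1 - p_success) * 0  # expected lives saved
ev_normal = 1_000                                             # certain lives saved
prevention_wins = ev_prevention > ev_normal
```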

Thanks for this. I already had some sense that historical productivity data varied, but this prompted me to look at how large those differences are and they are bigger than I realised. I made an edit to my original comment.

TL;DR: Current productivity people mostly agree about. Historical productivity they do not. Some sources, including those in the previous comment, think Germany was more productive than the US in the past, which makes being less productive now more damning compared to a perspective where this has always been the case. 

*** 

For s... (read more)

AGB 🔸
*135
14
2
3
13

Just to respond to a narrow point because I think this is worth correcting as it arises: Most of the US/EU GDP growth gap you highlight is just population growth. In 2000 to 2022 the US population grew ~20%, vs. ~5% for the EU. That almost exactly explains the 55% vs. 35% growth gap in that time period on your graph; 1.55 / 1.2 * 1.05 = 1.36.
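A quick check of that arithmetic (illustrative only; the growth and population figures are the approximate ones cited in the comment):

```python
us_gdp_growth, eu_gdp_growth = 1.55, 1.35  # total GDP growth, 2000-2022
us_pop_growth, eu_pop_growth = 1.20, 1.05  # population growth, same period

us_per_capita = us_gdp_growth / us_pop_growth  # ~1.29
eu_per_capita = eu_gdp_growth / eu_pop_growth  # ~1.29, nearly identical

# The comment's one-liner: US growth rescaled to EU population growth.
us_at_eu_pop = us_gdp_growth / us_pop_growth * eu_pop_growth  # ~1.36 vs the EU's 1.35
```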

This shouldn't be surprising, because productivity in the 'big 3' of US / France / Germany track each other very closely and have done for quite some time. (Edit: I wasn't expecting this comment to blow up, and it seem... (read more)

This is weird because other sources do point towards a productivity gap. For example, this report concludes that "European productivity has experienced a marked deceleration since the 1970s, with the productivity gap between the Euro area and the United States widening significantly since 1995, a trend further intensified by the COVID-19 pandemic".

Specifically, it looks as if, since 1995, the GDP per capita gap between the US and the eurozone has remained very similar, but this is due to a widening productivity gap being cancelled out by a shrinking employ... (read more)

3
David Mathers🔸
If productivity is so similar, how come the US is quite a bit richer per capita? Is that solely accounted for by workers working longer hours? 

Most changed-mind votes in the history of EA Forum comments? This blew my mind a bit; I feel like I've read so much about American productivity outpacing Europe. I think this deserves a full-length article.

Re. 2, that maths is in the right ballpark if you're trying to save, but if donating I do want to remind people that UK donations are tax-deductible, and this deduction is not limited the way I gather it is in some countries like the US.

So you wouldn’t be paying £95k in taxes if donating a large fraction of £250k/yr. Doing quick calcs, if living off £45k then the split ends up being something like:


Income: 250k

Donations: 185k

Tax: 20k

Personal: 45k

(I agree with the spirit of your points.)

PS - Are donations tax-deductible in the UK (besides Gift Aid)? I've been operating on the assumption that they aren't, but if they were, I could give more.


I think the short answer is 'depends what you mean?'. Longer answer:

  • Donations are fully deductible against income tax. But if you are a basic-rate (20%) taxpayer, this is what Gift Aid is handling and there isn't much further to do. If you are a higher- or additional-rate (40% or 45%) taxpayer then there is additional relief you can claim.
  • National Insurance is not deductible.
  • Some
... (read more)
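As a rough sketch of the mechanics described above (illustrative numbers only; this ignores allowances, tapering and other complications), for an additional-rate (45%) taxpayer:

```python
# UK Gift Aid mechanics for an additional-rate (45%) taxpayer -- illustrative only.
net_donation = 80.00                  # what the donor pays out of taxed income
gross_donation = net_donation / 0.80  # charity reclaims basic-rate tax: receives 100.00
extra_relief = gross_donation * (0.45 - 0.20)  # reclaimed via self-assessment: 25.00
effective_cost = net_donation - extra_relief   # ~55.00 per 100.00 the charity receives
```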
2
Toby Tremlett🔹
Thank you! That's really helpful. I also just saw Will's post, which has also been useful. 
AGB 🔸
39
13
3
2
4

Stylistically, some commenters don't seem to understand how this differs from a normal cause prioritisation exercise. Put simply, there's a difference between choosing to ignore the Drowning Child because there are even more children in the next pond over, and ignoring the drowning children entirely because they might grow up to do bad things. Most cause prioritisation is the former, this post is the latter.

As for why the latter is a problem, I agree with JWS's observation that this type of 'For The Greater Good' reasoning leads to great harm when applied ... (read more)

2
Vasco Grilo🔸
Thanks, Alex.

As you said, I strongly endorse expected total hedonistic utilitarianism, so I do think one should consider effects across all time, space and beings. One reason I prefer interventions improving the conditions of farmed animals over ones reducing their consumption is that the former have smaller effects on wild animals. Another is that, although I think reducing the consumption of farmed animals is beneficial nearterm (next few years) because farmed animals have negative lives now, it may be harmful longer term (next few decades), if sufficiently permanent, because farmed animals' lives may become positive.

It is super unclear whether wild animals have positive or negative lives, which means the expected impact on them is lower than it would be if one could more confidently say they are positive or negative. I believe it is far clearer, although not totally clear, that farmed chickens and shrimp in standard conditions have negative lives, because there is data on the time they spend in pain (which I used in my post), which is not the case for the lives of wild arthropods (the most relevant group for assessing the effects on wild animals). For chickens in improved conditions, I would say there is room for disagreement about whether they have positive or negative lives.

Relatedly, I estimated the effects on wild animals of saving human lives in a random country of the beneficiaries of GiveWell's top charities are 1.15 k times as large as the effects on humans. I welcome estimates of the effects on wild animals of interventions improving the conditions of farmed animals, and I am open to changing my prioritisation based on the results.

I think acting with the goal of trying to decrease the chance of harm reduces it in expectation relative to the counterfactual of doing nothing.

there's a difference between choosing to ignore the Drowning Child because there are even more children in the next pond over, and ignoring the drowning children entirely because they might grow up to do bad things.

This is a fantastic summary of why I feel much more averse to this argument than to statements like "animal welfare is more important than human welfare" (which I am neutral-to-positive on).

I appreciate you writing this up at the top level, since it feels more productive to engage here than on one of a dozen comment threads. 

I have substantive and 'stylistic' issues with this line of thinking, which I'll address in separate comments. Substantively, on the 'Suggestions' section:

At the very least, I think GiveWell and Ambitious Impact should practice reasoning transparency, and explain in some detail why they neglect effects on farmed animals. By ignoring uncertain effects on farmed animals, GiveWell and Ambitious Impact are implicitly ass

... (read more)
6
Vasco Grilo🔸
Thanks, Alex.

Many people who donate to GiveWell's interventions care about animal welfare, often donating to animal welfare interventions at the same time. Some of these people may want to know about harms caused to animals nearterm due to supporting GiveWell's interventions. Some may even endorse RP's median welfare ranges, while still supporting GiveWell's interventions due to not wanting to maximise impartial welfare. In general, people have complex preferences about their giving, so I think it is better to be transparent instead of assuming no one would care about the additional information.

I agree. However, it would still be good to go from your 2nd allocation to one where the 10 $ still go to the best animal welfare organisation, but the 90 $ go to an intervention which is more cost-effective than the best human welfare intervention, which may be a global health and development intervention with improved impacts on animals.

I agree with this prioritisation framing, and commented 4 months ago that the meat-eating problem is mostly a distraction in this sense. However, many people do not think there is a single most important thing, and so may be open to donating to a global health and development intervention with improved impacts on animals even if donating to animal welfare would be more cost-effective. In addition, it still seems worth analysing the meat-eating problem to arrive at more accurate beliefs about the world, and because, in some hard-to-specify way, many value decreasing the probability of causing harm more than prioritising the most cost-effective interventions.

GiveWell has made many other choices for all of their donors, and the ones related to how much they value saving lives (as a function of age) and increasing income influence far more money than what would be needed to offset potential negative impacts on animals.

I think my if-the-stars-align minimum is probably around £45k these days. But then it starts going up once there are suboptimal circumstances like the ones you mention. In practice I might expect it to land at 125% to 250% of that figure depending how the non-salary aspects of the job look. 

I'm curious about the motivation behind the question; FWIW my figure here is a complicated function of my expenses, anticipated flexibility on those expenses, past savings, future plans, etc., such that I wouldn't treat it as much of a guide to what anyone else would or should say.

It does indeed depend a lot. I think the critical thing to remember is that the figure should be the minimum of what it costs to get a certain type of talent and how valuable that talent is. Clean Water is worth thousands of dollars per year to me, but if you turned up on my doorstep with a one-year supply of water for $1k I'd tell you to stop wasting my time because I can get it far more cheaply than that. 

When assessing the cost of acquiring talent, the hard thing to track is how many people aren't in the pool of applicants at all due to funding con... (read more)

I got very lucky that I was born in a city that is objectively one of the best places in the world to do what I do, so reasons to move location are limited.

More generally I don't feel like I'm doing anything particularly out of the ordinary here compared to a world where I am not donating; I like money, more of it is better than less of it, but there are sometimes costs to getting more money that outweigh the money. Though I would say that as you go up the earnings curve it gets easier and easier to mitigate the personal costs, e.g. by spending money to sa... (read more)

This really depends how broadly I define things; does reading the EA Forum count? In terms of time that feels like it's being pretty directly spent on deciding, my sense is ~50 hours per year. That's roughly evenly split between checking whether the considerations that inform my cause prioritisation have changed - e.g. has a big new funder moved into a space - and evaluating individual opportunities.

I touched on the evaluation question in a couple of other answers. 

It's either my 2014 donations to 80k or my 2015 donations to Charity Science, which eventually evolved into AIM. Both orgs were pretty small at the time, from memory we were >15% of their budgets in those years.

4
James Snowden🔸
Any thoughts on the impact multiple of funding them then vs. funding them now?

My views have not changed directionally, but I do feel happier with them than I did at the time for a couple of reasons:

  • I thought and continue to think that the best argument is some version of 'clever arguments aside, from a layperson perspective what you're doing looks awfully similar to what caused the GFC, and the GFC was a huge disaster which society has not learned the lessons from'.
    • If you talk to people inside finance, they will usually reject the second claim and say a huge amount has changed since the GFC.
    • In particular, regulatory pressure shifted
... (read more)
3
Aaron Gertler 🔸
Thanks!  ETFs do sound like a big win. I suppose someone could look at them as "finance solving a problem that finance created" (if the "problem" is e.g. expensive mutual funds). But even the mutual funds may be better than the "state of nature" (people buying individual stocks based on personal preference?). And expensive funds being outpaced by cheaper, better products sounds like finance working the way any competitive market should.

It has varied. Giving both of us half the budget is in some ways most natural but we quickly noticed it was gameable to the extent we can predict each other's actions, similar to what is described here. At the moment we're much closer to 'discuss a lot and fund based on consensus'. 

Even with attempts to prevent it, I think annual risk of value drift for me is greater than the annual expected real return on equities, which tends to defeat the usual argument for giving later. 
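A toy model of that trade-off (hedged: hypothetical numbers, with 'value drift' crudely modeled as the donation becoming worthless):

```python
# Give now vs. invest-and-give-later, under assumed (hypothetical) parameters.
real_return = 0.05   # annual expected real return on equities
drift_risk = 0.08    # annual chance of value drift, treated as total loss of value
years = 10

# Expected value, relative to giving 1 unit now, of waiting `years` years.
ev_give_later = ((1 + real_return) * (1 - drift_risk)) ** years

# Whenever drift_risk exceeds real_return / (1 + real_return), waiting loses
# in expectation -- the comment's point.
```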

Another exercise I've done occasionally is to look at my donations from say 5-10 years ago and muse on whether I would rather have invested the money and given now. So far that hasn't been close to true, and that's in spite of an impressive bull market in stocks over the last decade. Money was just so much more of an issue back then. I thought this from Will ... (read more)

7
Neel Nanda
Would value drift be mitigated by donating to a DAF and investing there? Or are you afraid your views on where to donate might also shift?

I sometimes think about whether we have or should have language for a mental health equivalent of Second-Impact syndrome. At the time I burned out I would say I was dealing with four ~independent situations or circumstances that most people would recognise as challenging, but my attitude to each one was 'this is fine, I can handle this'. Taken one at a time that was probably true, all at once was demonstrably false. 

Somehow I needed to notice that I was already dealing with one or two challenging situations and strongly pivot to a defensive posture to... (read more)

This was a surprising question to me, because that's not how I think about my donations. I think there are a few things going on there:

  • I only listed the four largest recipients that account for around 2/3rds of the total, so smaller orgs were naturally not listed.
    • As it happens another cluster I very nearly mentioned was AIM. I've donated roughly 150k (10% of donations) to AIM / AIM's predecessors / AIM-incubated charities.
  • At the time I gave to 80k, in 2014-2018, they were much less of an 'established institution' and much more of a fast-expanding startup.
  • S
... (read more)

EA's relationship with earn-to-givers is weird. 

On the one hand, my post from last year is currently the 2nd-highest-upvoted post of all time on this Forum. People in EA are mostly nice about what I do, especially online. And when EA comes in for criticism, I often feel like my donations are effectively being wheeled out as a defense. To be clear, in many ways this is reasonable; I probably wouldn't have donated anything like as much if it weren't for EA. 

On the other hand, I'm sometimes reminded of the observation that it is 'necessary to get be... (read more)

Would you say currently, the median EA should consider trying some E2G (or at least non-EA work while giving significantly) early on in their career?

That's quite a cautious phrasing! Let me strengthen it a bit then respond to that:

As of 2024, should the median EA try some E2G (or at least non-EA work while giving significantly) early on in their career?

My thoughts on this now depend a fair bit on where you draw the boundaries of 'EA'. 

For the median EA survey taker, I pretty strongly lean 'yes' here. Full disclosure that I am moderately influenced by ... (read more)

6
Joel Tan🔸
Thanks for the thoughts!

(1) I'm in strong agreement with worries over people leaving/disengaging from EA after applying for a huge number of jobs and getting disillusioned when not landing any. From my conversations with various EAs, this seems a genuine problem, and there are probably structural reasons for it: (a) the current EA job market (demand > supply); and (b) selection effects in terms of who gives advice (by definition, us EA folks at EA organizations giving advice on EA jobs have been successful in landing a direct EA job, and may underrate the difficulties of doing so).

(2) On whether the average early-career EA should try for E2G - I'm not sure about this. It's true that they've been selected for, but they're still fundamentally at a big disadvantage in terms of experience, and I'm seriously worried about a lot of selection into low-career-capital but nominally EA roles that disadvantage them later on, both in terms of impact and financial security. In any case, at EAGx Singapore last weekend, I gave a talk to a crowd of mainly these early-career EAs on having impact with and without an EA career, and I basically pitched trying for an EA job while also seriously considering impact through effective giving in a non-EA job as a Plan B. I think it's especially relevant for LMIC EAs, who cannot move to the UK/US for high-impact roles (or find it harder to do so).

How do you balance your earning to give/effective giving commitments with your family commitments? (e.g. in my own experience, one's partner may disapprove of or be stressed out by you giving >=10%, and of course with a mortgage/kids things get even tougher)

 

To your last observation, I actually think this has gotten easier over the years. When I was younger I had so much uncertainty about my life/career trajectory that I found it difficult to understand the marginal value of both spending and saving. What if I save too little and then turn down an ... (read more)

That sounds plausible. I do think of ACX as much more 'accelerationist' than the doomer circles, for lack of a better term. Here's a more recent post from October 2023 informing that impression; the excerpt below probably does a better job than I can of adding nuance to Scott's position.

https://www.astralcodexten.com/p/pause-for-thought-the-ai-pause-debate

Second, if we never get AI, I expect the future to be short and grim. Most likely we kill ourselves with synthetic biology. If not, some combination of technological and economic stagnation, rising totalitar

... (read more)

Slightly independent to the point Habryka is making, which may well also be true, my anecdotal impression is that the online EA community / EAs I know IRL were much bigger on 'we need to beat China' arguments 2-4 years ago. If so, simple lag can also be part of the story here. In particular I think it was the mainstream position just before ChatGPT was released, and partly as a result I doubt an 'overwhelming majority of EAs involved in AI safety' disagree with it even now.

Example from August 2022:

https://www.astralcodexten.com/p/why-not-slow-ai-progress

So

... (read more)
5
Ben_West🔸
Huh, fwiw this is not my anecdotal experience. I would suggest that this is because I spend more time around doomers than you and doomers are very influenced by Yudkowsky's "don't fight over which monkey gets to eat the poison banana first" framing, but that seems contradicted by your example being ACX, who is also quite doomer-adjacent.
4
MichaelDickens
Scott's last sentence seems to be claiming that avoiding an arms race is easier than solving alignment (and it would seem to follow from that that we shouldn't race). But I can see how a politician reading this article wouldn't see that implication.

For the record, I have a few places I think EA is burning >$30m per year, not that AW is actually one of them. Most EAs I speak to seem to have similarly-sized bugbears, though unsurprisingly they don't agree about where the money is getting burned.

So from where I stand I don't recognise your guess of how people respond to that situation. A few things I believe that might help explain the difference:

  1. Most of the money is directed by people who don't read or otherwise have a fairly low opinion of the forum.
  2. Posting on the forum is 'not for the faint of he
... (read more)
4
JackM
Maybe I don't speak to enough EAs, which is possible. Obviously many EAs think our overall allocation isn't optimal, but I wasn't aware that many EAs think we are giving tens of millions of dollars to interventions/areas that do NO good in expectation (which is what I mean by "burning money"). Maybe the burning money point is a bit of a red herring though if the amount you're burning is relatively small and more good can be done by redirecting other funds, even if they are currently doing some good. I concede this point. To be honest you might be right overall that people who don't think our funding allocation is perfect tend not to write on the forum about it. Perhaps they are just focusing on doing the most good by acting within their preferred cause area. I'd love to see more discussion of where marginal funding should go though. And FWIW one example of a post that does cover this and was very well-received was Ariel's on the topic of animal welfare vs global health.

If you weigh desires/preferences by attention or their effects on attention (e.g. motivational salience), then the fact that intense suffering is so disruptive and would take priority over attention to other things in your life means it would matter a lot.

Recently I failed to complete a dental procedure because I kept flinching whenever the dentist hit a particularly sensitive spot. They needed me to stay still. I promise you I would have preferred to stay still, not least because what ended up happening was I had to have it redone and endured more pain ov... (read more)

2
Michael St Jules 🔸
It's worth distinguishing different attentional mechanisms, like motivational salience from stimulus-driven attention. The flinch might be stimulus-driven. Being unable to stop thinking about something, like being madly in love or grieving, is motivational salience. And then there's top-down/voluntary/endogenous attention, the executive function you use to intentionally focus on things. We could pick any of these and measure their effects on attention. Motivational salience and top-down attention seem morally relevant, but stimulus-driven attention doesn't. I don't mean to discount preferences if interpersonal comparisons can't be grounded. I mean that if animals have such preferences, you can't say they're less important (there's no fact of the matter either way), as I said in my top-level comment.

Hi Michael,

Sorry for putting off responding to this. I wrote this post quickly on a Sunday night, so naturally work got in the way of spending the time to put this together. Also, I just expect people to get very upset with me here regardless of what I say, which I understand - from their point of view I'm potentially causing a lot of harm - but naturally causes procrastination. 

I still don't have a comprehensive response, but I think there are now a few things I can flag for where I'm diverging here. I found titotal's post helpful for establishing th... (read more)

5
Michael St Jules 🔸
I mostly agree with your reasoning before even getting into moral uncertainty, up to and including this: However, if we're assuming hedonism, I think your starting point is plausibly too low for animal welfare interventions, because it underestimates the disvalue of pain relative to life in full health, as I argue here.

I also think your response to the Tortured Tim thought experiment is reasonable. Still, I would say:

  1. If you weigh desires/preferences by attention or their effects on attention (e.g. motivational salience), then the fact that intense suffering is so disruptive and would take priority over attention to other things in your life means it would matter a lot, supporting RP's take. And if you weigh desires/preferences by attention or their effects on attention, it seems nonhuman animals matter a lot (but something like neuron count weighting isn't unreasonable).
    • I assume this is not how you weigh desires/preferences, though, or else you probably wouldn't disagree with RP here, and especially in the ways you do!
  2. If you don't weigh desires by attention or their effects on attention, I don't see how you can ground interpersonal utility comparisons at all, especially between humans and other animals but even between humans, who may differ dramatically in their values.

I still don't see a positive case for animals not mattering much.

Ah, gotcha, I guess that works. No, I don't have anything I would consider strong evidence, I just know it's come up more than anything else in my few dozen conversations over the years. I suppose I assumed it was coming up for others as well. 

they should definitely post these and potentially redirect a great deal of altruistic funding towards global health

FWIW this seems wrong, not least because as was correctly pointed out many times there just isn't a lot of money in the AW space. I'm pretty sure GHD has far better places to fundraise from. 

To... (read more)

7
JackM
This is bizarre to me. This post suggests that between $30 and 40 million goes towards animal welfare each year (and it could be more now as that post was written four years ago). If animals are not moral patients, this money is as good as getting burned. If we actually were burning this amount of money every year, I'd imagine some people would make it their overwhelming mission to ensure we don't (which would likely involve at least a few forum posts). Assuming it costs $5,000 to save a human life, redirecting that money could save up to 8,000 human lives every year. Doesn't seem too bad to me. I'm not claiming posts arguing against animal moral patienthood could lead to redirecting all the money, but the idea that no one is bothering to make the arguments because there's just no point doesn't stack up to me.
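The arithmetic above checks out under its stated assumptions (a sketch; both inputs are the comment's figures, not verified estimates):

```python
aw_funding_per_year = 40_000_000  # upper end of the cited $30-40M annual AW funding
cost_per_life_saved = 5_000       # the comment's assumed cost to save a human life

lives_per_year = aw_funding_per_year // cost_per_life_saved  # 8,000 lives per year
```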

I think there has been very little argument for animals not counting at the post level because, quite simply, the argument that (in expectation) they do count is just far stronger.

I'm confused how this works, could you elaborate? 

My usual causal chain linking these would be 'argument is weak' -> '~nobody believes it' -> 'nobody posts it'.

The middle step fails here. Do you have something else in mind? 

FWIW, I thought these two comments were reasonable guesses at what may be going on here.

7
JackM
I'm not sure the middle step does actually fail in the EA community. Do you have evidence that it does? Is there some survey evidence for significant numbers of EAs not believing animals are moral patients?  If there is a significant number of people that think they have strong arguments for animals not counting, they should definitely post these and potentially redirect a great deal of altruistic funding towards global health. Anyway, another possible causal chain might be: 'argument is weak but some people intuitively believe it in part because they want it to be true' -> 'there is no strong post that can really be written' -> 'nobody posts it' Maybe you can ask Jeff Kauffman why he has never provided any actual argument for this (I do apologize if he has and I have just missed it!).

First, want to flag that what I said was at the post level and then defined stronger as:

the points I'm most likely to hear and give most weight to when discussing this with longtime EAs in person

You said:

I haven’t heard any arguments I’d consider strong for prioritizing global health which weren’t mentioned during Debate Week

So I can give examples of what I was referring to, but to be clear we're talking somewhat at cross purposes here:

  • I would not expect you to consider them strong.
    • You are not alone here of course, and I suspect this fact also helps to ans
... (read more)

(Just wanted to say that your story of earning to give has been an inspiration! Your episode with 80k encouraged me to originally enter quant trading.)

Given your clarification, I agree that your observation holds. I too would have loved to hear someone defend the view that "animals don't count at all". I think it's somewhat common among rationalists, although the only well-known EA-adjacent individuals I know who hold it are Jeff, Yud, and Zvi Mowshowitz. Holden Karnofsky seems to have believed it once, but later changed his mind.[1]

As @JackM pointed out, ... (read more)

I'm surprised this is the argument you went for. FWIW I think the strongest argument might be that global health wins due to ripple effects and is better for the long-term future (but I still think this argument fails). 

On animals just "not counting" - I've been very frustrated with both Jeff Kauffman and Eliezer Yudkowsky on this.

Jeff because he doesn't seem to have provided any justification (from what I've seen) for the claim that animals don't have relevant experiences that make them moral patients. He simply asserts this as his view. It's not eve... (read more)

Thanks for this post, I was also struggling with how scattered the numbers seemed to be despite many shared assumptions. One thing I would add:

Another thing I want to emphasise: this is an estimate of past performance of the entire animal rights movement. It is not an estimate of the future cost effectiveness of campaigns done by EA in particular. They are not accounting for tractableness, neglectedness, etc of future donations....

In the RP report, they accounted for this probable drop in effectiveness by dropping the effectiveness by a range of 20%-60%. T

... (read more)

I agree-voted this. This post was much more 'This argument in favour of X doesn't work[1]' than 'X is wrong', and I wouldn't want anyone to think otherwise. 

  1. ^

    Or more precisely, doesn't work without more background assumptions.
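A quick numeric sketch of why the multiplier argument needs those extra assumptions (the 50/50 probabilities and 10,000x / 0.0001x multipliers are the illustrative figures from the surrounding discussion, not claims about the world):

```python
# Two-envelope-style check: take expectations of the importance ratio
# in both directions, using the illustrative numbers from the thread.

p = 0.5
high, low = 10_000, 0.0001

# Expected value of (insect importance / human importance):
e_insect_over_human = p * high + p * low

# The same computation with the ratio inverted,
# i.e. expected (human importance / insect importance):
e_human_over_insect = p * (1 / low) + p * (1 / high)

print(e_insect_over_human, e_human_over_insect)
# Both expectations come out to ~5000, i.e. each side is "5000x more
# important in expectation" than the other -- the naive multiplier
# argument proves too much without further assumptions about which
# scale you normalize against.
```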

2
CB🔸
Oh, ok. It's just that the first sentence and examples gave a slightly different vibe, but it's more clear now. 

Yeah I think there's something to this, and I did redraft this particular point a few times as I was writing it for reasons in this vicinity. I was reluctant to remove it entirely, but it was close and I won't be surprised if I feel like it was the wrong call in hindsight. It's the type of thing I expect I would have found a kinder framing for given more time.

Having failed to find a kinder framing, one reason I went ahead anyway is that I mostly expect the other post-level pro-GH people to feel similarly. 

5
NickLaing
I agree with @AGB 🔸. I think there was only one seriously pro GH article from @Henry Howard🔸  (which I really appreciated), and a couple of very moderate push backs that could hardly be called strong arguments for GH (including mine). On the other hand there were almost 10 very pro animal-welfare articles.

I’ll leave this thread here, except to clarify that what you say I ‘seem to think’ is a far stronger claim than I intended to make or in fact believe.

2
JackM
Sorry that is fair, I think I assumed too much about your views.

I can try, but honestly I don't know where to start; I'm well-aware that I'm out of my depth philosophically, and this section just doesn't chime with my own experience at all. I sense a lot of inferential distance here. 

Trying anyway: That section felt closer to an empirical claim that 'we' already do things a certain way than to an argument for why we should do things that way, and I don't seem to be part of the 'we'. I can pull out some specific quotes that anti-resonate and try to explain why, with the caveat that these explanations are much closer to '... (read more)

3
Michael St Jules 🔸
Thanks, this is helpful!

I think what I had in mind was more like the neuroscience and theories of pain in general terms, or in typical cases — hence "typically" — not very specific cases. So, I'd allow exceptions. Your understanding of the general neuroscience of pain will usually not affect how bad your pain feels to you (especially when you're feeling it). Similarly, your understanding of the general neuroscience of desire won't usually affect how strong (most of) your desires are. (Some people might comfort themselves with this knowledge sometimes, though.) This is what I need, when we think about looking for experiences like ours in other animals.

On your specific cases below.

----------------------------------------

The fallible pain memory case could be an exception. I suspect there's also an interpretation compatible with my view without making it an exception: your reasons to prevent a pain that would be like you remember the actual pain you had (or didn't have) are just as strong, but the actual pain you had was not like you remember it, so your reasons to prevent it (or a similar actual pain) are not in fact as strong. In other words, you are valuing your impression of your past pain, or, say, valuing your past pain through your impression of it.[1] That impression can fail to properly track your past pain experience.[2] But, holding your impression fixed, if your past pain or another pain were like your impression, then there wouldn't be a problem.

And knowing how long a pain will last probably often does affect how bad/intense the overall experience (including possible stress/fear/anxiety) seems to you in the moment. And either way, how you value the pain, even non-hedonically, can depend on the rest of your impression of things, and as you suggest, contextual factors like "whether it's for worthwhile reasons". This is all part of the experience.

  1. ^

    The valuing itself is also part of the impression as a whole, but your valuing is a

For you this works in favor of global health, for others it may not.

In theory I of course agree this can go either way; the maths doesn't care which base you use.

In practice, Animal Welfare interventions get evaluated with a Global Health base far more than vice-versa; see the rest of Debate Week. So I expect my primary conclusion/TL;DR[1] to mostly push one way, and didn't want to pretend that I was being 'neutral' here.

For starters I have a feeling that many in the EA community place higher credence on moral theories that would lead to prioritizing

... (read more)
2
Linch
One thing to be careful of re: question framing is to make sure to constrain the set of theories under consideration to altruism-relevant theories. Eg many people will place nontrivial credence in nihilism, egoism, commonsense morality, but most of those theories will not be particularly relevant to the prioritization for altruistic allocation of marginal donations. 
2
JackM
I'm not sure what the scope of "similarly-animal-friendly theories" is in your mind. For me I suppose it's most if not all consequentialist / aggregative theories that aren't just blatantly speciesist. The key point is that the number of animals suffering (and that we can help) completely dwarfs the number of humans. Also, as MichaelStJules says, I'm pretty sure animals have desires and preferences that are being significantly obstructed by the conditions humans impose on them.

I took the fact that the forum overwhelmingly voted for animal welfare over global health to mean that people generally favor animal-friendly moral theories. You seem to think that it's because they are making this simple mistake with the multiplier argument, with your evidence being that loads of people are citing the RP moral weights project. I suppose I'm not sure which of us is correct, but I would point out that people may just find the moral weights project important because they have some significant credence in hedonism.

Hi Michael, just quickly: I'm sorry if I misinterpreted your post. For concreteness, the specific claim I was noting was:

I argue that we should fix and normalize relative to the moral value of human welfare, because our understanding of the value of welfare is based on our own experiences of welfare, which we directly value.

In particular, the bolded section seems straightforwardly false for me, and I don't believe it's something you argued for directly? 

7
Michael St Jules 🔸
Could you elaborate on this? I might have worded things poorly. To rephrase and add a bit more, I meant something like

(These personal reference point experiences can also be empathetic responses to others, which might complicate things.)

The section the summary bullet point you quoted links to is devoted to arguing for that claim.

Anticipating and responding to some potential sources of misunderstanding:

  1. I didn't intend to claim we're all experientialists and so only care about the contents of experiences, rather than, say, how our desires relate to the actual states of the world. The arguments don't depend on experientialism.
  2. I mostly illustrated the arguments with suffering, which may give/reinforce the impression that I'm saying our understanding of value is based on hedonic states only, but I didn't intend that.

Thanks for taking the time to respond.

I think we’re pretty close to agreement, so I’ll leave it here except to clarify that when I’ve talked about engaging/engagement I mean something close to ‘public engagement’; responses that the person who raised the issue sees or could reasonably be expected to see. So what you’re doing here, Zach elsewhere in the comments, etc.

CEA discussing internally is also valuable of course, and is a type of engagement, but is not what I was trying to point at. Sorry for any confusion, and thanks for differentiating.

Thanks for sharing your experience of working on the Forum Sarah. It's good to hear that your internal experience of the Forum team is that it sees feedback as vital.

I hope the below can help with understanding the type of thing which can contribute to an opposing external impression. Perhaps some types of feedback get more response than others?

If you take one thing away from my comment, please remember that we love feedback - there are multiple ways to contact us listed here, including an anonymous option.

AFAICT I have done this twice, once asking a ... (read more)

8
Sarah Cheng 🔸
(Again: only speaking for myself, and here in particular I will avoid speaking about or for other people at CEA when possible.)

Yup, I think it’s very reasonable for people outside of CEA to have a different impression than I do. I certainly don’t fault anyone for that. Hopefully hearing my perspective was helpful.

I’m really sorry that our team didn’t properly respond to your messages. There are many factors that could affect whether or not any particular message got a response. We currently have a team assistant who has significantly improved how we manage incoming messages, so if you sent yours before she joined, I would guess someone dropped it by accident. As an engineer I know I have not always lived up to my own standards in terms of responding in a timely manner and I do feel bad about that. While I still think we do pretty good for our small size, I’m guessing that overall we are not at where I would personally like for us to be.

Hmm I currently don’t recall any post about Forum fundraising. I think we considered fundraising for the Forum, but I don’t remember if any significant progress was made in developing that idea.

In my opinion, Ben and Oscar wrote multiple detailed replies to that comment, though I am sympathetic to the take that they did not quite respond to Nuno’s central point. I think this is just a case of, things sometimes fall through the cracks, especially during times of high uncertainty as was the case in this example. I feel optimistic that, with more stability and the ability to plan for longer futures, CEA will do better.

I also want to differentiate between public and internal engagement. I read Nuno’s writing and discussed it with my colleagues. At the time I didn’t necessarily think I would have better answers than Ben so I didn’t feel the need to join the public conversation, but at this point I probably do have better answers.

I’ll just broadly say that, I agree that marginal value is what matters, as do others on my team. We d

That's fair, I didn't really explain that footnote. Note the original point was in the context of cause prioritisation, and I should probably have linked to this previous comment from Jason which captured my feeling as well:

A name change would be a good start.

By analogy, suppose there were a Center for Medical Studies that was funded ~80% by a group interested in just cardiology. Influenced by the resultant incentives, the CMS hires a bunch of cardiologists, pushes medical students toward cardiology residencies, and devotes an entire instance of its flagsh

... (read more)
4
Ben_West🔸
Thanks! That context is helpful.

Note: I had drafted a longer comment before Arepo's comment, given the overlap I cut parts that they already covered and posted the rest here rather than in a new thread.

...it also presupposes that CEA exists solely to serve the EA community. I view the community as CEA’s team, not its customers. While we often strive to collaborate and to support people in their engagement with EA, our primary goal is having a positive impact on the world, not satisfying community members

I agree with Arepo that both halves of this claim seem wrong. Four of CEA's five prog... (read more)

1
Ben_West🔸
Note: I'm no longer at CEA, thoughts my own.

I feel kind of confused about the point you are making here. CEA is the Centre for Effective Altruism, not the Center for Effective Altruists. This is fairly different from many community building organizations; e.g. Berkeley Seniors' mission is to help senior citizens in Berkeley per se (rather than advance some abstract idea which seniors residing in Berkeley happen to support).

I can't tell if you:

  1. Disagree that CEA differs from many community building organizations in this way
  2. Agree that it differs but disagree that it should
  3. Agree that it differs but feel like this difference is small/pedantic and not worth highlighting
  4. Agree that it differs but disagree that "customer vs. team" is a useful way to describe this difference
  5. Something else?

I'm sorry you hear it that way, but that's not what it says; I'm making an empirical claim about how norms work / don't work. If you think the situation I describe is tenable, feel free to disagree.

But if we agree it is not tenable, then we need a (much?) narrower community norm than 'no donation matching', such as 'no donation matching without communication around counterfactuals', or Open Phil / EAF needs to take significantly more flak than I think they did. 

I hoped pointing that out might help focus minds, since the discussion so far had focused on the weak players not the powerful ones. 

4
Jeff Kaufman 🔸
While I think a norm of "no donation matching" is where we should be, I think the best we're likely to get is "no donation matching without donors understanding the counterfactual impact". So while I've tried to argue for the former I've limited my criticism of campaigns to ones that don't meet the latter.
2
Ben Millwood🔸
If you're just saying "this other case might inform whether and when we think donation matches are OK", then sure, that seems reasonable, although I'm really more interested in people saying something like "this other case is not bad, so we should draw the distinction in this way" or "this other case is also bad, so we should make sure to include that too", rather than just "this other case exists".

If you're saying "we have to be consistent, going forward, with how we treated OpenPhil / EA Funds in the past", then surely no: at a minimum we also have the option of deciding it was a mistake to let them off so lightly, and then we can think about whether we need to do anything now to redress that omission. Maybe now is the time we start having the norm, having accepted we didn't have it before?

FWIW having read the post a couple of times I mostly don't understand why using a match seemed helpful to them. I think how bad it was depends partly on how EA Funds communicated to donors about the match: if they said "this match will multiply your impact!" uncritically then I think that's misleading and bad, if they said "OpenPhil decided to structure our offramp funding in this particular way in order to push us to fundraise more, mostly you should not worry about it when donating", that seems fine, I guess. I looked through my e-mails (though not very exhaustively) but didn't find communications from them that explicitly mentioned the match, so idk.
AGB 🔸
76
13
3
1
4

A question I genuinely don’t know the answer to, for the anti-donation-match people: why wasn’t any of this criticism directed at Open Phil or EA funds when they did a large donation match?

I have mixed feelings on donation matching. But I feel strongly that it is not tenable to have a general community norm against something your most influential actors are doing without pushback, and checking the comments on the linked post I’m not seeing that pushback.

Relatedly, I didn’t like the assertion that the increased number of matches comes from the ‘fundraising’... (read more)

I wasn't an enormous fan of the LTFF/OP matching campaign, but I felt it was actually a reasonable mechanism for the exact kind of dynamic that was going on between the LTFF and Open Phil. 

The key component that for me was at stake in the relationship between the LTFF and OP was to reduce Open Phil influence on the LTFF. Thinking through the game theory of donations that are made on the basis of future impact and how that affects power dynamics is very messy, and going into all my thoughts on the LTFF/OP relationship here would be far too much, but wi... (read more)

As Michael says, there was discussion of it, but it was in a different thread and I did push back in one small place against what I saw as misleading phrasing by an EA fund manager. I don't fully remember what I was thinking at the time, so anything else I say here is a bit speculative.

Overall, I would have preferred that OP + EA Funds had instead done a fixed-size exit grant. This would have required much less donor reasoning about how to balance OP having more funding available for other priorities vs these two EA funds having more to work with. How I... (read more)

But I feel strongly that it is not tenable to have a general community norm against something your most influential actors are doing without pushback, and checking the comments on the linked post I’m not seeing that pushback.

I hear this as "you can't complain about FarmKind, because you didn't complain about OpenPhil". But:

  • Jeff didn't complain about the GiveWell match at the time it was offered, because he didn't notice it. I don't think we can draw too much adverse inference from any specific person not commenting on any specific situation.
  • A big par
... (read more)
6
Jason
That's a good question, and certainly the identity of the matching donor may have played a role in the absence of criticism. However, I do find some differences there that are fairly material to me.

Given that the target for the EA Funds match was EAs, potential donors were presumably on notice that the money in the match pool was pre-committed to charity and would likely be spent on similar types of endeavors if not deployed in the match. Therefore, there's little reason to think donors would have believed that their participation would have changed the aggregate amounts going toward the long-term future / EA infrastructure. That's unclear with FarmKind and most classical matches. Based on this, standard donors would have understood that the sweetener they were offered was a degree of influence over Open Phil's allocation decisions. That sweetener is also present in the FarmKind offer. It is often not present in classical matching situations, where we assume that the bonus donor would have probably given the same amount to the same charity anyway.

The OP match offer feels less . . . contrived? OP had pre-committed to at least sharply reducing the amount it was giving to LTFF/EAIF, and explained those reasons legibly enough to establish that it would not be making up the difference anyway. It seems clear to me that OP's sharp reduction in LTFF/EAIF funding was not motivated by a desire to influence third-party spending through then offering a one-time matching mechanism.

Even OP's decision to offer donor matching has a potential justification other than a desire to influence third-party donations. Specifically, OP (like other big funders) is known to not want to fund too much of an organization, and that desire would be especially strong where OP was planning to cut funding in the near future. If OP's main goal were to influence other donors, they frankly could have done a lot better job than advertising to the community of people who were already giving to LTFF

FWIW, I had started a thread on the EA Funds fundraising post here about Open Phil's counterfactuals, because there was no discussion of it.

I'm not in the anti-donation-match camp, though.

Thanks for clarifying. I agree that Gift Aid eligibility is the key question; HMRC does not expect me to have insight into the administration of every charity I donate to, and it’s not like they care if charities don’t take the ‘free’ money they are entitled to! In other words, whether CEA claims does not matter but whether it could claim does.

However, in order for the charity to be entitled, a Gift Aid declaration must be completed:

https://www.gov.uk/government/publications/charities-detailed-guidance-notes/chapter-3-gift-aid#chapter-36-gift-aid-declaratio... (read more)

5
OllieBase
Thanks for your comment.

We’ve mentioned elsewhere that we might revisit the decision not to claim Gift Aid after the event, and so we’re planning to send around Gift Aid Declaration forms to donors soon. We think this would be helpful so we can preserve the option of claiming Gift Aid on these donations in the future.

We think the HMRC guidance to individuals is not very clear, and thanks for your prompt to check this. We have looked into it, and consulted our external advisors and we do not think that it is a requirement of personal tax relief that the individual donor signs a Gift Aid Declaration form. However, as HMRC’s own guidance isn’t consistently clear on this, and as we’d like to keep the option of claiming the Gift Aid afterwards, we’d like to send around GAD forms and will be in touch with donors to do this.

This thread has prompted us to pay closer attention here, so thank you (everyone in this thread) for flagging it!
2
OllieBase
Thanks AGB (and Rasool below), I'm looking into this. Again, it seems our language here hasn't been clear enough and I want to make sure I'm as clear as possible when I respond.
2
Rasool
I also don't feel comfortable claiming this as a Gift Aid eligible 'donation'

  • I can't remember the wording on the registration page, but I think it was phrased around purchasing a ticket, rather than making a donation
  • And as you said, there wasn't any mention of Gift Aid declarations (regardless of whether CEA was going to do anything with that)
  • Even the confirmation email I got said 'Date of purchase' (rather than 'Date of donation' or similar)
  • While it is true that you could have gotten a free ticket meaning that there was no extra benefit in paying (pointed out by domdomegg here)
    • I'm not sure how it works given that there was an application process and your application could be rejected
    • More importantly, HMRC seem to be wise to the idea of treating all ticket purchases as donations, in 3.43.6 here it states:

And I'm pretty sure the wording on the registration page was something like "£400 lets us recoup the cost from running the event" or similar. So I don't think HMRC would see these payments as 'monies received as fundraising during an event that the charity put on' rather than 'ticket price for an event' (which is not an eligible donation)

How sure are you about this? The boxes on the UK Self Assessment Tax Return (link below, it’s on page 6) where I declare my donations ask for things like “Gift Aid Payments made in the year…”. So I wouldn’t include non-Gift-Aid payments there and I’m not sure where else they would go.

In general, the core tax concept for various reliefs in the UK is Adjusted Net Income. The page defining it (linked below) explicitly calls out Gift Aid donations as reducing it but not anything else.

I’d appreciate a link if I’m wrong about this.

https://assets.publishing.servi... (read more)

2
OllieBase
Thanks for your comment AGB, and sorry I didn’t give enough detail initially. I’ve checked with relevant people internally, and our thoughts are below.

HMRC uses "Gift Aid" (in both the form and their Adjusted Net Income page) to mean that the donation was eligible for Gift Aid in the charity’s hands. We don't have to claim Gift Aid for the donation to be eligible (and HMRC does not expect donors to confirm this).

If you donated £400 for EAG London and you’re filling out the self-assessment tax return, you would add £400 to boxes 5 and 6 of the Charitable Giving section (as a “one off” donation, because EAG London is not a regular monthly donation). The level of tax relief you receive will depend on your own income levels through the year, and this isn’t something we can comment on.

We aren’t currently planning on claiming Gift Aid on the donations to EAG London to reduce administrative overhead, but we might possibly revisit that in future years.

We will take another look at our website language, as we’ve had a few people ask about Gift Aid, and so we may not have been clear enough that our decision on whether or not to claim Gift Aid does not actually impact an individual donor’s tax filing position. 

Thanks Arden. I suspect you don't disagree with the people interviewed for this report all that much then, though ultimately I can only speak for myself. 

One possible disagreement that you and other commenters brought up, which I meant to respond to in my first comment but forgot: I would not describe 80,000 hours as cause-neutral, as you try to do here and here. This seems to be an empirical disagreement, quoting from second link:

We are cause neutral[1] – we prioritise x-risk reduction because we think it's most pressing, but it’s possible we co

... (read more)
6
Arden Koehler
Speaking in a personal capacity here -- We do try to be open to changing our minds so that we can be cause neutral in the relevant sense, and we do change our cause rankings periodically and spend time and resources thinking about them (in fact we’re in the middle of thinking through some changes now). But how well set up are we, institutionally, to be able to in practice make changes as big as deprioritising risks from AI if we get good reasons to? I think this is a good question, and want to think about it more. So thanks!
AGB 🔸
95
15
7
6
2

Meta note: I wouldn’t normally write a comment like this. I don’t seriously consider 99.99% of charities when making my donations; why single out one? I’m writing anyway because comments so far are not engaging with my perspective, and I hope more detail can help 80,000 hours themselves and others engage better if they wish to do so. As I note at the end, they may quite reasonably not wish to do so.

For background, I was one of the people interviewed for this report, and in 2014-2018 my wife and I were one of 80,000 hours’ largest donors. In recent years it... (read more)

I think it is very clear that 80,000 hours have had a tremendous influence on the EA community... so references to things like the EA survey are not very relevant. But influence is not impact... 80,000 hours prioritises AI well above other cause areas. As a result they commonly push people off paths which are high-impact per other worldviews.

 

Many of the things the EA Survey shows 80,000 Hours doing (e.g. introducing people to EA in the first place, helping people get more involved with EA, making people more likely to remain engaged with EA, introduc... (read more)

3
Chris Leong
ChatGPT is just the tip of the iceberg here. GPT4 is significantly more powerful than 3.5. Google now has a multi-modal model that can take in sound, images and video and a context window of up to a million tokens. Sora can generate amazing realistic videos. And everyone is waiting to see what GPT5 can do. Further, the Center for AI Safety open letter has demonstrated that it isn't just our little community that is worried about these things, but a large number of AI experts. Their 'AI is going to be a big thing' bet seems to have been a wise call, at least at the current point in time. Of course, I'm doing AI Safety movement building, so I'm a bit biased here, and maybe we'll think differently down the line, but right now they're clearly ahead.

Just want to say here (since I work at 80k & commented abt our impact metrics & other concerns below) that I think it's totally reasonable to:

  1. Disagree with 80,000 Hours's views on AI safety being so high priority, in which case you'll disagree with a big chunk of the organisation's strategy.
  2. Disagree with 80k's views on working in AI companies (which, tl;dr, is that it's complicated and depends on the role and your own situation but is sometimes a good idea). I personally worry about this one a lot and think it really is possible we could be wrong h
... (read more)

Not the main point of your post, but tax deductibility is a big deal in the UK as well, at least for higher earners; once you earn more than £50k donations attract relief at a rate of at least 40%, i.e. a donation that costs you £60 delivers £100 to the charity.
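The £60-for-£100 arithmetic above can be sketched as follows (a minimal sketch assuming the standard UK basic rate of 20% and higher rate of 40%; the function name and structure are illustrative, and current thresholds/rates should be checked against HMRC guidance):

```python
# Sketch of UK Gift Aid arithmetic for a higher-rate (40%) taxpayer.
# The charity grosses up the donation at the basic rate; the donor
# reclaims the difference between their marginal rate and the basic
# rate via self-assessment.

def gift_aid_breakdown(net_donation: float, marginal_rate: float = 0.40):
    basic_rate = 0.20
    gross = net_donation / (1 - basic_rate)              # charity's grossed-up receipt
    donor_relief = gross * (marginal_rate - basic_rate)  # reclaimed by the donor
    net_cost = net_donation - donor_relief
    return gross, donor_relief, net_cost

gross, relief, cost = gift_aid_breakdown(80.0)
print(gross, relief, cost)
# An £80 donation: charity receives ~£100, donor reclaims ~£20,
# so the net cost to the donor is ~£60 -- i.e. £60 "becomes" £100,
# a combined relief rate of 40%.
```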

CEA has now confirmed that Miri was correct to understand their budget - not EVF's budget - as around $30m.

In terms of things that would have helped when I was younger, I'm pretty on board with GWWC's new community strategy,[1] and Grace's thoughts on why a gap opened up in this space. I was routinely working 60-70 hour weeks at the time, so doing something like an EA fellowship would have been an implausibly large ask and a lot of related things seem vibed in a way I would have found very offputting. My actual starting contact points with the EA community consisted of no-obligation low-effort socials and prior versions of EA Global.

In terms of things now,... (read more)

5
Gemma 🔸
Absolutely agree - although I'm one of the other GWWC London co-leads so I am also biased here. I think low commitment in person socials are really important and tbh the social proof of meeting people like me who donated significantly was the most important factor for me personally.

I'd like to see people be a lot more public with their pledges. I personally think LinkedIn is underutilised here - adding pledges to the volunteering section of your profile is low effort but sets a benchmark. I've personally added my pledge to my email signature, but I think this depends a lot on the kind of role you have, the company you work for and if you think the personal reputation risk is worth the potential upside (influencing someone else to donate more to effective charities). I think this could be especially powerful for senior people who have a lot of influence but equally I've had a few meaningful conversations with people off the back of it.

I've got a half-written post on this for this forum series and Alex from @Giving What We Can has created some fantastic banner images for LinkedIn profiles.

Some resources from GWWC:

  • Donating anonymously: Should we be private or public about giving to charity? · Giving What We Can
  • Why you should mention the Pledge in your LinkedIn summary · Giving What We Can

I even explicitly said I am less familiar with BP as a debate format.

The fact that you are unfamiliar with the format, and yet are making a number of claims about it, is pretty much exactly my issue. Lack of familiarity is an anti-excuse for overconfidence.

The OP is about an event conducted in BP. Any future events will presumably also be conducted in BP. Information about other formats is only relevant to the extent that they provide information about BP. 

I can understand not realising how large the differences between formats are initially, and so a... (read more)

Finally, even after a re-read and showing your comment to two other people seeking alternative interpretations, I think you did say the thing you claim not to have said. Perhaps you meant to say something else, in which case I'd suggest editing to say whatever you meant to say. I would suggest an edit myself, but in this case I don't know what it was you meant to say.

I've edited the relevant section. The edit was simply "This is also pretty common in other debate formats (though I don't know how common in BP in particular)".

By contrast, criticisms I think

... (read more)

You did give some responses elsewhere, so a few thoughts on your responses:

But this is really far from the only way policy debate is broken. Indeed, a large fraction of policy debates end up not debating the topic at all, but end up being full of people debating the institution of debating in various ways, and making various arguments for why they should be declared the winner for instrumental reasons. This is also pretty common in other debate formats.

(Emphasis added). This seems like a classic case for 'what do you think you know, and how do you think yo... (read more)

2
Habryka [Deactivated]
To be clear, I think very little of my personal experience played a role in my position on this. Or at least very unlikely in the way you seem to suggest. A good chunk of my thoughts on this were formed talking to Buck Shlegeris and Evan Hubinger at some point and also a number of other discussions about debating with a bunch of EAs and rationalists. I was actually pretty in favor of debate ~4-5 years ago when I remember first discussing this with people, but changed my mind after a bunch of people gave their perspectives and experiences and I thought more through the broader problem of how to fix it.

I also want to clarify the following:

I didn't say that. I said "This is also pretty common in other debate formats". I even explicitly said I am less familiar with BP as a debate format. It seems pretty plausible to me that BP has less of the problem of meta-debate. But I do think evidence of problems like meta-debate in other formats is evidence of BP also having problems, even if I am specifically less familiar with BP.