Benjamin_Todd


I think of Shapley values as just one way of assigning credit so as to optimise incentives, but from what I've seen, it's not obvious they're the best one. (In general, I haven't seen any principled way of assigning credit that always seems best.)
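For concreteness, here's a minimal sketch of how Shapley values split credit in a toy cooperative game - the two players, payoff function and numbers are made up purely for illustration:

```python
from itertools import permutations

def shapley_values(players, value):
    """Average each player's marginal contribution over all join orders."""
    totals = {p: 0.0 for p in players}
    orders = list(permutations(players))
    for order in orders:
        coalition = set()
        for p in order:
            before = value(frozenset(coalition))
            coalition.add(p)
            totals[p] += value(frozenset(coalition)) - before
    return {p: totals[p] / len(orders) for p in players}

# Toy example: a donor and a charity jointly produce 10 units of impact;
# neither produces anything alone.
v = lambda s: 10.0 if s == frozenset({"donor", "charity"}) else 0.0
print(shapley_values(["donor", "charity"], v))  # {'donor': 5.0, 'charity': 5.0}
```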

Good point that CFT is a more science-grounded alternative to IFS. Tim LeBon is a therapist in the UK who has seen community members, does remote sessions, and offers CFT.

This is a cool post. Though I wonder if there's some switching between longtermism as a theory of what matters vs. the idea that you should try to act over long timescales (as with a 200-year foundation).

You could be a longtermist in terms of what you think is of moral value, but believe the best way to benefit the future (instrumentally) is to 'make it to the next rung'. Indeed this seems like what Toby, Will etc. basically think.

Maybe then the relevant reference class is more something like 'people motivated to help future generations, but who did that by solving certain problems of the day', which seems like a very broad and maybe successful reference class - e.g. encompassing many scientists, activists etc.

PS: shouldn't the environmentalism, climate change and anti-nuclear movements be part of your reference class?

I agree the basic version of this objection doesn't work, but my understanding is there's a more sophisticated version here: 

https://globalprioritiesinstitute.org/christian-tarsney-the-epistemic-challenge-to-longtermism/

where he argues that the case for an individual being a longtermist rests on a tiny probability of shifting the entire future.

I think the response to this might be that if we aggregate together the longtermist community, then collectively it's no longer Pascalian. But this feels a bit arbitrary.

Anyway, I partly wanted to post this paper here for further reading, and partly I'm interested in responses.

Short update on the situation: https://twitter.com/ben_j_todd/status/1561100678654672896

where you can dilute the philosophy more and more, and as you do so, EA becomes "contentless" in that it becomes closer to just "fund cool stuff no one else is really doing."

 

Makes sense. It just seems to me that the diluted version still implies interesting & important things.

Or, coming from the other direction, I think it's possible to move towards taking utilitarianism more seriously without having to accept all of its most wacky implications.

 

So you just keep going, performing the arbitrage. In other moral theories, which aren’t based on arbitrage, but perhaps rights, or duties (just to throw out an example), they don’t have this maximizing property, so they don’t lead so inexorably to repugnant conclusions.

I agree something like trying to maximise might be at the core of the issue (where utilitarianism is just one ethical theory that's into maximising).

However, I don't think it's easy to avoid by switching to a rights- or duties-based theory. Philosophers focused on rights still think that if you can save 10 lives with little cost to yourself, that's a good thing to do. And that if you can save 100 lives with the same cost, that's an even better thing to do. A theory that said all that matters ethically is not violating rights would be really weird.

Or another example is that all theories of population ethics seem to have unpleasant conclusions, even the non-totalising ones.

If one honestly believes that all moral theories end up with uncountable repugnancies, why not be a nihilist, or a pessimist, rather than an effective altruist?

I don't see why it implies nihilism. I think it shows that moral philosophy is hard, so we should moderate our views, and consider a variety of perspectives, rather than bet everything on a single theory like utilitarianism.

I think once you take account of diminishing returns and the non-robustness of the x-risk estimates, there's a good chance you'd end up estimating that the cost per present life saved is lower for GiveWell's top charities than for donating to x-risk. So the claim 'neartermists should donate to x-risk' seems likely wrong.

I agree with Carl that the US government should spend more on x-risk, even just to protect its own citizens.

I think the typical person is not a neartermist, so might well end up thinking x-risk is more cost-effective than GiveWell if they thought it through. Though it would depend a lot on which considerations you include.

From a pure messaging point of view, I agree we should default to opening with "there might be an x-risk soon" rather than "there might be trillions of future generations", since it's the most important message and is more likely to be well-received. I see that as the strategy of The Precipice, or of pieces directly pitching AI x-risk. But I think it's also important to promote longtermism independently, and/or mention it as an additional reason to prioritise x-risk a few steps after opening with it.

Thanks, I made some edits!

This seems plausible to me but not obvious, in particular for AI risk the field seems pre-paradigmatic such that there aren't necessarily "low-hanging fruit" to be plucked; and it's unclear whether previous efforts besides field-building have even been net positive in total.

Agreed, though my best guess is something like diminishing log returns the whole way down. (Or maybe even a bit of increasing returns within the first $100m / 100 people.)
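To illustrate what diminishing log returns would imply (a toy model only - the scale factor and spending levels below are arbitrary, not estimates): if the value of cumulative spending x is roughly k·ln(x), then the marginal value of the next dollar falls as k/x, so each 10x increase in total spending cuts the cost-effectiveness of the next dollar by about 10x.

```python
import math

# Toy model: value of cumulative spending x is V(x) = k * ln(x),
# so the marginal value of the next dollar is V'(x) = k / x.
k = 1.0  # arbitrary scale factor
for x in [1e6, 1e7, 1e8, 1e9]:
    print(f"cumulative spend ${x:,.0f}: marginal value per $ = {k / x:.2e}")
```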

Hi Erik,

I just wanted to leave a very quick comment (sorry I'm not able to engage more deeply).

I think yours is an interesting line of criticism, since it tries to get to the heart of what EA actually is.

My understanding of your criticism is that EA attempts to find an interesting middle ground between full utilitarianism and regular sensible do-gooding, whereas you claim there isn't one. In particular, we can impose limits on utilitarianism, but they're arbitrary and make EA contentless. Does this seem like a reasonable summary?

I think the best argument that an interesting middle ground exists is the fact that EAs in practice have come up with ways of doing good that aren't standard (e.g. only a couple of percent of US philanthropy is spent on evidence-backed global health at best, and << 1% on ending factory farming + AI safety + ending pandemics).

More theoretically, I see EA as being about something like "maximising global wellbeing while respecting other values". This is different from regular sensible do-gooding in being more impartial, more wellbeing-focused and more focused on finding the very best ways to contribute (rather than the merely good). I think another way EA is different is in being more skeptical, more open to weird ideas, and trying harder to take a Bayesian, science-aligned approach to finding better ways to help. (Cf. the key values of EA.)

However, it's also different from utilitarianism, since you can practice these values without saying that maximising hedonic utility is the only thing that matters, or a moral obligation.

(Another way to understand EA is the claim that we should pay more attention to consequences, given the current state of the world, but not that only consequences matter.)

You could respond that there's arbitrariness in how to adjudicate conflicts between maximising wellbeing and other values. I basically agree.

But I think all moral theories imply crazy things ("poison") if taken to extremes (e.g. deontologists not lying to the axe murderer; deep ecologists who think we should end humanity to preserve the environment; people who hold the person-affecting view in population ethics who say there's nothing bad about creating a being whose life is only suffering).

So imposing some arbitrary cut-offs on your moral views is unavoidable. The best we can do is think hard about the tradeoffs between different useful moral positions, and try to come up with an overall course of action that's non-terrible on balance.

I agree that thinking x-risk reduction is the top priority likely depends on caring significantly about future people (e.g. thinking the value of future generations is at least 10-100x that of the present).

A key issue I don't see discussed very much is diminishing returns to x-risk reduction. The first $1bn spent on x-risk reduction is (I'd guess) very cost-effective, but over the next few decades, it's likely that at least tens of billions will be spent on it, maybe hundreds. Additional donations only add value at that margin, where the returns are probably 10-100x lower than for the first billion. So a strict neartermist could easily think AMF is more cost-effective.

That said, I think it's fair to say it doesn't depend on something like "strong longtermism". Common-sense ethics cares about future generations, and I think it suggests we should do far more about x-risk and GCR reduction than we do today.

I wrote about this in an 80k newsletter last autumn:

 

Carl Shulman on the common-sense case for existential risk work and its practical implications (#112)

Here’s the basic argument:

  • Reducing existential risk by 1 percentage point would save the lives of 3.3 million Americans in expectation.
  • The US government is typically willing to spend over $5 million to save a life.
  • So, if the reduction can be achieved for under $16.5 trillion, it would pass a government cost-benefit analysis.
  • If you can reduce existential risk by 1 percentage point for under $165 billion, the cost-benefit ratio would be over 100 — no longtermism or cosmopolitanism needed.


Taking a global perspective, if you can reduce existential risk by 1 percentage point for under $234 billion, you would save lives more cheaply than GiveWell’s top recommended charities — again, regardless of whether you attach any value to future generations or not. 
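As a rough reconstruction of the arithmetic behind those thresholds: the US figures come from the bullets above, while the world population and GiveWell cost-per-life inputs below are my assumptions, back-solved to match the quoted $234 billion rather than taken from Carl's analysis.

```python
# US case (figures from the bullet points above)
US_POP = 330e6                  # assumed US population
VALUE_PER_LIFE = 5e6            # US government willingness to pay per life saved
lives_saved_us = 0.01 * US_POP                      # 1pp x-risk reduction -> 3.3M lives
break_even_cost = lives_saved_us * VALUE_PER_LIFE   # $16.5 trillion
ratio_at_165bn = break_even_cost / 165e9            # benefit-cost ratio of 100

# Global case (assumed inputs)
WORLD_POP = 7.8e9               # assumed world population
GIVEWELL_COST = 3_000           # assumed cost per life saved at GiveWell top charities
lives_saved_world = 0.01 * WORLD_POP                        # ~78M lives
givewell_parity_budget = lives_saved_world * GIVEWELL_COST  # ~$234 billion

print(f"{lives_saved_us:,.0f} US lives; break-even budget ${break_even_cost:,.0f}")
print(f"benefit-cost ratio at $165bn: {ratio_at_165bn:.0f}")
print(f"{lives_saved_world:,.0f} lives globally; GiveWell-parity budget ${givewell_parity_budget:,.0f}")
```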

Toby Ord, author of The Precipice, thinks there's a 16% chance of existential risk before 2100. Could we get that down to 15%, if we invested $234 billion?

I think yes. Less than $300 million is spent each year on the top priorities for reducing these risks, so $200 billion would be a massive expansion.

The issue is marginal returns, and where the margin will end up. While it might be possible to reduce existential risk by 1 percentage point now for $10 billion — saving lives 20 times more cheaply than GiveWell's top charities — reducing it by another percentage point might take $100 billion+, which would be under 2x as cost-effective as GiveWell top charities.
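To make the margin point concrete, here's the same toy arithmetic applied to a few possible price tags per percentage point of risk reduced, using the assumed inputs from the sketch above (so the multiples are only rough):

```python
LIVES_PER_PP = 0.01 * 7.8e9   # ~78M expected lives per percentage point (assumed)
GIVEWELL_COST = 3_000         # assumed cost per life saved at GiveWell top charities

for budget_bn in [10, 117, 234]:
    cost_per_life = budget_bn * 1e9 / LIVES_PER_PP
    multiple = GIVEWELL_COST / cost_per_life
    print(f"${budget_bn}bn per percentage point -> ${cost_per_life:,.0f}/life, "
          f"~{multiple:.1f}x GiveWell cost-effectiveness")
```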

I don’t know how much is going to be spent on existential risk reduction over the coming decades, or how quickly returns will diminish. [Edit: But it seems plausible to me it'll be over $100bn and it'll be more expensive to reduce x-risk than these estimates.] Overall I think reducing existential risk is a competitor for the top issue even just considering the cost of saving the life of someone in the present generation, though it's not clear it's the top issue.

My bottom line is that you only need to put moderate weight on longtermism to make reducing existential risk seem like the top priority.

 

(Note: I made some edits to the above in response to Eli's comment.)
