Toby_Ord

Comments

In your piece you focus on artificial sentience. But similar arguments would apply to somewhat broader categories. 

Wellbeing

For example, you could expand it to cover creating entities that can have wellbeing (or negative elements of wellbeing), even if that wellbeing can be determined by things other than conscious experience. If there were ways of creating millions of beings with negative wellbeing, I'd be very disturbed by that regardless of whether it happened through suffering or some other means. I'm sympathetic to views on which suffering is the only form of (negative) wellbeing, but am by no means sure they are the correct account of wellbeing, so maybe what I really care about is avoiding creating beings that can have (negative) wellbeing.

Interests

One could also go a step further. Wellbeing is a broad category for all kinds of things that count towards how well your life goes. But on many people's understandings, it might not capture everything about ill treatment. In particular, it might not capture everything to do with deontological wrongs and/or rights violations, which may involve wronging someone in a way that can't be made up for by improvements in wellbeing and can't be cashed out purely in terms of its negative effects on wellbeing. So it may be that creating beings with interests or morally relevant interests is the relevant category.

That said, note that these are both steps towards greater abstraction, so even if they better capture what we really care about, they might still lose out on the grounds of being less compelling, more open to interpretation, and harder to operationalise.

I also wanted to add that, as someone who leans more towards valuing happiness and suffering equally, I still find the moratorium to be a good idea and don't feel pangs of frustration about the lost positive experience if we were to delay. I would be very concerned if humanity chose now to never produce any new kinds of beings who could feel, as that may forever rule out some of the best futures available to us. But as you say, a 50-year delay in order to make this monumentally important choice properly would seem to be a wise and patient decision by humanity (given that we can see this is a crucial choice for which we are ill prepared). It is important that our rapid decision to avoid doing something before we know what we're doing doesn't build in a bias towards never doing it, so some care might need to be taken with the end condition for a moratorium. Having it simply expire after 20 to 50 years (but where a new one could be added if desired) seems pretty good in this regard.

I think that thoughtful people who lean towards classical utilitarianism should generally agree with this (i.e. I don't think this is based on my idiosyncrasies). To get it to turn out otherwise would require extreme moral certainty and/or a combination of the total view with impatience in the form of temporal discounting. 
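To make that concrete, here is a back-of-the-envelope illustration of my own (the symbols and numbers are assumptions for illustration, not from the post): under an undiscounted total view the cost of a 50-year pause is roughly its share of the whole future, whereas adding pure time preference penalises the entire delayed future.

```latex
% Illustrative assumptions of my own: v is the average annual value at stake,
% T the duration (in years) of the future being delayed, and delta an annual
% rate of pure time preference. (Uses amsmath for \text.)
\[
  \text{fraction forgone by a 50-year pause, no discounting:}\quad
  \frac{50\,v}{T\,v} = \frac{50}{T}
  \qquad (\approx 0.005\% \text{ if } T = 10^{6}\ \text{years})
\]
\[
  \text{fraction forgone with discounting at rate } \delta:\quad
  1 - e^{-50\delta}
  \qquad (\approx 39\% \text{ if } \delta = 1\%\ \text{per year})
\]
```

On those assumptions, only the discounted version makes the delay look expensive, which is the sense in which impatience (or extreme moral certainty) is doing the work.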

Note that I think it is important to avoid biasing a moratorium towards being permanent even for your 5th option (a moratorium on creating beings that suffer). Cf. we have babies despite knowing that their lives will invariably include periods of suffering (because we believe that these will usually be outweighed by other periods of love and joy and comfort). And most people (including me) think that allowing this is a good thing and that disallowing it would be disastrous. At the moment, we aren't in a good position to understand the balance of suffering and joy in artificial beings, and I'd be inclined to say that a moratorium on creating artificial suffering is a good thing; but when we do understand how to measure this and to tip the scales heavily in favour of positive experience, then a continued ban may be terrible. (That said, we may also work out how to ensure they have good experiences with zero suffering, in which case a permanent moratorium may well turn out to be a good thing.)

This is an excellent exploration of these issues. One of my favourite things about it is that it shows it is possible to write about them in a measured, sensible, warm, and wise way — i.e. it provides a model to follow for others wanting to advance this conversation at this nascent stage.

Re the 5 options, I think there is one that is notably missing, and that would probably be the leading option for many of your opponents. It is the wait-and-see approach — leave the space unregulated until a material (but not excessive) amount of harm has occurred and if/when that happens, regulate from this situation where much more information is available. This is the kind of strategy that the anti-SB 1047 coalition seems to have converged on. And it is the usual way that society proceeds with regulating unprecedented kinds of harm.

As it happens, I think your options 4 and 5 (ban creation of artificial sentience/suffering) are superior to the wait-and-see approach, but it is a harder case to argue. Some key points of the comparison are:

  • in the case of artificial suffering, a very large amount of harm may occur very quickly. Many new harms scale up fairly slowly, such that even if it takes a few years to regulate from the time the harms are first clear, the damage done isn't too profound (e.g. it is smaller than or equal to the gains of allowing that early period to be unregulated). But it seems like this could be a case where, say, millions of beings are suffering before the harms are recognised, and billions by the time the regulation is passed.
  • this is such a profound issue for humanity (whether to bring into existence, for the first time in the history of the Earth, entirely new kinds of entity that can experience suffering or joy) that it is natural to consider a global conversation about whether to proceed before doing it. Human germline genetic engineering is a similarly grand choice, and the scientific and political community indeed chose to have a moratorium on that. Most regulation of new technologies is not like this, so this is an answer to the question of why we should treat this differently from everything else.

Thanks for this excellent piece, James. I had thought the trends were more positive than this, and am disheartened to hear that I was wrong.

One additional set of graphs that I think would help set context would show the number of animals subject to some of the worst practices (e.g. battery hens). Many campaigns have focused on avoiding some of the worst harms of factory farming, so presumably campaigners feel that reducing these practices is a big win. If so, we should be measuring it, celebrating the successes, and also putting them in the context of the other big trends.

For longterm trend analysis, it would also be useful to have a geographic breakdown. For example, one of the main arguments for a good longterm outcome is a kind of ethical-eating Kuznets curve — the idea that as economic development increases, people first cause more harm to animals per capita, but then this decreases again. If so, we would expect to see this first in economically developed countries, and measuring it would be helpful for understanding the timescale / income needed to bend that curve back down. And if there isn't any evidence of a Kuznets curve, that would be very important to know too!
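As a sketch of how one might test for such a curve (purely illustrative: the numbers below are made up and the variable names are my own, not from the post or its data), one could regress animals farmed per capita on log income and its square, and check whether the quadratic term is negative:

```python
# Hedged sketch: look for an inverted-U ("Kuznets-style") relationship between
# income and harm to animals per capita. All numbers are invented placeholders.
import numpy as np

gdp_per_capita = np.array([1_000, 3_000, 8_000, 15_000, 30_000, 45_000, 60_000])
animals_farmed_per_capita = np.array([5.0, 12.0, 22.0, 28.0, 26.0, 21.0, 18.0])

x = np.log(gdp_per_capita)
# Fit a quadratic in log income: a negative leading coefficient is consistent
# with harm per capita rising and then falling as development increases.
a, b, c = np.polyfit(x, animals_farmed_per_capita, deg=2)

print(f"quadratic coefficient: {a:.2f}")
if a < 0:
    peak_income = np.exp(-b / (2 * a))  # income at which fitted harm per capita peaks
    print(f"implied turning point: roughly ${peak_income:,.0f} GDP per capita")
```

With real country-level data one would of course want panel methods, consumption-based measures, and controls for trade in animal products; this is just the shape of the test, not a claim about what the data show.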

We can go to a distant country, observe what is going on there, and make reasonably informed decisions about how to help its people.

We can make meaningful decisions about how to help people in the distant future. For example, to allow them to exist at all, to allow them to exist with a complex civilisation that hasn't collapsed, to give them more prosperity that they can use as they choose, to avoid destroying their environment, to avoid collapsing their options by other irreversible choices, etc. Basically, to aim at giving them things near the base of Maslow's Hierarchy of Needs, or to give them universal goods — resources or options that can be traded for whatever it is they know they need at the time. And the same is often true for international aid.

In both cases, it isn't always easy to know that our actions will actually secure these basic needs, rather than making things worse in some way. But it is possible. One way to do it for the distant future is to avoid catastrophes that have predictable longterm effects, which is a major reason I focus on that and suggest others do too.

I don't see it as an objection to Longtermism if it recommends the same things as traditional morality — that is just as much a problem for traditional theories, by symmetry. It is especially not a problem when traditional theories might (if their adherents were careful) recommend much more focus on existential risks but in fact almost always neglect the issue substantially. If they admit that Longtermists are right that these are the biggest issues of our time and that the world should massively scale up focus and resources on them, and that they weren't saying this before we came along, then that is a big win for Longtermism. If they don't think it is all that important actually, then we disagree and the theory is quite distinctive in practice. Either way the distinctiveness objection also fails.

I don't have time to look into this in full depth, but it looks like a good paper, making useful good-faith critiques, which I very much appreciate. Note that the paper is principally arguing against 'strong longtermism' and doesn't necessarily disagree with longtermism. For the record, I don't endorse strong longtermism either, and I think that the paper delineating it, which came out before any defences of (non-strong) longtermism, has been bad for the ability to have conversations about the form of the view that is much more widely endorsed by 'longtermists'.

My main response to the points in the paper would be by analogy to cosmopolitanism (or to environmentalism or animal welfare). We are saying that something (the lives of people in future generations) matters a great deal more than most people think (at least judging by their actions). In all cases, this does mean that adding a new priority will mean a reduction in resources going to existing priorities. But that doesn't mean these expansions of the moral circle are in error. I worry that the lines of argument in this paper apply just as well to denying previous steps like cosmopolitanism (caring deeply about people's lives across national borders). For example, here is the final set of bullets you listed, with minor revisions:

  • Human biases and limitations in moral thinking lead to distorted and unreliable judgments, making it difficult to meaningfully care about ~~the far future~~ distant countries.
  • Our moral concern is naturally limited to those close to us, and our capacity for empathy and care is finite. Even if we care about ~~future generations~~ people in distant countries in principle, our resources are constrained.
  • Focusing on ~~the far future~~ distant countries comes at a cost to addressing ~~present-day~~ local needs and crises, such as health issues and poverty.
  • Implementing ~~longtermism~~ cosmopolitanism would require radical changes to human psychology or to social institutions, which is a major practical hurdle.

What I'm trying to show here is that these arguments can just as well be used to argue against previous moral circle expansions, which most moral philosophers would think were major points of progress in moral thinking. So I think they are suspect, and that the argument would instead need to address things that are distinctive about longtermism, such as arguing positively that future people's lives don't matter morally as much as present people's.

Thank you so much for everything you've done. You brought such renewed vigour and vision to Giving What We Can that you ushered it into a new era. The amazing team you've assembled and the culture you've fostered will put it in such good stead for the future.

I'd strongly encourage people reading this to think about whether they might be a good choice to lead Giving What We Can forward from here. Luke has put it in a great position, and you'd be working with an awesome team to help take important and powerful ideas even further, helping so many people and animals, now and across the future. Do check that job description and consider applying!

Great idea, Thomas.

I've just sent a letter and encourage others to do so too!

A small correction:

Infamously there was a period where some scientists on the project were concerned that a nuclear bomb would ignite the upper atmosphere and end all life on Earth; fortunately they were able to do some calculations that showed beyond reasonable doubt that this would not happen before the Trinity test occurred.

The calculations suggesting the atmosphere couldn't ignite were good, but were definitively not beyond reasonable doubt. Fermi and others kept working to re-check the calculations in case they'd missed something, all the way up to the day of the test, and wouldn't have done so if they had been satisfied by the report.

The report (published after Trinity) does say:

One may conclude that the arguments of this paper make it unreasonable to expect that the N + N reaction could propagate. An unlimited propagation is even less likely.

That is often quoted by people who want to suggest the case was closed, but the next (and final) sentence of the report says:

However, the complexity of the argument and the absence of satisfactory experimental foundations makes further work on the subject highly desirable.

Great piece, William — thanks for sharing it here.

I liked your strategy for creating robust principles that would have worked across a broad range of cases, and it would be good to add other cases alongside the Manhattan Project example.

I particularly liked your third principle:

Principle 3: When racing, have an exit strategy 

In the case of the Manhattan Project, a key moment was the death of Hitler and the surrender of Germany. Given that this was the guiding reason — the greater good with which the scientists justified their creation of a terrible weapon — it is very poor how little changed at that point. Applying your principles, one could require a very special meeting if/when any of the race-justifying conditions disappear, to force reconsideration at that point.
