Matthew_Barnett


Comments

[linkpost] Peter Singer: The Hinge of History

I think failing to act can itself be atrocious. For example, the failure of rich nations to intervene in the Rwandan genocide was an atrocity. Further, I expect Peter Singer to agree that this was an atrocity. Therefore, I do not think that deontological commitments are sufficient to prevent oneself from being party to atrocities.

[linkpost] Peter Singer: The Hinge of History

My interpretation of Peter Singer's thesis is that we should be extremely cautious about acting on any philosophy that claims an issue is of overriding importance, since such philosophies have been used to justify atrocities in the past. But I have two big objections to this thesis.

First, it actually matters whether the philosophy we are talking about is a good one. Singer draws a comparison to communism and Nazism, both of which were used to justify repression and genocide during the 20th century. But is either of these philosophies even theoretically valid, in the sense of being both truth-seeking and based on compassion? I'd argue no. And the fact that these philosophies are invalid was partly why people committed crimes in their name.

Second, this argument proves too much. We could have presented an identical argument to a young Peter Singer in the context of animal farming. "But Peter, if people realize just how many billions of animals are suffering, then this philosophy could be used to justify genocide!" Yet my guess is that Singer would not have been persuaded by that argument at the time, for an obvious reason.

Any moral philosophy which permits ranking issues by importance (and are there any which do not?) can be used to justify atrocities. The important thing is whether the practitioners of the philosophy strongly disavow anti-social or violent actions themselves. And there's abundant evidence that they do in this case, as I have not seen even a single prominent x-risk researcher publicly recommend that anyone commit violent acts of any kind.

Democratising Risk - or how EA deals with critics

I'm happy to see more critiques of total utilitarianism here. :)

For what it's worth, I think there are a lot of people unsatisfied with total utilitarianism within the EA community. In my anecdotal experience, many longtermists (including myself) are suffering-focused. This often takes the form of negative utilitarianism, but other variants of suffering-focused ethics exist.

I may have missed it, but I didn't see any part of the paper that explicitly addressed suffering-focused longtermists. (One part mentioned: "Preventing existential risk is not primarily about preventing the suffering and termination of existing humans.")

I think you might be interested in the arguments made for caring about the long-term future from a suffering-focused perspective. The arguments for avoiding existential risk translate into arguments for reducing s-risks.

I also think that suffering-focused altruists are not especially vulnerable to your argument about moral pluralism. In particular, what matters to me is not the values of humans who exist now but the values of everyone who will ever exist. A natural generalization of this principle is the idea that we should try to step on as few people's preferences as possible (with the preferences of animals and sentient AI included), which leads to a sort of negative preference utilitarianism.

Against Negative Utilitarianism

Another strange implication is that enough worlds of utopia plus pinprick would be worse than a world of pure torture.

I view this implication as merely the consequence of two facts: (1) utilitarians generally endorse torture in the torture vs. dust specks thought experiment, and (2) negative preference utilitarians don't find value in creating new beings just to satisfy their preferences.

The first fact is shared by all non-lexical varieties of consequentialism, so it doesn't appear to be a unique critique of negative preference utilitarianism. 

The second fact doesn't seem counterintuitive to me, personally. When I try to visualize why other people find it counterintuitive, I end up imagining that it would be sad/shameful/disappointing if we never created a utopia. But under negative preference utilitarianism, existing preferences to create and live in a utopia are already taken into account. So, it's not optimal to ignore these people's wishes.

On the other hand, I find it unintuitive that we should build preferenceonium (homogeneous matter optimized to have very strong preferences that are immediately satisfied). So, this objection doesn't move me much.

A final implication is that for a world of Buddhist monks who have rid themselves completely of desires and merely take in the joys of life without having any firm desires for future states of the world, it would be morally neutral to bring their well-being to zero.

If someone genuinely rid themselves of all desire then, yes, I think it would be acceptable to lower their well-being to zero (noting that we should also take into account their preference not to be exploited in such a manner). But this thought experiment seems hollow to me, because of the well-known difficulty of detaching oneself completely from material wants, or of empathizing with those who have truly done so.

The force of the thought experiment seems to rest almost entirely on the intuition that the monks have not actually succeeded -- as you say, they "merely take in the joys of life without having desires". But if they really have no desires, then why are they taking joy in life? Indeed, why would they take any action whatsoever?

Against Negative Utilitarianism

Moving from our current world to utopia + pinprick would be a strong moral improvement under NPU. But you're right that if the universe were devoid of all preference-having beings, then creating a utopia with a pinprick would not be recommended.

Against Negative Utilitarianism

World destruction would violate a ton of people's preferences. Many people who live in the world want it to keep existing. Minimizing preference frustration would presumably give people what they want, rather than killing them (something they don't want).

Against Negative Utilitarianism

I'm curious whether you think your arguments apply to negative preference utilitarianism (NPU): the view that we ought to minimize aggregate preference frustration. It shares many features with ordinary negative hedonistic utilitarianism (NHU), such as,

But NPU also has several desirable properties that are not shared with NHU:

  • Utopia, rather than world-destruction, is the globally optimal solution that maximizes utility.
  • It's compatible with the thesis that value is highly complex. More specifically, the complexity of value under NPU is a consequence of the complexity of individual preferences. People generally prefer to live in a diverse, fun, interesting, and free world rather than in a homogeneous world filled with hedonium.

Moreover,

  • As Brian Tomasik argued, preference utilitarianism can be seen as a generalization of the golden rule.
  • Preference utilitarianism gives primacy to consent, since on this view actions are wrong insofar as they violate someone's consent. This puts it on a firm foundation as an ethical theory of freedom and autonomy.

That said, there are a number of problems with the theory, including the problem of how to define preference frustration, identify agents across time and space, perform interpersonal utility comparisons, idealize individual preferences, and cope with infinite preferences.

Concerning the Recent 2019-Novel Coronavirus Outbreak

For a long time, I've believed in the importance of not being alarmist. My immediate reaction to almost anybody who warns me of impending doom is: "I doubt it". And sometimes, "Do you want to bet?"

So, writing this post was a very difficult thing for me to do. On an object level, I realized that the evidence coming out of Wuhan looked very concerning. The more I looked into it, the more I thought, "This really seems like something someone should be ringing the alarm bells about." But for a while, very few people were predicting anything big on respectable forums (Travis Fisher, on Metaculus, being an exception), so I stayed silent.

At some point, the evidence became overwhelming. It seemed very clear that this virus wasn't going to be contained, and that it was going to go global. I credit Dony Christie and Louis Francini with rousing me from my dogmatic slumber. They were able to convince me, in the vein of Eliezer Yudkowsky's Inadequate Equilibria, that the reason no one was talking about this probably had nothing whatsoever to do with the actual evidence. It wasn't that people had a model and used that model to predict "no doom" with high confidence; it was a case of people not having models at all.

I thought at the time, and continue to think, that the starting point for all our forecasting should be the outside view. But, as Dony Christie was quite keen to argue, sometimes people just use the "outside view" as a rationalization; to many people, it means no more than "I don't want to predict something weird, even if that weird thing is overwhelmingly determined by the actual evidence."

And that was definitely true here: pandemics are not a rare occurrence in human history. They happen quite frequently. I am most thankful for belonging to a community that opened my mind long ago, by writing abundant material about natural pandemics, the Spanish flu, and future bio-risks. That allowed me to enter the mindset of thinking "OK, maybe this is real," as opposed to rejecting all the smoke under the door until the social atmosphere became right.

My intuitions, I'm happy to say, paid off. People are still messaging me about this post. Nearly two years later, I wear a mask when I enter a supermarket. 

There are many doomsayers who always get things wrong. A smaller number of doomsayers are occasionally correct: often enough that it might be worth listening to them, even if you end up rejecting their warnings most of the time.

Yet I am now entitled to a distinction that I did not think I would ever earn, and one that I perhaps do not deserve (as the real credit goes to Louis and Dony): the only time I've ever put out a PSA asking people to take some impending doom very seriously was when I correctly warned about the most significant pandemic in one hundred years. And I'm pretty sure I did it earlier than any other effective altruist in the community (though I'm happy to be proven wrong, and to congratulate them fully).

That said, there are some parts of this post I am not happy with. These include:

  • I only had one concrete prediction in the whole post, and it wasn't very well-specified. I said that there was a >2% probability that 50 million people would die within one year. That didn't happen.
  • I overestimated the mortality rate. At the time, I didn't understand which was likely to be the greater factor biasing the case fatality rate: the selection effect of missed cases, or the time-delay of deaths (the two push in opposite directions; see the sketch just below this list). It is now safe to say that the former was the greater issue. The infection fatality rate of Covid-19 is less than 1%, putting it into a less dangerous category of disease than I had pictured at the time.
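
As a rough illustration of those two opposing biases (the numbers below are made up for the sake of the example, not actual Covid-19 figures):

```python
# Illustrative only: hypothetical numbers chosen to show the two biases,
# not actual Covid-19 data.
confirmed_cases = 1_000      # cases detected so far
true_infections = 5_000      # assumed total infections, including undetected mild cases
deaths_to_date = 30          # deaths reported so far
eventual_deaths = 45         # assumed deaths once currently active cases resolve

# Naive case fatality rate computed mid-outbreak.
naive_cfr = deaths_to_date / confirmed_cases                  # 3.0%

# Time-delay of deaths: counting eventual deaths pushes the estimate up.
lag_adjusted_cfr = eventual_deaths / confirmed_cases          # 4.5%

# Missed cases: dividing by all infections pushes the estimate down.
infection_fatality_rate = eventual_deaths / true_infections   # 0.9%

print(naive_cfr, lag_adjusted_cfr, infection_fatality_rate)
```

Which effect dominated was genuinely unclear at the time; in hindsight, the missed-cases effect was the larger one.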

Interestingly, one part I didn't regret writing was the vaccine timeline I implicitly predicted in the post. I said, "we should expect that it will take about a year before a vaccine comes out." Later, health authorities claimed that it would take much longer, with some outlets "fact-checking" the claim that a vaccine could arrive by the end of 2020. I'm pleased to say I outlasted the pessimists on this point, as vaccines started going into people's arms on a wide scale almost exactly one year after I wrote this post.

Overall, I'm happy I wrote this post. I'm even happier to have friends who could trigger me to write it. And I hope, when the next real disaster comes, effective altruists will correctly anticipate it, as they did for Covid-19.

Rowing, Steering, Anchoring, Equity, Mutiny

It was much less disruptive than revolutions like in France, Russia or China, which attempted to radically re-order their governments, economies and societies. In a sense I guess you could think of the US revolution as being a bit like a mutiny that then kept largely the same course as the previous captain anyway.

I agree with the weaker claim here that the US revolution didn't radically re-order "government, economy and society." But I think you might be exaggerating how conservative the US revolution was. 

The United States is widely considered to be one of the first modern constitutional democracies, following literally thousands of years of near-universal despotism throughout the world. Note that while many of its democratic institutions were inherited from the United Kingdom, sources such as Boix et al.'s "A complete data set of political regimes, 1800–2007" (which Our World in Data cites on their page for democracy) tend to date democracy in the United States as older than democracy in the United Kingdom, or Western Europe more generally.

One of the major disruptive revolutions you mention, the French Revolution, was inspired quite directly by the American Revolution. Thomas Jefferson even helped the Marquis de Lafayette draft the Declaration of the Rights of Man and of the Citizen. More generally, the intellectual ideals of the two revolutions are regularly compared with each other, and held up as prototypical examples of Enlightenment values.

However, I do agree with what is perhaps the main claim, which is that the US constitution, by design, did not try to impose a perfect social order: its primary principle was precisely that of limited government and non-intervention, i.e., the government deliberately refraining from changing as much as it could have.

Discussion with Eliezer Yudkowsky on AGI interventions

The main way I could see an AGI taking over the world without being exceedingly superhuman would be if it hid its intentions well enough so that it could become trusted enough to be deployed widely and have control of lots of important infrastructure.

My understanding is that Eliezer's main argument is that the first superintelligence will have access to advanced molecular nanotechnology, an argument that he touches on in this dialogue. 

I could see breaking his thesis up into a few steps:

  1. At some point, an AGI will FOOM to radically superhuman levels, via recursive self-improvement or some other mechanism.
  2. The first radically superhuman AGI will have the unique ability to deploy advanced molecular nanomachines, capable of constructing arbitrary weapons, devices, and nanobot swarms.
  3. If some radically smarter-than-human agent has the unique ability to deploy advanced molecular nanotechnology, then it will be able to unilaterally cause an existential catastrophe.

I am unsure which premise you disagree with most. My guess is premise (1), but it sounds a little bit like you're also skeptical of (2) or (3), given your reply.

It's also not clear to me whether the AGI would be consequentialist?

One argument is that broadly consequentialist AI systems will be more useful, since they allow us to specify our wishes more easily (we only need to tell them what we want, not how to get it). This doesn't imply that a GPT-type AGI will become consequentialist on its own, but it does imply a selection pressure for consequentialist systems.
