I came across this article from the Carnegie Council's Artificial Intelligence and Equality Initiative, and I can't help but feel that they misunderstand longtermism and EA. The article discusses the popularity of William MacAskill's new book "What We Owe the Future" and the case for considering future generations and civilization. I would recommend reading the article before reading my take below, but the Carnegie Council makes several of the common fallacious arguments against longtermism.

  1. They make it seem as though addressing longtermist concerns requires completely ignoring the present. I have never heard an EA argue for disregarding contemporary issues.
  2. They convey that longtermism requires you to "put all your eggs in one basket," the basket being longtermism and not today's problems.
  3. They argue that regulating AI will slow development. Yes, this is true, but leaving an accelerating, potentially uncontrollable technology like AI unchecked could result in the end of humanity and mass suffering. The trade-off is therefore worthwhile, much as it is for regulating carbon emissions.

Comments

I don't think the arguments are fallacious if you look at how strong longtermism is defined: 

Positively influencing the future is not just a moral priority but the moral priority of our time. 

See the general discussion here and an in-depth discussion here.

Perhaps they should have made that distinction, since not all EAs take the strong longtermist view - including MacAskill himself, who doesn't seem certain.

The article was about MacAskill's book, which argues for longtermism, not strong longtermism.

However, I think acknowledging and critiquing strong longtermism is necessary.

To me it seems they understood longtermism just fine and simply disagree with strong longtermism's conclusions. We have limited resources, and if you are a longtermist you think some or all of those resources should be spent ensuring the far future goes well. That means not spending those resources on pressing neartermist issues.

If EAs, or in this case the UN, push for more government spending on the future, the question everyone should ask is where that spending should come from. If it comes from our development aid budgets, that potentially means removing funding for humanitarian projects that benefit the world's poorest.

This might be the correct call, but I think it's a reasonable thing to disagree with.

They understand the case for longtermism but don't understand the proposals or solutions for longtermist aspirations.

One of the UN’s main goals is sustainable development. You can still address today’s issues while designing solutions that take the future into consideration.

Therefore, you don’t have to spend most funds solely on the long-term future. You can tackle both problems simultaneously.

You can only spend your resources once. Unless you argue that there is a free lunch somewhere, any money and time spent by the UN inevitably has to come from somewhere else. Arguing that longtermist concerns should be prioritized necessarily requires arguing that other concerns should be de-prioritized.

If EAs or the UN argue that longtermism should be a priority, it's reasonable for the authors to question where those resources are going to come from.

For what it's worth, I think it's a no-brainer that the UN should spend more energy on ensuring the future goes well, but we shouldn't pretend that it's not at the expense of those who currently exist.

In the early 2000s, when climate change started seriously getting onto the multilateral agenda, there were economists like Bjørn Lomborg arguing that we should instead spend our resources on cost-effective poverty alleviation.

He was widely criticized for this, for example by Michael Grubb, an economist and lead author for several IPCC reports, who argued:

To try and define climate policy as a trade-off against foreign aid is thus a forced choice that bears no relationship to reality. No government is proposing that the marginal costs associated with, for example, an emissions trading system, should be deducted from its foreign aid budget. This way of posing the question is both morally inappropriate and irrelevant to the determination of real climate mitigation policy.

Yet today, much (if not most) multilateral climate mitigation is funded by countries' foreign aid budgets. The authors of this article, like Lomborg almost two decades ago, are reasonable to worry that multilateral organizations adopting new priorities comes at the expense of existing ones.

I believe we should spend much more time and money ensuring the future goes well, but we shouldn't pretend that this isn't at the expense of other priorities. 

"If the basic idea of long-termism—giving future generations the same moral weight as our own—seems superficially uncontroversial, it needs to be seen in a longer-term philosophical context. Long-termism is a form of utilitarianism or consequentialism, the school of thought originally developed by Jeremy Bentham and John Stuart Mill.

The utilitarian premise that we should do whatever does the most good for the most people also sounds like common sense on the surface, but it has many well-understood problems. These have been pointed out over hundreds of years by philosophers from the opposing schools of deontological ethics, who believe that moral rules and duties can take precedence over consequentialist considerations, and virtue theorists, who assert that ethics is primarily about developing character. In other words, long-termism can be viewed as a particular position in the time-honored debate about inter-generational ethics.

The push to popularize long-termism is not an attempt to solve these long-standing intellectual debates, but to make an end run around it. Through attractive sloganeering, it attempts to establish consequentialist moral decision-making that prioritizes the welfare of future generations as the dominant ethical theory for our times."

This strikes me as a very common class of confusion. I have seen many EAs say that what they hope for out of "What We Owe the Future" is that it will act as a sort of "Animal Liberation for future people". You don't see a ton of people saying something like "caring about animals seems nice and all, but you have to view this book in context. Secretly being pro-animal liberation is about being a utilitarian sentientist with an equal consideration of equal interests welfarist approach, that awards secondary rights like life based on personhood". This would seem either like a blatant failure of reading comprehension, or a sort of ethical paranoia that can't picture any reason someone would argue for an ethical position that didn't come with their entire fundamental moral theory tacked on.

On the one hand, I think pieces like this are making a more forgivable mistake, because the basic version of the premise just doesn't look controversial enough to be what MacAskill is actually hoping for. Indeed, I personally think the comparison isn't fantastic, in that MacAskill probably hopes the book will have more influence on inspiring further action and discussion than on changing minds about the fundamental issue (which, again, is less controversial, and which he spends less time on in the book).

On the other hand, he has been at special pains to emphasize in his book, interviews, and secondary writings that he is highly uncertain about first-order moral views, and is specifically only arguing for longtermism as a coalition around these broad issues and ways of making moral decisions on the margins. Someone like MacAskill, who is specifically arguing for a period where we hold off from irreversible changes as long as possible in order to get these moral discussions right, really doesn't fit the bill of someone trying to "make an end run around" these issues.

Will is promoting longtermism as a key moral priority - merely one of our priorities, not the sole priority. He'll say things like (heavily paraphrased from my memory) "we spend so little on existential risk reduction - I don't know how much we should spend, but maybe once we're spending 1% of GDP we can come back and revisit the question".

It's therefore disappointing to me when people write responses like this, responding to the not-widely-promoted idea that longtermism should be the only priority.

What about this quote: 

This strategy is particularly evident in discussions of artificial intelligence's risks and benefits. Developers and investors hope that by persuading the public that the really "big" threat is being addressed, they will be sanguine about more immediate problems and shortcomings. They hope to create the impression that harms being done today are worth enduring because they will be far outweighed by the benefits promised for tomorrow when the technology matures. Such a strategy masks the possibility that the longer term risks will far outweigh the short term benefits of specific applications.

It is no coincidence that institutes working, for example, to anticipate the existential risks of artificial general intelligence get much of their funding from the very same billionaires who are enthusiastically pursuing the development of cutting-edge AI systems and applications. Meanwhile, it is much harder—if not impossible—to get funding for research on those cutting-edge applications that are being applied today in ways that boost profits but harm society.

The well-intentioned philosophy of long-termism, then, risks becoming a Trojan horse for the vested interests of a select few. Therefore we were surprised to see this philosophical position run like a red thread through "Our Common Agenda," the new and far-reaching manifesto of United Nations Secretary-General António Guterres.

The authors are suggesting that AI safety research and its sponsorship are similar to greenwashing: a facade to hide the short-term goals of AI technology developers.
