I wrote to my friend Georgia in response to this Tumblr post.

Ben: It feels increasingly sketchy to me to call tiny countries surrounded by hostile regimes "threatening" for developing nuclear capacity, when US official policy for decades has been to threaten the world with nuclear genocide.

Strong recommendation to read Daniel Ellsberg's The Doomsday Machine.

Georgia: Book review: The Doomsday Machine

So I get that the US' nuclear policy was and probably is a nightmare that's repeatedly skirted apocalypse. That doesn't make North Korea's program better.

Ben [feeling pretty sheepish, having just strongly recommended a book my friend just reviewed on her blog]: "Threatening" just seems like a really weird word for it. This isn't about whether things cause local harm in expectation - it's about the frame in which agents trying to organize to defend themselves are the aggressors, rather than the agent insisting on global domination. 

Georgia: I agree that it's not the best word to describe it. I do mean "threatening the global peace" or something rather than "threatening to the US as an entity." But I do in fact think that North Korea building nukes is pretty aggressive. (The US is too, for sure!)

Maybe North Korea would feel less need to defend itself from other large countries if it weren't a literal dictatorship - being an oppressive dictatorship with nukes is strictly worse.

Ben: What's the underlying thing you're modeling, such that you need a term like "aggression" or "threatening," and what role does it play in that model?

Georgia: Something like: destabilizing to the global order and to not-having-nuclear-wars, increasing risk to people, making the world more dangerous. With "aggressive" I was responding to your "aggressors" but may have misunderstood what you meant by that.

Ben: This feels like a frame that fundamentally doesn't care about distinguishing what I'd call aggression from what I'd call defense - if they do a thing that escalates a conflict, you use the same word for it regardless. There's some sense in which this is the same thing as being "disagreeable" in action.

Georgia: You're right. The regime is building nukes at least in large part because they feel threatened and as an active-defense kind of thing. This is also terrible for global stability, peace, etc.

Ben: If I try to ground out my objection to that language a bit more clearly, it's that a focus on which agent is proximately escalating a conflict, without making distinctions about the kinds of escalation that seem like they're about controlling others' internal behavior vs preventing others from controlling your internal behavior is an implicit demand that everyone immediately submit completely to the dominant player.

Georgia: It's pretty hard to make those kinds of distinctions with a single word choice, but I agree that's an important distinction.

Ben: I think this is exactly WHY agents like North Korea see the need to develop a nuclear deterrent. (Plus the dominant player does not have a great track record for safety.) Do you see how from my perspective that amounts to "North Korea should submit to US domination because there will be less fighting that way," and why I'd find that sketchy?

Maybe not sketchy coming from a disinterested Martian, but very sketchy coming from someone in one of the social classes that benefit the most from US global dominance?

Georgia: Kind of, but I believe this in the nuclear arena in particular, not in general conflict or sociopolitical tensions or whatever. Nuclear war has some very specific dynamics and risks.

Ben: The obvious thing from an EA perspective would be to try to establish diplomatic contact between Oxford EAs and the North Koreans, to see if there's a compromise version of Utilitarianism that satisfies both parties such that NK is happy being folded into the Anglosphere, and then push that version of Utilitarianism in academia.

Georgia: That's not obvious. Wait, are you proposing that?

Ben: It might not work, but "stronger AI offers weaker AI part of its utility function in exchange for conceding instead of fighting" is the obvious way for AGIs to resolve conflicts, insofar as trust can be established. (This method of resolving disputes is also probably part of why animals have sex.)
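
(A minimal sketch of the kind of trade being gestured at here, in Python. The payoff numbers, the 0.7 weight, and the linear blending rule are all illustrative assumptions, not anything from the conversation; the point is just that the stronger side commits to optimizing a weighted mix of both utility functions, and the deal holds only if each side prefers the blended optimum to its expected payoff from fighting.)

```python
# Toy model: "stronger agent offers weaker agent part of its utility function."
# Every number here is made up for illustration.

# Hypothetical payoffs over two outcomes the merged agent could pick.
u_strong = {"strong_preferred": 10, "compromise": 8}
u_weak = {"strong_preferred": 0, "compromise": 6}

# Assumed expected payoffs if the two sides fight instead of dealing.
conflict_strong, conflict_weak = 7, 1

def merged(outcome, weight=0.7):
    """Blended utility: the stronger agent keeps `weight` of the say,
    the weaker agent gets the remaining `1 - weight`."""
    return weight * u_strong[outcome] + (1 - weight) * u_weak[outcome]

# What the merged agent would actually choose.
best = max(u_strong, key=merged)

# The offer is credible only if both sides prefer it to expected conflict.
deal_is_rational = (u_strong[best] >= conflict_strong
                    and u_weak[best] >= conflict_weak)

print(best, deal_is_rational)  # -> compromise True
```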

Georgia: I don't think academic philosophy has any direct influence on like political actions. (Oh, no, you like Plato and stuff, I probably just kicked a hornet's nest.) Slightly better odds on the Oxford EAs being able to influence political powers in some major way.

Ben: Academia has hella indirect influence. I think Keynes was right when he said that "practical men, who believe themselves to be quite exempt from any intellectual influences, are usually the slaves of some defunct economist. Madmen in authority, who hear voices in the air, are distilling their frenzy from some academic scribbler of a few years back." Though usually on longer timescales.

FHI is successfully positioning itself as an advisor to the UK government on AI safety.

Georgia: Yeah, they are doing some cool stuff like that, do have political ties, etc, which is why I give them better odds.

Ben: Utilitarianism is nominally moving substantial amounts of money per year, and quite a lot if you count Good Ventures being aligned with GiveWell due to Peter Singer's recommendation.

Georgia: That's true.

Ben: The whole QALY paradigm is based on Utilitarianism. And it seems to me like you either have to believe

(a) that this means academic Utilitarianism has been extremely influential, or

(b) that the whole EA enterprise is profiting from the impression that it's Utilitarian but then doing quite different stuff, in a way that, if not literally fraud, is definitely a bait-and-switch.

Georgia: I'm persuaded that EA has been pretty damn influential and influenced by academic utilitarianism. Wouldn't trying to convince EAs directly or whatever instead of routing through academia be better?

Ben: Good point, doesn't have to be exclusively academic - you'd want a mixture of channels since some are longer-lived than others, and you don't know which ones the North Koreans are most interested in. Money now vs power within the Anglo coordination mechanism later.

Georgia: The other half of my incredulity is that fusing your value functions does not seem like a good silver bullet for conflicts.

Ben: It worked for America, sort of. I think it's more like, rarely tried because people aren't thinking systematically about this stuff. Nearly no one has the kind of perspective that can do proper diplomacy, as opposed to clarity-opposing power games.

Georgia: But saying that an academic push to make a fused value function is obviously the most effective solution for a major conflict seems ridiculous on its face.

Ben: I think the perspective in which this doesn't work, is one that thinks modeling NK as an agent that can make decisions is fundamentally incoherent, and also that taking claims to be doing utilitarian reasoning at face value is incoherent. Either there are agents with utility functions that can and do represent their preferences, or there aren't.

Georgia: Surely they can be both - like, conglomerations of human brains aren't really going to perfectly follow any kind of strategy, but it can still make sense to identify entities that basically do the decisionmaking and act more or less in accordance with some values, and treat that as a unit.

It is both true that "the North Korean regime is composed of multiple humans with their own goals and meat brains" and that "the North Korean regime makes decisions for the country and usually follows self-preservationist decisionmaking."

Ben: I'm not sure which mode of analysis is correct, but I am sure that doing the reconciliation to clarify what the different coherent perspectives are, is a strong step in the right direction.

Georgia: Your goal seems good!

Ben: Maybe EA/Utilitarianism should side with the Anglo empire against NK, but if so, it should probably account for that choice internally, if it wants to be and be construed as a rational agent rather than a fundamentally political actor cognitively constrained by institutional loyalties.

Thanks for engaging with this - I hadn't really thought through the concrete implications of the fact that any system of coordinated action is a "side" or agent in a decision-theoretic landscape with the potential for conflict.

That's the conceptual connection between my sense that calling North Korea's nukes "threatening" is mainly just shoring up America's rhetorical position as the legitimate world empire, and my sense that reasoning about ends that doesn't concern itself with the reproduction of the group doing the reasoning is implicitly totalitarian in a way that nearly no one actually wants.

Georgia: "With the reproduction of the group doing the reasoning" - like spreading their values/reasoning-generators or something?

Ben: Something like that

If you want philosopher kings to rule, you need a system adequate to keep them in power, when plenty of non-philosophers have an incentive to try to get in on the action, and then that ends up constraining most of your choices, so you don't end up benefiting much from the philosophers' competence!

So you build a totalitarian regime to try to hold onto this extremely fragile arrangement, and it fails anyway.

The amount of narrative control they have to exert to prevent people from subverting the system by which they're in charge ends up being huge.

(There's some ambiguity, since part of the reason for control is education into virtue - but if you're not doing that, there's not really much of a point of having philosophers in charge anyway.)

I'm definitely giving you a summary run through a filter, but that's true of all summaries, and I don't think mine is less true than the others - just differently slanted.

Comments

kbog: There are many problems here:

  • There is not a clear distinction between preparations for offense and preparations for defense. The absence of this distinction is precisely what gives rise to threats and instability in cases like North Korea. The ambiguity is due to structural problems with limited information and the nature of military forces, not ideologies in the current milieu.
  • The most obvious way for EAs to fix the deterrence problem surrounding North Korea is to contribute to the mainstream discourse and efforts which already aim to improve the situation on the peninsula. While it's possible for alternative or backchannel efforts to be positive, they are far from being the "obvious" choice.
  • Backchannel diplomacy may be forbidden by the Logan Act, though it has not really been enforced in a long time.
  • The EA community currently lacks expertise and wisdom in international relations and diplomacy, and therefore does not currently have the ability to reliably improve these things on its own.
  • The idea of making a compromise by coming up with a different version of utilitarianism is absurd. First, the vast majority of the human race does not care about moral theories, this is something that rarely makes a big dent in popular culture let alone the world of policymakers and strategic power. Second, it makes no sense to try to compromise with people by solving every moral issue under the sun when instead you could pursue the much less Sisyphean task of merely compromising on those things that actually matter for the dispute at hand. Finally, it's not clear if any of the disputes with North Korea can actually be cruxed to disagreements of moral theory.
  • The idea that compromising with North Korea is somehow neglected or unknown in the international relations and diplomacy communities is false. Compromise is ubiquitously recognized as an option in such discourse. And there are widely recognized barriers to it, which don't vanish just because you rephrase it in the language of utilitarianism and AGI.
  • Academia has influence on policymakers when it can help them achieve their goals, that doesn't mean it always has influence. There is a huge difference between the practical guidance given by regular IR scholars and groups such as FHI, and ivory tower moral philosophy which just tells people what moral beliefs they should have. The latter has no direct effect on government business, and probably very little indirect effect.
  • The QALY paradigm does not come from utilitarianism. It originated in economics and healthcare literature, to meet the goals of policymakers and funders who already had utilitarian-ish goals.
  • Your perception that the EA community profits from the perception of utilitarianism is the opposite of the reality; utilitarianism is more likely to have a negative perception in popular and academic culture, and we have put nontrivial effort into arguing that EA is safe and obligatory for non-utilitarians. You're also ignoring the widely acknowledged academic literature on how axiology can differ from decision methods; sequence and cluster thinking are the latter.
  • Talking about people or countries as rational agents with utility functions does not mean we have to pretend that they act on the basis of moral theories like utilitarianism.
Ben:

> Your perception that the EA community profits from the perception of utilitarianism is the opposite of the reality; utilitarianism is more likely to have a negative perception in popular and academic culture, and we have put nontrivial effort into arguing that EA is safe and obligatory for non-utilitarians. You're also ignoring the widely acknowledged academic literature on how axiology can differ from decision methods; sequence and cluster thinking are the latter.

I've talked with a few people who seemed under the impression that the EA orgs making recommendations were performing some sort of quantitative optimization to maximize some sort of goodness metric, and used those recommendations on that basis, because they themselves accepted some form of normative utilitarianism.

kbog: It is perceived, but that doesn't mean the perception is beneficial. It's better if people perceive EA as having weaker philosophical claims, like maximizing welfare in the context of charity, as opposed to taking on the full utilitarian theory and all it says about trolleys and torturing terrorists and so on. Quantitative optimization should be perceived as a contextual tool that comes bottom-up to answer practical questions, not tied to a whole moral theory. That's really how cost-benefit analysis has already been used.

Ben:

> The most obvious way for EAs to fix the deterrence problem surrounding North Korea is to contribute to the mainstream discourse and efforts which already aim to improve the situation on the peninsula. While it's possible for alternative or backchannel efforts to be positive, they are far from being the "obvious" choice.
> Backchannel diplomacy may be forbidden by the Logan Act, though it has not really been enforced in a long time.
> The EA community currently lacks expertise and wisdom in international relations and diplomacy, and therefore does not currently have the ability to reliably improve these things on its own.

All these seem like straightforward objections to supporting things like GiveWell or the global development EA Fund (vs joining or supporting establishment aid orgs or states which have more competence in meddling in less powerful countries' internal affairs).

kbog: It wasn't obvious to make GiveWell until people noticed a systematic flaw (lack of serious impact analysis) that warranted a new approach. In this case, we would need to identify a systematic flaw in the way that regular diplomacy and deterrence efforts are approaching things. Professionals do regard North Korea as a threat, but not in a naive "oh they're just evil and crazy aggressors" sort of sense; they already know that deterrence is a mutual problem. I can see why one might be cynical about US government efforts, but there are more players besides the US government.

The Logan Act doesn't present an obstacle to aid efforts. You're not intervening in a dispute with the US government, you're just supporting the foreign country's local programs.

EAs have a perfectly good working understanding of the microeconomic impacts of aid. At least, Givewell etc do. Regarding macroeconomic and institutional effects, OK not as much, but I still feel more confident there than I do when it comes to international relations and strategic policy. We have lots of economists, very few international relations people. And I think EAs show more overconfidence when they talk about nuclear security and foreign policy.

Ben:

> Academia has influence on policymakers when it can help them achieve their goals, that doesn't mean it always has influence. There is a huge difference between the practical guidance given by regular IR scholars and groups such as FHI, and ivory tower moral philosophy which just tells people what moral beliefs they should have. The latter has no direct effect on government business, and probably very little indirect effect.
> The QALY paradigm does not come from utilitarianism. It originated in economics and healthcare literature, to meet the goals of policymakers and funders who already had utilitarian-ish goals.

I agree with Keynes on this, you disagree, and neither of us has really offered much in the way of an argument or evidence - you've just asserted a contrary position.

Ben:

> The idea of making a compromise by coming up with a different version of utilitarianism is absurd. First, the vast majority of the human race does not care about moral theories, this is something that rarely makes a big dent in popular culture let alone the world of policymakers and strategic power. Second, it makes no sense to try to compromise with people by solving every moral issue under the sun when instead you could pursue the much less Sisyphean task of merely compromising on those things that actually matter for the dispute at hand. Finally, it's not clear if any of the disputes with North Korea can actually be cruxed to disagreements of moral theory.
> The idea that compromising with North Korea is somehow neglected or unknown in the international relations and diplomacy communities is false. Compromise is ubiquitously recognized as an option in such discourse. And there are widely recognized barriers to it, which don't vanish just because you rephrase it in the language of utilitarianism and AGI.

So, no one should try this, it would be crazy to try, and besides we don't know whether it's possible because we haven't tried, and also competent people who know what they're doing are working on it already so we shouldn't reinvent the wheel? It doesn't seem like you tried to understand the argument before trying to criticize it; it seems like you're just throwing up a bunch of contradictory objections.

kbog: It's different because they have the right approach on how to compromise. They work on compromises that are grounded in political interests rather than moral values, and they work on compromises that solve the task at hand rather than setting the record straight on everything. And while they have failures, the reasons for those failures are structural (problems of commitment, honesty, political constraints, uncertainty), so you cannot avoid them just by changing up the ideologies.
