yefreitor

163 karma · Joined Mar 2023 · 1 post · 33 comments

Comments

I'm curious what you all think about EA's extreme left-leaning political skew

It's an artifact. What's really being measured here is EA's skew towards educated urban professionals, for whom identifying as "left" or "center left" is a signal of group membership. The actual dominant ideology is, broadly speaking, technocratic liberalism: the same cluster as "center-right" Rockefeller Republicans, "centrist" New Democrats, or "center-left" LibDems, but qualitatively different from the "center-right" CDU or "center-left" New Dealers. 

and whether it represents a problem that needs some attention

Getting Christian conservatives to allocate their donations more effectively would be an extremely good thing but it will never ever happen under the aegis of EA. You would need a totally separate cultural infrastructure, built for (and probably by) communitarians instead of liberal universalists. 

Claim: Credible plans for a "pivotal act" may drive AI race dynamics

(Epistemic status: I had Mathematica do all the grunt work and did not check the results carefully.)

Consider a simple normal-form game with two equally capable agents A and B, each of which is deciding whether to aggressively pursue AI development, and three free parameters:

  • the probability $p$ that accelerating AI development results in an existential catastrophe (with utility $-1$ for both agents, versus a utility-$0$ status quo). 
  • the utility $u$ of developing the first friendly AI
  • the utility $v$ of the other agent developing friendly AI (we assume $u > v$: each agent would rather be the one to develop it)

We'll first assume the coin only gets flipped once: developing a friendly AI lets you immediately control all other AI development.

Since our choice of parameterization was, in retrospect, one that requires a lot of typing, we'll write $\epsilon = u - v$ for how much better it is to develop friendly AI yourself than to have the other agent develop it, and state the results in terms of $p$, $u$, $v$, and $\epsilon$:

  • (Accelerate, Accelerate) is always a Nash equilibrium, no matter how small the difference $\epsilon$ captures is.
  • (Don't, Don't) is a Nash equilibrium when $p \geq \frac{u}{1+u}$, as you would expect
  • (Don't, Don't) is a trembling-hand equilibrium whenever it is a strict Nash equilibrium, i.e. when $p > \frac{u}{1+u}$; only at the boundary $p = \frac{u}{1+u}$, where (Don't) is weakly dominated by (Accelerate), does it fail to be one.
  • When $p \leq \frac{u}{1+u}$, (Accelerate) weakly dominates (Don't) and (Accelerate, Accelerate) is a trembling-hand equilibrium.

Now consider the case where (Accelerate, Accelerate) instead flips two coins.

This is potentially a much safer situation (both cases are spot-checked numerically in the sketch after these bullets):

  • (Accelerate, Accelerate) is only a Nash equilibrium when $p \leq \frac{\epsilon}{2 + u + v}$
  • (Don't, Don't) is still a Nash equilibrium when $p \geq \frac{u}{1+u}$
  • (Don't, Don't) is a trembling-hand equilibrium if it's a Nash equilibrium and (Accelerate, Accelerate) is not.
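
A quick way to sanity-check the conditions above is to build the expected payoff matrix and test each profile directly. The sketch below is my own re-derivation in Python (not the original Mathematica notebook; the helper names are mine), and it assumes that when both agents accelerate and no catastrophe occurs, each is equally likely to be the one that gets there first:

```python
# Numerical spot-check of the equilibrium conditions above.
from itertools import product

def payoff_matrix(p, u, v, two_coins=False):
    """Expected payoffs M[(a, b)] = (A's payoff, B's payoff); 0 = Don't, 1 = Accelerate."""
    safe = (1 - p) ** 2 if two_coins else (1 - p)   # P(no catastrophe | both accelerate)
    both = safe * (u + v) / 2 - (1 - safe)          # expected payoff in (Accelerate, Accelerate)
    return {
        (0, 0): (0.0, 0.0),
        (1, 0): ((1 - p) * u - p, (1 - p) * v - p),
        (0, 1): ((1 - p) * v - p, (1 - p) * u - p),
        (1, 1): (both, both),
    }

def is_nash(M, a, b):
    """True if neither player has a profitable unilateral deviation from (a, b)."""
    return (M[(a, b)][0] >= max(M[(x, b)][0] for x in (0, 1)) and
            M[(a, b)][1] >= max(M[(a, y)][1] for y in (0, 1)))

# Example: p = 0.2, u = 1, v = 0.5, so u/(1+u) = 0.5 and eps/(2+u+v) ~ 0.14.
# With one coin the only equilibrium is (Accelerate, Accelerate); with two coins
# p exceeds eps/(2+u+v), so mutual acceleration stops being an equilibrium
# (only the asymmetric one-accelerator profiles survive at these parameters).
p, u, v = 0.2, 1.0, 0.5
for two_coins in (False, True):
    M = payoff_matrix(p, u, v, two_coins)
    nash = [prof for prof in product((0, 1), repeat=2) if is_nash(M, *prof)]
    print("two coins" if two_coins else "one coin ", nash)
```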

A BOTEC indicates that Open AI might have been valued at >200x their annual recurring revenue, which is by far the highest multiple I can find evidence of for a company of that size. This seems consistent with the view that markets think OAI will be exceptional, but not “equivalent to the Industrial Revolution” exceptional, and a variety of factors make this multiple hard to interpret.

I would be very cautious about trying to extract information from private valuations, especially at this still somewhat early stage. Private markets are far less efficient than public ones, large funding rounds are less efficient still, and large funding rounds led by the organization that controls your infrastructure might as well be poker games. 

Clearly you find me arrogant, but there's not much I can do about that - I've tried to be as polite as I can, but clearly that was insufficient.

You come across as arrogant for a few reasons which are in principle fixable.

1: You seem to believe people who don't share your values are simply ignorant of them, and not in a deep "looking for a black cat in an unlit room through a mirror darkly" sort of way. If you think your beliefs are prima facie correct, fine, most people do - but you still have to argue for them. 

2: You mischaracterize utilitarianism in ways that are frankly incomprehensible, and become evasive when those characterizations are challenged. At the risk of reproducing exactly that pattern, here's an example:

In humanitarian action effectiveness is an instrumental value not an intrinsic value

...

EA is a form of utilitarianism, and when the word effective is used it has generally been in the sense of "cost effective". If you are not an effective altruist (which I am not), then cost effectiveness - while important - is an instrumental value rather than an intrinsic value. 

...

I'm not a utilitarian, so I reject the premise of this question when presented in the abstract as it is here. Effectiveness for me is an instrumental value

As you have been more politely told many times in this comment section already: claiming that utilitarians assign intrinsic value to cost-effectiveness is absurd. Utilitarians value total well-being (though what exactly that means is a point of contention) and nothing else. I would happily incinerate all the luxury goods humanity has ever produced if it meant no one ever went hungry again. Others would go much further.

What I suspect you're actually objecting to is aggregation of utility across persons - since that, plus the grossly insufficient resources available to us, is what makes cost-effectiveness a key instrumental concern in almost all situations - but if so the objection is not articulated clearly enough to engage with. 

3: Bafflingly, given (1), you also don't seem to feel the need to explain what your values are! You name them (or at least it seems these are yours) and move on, as if we all understood

humanity, impartiality, neutrality, and independence

in precisely the same way. But we don't. For example: utilitarianism is clearly "impartial" and "neutral" as I understand them (i.e. agent-neutral and impartial with respect to different moral patients) whereas folk-morality is clearly not.

I'm guessing, having just googled that quote, that you mean something like this

Humanity means that human suffering must be addressed wherever it is found, with particular attention to the most vulnerable.

Neutrality means that humanitarian aid must not favour any side in an armed conflict or other dispute.

Impartiality means that humanitarian aid must be provided solely on the basis of need, without discrimination.

Independence means the autonomy of humanitarian objectives from political, economic, military or other objectives.

in which case there's a further complication:  you're almost certainly using "intrinsic value" and "instrumental value" in a very different sense from the people you're talking to. The above versions of "independence" and "neutrality" are, by my lights, obviously instrumental - these are cultural norms for one particular sort of organization at one particular moment in human history, not universal moral law. 

Critics like Srinivasan, Crary, etc., pretty explicitly combine a political stance with criticism of EA's "utilitarian" foundations

Yes, they're hostile to utilitarianism and to some extent agent-neutrality in general, but the account of "EA principles" you give earlier in the paper is much broader.

Effective altruism is sometimes confused with utilitarianism. It shares with utilitarianism the innocuous claim that, all else equal, it's better to do more good than less. But EA does not entail utilitarianism's more controversial claims. It does not entail hegemonic impartial maximizing: the EA project may just be one among many in a well-rounded life ...

I’ve elsewhere described the underlying philosophy of effective altruism as “beneficentrism”—or “utilitarianism minus the controversial bits”—that is, “the view that promoting the general welfare is deeply important, and should be amongst one’s central life projects.” Utilitarians take beneficentrism to be exhaustive of fundamental morality, but others may freely combine it with other virtues, prerogatives, or pro tanto duties.

Critics like Crary and Srinivasan (and this particular virtue-ethicsy line of critique should not be conflated with "political" critique in general) are not interested in discussing "EA principles" in this sense. When they say something like "I object to EA principles" they're objecting to what they judge to be the actual principles animating EA discourse, not the ones the community "officially" endorses. 

They might be wrong about what those principles are - personally my impression is that EA is very strongly committed to consequentialism but in practice not all that utilitarian - but it's an at least partially empirical question, not something that can be resolved in the abstract. 

In this paper, I’ve argued that there are no good intellectual critiques of effective altruist principles. We should all agree that the latter are straightforwardly correct. But it’s always possible that true claims might be used to ill effect in the world. Many objections to effective altruism, such as the charge that it provides “moral cover” to the wealthy, may best be understood in these political terms.

I don’t think philosophers have any special expertise in adjudicating such empirical disagreements, so will not attempt to do so here. I’ll just note two general reasons for being wary of such politicized objections to moral claims.

First, I think we should have a strong default presumption in favour of truth and transparency. While it’s always conceivable that esotericism or “noble lies” could be justified, we should generally be very skeptical that lying about morality would actually be for the best. In this particular case, it seems especially implausible that discouraging people from trying to do good effectively is a good idea. I can’t rule it out—it’s a logical possibility—but it sure would be surprising. So there’s a high bar for allowing political judgments to override intellectual ones.

 

This is pretty uncharitable. Someone somewhere has probably sincerely argued for claiming that helping people is bad, on the grounds that making that claim would itself help people, but "political" critics of EA are critics of EA, the particular subculture/professional network/cluster of organizations that exists right now, not "EA principles". This is somewhat obscured by the fact that the loudest and best-networked ones come from "low decoupling" intellectual cultures, and often don't take talk of principles qua principles seriously enough to bother indicating that they're talking about something else - but it's not obscure to them, and they're not going to give you any partial credit here. 

However, internationalist strains of thought are currently far less influential within conservatism...

Crudely, about half the time, conservatives are in power. To help promote internationalist policy, which is vital to tackle global catastrophic risk and extreme poverty, we need to promote and strengthen existing internationalist schools of thought within conservatism, or even create new internationalist schools of thought within conservatism.

 

I think this gets the causation backwards. 

Conservatism, like any other ideological cluster, does not exist independently of the real political movements that constitute it. There is no law of nature that says "conservatives", in some abstract ideal sense, are going to be in power half of the time. Whatever level of success contemporary conservative parties have is a fact about actually existing conservatives, not "conservatism" as such. They're not inherently traditionalists and only accidentally nationalists; they just have whatever traits they have, any combination of which might explain their success. 

Do you disagree with any? 

Treating "good future" and "irreversibly messed up future" as exhaustive seems clearly incorrect to me. 

Consider for instance the risk of an AI-stabilized personalist dictatorship, in which literally all political power is concentrated in a single immortal human being.[1] Clearly things are not going great at this point. But whether they're irreversibly bad hinges on a lot of questions about human psychology - about the psychology of one particular human, in fact - that we don't have answers to. 

  • There's some evidence Big-5 Agreeableness increases slowly over time. Would the trend hold out to thousands of years?
  • How long-term are long-term memories (augmented to whatever degree human mental architecture permits)? 
  • Are value differences between humans really insurmountable or merely very very very hard to resolve? Maybe spending ten thousand years with the classics really would cultivate virtue. 
  • Are normal human minds even stable in the very long run? Maybe we all wirehead ourselves eventually, given the chance. 

So it seems to me that if we're not ruling out permanent dystopia we shouldn't rule out "merely" very long lived dystopia either. 

This is clearly not a "good future", in the sense that the right response to "100% chance of a good future" is to rush towards it as fast as possible, and the right response to "10% chance of utopia till the stars go cold, 90% chance of spending a thousand years beneath Cyber-Caligula's sandals followed by rolling the dice again"[2] is to slow down and see if you can improve the odds a bit. But it doesn't belong in the "irreversibly messed up" bin either: even after Cyber-Caligula takes over, the long-run future is still almost certainly utopian. 
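
To put rough numbers on that last claim, here's a toy calculation with entirely invented figures (a $10^{14}$-year "until the stars go cold" horizon, and a cost of two good-years of value per year spent under Cyber-Caligula); the only point is that the detour branch loses a negligible fraction of the long-run value, which is what "bad but not irreversibly messed up" means here:

```python
# Toy numbers for the footnoted scenario (all figures illustrative, not from the
# original comment): compare "straight to utopia" with the Cyber-Caligula detour.
GOOD_YEARS = 1e14          # rough "until the stars go cold" horizon, in years
DYSTOPIA_YEARS = 1_000     # length of the Cyber-Caligula interlude
P_UTOPIA_NOW = 0.10        # chance acceleration goes straight to utopia

utopia_now = GOOD_YEARS                     # value of the lucky branch
detour = GOOD_YEARS - 2 * DYSTOPIA_YEARS    # suffer the bad years and forgo good ones

expected = P_UTOPIA_NOW * utopia_now + (1 - P_UTOPIA_NOW) * detour
print(f"fraction of long-run value lost in expectation: {1 - expected / utopia_now:.2e}")
# ~1.8e-11 -- bad in absolute terms, but nothing like an irreversible loss of the future.
```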

 

  1. ^

    Personally I think this is far less likely than AI-stabilized oligarchy (which, if not exactly a good future, is at least much less likely to go off into rotating-golden-statue-land) but my impression is that it's the prototypical "irreversible dystopia" for most people.

  2. ^

    Obviously our situation is much worse than this

Some projects take humans much longer (e.g. proving Fermat's last theorem) but they can almost always be decomposed into subtasks that don't require full global context (even tho that's often helpful for humans).

At least for math, I don't think this is the right way to frame things: finding the right decomposition is often the hard part! "Average math undergrad"-level mathematical reasoning at vastly superhuman speed probably gets you a 1-year artificial mathematician, but I doubt it gets you a 50-year one.

Competitive markets can involve some behaviour which is not directly productive, but does help companies get a leg-up on one another (such that many or all companies involved would prefer if that behaviour weren't an option for anyone). One example is advertising (advertising is useful for other reasons, I mostly have in mind "Pepsi vs Coke" style advertising).

It's worth noting that this is a case of (and cause of) imperfect competition: perfect competition and the attendant efficiency results require the existence of perfect substitutes for any one producer's goods. 
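
For concreteness, here's the kind of payoff structure the quoted "Pepsi vs Coke" example has in mind, with entirely made-up numbers: advertising mostly reallocates a fixed pool of customers, so each firm advertises in equilibrium even though both would prefer a binding agreement not to.

```python
# Toy "Pepsi vs Coke" advertising game (all numbers invented). Ads mostly shift
# share inside a fixed market, so in aggregate they mainly burn ad spend.
payoffs = {                      # (row firm's profit, column firm's profit)
    ("no ads", "no ads"): (10, 10),
    ("ads",    "no ads"): (12,  5),   # grab share from the rival, pay for the ads
    ("no ads", "ads"):    ( 5, 12),
    ("ads",    "ads"):    ( 7,  7),   # share unchanged, both still pay for the ads
}

# "ads" strictly dominates "no ads" for the row firm (12 > 10, 7 > 5) and the game
# is symmetric, so (ads, ads) is the equilibrium even though both firms do better
# under (no ads, no ads) -- which is why both would ban the option if they could.
for rival in ("no ads", "ads"):
    print(rival, payoffs[("ads", rival)][0] > payoffs[("no ads", rival)][0])
```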
