James_Banks

Comments

What do we do if AI doesn't take over the world, but still causes a significant global problem?
In any case, both that quoted statement of yours and my tweaked version of it seem very different from the claim "if we don't currently know how to align/control AIs, it's inevitable there'll eventually be significantly non-aligned AIs someday"?

Yes, I agree that there's a difference.

I wrote up a longer reply to your first comment (the one marked "Answer"), but then I looked up your AI safety doc and realized that I might do better to read through the readings in it first.

What do we do if AI doesn't take over the world, but still causes a significant global problem?

Yeah, I wasn't being totally clear about what I was really thinking in that context. I was imagining the point of view of people who have just been devastated by some not-exactly-superintelligent but still pretty smart AI that wasn't adequately controlled, people who want to make sure that never happens again. What would they consider the prudent assumption about whether there will be more non-aligned AI someday? I figure they would think: "Assume that if there are more AIs, it is inevitable that some of them will be significantly non-aligned at some point." The logic being that if we don't know how to guarantee alignment, there's no reason to think there won't someday be significantly non-aligned AIs, and we should plan for that contingency.

Objections to Value-Alignment between Effective Altruists

A few things this makes me think of:

explore vs. exploit: For the first part of your life (the first 37%?), you gather information; for the rest, you use that information, maximizing and optimizing according to it. Humans have definite lifespans, but movements don't. Perhaps a movement's lifespan depends somewhat on how much exploration it continues to do. (A sketch of where the 37% figure comes from follows below.)
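(For what it's worth, the 37% figure comes from the secretary problem in optimal stopping: observe the first n/e ≈ 37% of candidates without committing, then take the first later candidate who beats everything seen so far; this picks the overall best with probability approaching 1/e ≈ 0.368. Here's a minimal Python simulation of that rule, just to illustrate; the function names and parameters are my own, not from anything cited here:)

```python
import random

def secretary_trial(n=100, cutoff_frac=0.37):
    """One run of the secretary problem: skip the first cutoff_frac * n
    candidates, then commit to the first later candidate who beats
    everything seen so far. Returns True if we picked the overall best."""
    candidates = [random.random() for _ in range(n)]
    cutoff = int(n * cutoff_frac)
    best_seen = max(candidates[:cutoff])
    for value in candidates[cutoff:]:
        if value > best_seen:
            return value == max(candidates)
    return False  # never committed to anyone, so we fail

def success_rate(trials=20000):
    return sum(secretary_trial() for _ in range(trials)) / trials

# With the ~37% (1/e) cutoff, the success rate converges to about 0.368.
print(f"success rate: {success_rate():.3f}")
```

(The analogy to movements is loose, of course: the secretary problem assumes a hard deadline, and part of my point is that movements don't have one.)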

Christianity: I think maybe the only thing all professed Christians have in common is attraction to Jesus, however vaguely or definitely understood. You could think of Christianity as a movement of submovements (denominations). The result is a set of nicely homogeneous groups: there's a Catholic personality or personality-space, a Methodist one, a Church of Christ one, a Baptist one, etc. Within them are more or less autonomous congregations. Congregations die all the time. Denominations wax and wane. Over time, what used to divide people into denominations (doctrinal differences) has become less relevant (people don't care about doctrine as much anymore), and new classification criteria connect and divide people along new lines (conservative vs. evangelical vs. mainline vs. progressive). An evangelical Christian family that attends a Baptist church might see only a little problem in switching to a Reformed church that was also evangelical. A Church of Christ member, at a church that 50 or 100 years ago would have considered all Baptists to not really be Christians, listens to some generic non-denominational, nominally Baptist preacher who says things he likes to hear, while also hearing more traditional Church of Christ sermons on Sunday morning.

The application of that example to EA could be something like: Altruism with a capital-A is something like Jesus, a resonant image. Any Altruist ought to be on the same side as any other Altruist, just like any Christian ought to be on the same side as any other Christian, because they share Altruism, or Jesus. Just as there is an ecosystem of Christian movements, submovements, and semiautonomous assemblies, there could be an ecosystem of Altruistic movements, submovements, and semiautonomous groups. It could be encouraged or expected of Altruists that they each be part of multiple Altruistic movements, and thus be exposed to all kinds of outside assumptions, all within some umbrella of Altruism. In this way, within each smaller group, there can be homogeneity. The little groups that exploit can run their course and die while being effective tools in the short- or medium-term, but the overall movement or megamovement does not, because overall it keeps exploring. And, as you point out, continuing to explore improves the effectiveness of altruism. Individual movements can be enriched and corrected by their members' memberships in other movements.

A Christian who no longer likes being Baptist can find a different Christianity, and it could be the same with Altruists: EAs who "value drift" might do better in a different Altruism, and EA could recruit from people in other Altruisms who feel like moving on from those.

Capital-A Altruism should be defined in a minimalist way in order to include many altruistic people from different perspectives. EAs might think of whatever elements of their altruism are not EA-specific as a first approximation of Altruism. Once Altruism is defined, it may turn out that a number of existing groups are already basically Altruistic, though with different cultures and perspectives than EA's.

Little-a altruism might be too broad to be compatible with EA; I would think that groups given to politicization go against EA's ways. But then, maybe having a connection even with them is good for Altruists.

In parallel to Christianity, once Altruism is at least somewhat defined, people will want to take its name, and might not even really comply with the N Points of Altruism, whatever value of N one could come up with; this can be both a good and a bad thing, better for diversity, worse for brand strength. But also in parallel to Christianity, there is generally a similarity among professed Christians that is at least a little bit meaningful. Experienced Christians have some idea of how to sort each other out, and so it could be with Altruists. Effective Altruism can continue to be as rigorously defined as it wants to be, allowing other Altruisms to be different.

What values would EA want to promote?

A few free ideas occasioned by this:

1. The fact that this is a government paper makes me think of "people coming together to write a mission statement." To an extent, values are agreed upon by society, and it's good to bear that in mind: work with widespread values instead of against them, accept that values are to some extent socially constructed (or, if they aren't, that the crowd could be objectively right and you wrong), and adjust to what's popular instead of using a lot of energy to try to change things.

2. My first reaction when reading the "Champion democracy,..." list is "everybody knows about those things... boring." But if you want to do good, you shouldn't be dissuaded by the "unsexiness" of a value or pursuit; that itself could be a value supporting the practice of altruism.

What values would EA want to promote?

I'm basically an outsider to EA, but "from afar", I would guess that some of EA's values are: 1) being against politicization; 2) being for working and building rather than fighting and exposing ("exposing" being "saying the unhealthy truth for truth's sake", I guess); 3) being for knowing and self-improvement (your point); 4) concern for effectiveness (Gordon's point). And, of course, the value of altruism itself.

These seem relatively safe to promote (unless I'm missing something).

Altruism is composed of: 1) other-orientation / a relative lack of self-focus (curiosity is an intellectual version of this); 2) something like optimism; 3) openness to evidence (you could define "hope" as a certain combination of 2 and 3); 4) personal connection with reality (maybe a sense of moral obligation, a connection with other beings' subjective states, or a taste for a better world); 5) an inclination to work; 6...) probably others. So if you value altruism, you have to value whatever subvalues it has.

These also seem fairly safe to promote.

Altruism is supported by 1) "some kind of ambition is good", 2) "humility is good but trying to maximize humility is bad" (being so humble you don't have any confidence in your knowledge prevents action), 3) "courage is good but not foolhardiness", 4) "will is good, if it stays in touch with reality", 5) "being 'real' is good" (following through on promises, really having intentions), 6) "personal sufficiency is good" (you have enough or are enough to dare reach into someone else's reality), 7...) probably others.

These are riskier. One thing to remember is that ideas are things in people's minds, and that culture is really embodied in people, not in words. A lot of culture lives in interpersonal contact, which forms the context for ideas. So ideally, if you promote values, you shouldn't just say things, but should instruct people (or be in relationship with people) such that they really understand what you're saying. (Advice I've seen on this forum.)

Genes become phenotype through epigenetics, and concepts become emotions, attitudes, and behaviors through the "epiconceptual". The epiconceptual could be the cultural background that informs how people hear a message (like "yes, this is the moral truth, but we don't actually expect people to live up to it"), or the subcultural background from a relationship or community that makes the message make sense: the practices and expectations of a culture or subculture. So values are promoted not just by communicators but also by community-builders, and good communities help make risky but productive words safe to spread.