All of James_Banks's Comments + Replies

I may not have understood all of what you said, but I was left with a few thoughts after finishing this.

1. Creating Bob to have values: if Bob is created to be able to understand that he was created to have values, and then to be able, himself, to reject those values and choose his own, then I'd say he is probably more free than if he weren't. But, having chosen his own values, he now has to live in society, a society possibly largely determined by an AI. If society is out of tune with him, he will have limited ability to live out his values, ... (read more)

You didn't mention the Long Reflection, which is another point of contact between EA and religion.  The Long Reflection is about figuring out what values are actually right, and I think it would be odd to not do deep study of all the cultures available to us to inform that, including religious ones.  Presumably, EA is all about acting on the best values (when it does good, it does what is really good), so maybe it needs input from the Long Reflection to make big decisions.

4
Geoffrey Miller
2y
James -- I agree.  Human values as they currently are -- in all their messy, hypocritical, virtue-signaling, partisan, sectarian glory --  might NOT be what we want to upload into powerful AI systems. A Long Reflection might be advisable.

I've wondered whether it's easier to align AI to something simple rather than something complex (or whether it's more like "aligning things at all is really hard, but adding complexity is relatively easy once you get there").  If simplicity is more practical, then training an AI to do something libertarian might be simpler than training it to pursue any other value.  The AI could protect "agency" (one version of that being "the ability of each human to move their bodies as they wish, and to secure their own decision-making ability").  Or, it might turn out to be ea... (read more)

This is sort of a loose reply to your essay.  (The things I say about "EA" are just my impressions of the movement as a whole.)

I think that EA has aesthetics; it's just that the (probably not totally conscious) aesthetic value behind them is "lowkeyness" or "minimalism".  The Forum and logo seem simple and minimalistically warm, classy, and functional to me.

Your mention of Christianity focuses more on medieval-derived / Catholic elements.   Those lean more "thick" and "nationalistic".  ("Nationalistic" like "building up a people group ... (read more)

8
Étienne Fortier-Dubois
2y
It seems true that aesthetics provide an extra dimension that can lead to disagreement, conflict, misunderstanding, etc. So I agree that we'd want to be careful about it.  On the other hand that's kind of why so much of everything is bland today, from architecture to politics. Sometimes you do want to present a bold vision that will alienate some people but perhaps rally even more. In a sense, EA already does this (it rallies a certain kind of person and puts off other kinds), and I think adding a layer of good aesthetics would make it possibly more effective at doing that. But it is a risk.

I hadn't heard of When the Wind Blows before.  From the trailer, I would say Testament may be darker, although a lot of that has to do with me not responding to animation (or When the Wind Blows' animation) as strongly as to live-action.  (And then from the Wikipedia summary, it sounds pretty similar.)

I would recommend Testament  as a reference for people making X-risk movies.  It's about people dying out from radiation after a nuclear war, from the perspective of a mom with kids.  I would describe it as emotionally serious, and also it presents a woman's and "ordinary person's" perspective.  I guess it could be remade if someone wanted to, or it could just be a good influence on other movies.

-2
HaydnBelfield
2y
Hell yeah, I can't wait to watch this and get really depressed. Have you read or watched When The Wind Blows? Seems a similar tone.

Existential risk might be worth talking about because of normative uncertainty.  Not all EAs are necessarily hedonists, and perhaps the ones who are shouldn't be, for reasons to be discovered later.  So if we don't know what "value" is a priori (or EA, as a movement, doesn't "know" what "value" is), we might want to keep our options open, and if everyone is dead, then we can't figure out what "value" really is or ought to be.

If EA has a lot of extra money, could that be spent on incentivizing AI safety research?  Maybe offer a really big bounty for solving some subproblem that's really worth solving.  (Like if somehow we could read  and understand neural networks directly instead of them being black boxes.)

Could EA (and fellow travelers) become the market for an AI safety industry?

I wonder if there are other situations where a person has a "main job" (being a scientist, for instance) and is then presented with a "morally urgent situation" (realizing your colleague is probably a fraud and you should do something about it).  The traditional example is being on your way to your established job and seeing someone beaten up on the side of the road whom you could take care of.  This "side problem" can be left to someone else (who might take responsibility, or not) and, if taken on, may well be an open-ended and ener... (read more)

The plausibility of this depends on exactly what the culture of the elite is.  (In general, I would be interested in knowing what all the different elite cultures in the world actually are.)  I can imagine there being some tendency toward thinking of the poor / "low-merit" as superfluous, but I can also imagine superrich people not being that extremely elitist and thinking "Why not? The world is big; let the undeserving live," or even things more humane than that.

But also, despite whatever humaneness there might be in... (read more)

This is kind of like my comment at the other post, but it's what I could think of as feedback here.

--

I liked your point IV, that inefficiency might not go away.  One reason it might not is that humans (even digital ones) would have something like free will, or caprice, or random preferences, in the same way that they do now.  Human values may not behave according to our concept of "reasonable rational values" over time, as they evolve.  In human history, there have been impulses toward the rational and the irrational.  So the... (read more)

This isn't a very direct response to your questions, but is relevant, and is a case for why there might be a risk of factory farming in the long-term future.  (This doesn't address the scenarios from your second question.) [Edit: it does have an attempt at answering your third question at the end.]

--

It may be possible that if plant-based meat substitutes are cheap enough and taste like (smell like, have the mouthfeel of, etc.) animal-derived meat, then it won't make economic sense to keep animals for that purpose.

That's the hopeful take, and I'm guessing... (read more)

3
Fai
2y
Hi James (Banks), I wrote a post on why PB/CM might not eliminate factory farming. Would be great if you can give me some feedback there.

This is SUPER interesting. And it's amazing that you have put so much thought into this exact issue!

Also, I love that everybody who responded is named James! :-) 

I don't think your dialogue seems creepy, but I would put it in the childish/childlike category.  The more mature way to love is to value someone for who they are (so you are loving them, a unique personal being, the wholeness of who they are, rather than the fact that they offer you something else) and to be willing to pay a real cost for them.

I use the terms "mature" and "childish/childlike" because (while children are sometimes more genuinely loving than adults) I think there is a natural tendency to lose some of your taste for the flavors, ... (read more)

Would it be possible for some kind of third party to give feedback on applications?  That way people can get feedback even if hiring organizations find it too costly.  Someone familiar with how EA organizations think, or with hiring processes specifically, or who is some kind of career coach, could say "You are in the nth percentile of EAs I counsel.  It's likely/unlikely that if you are rejected it's because you're unqualified overall." or "Here are your general strengths and weaknesses as someone applying to this position, ... (read more)

4
IanDavidMoss
3y
As another option to get feedback, many colleges and universities' career development offices offer counseling to their schools' alumni, and resume review (often in the context of specific applications to specific jobs) is one of the standard services they provide at no extra charge.

Suppose there is some kind of new moral truth, but only one person knows it.  (Arguably, there will always be a first person.  New moral truth might be the adoption of a moral realism, the more rigorous application of reason in moral affairs, an expansion of the moral circle, an intensification of what we owe the beings in the moral circle, or a redefinition of what "harm" means.)

This person may well adopt an affectively expensive point of view, which won't make any sense to their peers (or may make all too much sense).  Their peers m... (read more)

2
Cullen
3y
These are all great considerations! However, I think that it's perfectly consistent with my framework to analyze the total costs to avoiding a harm, including harms to society from discouraging true beliefs or chilling the reasoned exchange of ideas. So in the case you imagine, there's a big societal moral cost from the peers' reactions, which they therefore have good reason to try to minimize. This generalizes to the case where we don't know whose moral ideas are true by "penalizing" (or at least failing to indulge) psychological frameworks that impede moral discourse and reasoning (perhaps this is one way of understanding the First Amendment).

The $100 an item market sounds like fair trade.  So you might compete with fair trade and try to explain why your approach is better.

The $50,000 an item market sounds harder but more interesting.  I'm not sure I would ever buy a $50,000 hoodie or mug, no matter how much money I had or how nice the designs on them were.  But I could see myself (if I was rolling in money and cared about my personal appearance) buying a tailored suit for $50,000, and explaining that it only cost $200 to make (or whatever it really does) and the rest went to cha... (read more)

1
Ariel Pontes
3y
Honestly, at this point the $50,000 item is more like a joke, something to make a point. I'd have a listing there to raise awareness of the fact that people really are spending that kind of money on stupid stuff all the time, which I think is a moral scandal. But of course, I'd be happy if somebody actually bought it! In any case, although people usually don't buy t-shirts or hoodies for $50,000, they buy them for a few hundred dollars all the time. This shop idea is largely inspired by this Patriot Act episode.

This kind of pursuit is something I am interested in, and I'm glad to see you pursue it.

One thing you could look for, if you want, is the "psychological constitution" being written by a text.  People are psychological beings and the ideas they hold or try to practice shape their overall psychological makeup, affecting how they feel about things and act.  So, in the Bhagavad-Gita, we are told that it is good to be detached from the fruits of action, but to act anyway.  What effect would that idea have if EAs took it (to the extent that they h... (read more)

One possibility that maybe you didn't close off (unless I missed it) is "death by feature creep" (more likely "decline by feature creep").  It's somewhat related to the slow-rolling catastrophe, but with the assumption that AI (or systems of agents including AI,  also involving humans) might be trying to optimize for stability and thus regulate each other, as well as trying to maximize some growth variable (innovation, profit).

 Our inter-agent (social, regulatory, economic, political) systems were built by the application of human intelligen... (read more)

One thought that recurs to me is that there could be two related EA movements, which draw from each other.  No official barrier to participating in both (like being on LessWrong and the EA Forum at the same time).  Possible to be a leader in both at the same time (if you have the time/energy for it).  One of them emphasizes the "effective" in "effective altruists", the other the "altruists".  The first would be more like current EA, the second more focused on increasing the (lasting) altruism of the greatest number of people.  Human reso... (read more)

"King Emeric's gift has thus played an important role in enabling us to live the monastic life, and it is a fitting sign of gratitude that we have been offering the Holy Sacrifice for him annually for the past 815 years."

(source: https://sancrucensis.wordpress.com/2019/07/10/king-emeric-of-hungary/ )

It seems to me like longtermists could learn something from people like this.  (Maintaining a point of view for 800 years requires both keeping the values aligned enough to do this and surviving long enough to be able to.)

(Also a short blog post by me occasioned by these m... (read more)

Moral realism can be useful in letting us know what kind of things should be considered moral.

For instance, if you ground morality in God, you might say: Which God? Well, if we know which one, we might know his/her/its preferences, and that inflects our morality.  Also, if God partially cashes out to "the foundation of trustworthiness, through love", then we will approach knowing and obligation themselves (as psychological realities) in a different way (less obsessive? less militant? or, perhaps, less rigorously responsible?).

Sharon Hewitt Rawlette (i... (read more)

I can see the appeal in having one ontological world.  What is that world, exactly?  Is it that which can be proven scientifically (in the sense of being provable through the scientific method used in natural science)?  I think what can be proven scientifically is perhaps what we are most sure is real or true.  But things that we are less certain of being real can still exist, as part of the same ontological world.  The uncertainty is in us, not in the world.  One simplistic definition of natural science is that it is simply rigorous empiri... (read more)

Here are some ideas:

The rich have too much money relative to the poor:

Taking money versus eliciting money.

Taking via

  • revolution
  • taxation

Eliciting via

  • shame, pressure, guilt
  • persuasion, psychological skill
  • friendship

Change of culture

  • culture in general
  • elite culture

Targeting elite money

  • used to be stewards of investments
  • used for personal spending

--

Revolutions are risky and can lead to worse governments.

Taxation might work better (closing tax-haven loopholes, building political will for higher taxes on the wealthy). There are people in the US who don't want the... (read more)

1. I don't know much about probability and statistics, so forgive me if this sounds completely naive (I'd be interested in reading more on this problem, if it's as simple for you as saying "go read X").

Having said that, though, I may have an objection to fanaticism, or something in the neighborhood of it:

  • Let's say there are a suite of short-term payoff, high certainty bets for making things better.
  • And also a suite of long-term payoff, low certainty bets for making things better. (Things that promise "super-great futur
... (read more)

(The following is long, sorry about that. Maybe I should have written it up already as a normal post. A one-sentence abstract could be: "Social media algorithms could be dangerous as a part of the overall process of leading people to 'consent' to being lesser forms of themselves to further elite/AI/state goals, perhaps threatening the destruction of humanity's long-term potential.")

It seems plausible to me that something like algorithmic behavior modification (social media algorithms are algorithms designed to modify human behav... (read more)

I like the idea of coming up with some kind of practice to retrain yourself to be more altruistic. There should be some version of that idea that works, and maybe exposing yourself to stories / imagery / etc. about people / animals who can be helped would be part of that.

One possibility is that such images could become naturally compelling for people (and thus would tend to be addictive or obsession-producing, because of their awful compellingness) -- for such people, this practice is probably bad, sometimes (often?) a net bad. But for other people, the... (read more)

1
James_Banks
4y
OK, this person on the EA subreddit uses a kind of meditation to reduce irrational/ineffective guilt.

Also, this makes me curious: have things changed any since 2007? Does the promotion of 1 still seem as necessary? What role has the letter (or similar ideas/sentiments) played in whatever has happened with charities and funders over the last 13 years?

I think there's a split between 1) "I personally will listen to brutal advice because I'm not going to let my feelings get in the way of things being better" and 2) "I will give brutal advice because other people's feelings shouldn't get in the way of things being better". Maybe Holden wanted people to internalize 1 at the risk of engaging in 2. 2 may have been his way of promoting 1, a way of invalidating the feelings of his readers, who would go on to then be 1 people.

I'm pretty sure that there's a wa... (read more)

1
James_Banks
4y
Also, this makes me curious: have things changed any since 2007? Does the promotion of 1 still seem as necessary? What role has the letter (or similar ideas/sentiments) played in whatever has happened with charities and funders over the last 13 years?

It looks like some people downvoted you, and my guess is that it may have to do with the title of the post. It's a strong claim, but also not as informative as it could be; it doesn't mention anything to do with climate change or GHGs, for instance.

Similarly, one could be concerned that the rapid economic growth that AIs are expected to bring about could cause a lot of GHG emissions, unless somehow we (or they) figure out how to use clean energy instead.

Here's a related quote from Eccentrics by David Weeks and Jamie James (pp. 67 - 68) (I think it's Weeks speaking in the following quote:)

My own concept of creativity is that it is effective, empathic problem-solving. The part that empathy plays in this formulation is that it represents a transaction between the individual and the problem. (I am using the word "problem" loosely, as did Ghiselin: for an artist, the problem might be how to depict an apple.) The creative person displaces his point of view into the problem, investing it
... (read more)

Thinking back on books that have had a big effect on me, I think they were things which spoke to something already in me, maybe something genetic, to a large extent. It's like I was programmed from birth to have certain life movements, and so I could immediately recognize what I read as the truth when it came to me -- "that's what I was always wanting to say, but didn't know how!" I think that probably explains HP:MOR to a large extent (but I haven't read HP:MOR).

My guess is that a large part of Yudkowsky's motivation ... (read more)

4
James_Banks
4y
Here's a related quote from Eccentrics by David Weeks and Jamie James (pp. 67 - 68) (I think it's Weeks speaking in the following quote:) This makes me think: "You become the problem, and then at high stakes are forced to solve yourself, because now it's a life or death situation for you."

Interesting. A point I could get out of this is: "don't take your own ideology too seriously, especially when the whole point of your ideology is to make yourself happy."

An extreme hedonism (a really faithful one) is likely to produce outcomes like:

"I love you."

"You mean, I give you pleasure?"

"Well, yeah! Duh!"

Which is a funny thing to say, kind of childish or childlike. (Or one could make the exchange creepy: "Yeah, you mean nothing more to me than the pleasure you give me.")

Do people really ex... (read more)

2
Hank_B
2y
"I really love you!" "You mean you enjoy my company a lot?" "Well of course, and I want you to be happy." "I enjoy your company and want you to be happy as well, so I guess I love you too!"   That doesn't seem creepy to me. In fact, I've had this discussion with myself before (about what it means to love someone) and (1) liking them and (2) wishing them happiness, are about what I got. As for people existing, I think the first 2 levels are clearly true regardless of axiology. As for 3, I think a hedonist could say something like "Person X gives me great pleasure, a good thing" and "Person X is happy, another good thing". All 4 of those statements (1, 2, and my revised versions of 3) seem totally fair and non-weird to me, but perhaps I'm misunderstanding you.

I can see a scenario where BCI totalitarianism sounds like a pretty good thing from a hedonic utilitarian point of view:

People are usually more effective workers when they're happy. So a pragmatic totalitarian government (like Brave New World) rather than a sadistic one or sadistic/pragmatic (1984, maybe) would want its people to be happy all the time, and would stimulate whatever in the brain makes them happy. To suppress dissent it would just delete thoughts and feelings in that direction as painlessly as possible. Competing governments would hav... (read more)

In any case, both that quoted statement of yours and my tweaked version of it seem very different from the claim "if we don't currently know how to align/control AIs, it's inevitable there'll eventually be significantly non-aligned AIs someday"?

Yes, I agree that there's a difference.

I wrote up a longer reply to your first comment (the one marked "Answer"), but then I looked up your AI safety doc and realized that I should probably read through the readings in that first.

Yeah, I wasn't being totally clear with respect to what I was really thinking in that context. I was thinking "from the point of view of people who have just been devastated by some not-exactly superintelligent but still pretty smart AI that wasn't adequately controlled, people who want to make that never happen again, what would they assume is the prudent approach to whether there will be more non-aligned AI someday?", figuring that they would think "Assume that if there are more, it is inevitable that there will be some non-alig... (read more)

3
MichaelA
4y
I at least approximately agree with that statement.  I think there'd still be some reasons to think there won't someday be significantly non-aligned AIs. For example, a general argument like: "People really really want to not get killed or subjugated or deprived of things they care about, and typically also want that for other people to some extent, so they'll work hard to prevent things that would cause those bad things. And they've often (though not always) succeeded in the past."  (Some discussions of this sort of argument can be found in the section on "Should we expect people to handle AI safety and governance issues adequately without longtermist intervention?" in Crucial questions.) But I don't think those arguments make significantly non-aligned AIs implausible, let alone impossible. (Those are both vague words. I could maybe operationalise that as something like a 0.1-50% chance remaining.) And I think that that's all that's required (on this front) in order for the rest of your ideas in this post to be relevant. In any case, both that quoted statement of yours and my tweaked version of it seem very different from the claim "if we don't currently know how to align/control AIs, it's inevitable there'll eventually be significantly non-aligned AIs someday"? 

A few things this makes me think of:

explore vs. exploit: For the first part of your life (the first 37%? See the sketch below for where that figure comes from), you gather information; then, in the last part, you use that information, maximizing and optimizing according to it. Humans have definite lifespans, but movements don't. Perhaps a movement's life depends somewhat on how much exploration it continues to do.

Christianity: I think maybe the only thing all professed Christians have in common is attraction to Jesus, who is vaguely or definitely understood. You could think of Christianity as a movement... (read more)
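An aside on the 37% figure mentioned above: it seems to come from the classic "secretary problem" in optimal stopping theory, where you look at (and pass on) roughly the first 1/e ≈ 37% of options and then commit to the first option that beats everything seen so far. Below is a minimal simulation sketch of that rule; the function name and parameters are illustrative assumptions on my part, not anything from the original comment.

```python
import random

def simulate_secretary(n=100, cutoff_fraction=0.37, trials=20000):
    """Estimate how often the 'look at the first X%, then leap' rule
    picks the single best of n randomly ordered candidates."""
    cutoff = int(n * cutoff_fraction)
    successes = 0
    for _ in range(trials):
        candidates = [random.random() for _ in range(n)]
        # Benchmark: the best value seen during the "look" phase.
        best_seen = max(candidates[:cutoff]) if cutoff > 0 else float("-inf")
        # If nothing later beats the benchmark, we're stuck with the last candidate.
        chosen = candidates[-1]
        for value in candidates[cutoff:]:
            if value > best_seen:
                chosen = value
                break
        successes += (chosen == max(candidates))
    return successes / trials

# The ~1/e cutoff succeeds about 37% of the time; much smaller or larger
# cutoffs do noticeably worse.
print(simulate_secretary(cutoff_fraction=0.37))
print(simulate_secretary(cutoff_fraction=0.10))
print(simulate_secretary(cutoff_fraction=0.80))
```

Running this with cutoffs well below or above 1/e shows lower success rates, which is the sense in which "the first 37%" is the sweet spot in this toy model.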

3
jkmh
4y
This is an interesting perspective. It makes me wonder if/how there could be decently defined sub-groups that EAs can easily identify, e.g. "long-termists interested in things like AI" vs. "short-termists who place significantly more weight on current living things" - OR - "human-centered" vs. "those who place significant weight on non-human lives." Like within Christianity, specific values/interpretations can/should be diverse, which leads to sub-groups. But there is sort of a "meta-value" that all sub-groups hold, which is that we should use our resources to do the most good that we can. It is vague enough to be interpreted in many ways, but specific enough to keep the community organized. I think the fact that I could come up with (vaguely-defined) examples of sub-groups indicates that, in some way, the EA community already has sub-communities. I agree with the original post that there is risk of too much value-alignment that could lead to stagnation or other negative consequences. However, in my 2 years of reading/learning about EA, I've never thought that EAs were unaware or overly-confident in their beliefs, i.e. it seems to me that EAs are self-critical and self-aware enough to consider many viewpoints. I personally never felt that just because I don't want (nor can I imagine) an AI singleton that brings stability to humanity meant that I wasn't an EA.

A few free ideas occasioned by this:

1. The fact that this is a government paper makes me think of "people coming together to write a mission statement." To an extent, values are agreed-upon by society, and it's good to bear that in mind. (Working with widespread values instead of against them, accepting that to an extent values are socially-constructed (or aren't, but the crowd could be objectively right and you wrong) and adjusting to what's popular instead of using a lot of energy to try to change things.)

2. My first reaction ... (read more)

I'm basically an outsider to EA, but "from afar", I would guess that some of the values of EA are 1) against politicization, 2) for working and building rather than fighting and exposing ("exposing" being "saying the unhealthy truth for truth's sake", I guess), 3) for knowing and self-improvement (your point), and 4) for effectiveness (Gordon's point). And of course, the value of altruism.

These seem like they are relatively safe to promote (unless I'm missing something).

Altruism is composed of 1) other... (read more)